Search

Wednesday, December 18, 2013

Managing Dependencies

The inability to manage dependencies on a large game project can derail it quickly.  Dependencies must be managed.  The question is how.  Traditionally managed projects often attempt to identify and manage dependencies at the start of a project, creating massive Gantt charts in the process that look very impressive but fail to anticipate the uncertainties of schedule and allocation.

Large agile projects manage dependencies differently.  Identifying big dependencies and risks up front is a useful exercise, but it’s not enough.  Managing dependencies on a large-scale agile project follows the pattern and mindset of most agile planning: frequently inspect, adapt and respond to change.  Agile game teams do this through well-formed user stories, cross-discipline feature teams, breaking through knowledge barriers, emergent practices, release planning and emergent production planning.

Well Formed User Stories
As suggested by the first letter of the excellent INVEST mnemonic, user stories should be independent.  Independence means two things.  First, try to keep stories from forcing an order of development; stories should be self-contained measures of value that stand on their own.  Second, stories should contain no hidden dependencies that would make similar-looking stories wildly different in size and cost.  For example, consider the two stories below:
“As a player, I want to smash open doors to get inside buildings”
“As a player, I want to smash open crates to find any valuables they contain”

Both stories look similar, but let’s say there is no destructible prop technology in place.  That means the team that takes either of these stories first would need to implement that technology as well.  As a result, they’ll spend far more time implementing their story than the team that takes on the other one afterward.  This kind of hidden dependency makes planning and forecasting more challenging.  There are two ways to address it:
  1. Combine the two stories, with the technology work, into one story.  This works if the resulting story fits into one sprint.
  2. Split the destructible prop technology work out into a separate story.  Now, you’re going to say “Wait!  That creates a dependency!”  Yes, it does.  I would prefer to include one of the two props (door or crate) as part of that story, but it still creates an order of implementation that explicitly calls out the technical dependency.  Sometimes it can’t be helped.
Good user stories are the starting place for the practices below.

Cross-Discipline Feature Teams and Breaking Through Knowledge Barriers
Cross-discipline teams contain everyone necessary to implement well-formed independent user stories.  These teams address cross-discipline dependencies on a daily basis.  Working closely with other disciplines to achieve a shared sprint goal leads to occurrences of “The Medici Effect”, where breakthroughs come from related areas of work.  For example, one team had a challenging dependency between the animators and audio composers that wastefully forced the composers to wait for completed and polished animation sets.  They tried to solve this between themselves with little improvements.  A tools programmer heard them talking about it and suggested a different workflow that could be supported by a quick tool change.  This completely eliminated the dependency.  I was most impressed not by the tool change, but by the fact that an idea for a better workflow for animators and composers came from a different discipline.

The focus on adding value to the game every sprint will also elevate knowledge within each discipline.  If I’m a programmer on a team that is having problems with the character controller, and I didn’t write it, I’ll seek out other programmers who know more and learn from them about how to solve such problems.

Cross-functional teams solve about 90% of their dependencies on their own.  These are mostly smaller dependencies.  The following practices address the larger ones.

Emergent Practices
Detailed design documentation and technical architecture specs have their place.  Unfortunately, that place isn’t areas of uncertainty.  We can’t plan away uncertainty; we have to learn and execute it away.

Dependencies proliferate from large documented plans.  We invest our faith in them and create “resource allocation” spreadsheets that attempt to faithfully implement a plan.  This creates a “chain of dependencies” that fit together exactly, like the parts of a precision Swiss watch.  Unfortunately, uncertainty erodes that faith through delay, rework, waste and complexity, knocking resource allocation plans further and further out of alignment.  As a result, the dependency chain starts to fail.  These failures are largely the source of the “how do we manage dependencies on large projects” angst.

Agile projects handle this by not falling into the trap of false certainty.  The primary means to this is through so-called emergent practices.  Practices such as Test-Driven Development, and their analogs for design and art, boiled down to their essence, mean:

You do the least possible to achieve a short term goal and as the true value and needs for a longer term goal emerge, you build upon the work, refactoring it to meet the emergent requirements and quality bar.  Through this, you find the shortest path to the best goal.

The bottom line is that uncertainty can’t be planned away.  Our practices must reflect the level of certainty we actually have.

Release Planning
Most of the dependencies that cross-discipline teams can’t handle can be addressed in release planning.  These include dependencies that exist within large stories (epics) and specialist dependencies.

The example above identified the destructible prop technology as a dependency for destructible doors and crates.  If the technology takes a full sprint or longer to implement, or needs to be implemented by another team or specialist, it creates a dependency.

Release planning can identify these dependencies and factor them into the release plan.  A release plan is a set of speculative sprint goal forecasts covering several months, updated every sprint as knowledge, cost and value emerge.  As the word “forecast” implies, it’s never considered concrete and final, but it has an accuracy that reflects the actual certainty present.
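One simple way such a forecast stays honest is to recompute it from recent, actual data every sprint. A minimal sketch in Python, with all numbers invented for illustration:

```python
import math

# A release plan forecast, recomputed every sprint from actual data.
# All numbers below are invented for illustration.

def forecast_sprints_remaining(backlog_points, recent_velocities):
    """Forecast sprints left using a rolling average of recent velocity."""
    average_velocity = sum(recent_velocities) / len(recent_velocities)
    # Round up: a partial sprint of remaining work still needs a sprint.
    return math.ceil(backlog_points / average_velocity)

remaining_points = 120             # story points left in the release backlog
last_three_sprints = [18, 22, 20]  # points actually completed each sprint
print(forecast_sprints_remaining(remaining_points, last_three_sprints))  # → 6
```

The forecast shifts as velocity and backlog size emerge, which is exactly the point: the plan is an output of the data, not a promise made up front.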

Even large teams will have a few areas that are staffed by only one or two specialists, such as full-screen effects artists and low-level programmers.  They might not even be needed full-time, but when they are, they are often needed by more than one team at once.  Even when we find ways to educate other developers to off-load some work, these specialists can be a source of bottlenecks and dependencies.

Release planning can’t plan away the fact that something might not be completed on time or that you need to hire more specialists or cross-train more.  It just creates transparency, which exposes those problems for you to solve.

Emergent Production Planning
Video game development often has a major component that can’t be as agile as the rest of the work: asset production.  Whether your game has to ship with 32 sports teams/stadiums or a minimum of 12 hours of gameplay, you need to produce a certain amount of content.  This content depends on gameplay mechanics, metrics, technical and financial budgets and possibly a fixed ship date.  This creates a number of dependencies and risks.

This is why developers separate out a pre-production phase, which is highly fluid and agile, from production: a more certain and predictable phase used to develop assets.
Unfortunately, we often treat the transition from pre-production to production as a fixed date.  It should be treated as crossing a boundary from greater to lesser uncertainty.  This boundary is defined by the knowledge we have about production dependencies and risks.  These might include:
  • What is our polygon and texture budget for static geometry?
  • What are the budgets for dynamic objects, including NPCs?
  • What fits on the disc?
  • What gameplay mechanics are assets dependent on (e.g. jump height, AI count, etc.)?
  • How much does each asset cost to make and how long does it take to make them?
These questions must be answered in pre-production through the completion of user stories that bring us closer to the answers every sprint.  Examples of such stories for levels would be:

“As a level designer, I want a low-resolution level that represents the full extent of a production level’s size, so I can experiment with the pacing and some initial vocabulary of the mechanics.”

“As a level artist, I want to prototype a quarter of a level with production-quality assets, so I can learn about the cost and quality of producing full levels.”

Conclusion
With a lot of money and careers at risk, stakeholders often want all questions answered at the start of a game development project, and I’ve never found it acceptable to tell them “I don’t know…we’re agile”.  I’ve found that explaining (yes, sometimes in a document) what all the known risks are, how we’ll identify when each is triggered, and what the mitigation plan for each would be was a good replacement.  This book provides a good reference to that approach.

Dealing with dependencies on an agile team requires the same mindset that all uncertainty does: to attack it head-on through inspection and adaptation.  Reality trumps all plans, sometimes in nasty ways, but we have to embrace it and “waltz with the bears”.

Saturday, December 14, 2013

Scaling teams isn't very safe or easy

When I started creating games professionally in 1993, our largest teams contained about a dozen developers.  The roles of producer and designer were new, and some questioned the need for having either.  Games took a year or two to develop and cost upwards of five hundred thousand dollars.

The times have certainly changed.  Hardware capacity has followed the growth identified by Gordon Moore in 1965, and software and asset complexity have had to keep pace.  Staff sizes and budgets have grown almost exponentially, while development cycle time has been kept to within a year or two.  This has created unique challenges and pressures that collide with human nature and project management realities:

  • Larger groups of people don’t communicate and work as effectively as smaller groups.
  • Adding money and people to keep a project on schedule is one of the worst things you can do to the project and its schedule.

Managers don’t ignore these realities because they misunderstand them; they ignore them because the market demands more games and businesses demand more profit.  Without better tools, these demands put enough pressure on them to make bad decisions.

But tools do exist to deal with these realities and provide answers.  How can we make games with small groups of developers that communicate well?  How can we maintain a schedule with a fixed ship date without compromising quality?

The answers lie in leveraging the strengths of people and communication.  They require us to balance the planning and inspection of an emergent game rather than relying too much on a speculative, but highly detailed plan.  They demand that we focus on the most important parts of our game first and develop them to the point where we can answer questions that must be answered before we move on to less important parts.  They insist we include stakeholders in our frequent observation of the game and plan refinement, rather than pushing them off to a post-alpha wilderness, where their voice is too distant to have much influence.

The answer does not lie in feeling “safe” by creating an assembly line of rules and processes meant to be a one-size-fits-all approach applied to everything from banking software to interactive entertainment.  It requires constant vigilance to balance quality and market needs and to fight chaos and uncertainty head on.  It’s hard work, but it’s rewarding and we love it.

----------
References
http://en.wikipedia.org/wiki/Moore%27s_law
http://en.wikipedia.org/wiki/Dunbar%27s_number
http://en.wikipedia.org/wiki/Brooks_law

Monday, November 25, 2013

An Introduction to Lean for Game Development

What is Lean?
There are many definitions of lean and many different applications of it (the main ones being manufacturing and software development).  Here are three I’ve seen and liked:
  • It’s a way of building complicated stuff
  • It’s a system-driven approach to applying empirical process control
  • It’s a production practice that considers the expenditure of resources for any goal other than the creation of value for the end customer to be wasteful, and thus a target for elimination.
Lean practices, including Kanban, offer improved approaches to doing some work better than a pure approach using an agile framework, such as Scrum.

Why Use Lean?
The work better suited to lean:
  • Is more predictable in workflow and size
  • Requires less exploration and iteration
  • Has a chain of specialist work and handoffs that last longer than a typical sprint to produce something of value.
Work such as level and character production is a good candidate for lean.  This is opposed to the work done in pre-production to explore what makes levels and characters fun and good-looking, which is better suited to the cross-discipline swarming and iteration of a sprint.  The sprint time-box is great for limiting explorations to reasonable chunks of time and cost that can be used to inspect and adapt the game and the plan for it.  This time-boxed approach doesn’t fit as well with more predictable flows of work that have their own cadence, such as levels that might take a month or more to complete or bug fixes to an MMO that need to be tested and deployed daily.

What does Lean look like and how is it different from Scrum?
The major visible difference with lean is how the work is managed by the team.  For example, using Kanban practices, you’ll see task boards that are more detailed and have more policies to manage them.  A typical Kanban board for level production might look like this:


Each column represents a discrete stage of workflow from backlog to done.  The numbers at each stage are called work-in-progress limits.  This setup has several benefits:
  • The work-in-progress limits increase the speed at which assets flow through.
  • The columns quickly show when the flow is piling up too much or if there is something blocking the flow that needs to be dealt with.
  • It allows improvements to be seen in the flow soon after they are introduced.
  • It gives everyone on the team instantaneous feedback on the big and small view of the work.
Rather than measuring how much is completed every sprint, lean practices measure the amount of time each asset or feature takes from concept to done.  This time, called the cycle time, becomes the primary metric used to evaluate the pace of improvements and forecast when the work will be completed.
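Cycle time is simple to compute once each work item records when it entered the pipeline and when it reached done. A minimal sketch (the asset names and dates are invented for illustration):

```python
from datetime import date

# Each asset records when it entered the pipeline and when it was done.
# Names and dates are invented for illustration.
assets = [
    ("warehouse_level", date(2013, 10, 1), date(2013, 10, 29)),
    ("docks_level",     date(2013, 10, 8), date(2013, 11, 1)),
    ("rooftop_level",   date(2013, 10, 15), date(2013, 11, 12)),
]

# Cycle time: calendar days from concept to done for each asset.
cycle_times = [(done - started).days for _, started, done in assets]
average_cycle = sum(cycle_times) / len(cycle_times)

print(cycle_times)    # → [28, 24, 28]
print(average_cycle)  # the primary metric for pacing and forecasting
```

Tracking this number over time shows whether process improvements are actually working and gives a basis for forecasting the completion date of the remaining backlog.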

Most studios using lean apply a combination of Scrum and Kanban practices commonly called Scrumban.  This typically means that the roles and most practices of Scrum are preserved.  The main difference is that instead of having a Scrum sprint, where the planning and review of a number of assets or features are done at a fixed time, Kanban planning and review is done on demand.  Teams will often still want to hold a stakeholder review of what was completed since the last review and hold a team retrospective.  This is referred to as a cadence and is usually set at two or three weeks.

Conclusion
Lean practices work best when there is more predictability and flow in the work.  Scrum and lean share a similar mindset of transparency, inspect-and-adapt practices and a focus on continuous improvement.  They describe different practices and tools that studios find complementary and ultimately customize as teams find better ways to work and make games.

References
Article on Kanban and asset creation
Chapter 7 in my book Agile Game Development with Scrum goes into more detail.

Wednesday, October 30, 2013

Defining Done


At the end of every sprint, a team must deliver a potentially shippable game that achieves a “definition of done”.

What does “done” mean at your studio?  When I ask this question in courses, we often share some anecdotes and definitions such as:
  • “It compiles on my machine.”
  • “I’ll know it when I see it.”
  • “It’s 90% complete”
  • “It’s coded” (or “it compiles”).
  • “You can see it in the game.”
  • “It’s first pass” (out of an undetermined number of passes)

There are all sorts of unique “cubicle level” definitions that each developer or group defines and one or two “project level” definitions that are honored occasionally over the course of a project (usually following lots of crunching and hacking).
One of the initial challenges in adopting agile is defining a more frequent and universal “definition of done” (DoD).  As with all agile practices, it’s emergent: teams inspect and adapt the game and their work and continually improve the DoD.

Non-functional Requirements
A DoD is derived from a set of non-functional requirements.  These are attributes of the game that are not unique features, such as:
  • The game runs at a smooth, playable frame-rate
  • The game runs on all target platforms
  • The game streams off disc cleanly
  • The game meets all Facebook API standards
  • The code meets the coding standards of the studio
  • The assets are appropriately named and fit within budgets
  • All asset changes must be “hot-loadable” into the running game
  • …and so on

Some of these requirements start out as user stories.  If the game is running at 10 frames-per-second, we’ll want to do some work to reach the target frame rate and then ensure it stays there as part of a DoD.  An example of that story might be:
  “As a player, I want the game to run at 30 frames-per-second, so it doesn’t suck” 
This is a good early story to introduce (IMO, any game that runs at 15 fps sucks).

Creating a Definition of Done
Non-functional requirements aren’t a definition of done, since they are not a specific checklist of things for the team to do.  A DoD for the story above might start out with someone playing the game and telling the team if there are any lags.  Later, the team might automate this testing so that the game plays itself (all levels and missions) overnight, logging the time for every frame and passing or failing against a metric such as “the game must run at 33 milliseconds or better for 90% of the frames”.
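The automated pass/fail check described above is easy to script once the overnight run logs per-frame times. A sketch, assuming the log is a list of frame times in milliseconds (the threshold values mirror the metric above; the log format is an assumption):

```python
def frame_rate_check(frame_times_ms, budget_ms=33.0, required_ratio=0.90):
    """Pass if at least 90% of logged frames came in at 33 ms or better."""
    within_budget = sum(1 for t in frame_times_ms if t <= budget_ms)
    return within_budget / len(frame_times_ms) >= required_ratio

# 95 good frames out of 100 clears the 90% bar; 80 does not.
print(frame_rate_check([30.0] * 95 + [40.0] * 5))   # → True
print(frame_rate_check([40.0] * 20 + [30.0] * 80))  # → False
```

Wired into the overnight run, a failure here flags the build before frame-rate debt has a chance to accumulate.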
While it might be tempting to create a massive DoD when a team starts using Scrum, it’s counter-productive because it’ll end up being ignored or gamed.  It’s best if teams build up the definition over time as they hold retrospectives and improve their practices.

Multiple Definitions of Done
Because potentially shippable doesn’t always mean shippable, there needs to be more than one DoD, including “done done”, which translates to “shippable” or “releasable”.  I’ve seen teams come up with four or more DoDs, which cover anything from “prototype done” to “demo-able done” to “done done”.  These identify the quality and content stages that large features, which take longer than one sprint to be viable (called epics), must go through.
I’m OK with one extra “done done” definition, but beyond that I prefer teams to use acceptance criteria for the progress of epics.


I’ll write about these acceptance criteria soon.  For now, I’m done with done.

Monday, October 28, 2013

Potentially Shippable

“At the end of every sprint, a game must be ‘potentially shippable’”

Agile pundits use the phrase “potentially shippable” a lot, but don’t always agree on what it means.  
To me, it depends on the deployment cycle of the game we’re making.  Some games are deployed every sprint and “potentially shippable” for such games means they are ready to ship.  That’s easy!
Other games don’t release every sprint, perhaps releasing once every few years.  Requiring these games to be fully shippable every sprint is unfeasible for a few reasons:
  •  Games often have large minimally marketable feature sets (content and features) that take months to prepare.  Think of a sports title, with all the stadiums, teams and game features.  An NFL game with only six stadiums that only had running plays implemented wouldn’t sell very well!
  • First-party approval processes take weeks to pass.  Neither Sony, Microsoft, nor Apple would enjoy running your game through their testing group every few weeks.
  • Major features take more than one sprint to achieve a marketable quality.  Try shipping an FPS with two weeks of effort spent on online multiplayer!

This difference between potentially shippable and shippable leads to multiple definitions of done, each of which is a checklist of activities that developers need to perform every sprint.  These activities might include ensuring that the game runs at an acceptable frame rate, it doesn’t crash and the code has a high level of quality, etc.  The goal is to reduce the debt of unfinished work throughout development to a predictable level.  The reason is that debt (bug fixing, polishing, optimizing) grows with interest over time and can derail the best of plans.  For example, if your game has to stream levels off a disc in 40 seconds or less, wouldn’t it be best to test this before you’ve created all your levels?  Have you ever been on a team that discovered they had to chop half the textures out of their levels a month before they were supposed to ship?  I have, and it wasn’t fun!  Having such a test as part of a definition of done is a good idea.
My next blog entry will cover the definition of done and how it guides the creation of potentially shippable builds and evolves as the team’s development practices mature.

Friday, October 18, 2013

An Introduction to Situational Leadership


Imagine you are the leader of a team of developers creating a new mobile game.  The team has worked well together, but recently you’ve noticed that two of the senior members are butting heads.  Today, they are really getting into it, arguing in front of the rest of the team.  You can tell that this is having an impact on the morale and effectiveness of the team.

Do you:

  • Meet with both of them and tell them how they can resolve their conflict and make sure they do it?
  • Speak with them separately about the problem, and then get them together to discuss it.  Encourage them to work better together and support their attempts at improving collaboration?
  • Talk to them separately to get their thoughts, then bring them together and show them how to work out the conflict between their ideas?
  • Tell them you are concerned about the problem and the impact of it on the team, but give them time to work it out by themselves?
Which is the best answer?  It depends on the team.  Teams have different levels of maturity, and each must be matched with the leadership style best suited to it.  By “maturity”, I don’t mean how grown-up they act; we game developers don’t usually care much about that.  In this context, maturity means the skills the team applies to work out problems among themselves and their performance as a team.  We want teams with high levels of maturity because they are more effective and enjoy working together the most.  This is where situational leadership comes in.

Situational Leadership Theory

Developed in the late 1970s by Paul Hersey and Ken Blanchard, Situational Leadership is a set of principles that helps guide how leadership is applied to teams of differing maturity levels.  They defined four leadership categories:

  1. Directing - A leader defines roles and tasks for developers and the team.
  2. Coaching - A leader still sets the direction for the team, but coaches the team in how roles and tasks are determined, allowing the team more freedom in tracking their work.
  3. Supporting - A leader allows the team to make decisions about roles and tasks, but still shares in decision-making and progress monitoring.
  4. Delegating - A leader remains involved in monitoring progress, but the team is fully self-organizing in their roles, practices and work.


The goal of a leader is to help teams reach higher levels of maturity, shifting their leadership approach from directing through delegating as the team matures:

The Situational Leadership Model

This is just a model, not reality, but it is an effective one for guiding leaders to apply different standards to different teams, just as a parent applies different parenting principles to a teenager than to a toddler; at least I wish my parents had.

Situational Leadership Applied

Take a look at the answers to the question raised above and see how each of them might apply to teams at different maturity levels.  A newly formed team will most likely need the directing (S1) category applied, while others that have been together longer and are working well can have later categories applied.

The challenges to this are identifying the maturity level of the team, including what model to use, and coaching teams to reach higher levels of maturity.  Future articles will address those topics.

Wednesday, April 03, 2013

Mixed Asset Production Pipelines & Kanban

After my GDC presentation on Kanban for production, a question came up about having a board that models a value stream producing a variety of assets.  It was a good question, but I didn't have the time to answer it as thoroughly as I'd like, so I thought I'd do so here.

In the presentation, I started by describing a simple flow, such as the flow of drinks at Starbucks.

Starbuck's Kanban

If you are a coffee drinker who frequents Starbucks, like me, you probably appreciate that you don't have to wait for all the lattes, cappuccinos, etc. ordered ahead of you to be made first.  This is because the barista is working on those exclusively, while the cashier can pour your coffee directly.  Looking at the Kanban board above, my coffee goes from the "order" column to the "leave" column directly.

This benefits everyone.  As I mentioned in the talk, a key metric for Starbucks is the customer cycle time: the amount of time between walking in the door and walking out with your drink.  The critical path for coffee drinkers and latte drinkers isn't the same, but it isn't entirely separate; much as I personally would enjoy it, there is no separate cashier line for coffee drinkers.  Starbucks has chosen not to optimize specifically for us, for good reason.

This is similar to the approach you might use for mixed asset types.  Although every asset will have a large variation in effort needed and a partially separate path, measuring every asset's cycle time will still give us valuable information.  The goal isn't to achieve a uniform cycle time for all assets: people who order lattes should expect to wait longer at Starbucks than us super-efficient coffee drinkers.

Let's look at the Kanban board that shows various assets going through a production pipeline:
A mixed asset Kanban board

This board includes assets that might need particle FX or animation applied to them, or neither.  The same important principles apply.  We're going to measure the throughput and limit the work-in-progress (WiP) regardless of which steps are taken.  Some assets will skip some steps, like me skipping the barista.  Doing this can improve the entire system.  As a coffee drinker, I don't care how quickly the barista can make a latte, but I greatly appreciate it when the under-tasked barista helps fill coffee orders.

This can happen in an asset production pipeline as well.  As we measure throughput, we can create similar policies: Starbucks has far shorter coffee cycle times than barista-drink cycle times, and that is fine for everyone.  The key is to measure cycle time for each asset class and explore where and when improvements for one class can shorten its cycle time without impacting the others.
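Measuring cycle time per asset class is just a grouping of the basic cycle-time measurement. A sketch in the spirit of the coffee/latte split, with all numbers invented for illustration:

```python
from collections import defaultdict
from statistics import median

# (asset class, cycle time in days) -- numbers invented for illustration.
completed = [
    ("prop", 3), ("prop", 4), ("prop", 2),
    ("character", 15), ("character", 18),
    ("level", 30), ("level", 27),
]

# Group completed assets by class, like separating coffees from lattes.
by_class = defaultdict(list)
for asset_class, days in completed:
    by_class[asset_class].append(days)

# Different classes are expected to have different cycle times; what
# matters is tracking each class against its own history.
for asset_class, times in by_class.items():
    print(asset_class, median(times))
```

Props finishing in days while levels take a month is not a problem to fix; a level class whose median creeps upward sprint over sprint is.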

Most production pipelines are far more complex than this, but the same principles apply.  Start by simply modeling what you're doing now.  Then measure throughput and reduce WiP.

...and don't be surprised if, as you try to improve your existing overloaded heterogeneous pipeline, the conclusion you arrive at is that the assumptions of the pipeline need an overhaul!

Saturday, March 23, 2013

Sprint commitments and forecasts

There has been much debate about whether a team, at the start of a sprint, commits to a sprint goal, or merely forecasts what they currently understand will be completed by the end of a sprint.

It's both.  In sprint planning, the team creates an initial sprint backlog, which is a forecast of the tasks, or bits of work, that they feel represents the best path to achieving the sprint goal.  The form this takes is up to them (hours, days, thrown chicken bone patterns, etc).  They will refine how they create the backlog over time to improve the value of their forecasts.

The commitment part is more about a commitment to do their best to achieve the goal while maintaining quality.

The problem is that very often this commitment comes into conflict with the initial forecast.  For example, one time I estimated it would take two days to implement drift-racing physics (with a handbrake control) into our vehicle dynamics model.  I was able to do this, but it took another week to make it "fun", much of that time sitting next to a designer.  This couldn't have been predicted and we could have stopped after two days and said "sprint goal achieved", but was it really?

At which point do we say, "it's good enough, time to move on"?  That can't come from sprint planning.  It has to come from the daily conversation with the team (including the product owner).  Sometimes this results in the forecast growing and the team delivering a part of the goal that meets the quality bar.

This definition can scare managers who first hear about it, and it's where they and teams struggle at first.  This often comes from a culture that isn't prepared to trust developers to judge or achieve quality on their own, and from teams' inexperience with being given this control.  So the forecast becomes the commitment, and the teams focus on making the hours look good rather than the game.  It takes time to establish the balance.

A commitment to quality at the expense of the forecast is the correct choice.  It's very easy to cut quality to look good on paper, but it will bite you in the tail end.  This doesn't mean we pursue the highest possible quality at all costs; quality has to be arbitrated by execution and measurement, and balanced with the needs of the customer.  My favorite example of "quality gone wild" comes from another driving game.  As the prototypical product owner, I encountered an artist modeling window flower boxes throughout the city players were to race in.  These required thousands of polygons and detailed textures to render.  The flower boxes were beautiful and added much color, but based on the cost of creating and rendering them, they couldn't be justified, especially from the point of view of the player, who would be passing them at over 90 MPH.

So, we killed the window boxes, but it was a good lesson on our team's path to learning how to build "95 MPH art".

Saturday, March 09, 2013

Agile in Embedded Software Development


In my spare time, I build various small devices using Arduino hardware or help my sons create small games.  I enjoy building devices because I had a background in hardware development as well as software development before I became a full-time game developer 20 years ago.

Embedded development benefits as much from agile practices as pure-software development does.  Here are some tips I recently shared:

  • Find ways to iterate the hardware as well as the software.  We found that shortening the time between software iterations and hardware bring-up paid dividends, despite the cost of additional hardware development.  More breadboarding/prototyping was a big benefit.
  • Find ways to implement unit testing of the hardware as it's brought up and incrementally improved.  Using an example device that controls a light's brightness: have a test that sends brightness commands to the hardware and lets someone verify that the hardware is performing as expected at each level.  Automation of this is nice, but not always possible.
  • Find ways to ensure that interfaces are established, communicated and, if changed, the changes are easily and quickly communicated between hardware and software developers.  This is usually not a big problem on small teams, but it is with larger teams.  So, for example, if the hardware changes the brightness control from an analog interface to a digital one, the change is reflected in the code and tested quickly.
  • Encourage hardware and software engineers to overlap as much as possible.  I like the phrase "generalizing specialists".  One tip: don't play paintball as a team building exercise.  We did that.   The hardware engineers teamed up, figured out how to increase the shot velocity of their paintball markers, and gave all of us software engineers painful welts.  It wasn't a good team building exercise. ;)
  • If you have any sensors, motor controls, transmitters, receivers, etc., that have to interface with the real, noisy world, try to test these as subsystems as early as possible in the target environment.  One time we went out to sea on the first test of an underwater modem which we had simulated in Matlab and an enclosed water tank.  The temperature inversion layer, multi-path and Doppler effects of the actual ocean environment demonstrated that we were much farther from completion than we thought.  It was a bad day.


Monday, February 04, 2013

Leading Creative, Self-Organizing Teams

On March 1st, I'll be participating in a full-day master workshop in Montreal called "Leading Creative Teams for Productivity, Innovation, and Happiness" with Christopher Avery, Jason Della Rocca and Scott Crabtree.  My part will be describing how creative teams, which have been shown to be the most productive when intrinsically motivated, can have effective leadership.

Agile frameworks emphasize "people over process"; treating people not as machines on an assembly line, but as individuals motivated by autonomy, mastery and purpose.  These motivations cannot be commanded.  They must be cultivated.

This suggests a metaphor I find useful: a good leader of creative people is more like a gardener than a process mechanic.

A gardener cannot force a tree to grow.  They nurture its growth.  In this metaphor, the roots are the intrinsic motivations.  Leaders cannot change these.  They cannot be forced.  They are beneath the ground: always there.

The trunk is the organization's culture.  Like a tree's trunk, this cannot be pruned or changed overnight.  Some cultures are weak and will not support good growth.  Some are strong, enriched from the start, and provide a connection between motivated people and the fruits of what they build.

The branches are process and systems.  These result from the culture, but are also directly manipulated by the leader.  Unchecked, process can grow into an unwieldy tangle of practices that sap the productivity, motivation and passion of developers.

Gardeners cannot force proper growth.  Most of what they do is subtractive (pruning, weeding, etc.) and indirect (there's got to be many good fertilizer jokes here).  The actual growth results not from the gardener, but from nature.  This is where the concepts of self-organization and complex adaptive systems come in.  Like all other organisms, human organizations are complex adaptive systems.  We self-organize constantly, and mostly unconsciously.  We respond to processes, systems, and rules in complex, often surprising ways.  This is why there are no hit game factories, and it should be no surprise that the studio that comes closest to pumping out hit games like widgets off an assembly line embraces self-organization.

So if you are in Montreal on March 1st, please join us to explore how leadership, innovation, happiness and productivity can all grow together.