Thursday, January 27, 2011

Design as questions, development as answers

In Scrum workshops, I often ask developers if they have ever compared the game that they last shipped against the original design document.  The usual answer is that, except for the title, much of the design changed during development.

We are all accustomed to the idea that a design document is a starting place or a way to win over the stakeholders, but it is a poor map for development.  The problem is that, lacking a better map, we set off on a development journey that often runs longer than planned or takes us to places we didn't want to go.

Studies have shown that excessively detailed specifications can create a false sense of certainty and lead teams astray.  A better approach is to acknowledge that designs are speculative and need to be proven out.

Other industries faced this problem long before us.  Take, for example, the Boeing 777, the largest twin-engine plane ever built, which revolutionized airplane development and production at Boeing (it was the first airliner fully developed on computers).

One of the many innovations in the development of the plane was to create more concurrency between design and manufacturing (see concurrent engineering).  Previously, manufacturing had to wait for all the designs to be complete before it started.  In traditional airplane development, designs are done years in advance of metal bending.  Often a problem with a design wouldn't be detected until lots of metal had been bent and subsequently had to be scrapped, costing millions.  This problem should sound familiar to game developers!

To address this classic problem, the 777 program integrated manufacturing closely with design.  Before the 777 there were only two states for a drawing: released and unreleased.  It was either completely done or not at all.  What they did was add states in between.  Each of these states had to correspond to a level of manufacturing release before the drawing could move to the next state.  Advancing a design in steps reduced its uncertainty, catching problems and introducing improvements far earlier.  This maximized the combined performance of design and manufacturing and contributed to the 777 being developed in record time.

For games, big design up front (BDUF) has a similar problem.  A design is either implemented or not, and most of the key design decisions are made without all the information.  We often discover, in production, that we've overlooked some key technical issue that won't allow a feature design to be realized as we had hoped.  We waste effort, throwing assets away and rewriting code to "make it work".

Can we do something similar to Boeing?  Yes.  As discussed earlier, we need to explore our designs and seek out what works and what fails.  Rather than introducing a fixed number of stages into our designs, design should raise questions about core features that then must be answered by development before we move on.

Some of the questions to ask about each core feature:
  • Is it something we want?
  • Is it fun?
  • Where is it used?
  • What are the technical risks?
  • What is its development cost?
  • What does it cost in production?
When a design document is written, we don't know the answer to most of these questions, but the true answers that emerge have scuttled many schedules and budgets.  The start of a project is the worst time to try to answer such questions because that is when we know the least!

Take, for example, "fully destructible environments".  How many times have we seen this feature in games that don't live up to its promise?  As a developer on a project with this feature goal, I saw a surprising amount of production time consumed by it; the feature was eventually killed, wasting all the development effort we had spent in pre-production.  We should have explored the question of production cost well before production started.

Much of development is an investment in gaining knowledge about what to build, so making the most of development effort is about gaining the right knowledge, or asking the right questions.  In the context of Scrum, these questions drive the priority of items on the backlog, the order of work done in a release and a specific goal for each sprint.
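As a rough illustration of how such questions can drive backlog order (the scoring scheme and numbers below are my own assumptions, not part of Scrum itself), items could be ranked so that valuable features with the most unanswered questions surface first:

```python
# Hypothetical backlog items scored by business value and by how
# uncertain (unanswered) their core questions still are, on a 1-10 scale.
# Sorting by value * uncertainty pulls the riskiest valuable work forward,
# forcing its questions to be answered early.

backlog = [
    {"item": "destructible environments", "value": 8, "uncertainty": 9},
    {"item": "basic character movement",  "value": 9, "uncertainty": 2},
    {"item": "weather effects",           "value": 4, "uncertainty": 6},
]

ordered = sorted(backlog, key=lambda i: i["value"] * i["uncertainty"],
                 reverse=True)
for entry in ordered:
    print(entry["item"])
```

With these made-up scores, "destructible environments" lands at the top: it is both valuable and highly uncertain, so its questions get answered first, while well-understood work like basic movement can safely wait.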

Tuesday, January 25, 2011

Agile Game Development - An introduction

A one-hour introductory webinar on using agile for game development.  The PDF version of the slides is here and the PowerPoint source for them is here.

Sunday, January 23, 2011

Justifying Research Teams (or not)

Have you ever been on a project that was significantly delayed because the new technology or gameplay took longer than expected?  Many of us have, and it should come as no surprise that new tech, mechanics or complex, exploratory asset creation schedules are hard to predict.  Yet they are part of projects with tight delivery dates.

These elements are often seen as the "critical path" of a project, which means that if they are delayed, the whole project is delayed.  Competent management of projects includes identifying such critical paths and prioritizing the work on them.

However, if there are many possible solutions and therefore a lot of uncertainty along a critical path, or if a number of projects share the same solution, the risk is often very high.

A common way to address this is to have a separate research team explore a solution well ahead of the start of a project, or at least ahead of the point where the path becomes critical.  There are some advantages to this.  The first is that such a research team has the time and resources to fully explore solutions and pick the one that is best for the product, rather than rushing one into production due to schedule pressure.

My favorite example of this is the development of the Toyota Prius.  Toyota's goal was to create a car that would appeal to "green" consumers, but the choice of engine technology was not certain: a hyper-efficient gas engine, an electric engine and a hybrid were all candidates.  The challenging goal was to deliver the car to market in half the typical development time.  The engine was their critical path.

Toyota could have chosen an engine technology up front and focused their efforts on making it work.  Most companies, faced with this tight schedule, would have done so.  Instead, Toyota chartered three research teams, one to study each engine technology.

Months later, the all-gas engine team found that although they could improve the efficiency of the engine, they couldn't deliver enough of an improvement to attract the green market.  The electric engine team found that they could achieve the efficiency, but they couldn't deliver a car at a low enough price to entice buyers.  Although people want to be green, there is a limit to how much they will spend.

The hybrid engine team was able to build an engine within the mileage and cost targets, and it was chosen and refined for production.

When hearing this story, many managers are unmoved.  "We can't afford to have multiple teams researching multiple solutions!" they say.  In response, we might ask, "How much does it cost to delay the entire project by months because the critical path was stretched out?"  You can easily spend ten times more on a delay for an 80+ person team than you would have spent researching up front with a much smaller one.
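The arithmetic behind that ratio can be sketched with hypothetical figures (the per-person cost, team sizes and durations below are illustrative assumptions, not data from any real project):

```python
# Illustrative cost-of-delay comparison using made-up numbers.
# Assumptions (hypothetical): a fully loaded cost of $12,500 per
# person-month, an 80-person project team delayed 3 months by an
# unresolved critical-path risk, versus a 4-person research team
# working 6 months up front to retire that risk.

PERSON_MONTH_COST = 12_500  # hypothetical fully loaded cost, USD

delay_cost = 80 * 3 * PERSON_MONTH_COST    # whole team burning for 3 extra months
research_cost = 4 * 6 * PERSON_MONTH_COST  # small team, 6 months up front

print(f"Cost of delay:    ${delay_cost:,}")     # $3,000,000
print(f"Cost of research: ${research_cost:,}")  # $300,000
print(f"Ratio: {delay_cost / research_cost:.0f}x")
```

Even with generous slack in these guesses, the asymmetry holds: the small team's entire budget is noise next to a few months of the full team's burn rate.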

The concern about cost is not the only barrier to such teams.  There are at least three others:
  1. Proper focus - Ensuring that the team is working, in a very focused way, on the right thing and not just on an interesting project that will not directly benefit products.
  2. Raiding - Guess what happens when a project runs into trouble and wants to add bodies (often a bad idea in itself).  They raid the research team!
  3. Transferring the solution - How does the project adopt the solution?
Each problem has a solution that is unique to a company, but often the best solution to the last problem is to have the researchers join the project until the new solution is in place and the project team can take it over.

Research teams are not always the answer.  There are pros and cons to having them.   If uncertainty is low, it's often better to have research be part of the project.   You have to judge how much uncertainty you have on your critical path and therefore how much you are willing to gamble on a delivery date. 

Saturday, January 15, 2011

Valuable failure

When exploring a new design, we want to generate information about its value.  Is a design element going to add to the product?  Is it going to be something the customer wants?  Is its cost offset by a greater value?  All of these are uncertain until we try it.

Uncertainty points to a lack of information.  As a result, the work we do should focus on generating the highest quality information possible.  In many cases, though, we instead focus on avoiding failure, and in doing so we limit the information generated.

Take the simple example of the high/low game, where one person has to guess a number, say between 1 and 100, and the other person, who knows the number, tells them whether each guess is too high or too low.  What is the first number guessed by the "expert" high/low player?  It's 50.  Every time.  Do they expect to pick the right number on the first try?  No.  So why pick 50?  As it turns out, guessing 50 generates the highest quality information.  Picking 90 on the first guess has the same chance of being correct, but on average it eliminates far fewer possibilities than guessing 50 does.
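The "expert" strategy is just binary search.  A short sketch (my own illustration, not from the original post) shows that always splitting the remaining range in half pins down any number from 1 to 100 in at most 7 guesses:

```python
def guesses_needed(secret: int, low: int = 1, high: int = 100) -> int:
    """Count high/low guesses using the midpoint (binary search) strategy."""
    count = 0
    while True:
        guess = (low + high) // 2  # always guess the middle of the range
        count += 1
        if guess == secret:
            return count
        if guess < secret:
            low = guess + 1   # "too low" discards the bottom half
        else:
            high = guess - 1  # "too high" discards the top half

worst = max(guesses_needed(n) for n in range(1, 101))
print(worst)  # 7 -- each answer halves the range; log2(100) is about 6.6
```

Every answer, right or wrong, cuts the remaining possibilities roughly in half; that is why a "failed" guess of 50 is still the most informative move available.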

Not all design ideas are as uncertain as picking a number between 1 and 100, but there is always uncertainty.  Often, however, we create goals that ignore the uncertainty and try to prove the first guess correct, and as a result we generate less information.  This is usually because our work cultures reward correct guesses and punish incorrect ones.

A better approach is to welcome information-generating failure as much as success.

(Read more about organizational design and risk management in "Managing the Design Factory" by Donald Reinertsen.)