Tuesday, November 29, 2011

Why we should stop saying "vertical slices"

The other day I came across this blog post by Ron Gilbert called The Vertical Slice, in which he rails against the creation of vertical slices.  The following quote struck me:

"Vertical slices might work in a medium where you start at the beginning and grind though in a fairly linear fashion and what comes out is 90% complete.  Maybe writing a novel works this way, but making movies and games do not.  They are an iterative processes.  You build foundations and the build up from there."

I love his image of the Mona Lisa's vertical slice.  But Ron is using a different definition of vertical slice than I've always used.  To me, a vertical slice means that we develop a feature to the point of knowing its value and use that knowledge to adjust the plan.  The point being that the plan won't tell you how fun something is: the game will.

Ron's definition is that vertical slices emerge from a plan that defines all the slices up front.  This might be a better approach than waterfall from an engineering point of view (fixing bugs along the way, etc.), but it abandons the benefit of iterating on a plan with a working game.  It doesn't surprise me that he's against that.

So maybe we should stop using this confusing phrase.  Maybe we should call it a "game increment", or something.  I'm open to suggestions.

By-the-way, here is how portraits were iterated on:

Do a Google image search on "unfinished portraits" and you'll see a lot of these, all with the heads nearly completed and little else in the portrait done.  Can you guess why?  It has something to do with prioritizing risk and stakeholder value: things often spoken about in agile circles centuries after these were painted.

Also, da Vinci iterated on the Mona Lisa as well.

Tuesday, November 01, 2011

More on specialization

My recent post on specialization has ignited a bit of argument, with people on one side saying that I've "come out against learning" and, on the other, specialists who didn't like my use of their UI/database specialties as a poor example (if I die in a suspicious UI/database-related incident, my post is to blame).

The point of the post was that we should encourage "multi-learning", or more cross-specialization, without seeking to homogenize all specialization.  The point of the "anti-specialists" is that a graphics programmer should learn more about physics programming.  There are benefits, personally and professionally, to doing this.  There are solutions common to both and insights that crossing boundaries can create.  I agree.

On the other side, many deep specializations take, as Malcolm Gladwell wrote, over 10,000 hours of practice and an innate skill to achieve mastery.  I use the example of musicians because, as an untalented amateur musician, there is no way I am going to become skilled enough to write even 10% of the music that a game needs.  However, having worked side-by-side with them and learned their workflow, I can still help them do their job.  Having learned a bit about mixing and composition, I have widened my world and appreciate game soundtracks a bit more.

We often specialize far too much.  For example, some studios have specialized QA to the point where a programmer just has to hand off barely compiling code and someone else integrates it.  QA should be more the responsibility of everyone creating the game.  Everyone on the team should care about quality.  It shouldn't be a role.

We should also learn more about every surrounding role, every day.  We each shouldn't be just a specialist cog in a development machine.  We should be "game developers" first and [fill in the blank] specialists second.  How many games have you seen or worked on where it's apparent that one area of specialization dominated and ignored the rest of the game?

Saturday, October 22, 2011

Scrum prohibits all specializations?

A recent conversation about a team staying together long-term, etc prompted me to ask: "what if the team no longer needs a musician?"  The responses stunned me.  Some insisted that the role "musician" is not part of Scrum and that they are not part of the team.  Everyone should learn to make music, write code, create art, etc.

Now, I understand that Scrum has been applied mainly to software products and that the elimination of "specialties" means that the database programmer, UI programmer and QA engineer should, ideally[*], be able to perform each other's roles equally.  This is valid.  But the idea that this extends to separate functions such as music, programming and drawing makes no sense.

In "The New New Product Development Game", the landmark 1986 Harvard Business Review article that first applied the word "Scrum" to product development, authors Takeuchi and Nonaka observed the benefit of cross-fertilization and multifunctional learning across specializations.  These principles directly apply to mixed teams of artists, musicians, designers and engineers working together to create better solutions than a "relay race" of handoffs.  For example, if I share a sprint goal with a musician, I become aware of their workflow and needs.  I can help them solve a technical problem with the audio code and they can make the game sound great.

You can be sure that in my book and courses, I don't teach that functions should homogenize on a team.  There is a role for a musician on a "Scrum team".  It's called a "team member".

[*] I've added the term "ideally" after posting this the first time.  The reason for this "ideal" overlap of specialization is to promote multi-learning across all specialties.  The goal is to have a shared understanding of each other's roles so that the team can continually improve how they work together but not to homogenize all specialization.

Saturday, September 24, 2011

Homebrew build status traffic light

Have you ever used a build light indicator (a light, software utility or sound emitter) to inform the team about the current status of the build?  Our team found using one of these, in conjunction with continuous integration and automated testing, to be a great help toward achieving 98% build stability.  I have a little hobby/side project going to build a better one, starting with a $30 toy traffic light (see picture) and embedding some Arduino electronics that will allow it to be easily set up and used.

What the lights mean:
  • Green means the build is working
  • Red means that it is broken
  • Yellow means whatever you want.  The server is building or a test has failed, but we're giving you X minutes to fix it before the status goes to red.
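The light states above amount to a small piece of logic.  Here is a minimal sketch in Python; the grace period constant and the function's parameters are my assumptions, not part of the actual device:

```python
# Hypothetical sketch of the light-state rules described above.
# GRACE_MINUTES stands in for the configurable "X minutes".
GRACE_MINUTES = 10

def light_color(build_ok, minutes_since_failure=None, building=False):
    """Map the current build status to a traffic-light color."""
    if building:
        return "yellow"  # the server is building
    if build_ok:
        return "green"   # the build is working
    if minutes_since_failure is not None and minutes_since_failure < GRACE_MINUTES:
        return "yellow"  # grace period: X minutes to fix it
    return "red"         # broken, and time is up
```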
I'll open source the hardware and software.  I might even sell a few at cost while I'm having fun making them.

Currently, the interface is an ethernet connection that uses DNS & DHCP to plug into your current network with little setup required.  The traffic light will appear as a server that accepts simple commands that change its state.  An optional audio cue will sound indicating a status change.
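Since the command set isn't finalized, here is a hedged sketch of what a client might look like: a one-line state command sent over TCP.  The port number and command words are purely illustrative assumptions, not the device's actual protocol:

```python
import socket

def set_build_light(state, host="buildlight.local", port=8123):
    """Send a one-word state command to the traffic light server.

    The protocol here (plain-text "GREEN"/"YELLOW"/"RED" on port 8123)
    is a guess at what such a device could accept, not a spec.
    """
    if state not in ("GREEN", "YELLOW", "RED"):
        raise ValueError("unknown light state: %r" % state)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((state + "\n").encode("ascii"))

# A build server hook might call it like this:
# set_build_light("RED" if build_failed else "GREEN")
```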

Potential future features:

  • Data logging the events and overall build stability metrics
  • Server page statistics (graph of stability, etc).
  • Simple display for setup options.
Please send suggestions about what you'd like a device like this to do and how you would like to interface it with your build server software.

Saturday, September 03, 2011

Accountability vs. Responsibility

[Cross-posted from Gamasutra]

I read this great quote from Gabe Newell in this Gamasutra article:
“Yeah, nobody can ever say "that's not my job." Nobody ever gets to let themselves off the hook. If there's a problem, you've gotta fix it.”

It demonstrates a high level of accountability at Valve, not just responsibility.  The words accountability and responsibility are often used interchangeably, but they don't mean the same thing, and the difference is important for game development teams.

Responsibility is assignable and forward looking.  For example, as an artist, I might be responsible for creating a model.  As a programmer, I might be responsible for making the character jump.

Accountability is backward looking.  Both the artist and programmer should be accountable for the character correctly jumping over the model.  Unfortunately, a lack of accountability might lead both to ignore the problem as “not their responsibility”.

Accountability isn’t as easily assignable as responsibility.  It’s more intrinsic.

Focusing exclusively on the assignment of responsibility tends to tell developers that those making the assignments will take care of all the cracks that open up between areas of responsibility.  A balanced approach, delivering a part of a game and being accountable for the fun of it, is harder.

The hard part is how to grow accountability in a studio culture.  How do you grow it in yours?

Friday, August 19, 2011

What makes a good “visionary”?

[Cross-posted from Gamasutra]

There is a lot of talk about the visionary for a game, the person who creates and guides the vision through development.  Who is the visionary and what do they need to do to make their vision come to life?  I’ve been a project manager…not a product visionary, but I’ve worked with great visionaries and poor visionaries.  These are my impressions and questions:

The role of a visionary on a creative project is an essential and demanding one.  Many companies that consistently produce great products owe much of their success to their visionaries: Apple has Jobs, Pixar has Lasseter, Nintendo has Miyamoto, etc.  But visionaries are nothing without talented teams to realize their vision.  Vision needs to be communicated, reinforced, inspected and adapted to the emerging reality of the game.  This is the visionary’s fundamental responsibility to the team.

A visionary must be demanding. They have to:
  • Unflinchingly question and test their vision early against the actual game.
  • Not allow work-in-progress (partially completed or unproven work) to drag out too long.
  • Call out substandard work immediately.
  • Ensure that the stakeholders are aware of progress and are able to safely air feedback.
  • Own a black turtleneck sweater
Successful visionaries have often been described as demanding, uncompromising, even brutal in their rejection of work that does not fulfill their vision.  Lasseter resets movie projects late in development because the story or characters aren’t right, Jobs throws tantrums when a design isn’t intuitive and Miyamoto cancels games that don’t "find the fun fast".  These are all examples of strong reactions to an emerging product that doesn’t live up to a vision.  Does this mean that a visionary must be a tyrant?  I hope not.  There are as many different styles as there are personalities.  The key, it seems, is to maintain integrity to a vision and to “course correct” towards the best game.

Good visionaries should be willing to compromise because no vision is perfect.  Compromise is necessary to refine a game or to react to the unexpected, but compromise goes bad when the integrity of the vision is the thing being compromised: when the visionary assumes that some poorly performing part of the game "will be fixed later" or that a bad mechanic "will be fun someday".  If the storyline isn't working, if the animation doesn't look right or if the system is sluggish, the visionary must demand correction.  However, when the visionary is afraid to hurt the team's feelings or needs to hit an arbitrary milestone date, the wrong game is created and the team must eventually face the mad scramble to cobble something together when time runs out and the vision is sacrificed for a ship date.

Things we know:
  • Black turtleneck sweaters aren’t enough.
  • Methodology can’t automate the role.  Games will never come from an assembly line.
  • Vision, alignment, talent and leadership are all necessary elements of any great game and can’t be separated. 
  • Do visionaries have to be mentors?  Some of the best visionaries work with creators to demonstrate how to best work.  I’ve read about Lasseter sitting down with individual animators to teach them how to animate the eyes of a particular character.  Not all visionaries do this though.
  • Do visionaries need a big job title?  It would seem they do in most cases, but is it absolutely necessary?
  • How thin should they be spread?  Can vision be taught?  Pixar’s production is limited by how many visionaries they have working for them.  Brad Bird couldn’t even take a few days of vacation before he was recalled to head up Ratatouille.  Steve Jobs’ illness has everyone wondering whether Apple will tumble when he leaves.
  • How does a studio identify and handle a poor visionary?  It’s easy to promote the wrong person to the visionary role, but hard to remove them.  A bad vision will kill a game or even a studio.  A bad visionary will blame the team and not the vision.

Monday, June 06, 2011

E3 and agility

Ah, E3.  I haven't been to E3 since 2006, after which it was downsized due to how bloated and expensive it had become.  I had attended every one since it started in 1995, during the exciting early days of 3D hardware, through its steamy times in Atlanta and then back to Los Angeles, where it continued to grow beyond reason.

E3 was always preceded by a time of focused crunch: trying to make a box of gameplay parts come together into a working, fun game, followed by days of overwhelming noise, nights of excess and then a period of recovery, where we tore out all the baling wire and duct tape that held the game together.

There were a lot of bad things about preparing for E3, but the best thing about it was that your game had to demonstrate fun to potential customers.  It was a time when the entire team focused on making the game fun, rather than keeping up with a schedule for its own sake.

I always felt that we needed more E3-like goals, but without the crunch, hacks and recovery.

Then I learned about Scrum and the sprint goal: to demonstrate a game that's more fun than the one shown at the end of the previous sprint.  This sounded like having an E3-like build every 2-3 weeks without all the bad parts.  However, this isn't always the case for Scrum teams.  There are a couple of main reasons teams give to explain why:

"It takes too long to create a stable build that runs at decent frame rates"

Very often, teams that start Scrum have an existing process that requires weeks of time to lock down new feature additions and apply testing and optimization to achieve a playable build.  Scrum puts pressure on them to find ways of reducing this overhead.  It can take months or years, but eventually better practices and automation help the team keep the build stable and optimized enough to demonstrate value every sprint.

"It's impossible to always show improved value every sprint.  Sometimes it takes months for all the parts to come together to make a better game"

This is true.  Often it takes multiple sprints to demonstrate playable value on core mechanics with decent assets (i.e. running around shooting capsules in a flat-shaded, Lego-block-style environment isn't a lot of fun).  However, many times this attitude is taken too far.  As a result, releases become a series of fragmented sprints, which produce parts that are integrated at the end, and the fun only emerges once every three months:


There are a few main problems with this:
  • Late integration often shows that some design assumptions were incorrect, but the team is out of time and can't revisit them.
  • The team often lacks a shared vision of what they are building during the release.  They focus more on completing tasks than adding fun.
  • The end of the release starts to look like the weeks leading up to E3: Lots of crunching and hacking just to "get the build working".
Scrum teams (including the ScrumMaster and Product Owner) should continuously push to shrink this once-a-release play and adapt cycle.    If they do, then sprints and integrations during the release will start to look more like this:

Eliminating TPS Cover Sheets

In fact, this cycle can never be too short for a Scrum team.  This is not to say that it will eventually lead to the game being improved every hour of every day.  Rather, this attitude continuously influences the team to eliminate artificial process barriers (such as excessive baking time, long build practices, etc.): all those expenditures of time and effort that are not spent directly improving the game.

E3 goggles

Producing an "E3 ready" build every sprint might not always be possible.  Tying up all the loose ends (making sure the build runs as long as possible on the target platform) takes a lot of effort, but teams should explore ways to find fun, build knowledge, reduce risk and refine cost projections every sprint.  We benefit from looking at the game through "E3 goggles" more often: evaluating it from the point of view of E3 attendees.

Monday, March 07, 2011

Team motivation and the role of the ScrumMaster

Before reading this article, watch this 20 minute presentation by Dan Pink at TED (if you haven't already):

Take-away: Motivated teams perform FAR better than unmotivated ones, and creative workers respond best to intrinsic motivation.

Research has shown that the following three factors influence intrinsic motivation the most:

Autonomy - The urge to direct our own lives
Mastery - The desire to get better at something that matters
Purpose - The yearning to do what we do in service of something larger than ourselves

We want motivated teams.  We want to be on motivated teams.  We want to be motivated ourselves.

Motivation and the daily stand-up meeting

The daily stand-up is a window into the motivation level of the team.  Stand-ups with motivated teams are noisy, complex, often chaotic, and information rich.   They often seem like football huddles when the score is tied and there are no timeouts left in the final two minutes of the game.  There is humor, intensity and a sense of "being in it together".

The stand-up meeting for an unmotivated team is different.  It's common to sense the boredom or an impatience for it to end.  It often feels like a group of individuals reporting to the ScrumMaster, who is writing everything down, or even worse, with everyone sitting at a conference table looking at a projection of a spreadsheet on a wall and only paying attention when it's their turn to report.

A tedious daily stand-up meeting is a symptom that the team lacks one or more of the factors of intrinsic motivation.  It's often easiest to examine the team's autonomy and the practices of the ScrumMaster, whose role is to foster and grow autonomy.

Old dog, new tricks.

Often, ScrumMasters are recruited from the ranks of management.  They have years of experience managing people by creating estimates, assigning resources and tracking progress daily.  Scrum is meant to shift these responsibilities to the team.  The practices for doing this are simple but, like learning chess, mastering them takes time.  A common barrier is that the manager doesn't know how to trust the team and the team doesn't trust management's motivation.  This cultural clash was captured in a mockumentary we made years ago at High Moon:

Often, new ScrumMasters think their job is to manage details for the team and let them focus on their coding, asset creation or tuning tasks.  However, we all know that plans are fuzzy.  Even a two-week plan to create something compelling won't cover every conceivable bit of work (e.g. how many hours of tuning do you plan to make sure a mechanic "feels right"?).  We need everyone on the team thinking about and examining what they are doing on a daily basis, not just following the plan made at the start of the sprint.  A sense of ownership, even for a mere two weeks, greatly benefits the goal.

Emboldening the team to take ownership

Teams rarely take ownership at the start of a Scrum adoption and management rarely hands it out.  It takes time for roles, practices and trust to shift.  It's the ScrumMaster's role to ensure that this shift occurs and that the inevitable collisions with studio culture are managed.

This includes emboldening the team to accept the risk of occasionally taking on too much and seeing their goal as primarily one of adding value, not simply reducing all their task times to zero.  Discovery and innovation are fueled by passion and motivation, but these both increase risk.  Playing it safe, padding out tasks so they are never late, or punishing teams whenever they challenge themselves too much kills motivation, and therefore innovation, fast.

A ScrumMaster is like a parent, in this regard.  All parents have some apprehension when their toddlers are ready to take their first step.  They pad every hard edge in an enclosed area and try to make it as safe as possible for their child to learn to walk, but parents have to let go at a certain point and let the risk of a bump or bruise outweigh their desire to protect their child from every possible mishap.  Growth is necessary.   Soon enough, the padding and gates are taken down and we marvel in pride that our children are strolling up and down the stairs we once considered life-threatening!

Similarly, a ScrumMaster emboldens the team to take more ownership, but creates an environment where it's safe to fail (yet desirable not to).  They shift layers of management responsibility to the team, always with the goal of coaching the team to higher levels of autonomy, which leads to more motivation, greater performance and, as a result, a better working experience.

Thursday, January 27, 2011

Design as questions, development as answers

In Scrum workshops, I often ask developers if they have ever compared the game that they last shipped against the original design document.  The usual answer is that, except for the title, much of the design changed during development.

We are all accustomed to the idea that design documents are a starting place or a way to win over the stakeholders, but they are poor maps for development.  The problem is that, lacking a better map, we set off on a development journey that is often longer than planned or takes us to places we didn't want to go.

Studies have shown that excessive documented specification can create a false sense of certainty and lead teams astray.  A better approach is to acknowledge that designs are speculative and need to be proven out.

Other industries have faced this problem long before us.  Take, for example, the Boeing 777, the largest twin-engine plane built at the time, which revolutionized airplane development and production at Boeing (it was the first airliner fully developed on computers).

One of the many innovations in the development of the plane was to create more concurrency between design and manufacturing (see concurrent engineering).  Before, manufacturing had to wait for all the designs to be complete before they started.  In traditional airplane development, designs are done years in advance of metal bending.  Often a problem with design wouldn't be detected until lots of metal was bent and subsequently had to be scrapped, costing millions.  This problem should sound familiar to game developers!

To address this classic problem, the 777 program integrated manufacturing closely with design.  Before the 777, there were only two states for a drawing: released and unreleased.  It was either completely done or not done at all.  What they did was add states in between.  Each of these states had to correspond to a level of manufacturing release before the drawing could move to the next state.  This reduced uncertainty by advancing a design in steps, catching problems and introducing improvements far earlier.  It maximized the overlap of design and manufacturing and contributed to the 777 being developed in record time.

For games, big designs up front (BDUFs) have a similar problem.  A design is either implemented or not, and most of the key design decisions are made without all the information.  We often discover, in production, that we've overlooked some key technical issue that won't allow a feature design to be realized as we had hoped.  We waste effort, throwing assets away and rewriting code to "make it work".

Can we do something similar to Boeing?  Yes.  As discussed earlier, we need to explore our designs and seek out what works and what fails.  Rather than introducing a fixed number of stages in our design, design should introduce questions for core features that must then be answered by development before we move on.

Here are some of the questions about core features:
  • Is it something we want?
  • Is it fun?
  • Where is it used?
  • What are the technical risks?
  • What is its development cost?
  • What does it cost in production?
When a design document is written, we don't know the answer to most of these questions, but the true answers that emerge have scuttled many schedules and budgets.  The start of a project is the worst time to try to answer such questions because that is when we know the least!

Take, for example, "fully destructible environments".  How many times have we seen this feature in games that don't live up to its promise?  As a developer on a project with this feature goal, my experience was that the surprising amount of production time the feature added ended up killing it, wasting all the development effort we had spent in pre-production.  We should have explored the question of production cost well before production started.

Much of development is investing in gaining knowledge about what to build, so making the most of development effort is about gaining the right knowledge, or asking the right questions.  In the context of Scrum, these questions drive the priority of items on the backlog, the order of work done in a release and a specific goal for each sprint.

Tuesday, January 25, 2011

Agile Game Development - An introduction

A one-hour introduction webinar on using agile for game development.  The PDF version of the slides is here and the PowerPoint source is here.

Sunday, January 23, 2011

Justifying Research Teams (or not)

Have you ever been on a project that was significantly delayed because the new technology or gameplay took longer than expected?  Many of us have, and it should come as no surprise that new tech, mechanics or complex, exploratory asset creation schedules are hard to predict.  Yet they are part of projects with tight delivery dates.

These elements are often seen as the "critical path" of a project, which means that if they are delayed, the whole project is delayed.  Competent management of projects includes identifying such critical paths and prioritizing the work on them.

However, if there are many possible solutions and therefore a lot of uncertainty along a critical path, or if a number of projects share the same solution, the risk is often very high.

A common way to address this is to have a separate research team explore a solution well ahead of the start of a project, or at least ahead of the point where the path becomes critical.  There are some advantages to this.  The first is that these research teams can have the time and resources to fully explore solutions and pick the one that is best for the product, rather than one rushed into production due to schedule pressures.

My favorite example of this is the development of the Toyota Prius.  Toyota's goal was to create a car that would appeal to "green" consumers.  However, the choice of engine technology was not certain: a hyper-efficient gas engine, an electric engine and a hybrid were all candidates.  The challenging goal, and their critical path, was delivering the car to market in half the time a typical car took to develop.

Toyota could have chosen an engine technology up front and focused their efforts on making it work.  Most companies, faced with this tight schedule, would have done so.  Instead, Toyota chartered three research teams, one to study each engine technology.

Months later, the all-gas engine team found that although they could improve the efficiency of the engine, they couldn't deliver enough efficiency to attract the green market.  The electric engine team found that they could achieve the efficiency, but they couldn't deliver a car with a low enough price to entice buyers.  Although people want to be green, there is a limit to how much they will spend.

The hybrid engine team was able to build an engine within the mileage and cost targets, and it was chosen and refined for production.

When hearing this story, many managers are unmoved.  "We can't afford to have multiple teams researching multiple solutions!" they say.  In response, we might ask: "How much does it cost to delay the entire project by months because the critical path was stretched out?"  You can easily spend 10 times more on a delay for an 80+ person team than you would have spent researching up front with a much smaller one.
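To make that 10x concrete, here is a back-of-the-envelope comparison.  Every dollar figure and duration below is an illustrative assumption, not a number from any real project:

```python
# Cost of a schedule slip for a large team vs. up-front research
# by a small one.  Every figure here is an assumed, round number.
monthly_cost_per_person = 10_000  # fully loaded cost, in dollars

delay_cost = 80 * monthly_cost_per_person * 3    # 80 people slip 3 months
research_cost = 4 * monthly_cost_per_person * 6  # 4 researchers for 6 months

print(delay_cost)                  # 2400000
print(research_cost)               # 240000
print(delay_cost / research_cost)  # 10.0 -- the delay dwarfs the research
```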

The concern about cost is not the only barrier to such teams.  There are at least three others:
  1. Proper focus - Ensuring that the team is working, in a very focused way, on the right thing and not just on an interesting project that will not directly benefit products.
  2. Raiding - Guess what happens when a project runs into trouble and wants to add bodies (often a bad idea in itself).  They raid the research team!
  3. Transferring the solution - How does the project adopt the solution?
Each problem has a solution that is unique to a company, but often the best solution to the last problem is to have the researchers join the project until the new solution is in place and the project team can take it over.

Research teams are not always the answer.  There are pros and cons to having them.   If uncertainty is low, it's often better to have research be part of the project.   You have to judge how much uncertainty you have on your critical path and therefore how much you are willing to gamble on a delivery date. 

Saturday, January 15, 2011

Valuable failure

When exploring a new design, we want to generate information about its value.  Is a design element going to add to the product?  Is it going to be something the customer wants?  Is its cost offset by a greater value?  All of these are uncertain until we try it.

Uncertainty points to a lack of information.  As a result, the work we do should focus on generating the highest quality information possible.  In many cases it doesn't.  Instead, we focus on avoiding failure, and in doing so we limit the information generated.

Take the simple example of the high/low game, where one person has to guess a number, say between 1 and 100, and the other person, who knows the number, tells them whether each guess is too high or too low.  What is the first number guessed by the "expert" high/low player?  It's 50.  Every time.  Do they expect to pick the right number on the first try?  No.  So why pick 50?  As it turns out, guessing 50 generates the highest quality information.  Picking 90 on the first guess would have an equal probability of being correct, but it generates far less information on average than guessing 50.
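Shannon entropy makes "highest quality information" precise.  Here is a small sketch; the code and its uniform-secret assumption are mine, not part of the original argument:

```python
import math

def expected_information(guess, n=100):
    """Expected bits of information from one guess in the 1..n
    high/low game, assuming the secret is uniform over 1..n."""
    outcomes = [1 / n,            # "correct"
                (guess - 1) / n,  # "too high": the secret is below the guess
                (n - guess) / n]  # "too low": the secret is above the guess
    return -sum(p * math.log2(p) for p in outcomes if p > 0)

print(round(expected_information(50), 2))  # 1.07 bits: near the even-split maximum
print(round(expected_information(90), 2))  # 0.55 bits: a lopsided split reveals less
```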

Not all design ideas are as uncertain as picking a number between 1 and 100, but there is always uncertainty.  Often, however, we create goals that ignore the uncertainty and try to prove the first guess correct, and as a result we generate less information.  This is usually because our work cultures reward correct guesses and punish incorrect ones.

A better approach is to welcome information-generating failure as much as success.

(read  about organizational design and risk management in "Managing the Design Factory", by Donald Reinertsen)