
Tuesday, September 30, 2008

Announcement: Certified Scrum Master for Video Games Course in Montreal, November 20-21

I'll be teaching a CSM course aimed at video game developers in Montreal on November 20-21. Here is the announcement that was sent out today:

Attend a workshop at the Montreal International Game Summit, presented by Clinton Keith, one of the speakers, and obtain a ScrumMaster for Video Game Development certification. The workshop will be given in English. But hurry; there are only 50 seats available!

Registration is reserved for members until October 3rd. Afterwards, it will be open to all until October 15. For more information (location, prices, etc.), click here.

Alain Lachapelle
Director, Montreal International Game Summit
alachapelle@alliancenumerique.com
T : 514 848-7177, 224

Friday, September 26, 2008

Phases in an agile game project?

In most agile projects outside the game industry there are no phases of development. Projects start with releases and every release delivers a version of the product to customers. Think of applications like Firefox. There is a new release of the browser on a regular basis.

Eliminating phases is a big benefit of agile; phases such as “a testing phase” force the critical activity of testing to be postponed to the end of the project, where fixing bugs is the most costly. “Planning phases” at the start of projects attempt to create detailed knowledge about what features will be fun and the work associated with creating them. Unfortunately, the best knowledge comes from execution, which is why highly detailed pre-planning fails.

For many games, however, there is still a need for phases within the project. There are two major reasons for this:
  • There is a minimum bar for the amount of content delivered, regardless of quality. $60 games must deliver 8 to 12 hours of gameplay. Creating this content represents the major portion of the cost of development and occurs after the core gameplay mechanic is discovered.
  • Publishers have a portfolio-driven market model. This constrains the goals of the games that they fund. In order to gain publisher approval (which includes marketing and often franchise/IP owner approval), developers need to create a detailed concept treatment at the start of a project.
The first reason compels us to split pre-production and production activities into separate phases. Pre-production allows more freedom to iterate on ideas and explore possibilities. During production, we are creating thousands of assets that depend on what we have discovered during pre-production. These assets create a cost barrier to change. For example, consider a team in production on a platformer genre game. Platformer games challenge the player to develop skills to navigate treacherous environments (such as Nintendo’s Mario series). The team will create hundreds of production assets that depend on character movement metrics such as “how high the character can jump” or “the minimum height that the player can crawl under”.

If these metrics are changed in the midst of production, it can wreak havoc. For example, if a designer changes the jump height of the character, hundreds of ledges or barriers would have to be changed. This can create a great deal of wasted effort during the most expensive phase of development. It’s critical to discover and lock those metrics during pre-production.
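Locked metrics like these can be captured in a machine-readable form so production assets can be checked against them automatically. Here is a minimal sketch in Python (not from the original post; the metric names and numbers are hypothetical) of locked movement metrics used to flag level geometry that violates them:

```python
# Minimal sketch (hypothetical names and numbers): movement metrics locked at
# the end of pre-production, used to flag level geometry that violates them.

LOCKED_METRICS = {
    "max_jump_height": 2.5,   # highest ledge the character can jump onto (meters)
    "min_crawl_height": 1.0,  # lowest gap the character can crawl under (meters)
}

def unreachable_ledges(ledge_heights, metrics=LOCKED_METRICS):
    """Return ledges taller than the locked jump height."""
    return [h for h in ledge_heights if h > metrics["max_jump_height"]]

# Example: flag a bad ledge before hundreds of assets depend on it.
print(unreachable_ledges([1.5, 2.0, 3.2]))  # -> [3.2]
```

A check like this, run as levels are committed, surfaces a jump-height problem while it affects one ledge rather than hundreds.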

This doesn’t mean that we can’t be agile during production. How we are agile does change, though. Instead of an iterative and incremental process such as Scrum, a more purely incremental approach such as Lean is applicable. More on Lean later....

Wednesday, September 24, 2008

Black Rock Tricks Out, Gets Agile With Pure

Interesting article in Gamasutra. Jason Avent talks about their experiences with agile:

And they define how they do the work, and what work they do, and what order. That means that because they're closest to the work, closest to the jobs that need doing, they make better decisions than the people who are further away.

So it makes people happier, it makes people much more productive, it means they don't have to work such long hours, and the product is better. It's such a hands-off management method, it does make you bite your fingernails a bit, initially, but once you see the results, it's huge.

----
It can take a bit more bravery than "biting your fingernails a bit". Giving autonomy to teams, even for a couple of weeks at a time, is too much for some managers.

Another cause of adoption failure is hiding from transparency; Scrum, done correctly, exposes everything that is wrong with your development environment. Some managers will see this flood of exposed flaws as being caused by Scrum. Instead of addressing the actual problems, they will stop the experiment with Scrum and everything goes quiet again. Problem solved!

Whenever I visit a studio to talk about agile and Scrum, there is always that person or small group of people that are at the center of the adoption. They are taking a chance introducing and championing a new way to think about how people work together. I'm always impressed by them.

Tuesday, September 23, 2008

Early Consumer Testing

There is a good article on Gamasutra about the merits and precautions of early consumer testing, drawn from the experiences of NetDevil. More developers are using Scrum and have builds that show value very early in development. This creates an opportunity to put builds in front of consumers early and see what is resonating with your potential customers much sooner.

NetDevil is an experienced Scrum developer in Colorado. They have fully committed to adopting agile and sent several people to the first Certified Scrum Master for Video Games course in Austin four months ago. In the article, Ryan Seabury, the lead producer for Lego Universe, cautions against overreacting to what the consumers report. This reflects High Moon's experience with early testing as well.

At best, consumer testing is a gut check for the developers, and especially the Product Owner, on the direction being taken with the game. It also has another great benefit: when the team knows that their build is going to consumer testing, their attention to detail and polish is raised. The team looks at the build wearing their "consumer goggles".

Monday, September 22, 2008

Shared Infrastructure Teams

Shared Infrastructure (SI) teams provide low-level services, such as engine, audio, and online systems, that multiple games rely on.

A frequently asked question is how shared infrastructure (SI) teams should organize themselves in an agile project environment. Since they support multiple teams, they receive feature requests that cannot be prioritized as easily as they would be for a single team. This can create confusion and conflict between the SI team and their "customers", the games that depend upon them.

I've found that a number of practices are valuable for these teams:
  • SI teams require their own backlog and product owner (PO). Having more than one backlog and one PO is a recipe for disaster. The team should have every benefit that other agile teams have in an understandable backlog and single vision.
  • Customer teams should identify priorities during release planning and include the SI team (or at least their lead and PO) in their release planning. SI teams usually need a longer planning horizon than a single sprint.
  • SI teams should factor support into their velocity whether it is identified for tasks or not. Setting aside a certain percentage of your bandwidth for unexpected maintenance is critical.
  • Loaning SI team members out for a sprint is OK, but it should be identified in customer release planning. It's very valuable to have SI team members see how their "product" is being used.
  • The SI PO should ideally be at the executive level (or in frequent contact with them) to arbitrate conflicting product priorities. Deciding to support one game over the other is a company level (strategic) decision and should have the input from the people that run the studio. For example, the CTO should be the PO for the SI team (how's that for an acronym loaded sentence?).
With their own PO and backlog, an SI team can feel like a real team and take ownership of their work.
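To make the "one backlog, one PO" idea concrete, here is a minimal sketch (hypothetical item names; not a real tool) of a single SI backlog that collects requests from multiple customer game teams, with one product owner assigning the priorities:

```python
# Minimal sketch (hypothetical data): one SI backlog, one product owner.
# Requests from different game teams are merged into a single ordered list.

from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    requested_by: str   # which customer game team asked for it
    priority: int       # set by the single SI product owner (lower = sooner)

si_backlog = [
    BacklogItem("Streaming audio support", requested_by="Racing game", priority=2),
    BacklogItem("Dedicated server browser", requested_by="Shooter", priority=1),
    BacklogItem("Physics engine upgrade", requested_by="Racing game", priority=3),
]

# One ordered list for the whole SI team, regardless of who asked.
for item in sorted(si_backlog, key=lambda i: i.priority):
    print(item.priority, item.title, f"({item.requested_by})")
```

The point is less the code than the shape of the data: conflicting requests end up in one place where a single PO, ideally at the executive level, resolves the order.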

Monday, September 15, 2008

A Better Planning Method

I'll never forget the day I experienced my first "publisher flame-thrower phone call". At the time I was the Director of Product Development at Angel Studios. We were six months away from the ship date of Midnight Club 1, a PS2 launch title. I called an executive at Rockstar to tell him that we needed to drop one of the three cities we were planning to ship for the game. I can't describe the phone call other than asking you to imagine calling Leonidas (from the movie 300) to tell him you had just sacked Sparta behind his back.

I really didn't expect such a strong reaction. We had originally written the design document to include six cities. We had cut that down to three as we discovered how difficult it was to create full cities for the PS2. During that first phone call, they accepted the cut calmly. I later learned that the final cut from three cities to two had come one week after a large marketing blitz which told the world that Midnight Club would have three cities. Had I called them a couple of weeks earlier, it would have been a much different call. Of course, had we put two cities in the original design document, we would have been heroes by shipping on time and budget AND with the original scope. As it was, we shipped on schedule and budget with one third of the scope we had "promised".

The accuracy with which we can plan decreases with how far ahead we plan. We often get into trouble by relying on our plans too much and ignoring reality. We base complex and highly interdependent schedules on a plan we assume is comprehensive; a plan that anticipates every potential risk. We then discover that reality doesn’t follow our plan and the amount of work we need to do is far greater than our budget allowed for.

One reaction to this problem is to ask “why plan at all?”. Indeed, many projects have launched with little planning. A project without a plan or schedule can be appealing at first glance, but it raises problems of its own. Maintaining a vision becomes difficult, especially with larger teams. Strong leadership can help overcome this, but that leadership will often become a bottleneck. Developers of sequels to smash-hit games will often announce that the game “will be done when it’s done”. Unfortunately, even this formula doesn't always ensure that the success of the sequel will match that of its predecessor.

There are often schedules outside the team that need to be coordinated with key project deliveries. Publishers often have marketing budgets that drive the rate at which projects can be released. Portfolios of games are balanced around key selling seasons or movie co-release dates. Few developers have the luxury of ignoring these pressures.

The reality is that most hit games have missed their original ship date, budget and scope. Detailed planning, bloated budgets, added staffing and crunch inflicted on the developers haven’t proven to be a cure.

So what value does planning have? It isn’t so much about creating an accurate schedule, cost and feature set for a project up front. It’s an ongoing quest to refine those values over time through iteration. It's used by the team and customers to balance schedule, cost and features. Any planning method has to acknowledge the uncertainty of these three elements of a project up front and focus on refining the plan continuously by doing the following:

  • Reducing risk - You can’t plan away uncertainty. A plan should acknowledge risk first and foremost. Addressing risk requires visibility and transparency.
  • Creating knowledge - A “Big Design Up Front” (BDUF) usually fails because we don’t know enough to make the decisions that we make up front. A planning process has to build on what we learn through iteration.
  • Communicating information - A good planning method has to communicate changing information properly. BDUFs fail at this because teams and customers don’t reread the document for changes. Frequent, effective meetings between the team and stakeholders to update the plan are essential. Instead of spending months planning up front, a good planning method spreads the planning time across the project.
  • Supporting better decision making - Too often bad games are released because the decision to cancel them came too late in the schedule. Publishers simply hope to recoup some of the budget by selling the game to a hundred thousand unsuspecting players. A proper planning process would allow better decisions to be made earlier, steering the game to profitability or cancellation before too much money is spent.
  • Reflecting realistic progress - We would prefer to know early in the project whether our plan is realistic or not. Say we have to ship our game 12 months from now and our current feature set requires 18 months based on our current progress. Knowing this information can help us pick and choose a smaller set of the most valuable features or decide that we have to slip the release date. Conversely, if we only realize we are going to miss the ship date 3-6 months before we are scheduled to ship, we may not have the same range of choices: we might be in the middle of production on a fixed set of mechanics. A good planning process should have a feedback mechanism built in to reflect reality (a simple projection is sketched after this list). It should build trust between the publisher and team.
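Here is the kind of projection that feedback mechanism enables, as a minimal sketch with hypothetical numbers roughly matching the 12-versus-18-month example above (assuming two-week sprints and a measured story point velocity):

```python
# Minimal sketch with hypothetical numbers: projecting whether the current
# plan is realistic from measured velocity (assuming two-week sprints).

remaining_story_points = 975   # work left on the backlog
velocity_per_sprint = 25       # measured average points completed per sprint
sprints_until_ship = 26        # two-week sprints before the planned date (~12 months)

sprints_needed = remaining_story_points / velocity_per_sprint  # 39 sprints (~18 months)

if sprints_needed > sprints_until_ship:
    overrun = sprints_needed - sprints_until_ship
    print(f"Plan is unrealistic by about {overrun:.0f} sprints: "
          "cut scope to the most valuable features or move the date.")
```

The arithmetic is trivial; the value is doing it every sprint with real velocity data instead of discovering the gap a few months before the ship date.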

Agile planning aims to provide all of these areas of support. If you want to find out more, buy this book.

At the very least, you can avoid a phone call like the one I had!

Tuesday, September 09, 2008

Tools for building the product backlog

I’m opposed to using tools to automate the daily scrum. These tools detract from the full-team collaboration that needs to take place to be successful with Scrum. However, there are strong benefits to using tools to facilitate gathering user stories into a product backlog. Epics and stories are not a scattered collection of ideas for a game. They form a hierarchy of requirements that are disaggregated and updated throughout the life of the project.


Many tools can store and maintain a hierarchy of data. A simple database can be constructed to do what you need quickly and inexpensively. However, there are a number of features to look for when choosing or even designing a tool for this purpose:
  • Graphical display - It’s best to display the hierarchy on a projector during a story gathering workshop. This shows the big picture of the product backlog and where new stories are being added in the hierarchy.
  • Dynamic editing and display of branches - Sometimes entire branches of the tree will be moved or deleted during the workshops. It’s best if this can be done by right-clicking or dragging the branch you wish to change. Sometimes the group will want to focus on a single branch in detail. It’s very useful to be able to collapse all the other branches and just display the branch the group wants to discuss.
  • Graphical options for individual stories - Stories may be prioritized, flagged for attention or have additional information attached to them during the meeting. A tool that is extensible and allows metadata to be attached is beneficial.
  • Flexibility - The tool must not impose too much structure. It should allow the creation of sections such as a “parking lot” for future story ideas, etc.
  • Powerful export capabilities - The tool should be able to export data to a variety of popular and readable formats for sharing with customers who do not wish to buy or learn the tool. Export formats like Word or Excel are a must.
The one thing to be careful of with any tool is making sure that the person with the mouse, who is making the changes, does not take charge of the meeting. Don’t let the product owner near the laptop in the meeting. If the person with the mouse starts to filter everything they hear, then their voice will naturally dominate the discussion. The mouse gives them the illusion of control. It will dampen the contribution of everyone else at the meeting.

I have used mind map tools such as MindManager and FreeMind for building and tracking product backlogs in the past and can recommend them.
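As a minimal illustration (hypothetical backlog data, not any specific tool's format), the hierarchy these tools maintain is essentially a tree of epics, features and stories, and even a trivial export preserves the sharing benefit described above:

```python
# Minimal sketch with hypothetical backlog data: an epic/story hierarchy like
# the one a mind-mapping tool displays, with a simple outline export for
# sharing with customers who don't use the tool.

backlog = {
    "Multiplayer": {
        "Matchmaking": ["As a player, I want to find a match in under a minute"],
        "Voice chat": ["As a player, I want to talk to my squad"],
    },
    "Vehicles": {
        "Tanks": ["As an engineer, I want a bazooka so I can blow up tanks"],
    },
}

def export_outline(node, depth=0, lines=None):
    """Flatten the hierarchy into an indented text outline."""
    lines = [] if lines is None else lines
    if isinstance(node, dict):
        for name, child in node.items():
            lines.append("  " * depth + name)
            export_outline(child, depth + 1, lines)
    else:
        for story in node:
            lines.append("  " * depth + "- " + story)
    return lines

print("\n".join(export_outline(backlog)))
```

Mind-map tools already handle the display, drag-and-drop and export; the point is that the underlying data is just a tree, so moving a branch moves everything beneath it.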

Friday, September 05, 2008

Player roles and user stories

Many game development projects don’t put much thought into the various kinds of players who buy the game. They usually add three levels of difficulty towards the end of development as a means of adding replayability and accommodating a range of player skills. The levels are differentiated by a simple scaling of challenges in the game, such as the number of opponents, the damage from their hits or the damage your hits cause. This reflects the amount of effort we think it’s worth.

Would we benefit from considering a broader range of players and placing more importance on their roles throughout development? Some games do this, especially some online games. An example is the popular Battlefield series, which allows players to equip themselves based on specialties. If you are not familiar with the games or the specialties, they are usually divided across these roles:

  • Assault specialist - Equipped with an assault weapon and grenades for close quarter combat.
  • Sniper - Carries a high-powered sniper rifle and a sight that they can use to call in precision strikes.
  • Engineer - Has a bazooka, mines and can repair vehicles.
  • Special forces - Carries a light automatic weapon and C4 explosives for sneaking around behind enemy lines causing problems.
  • Support - Totes a heavy automatic weapon and a radio to call in mortar strikes.
These specialties require different behavior from the player who assumes each role. They aren’t as limited as difficulty levels because players can try each specialty in any order and with any skill level. These specialties cannot be added at the end of development. They need to be developed somewhat in parallel during pre-production. They have an impact on level design and should be added well before production starts.

User stories provide a mechanism for identifying these roles and clearly communicating features related to each. The template for user stories I like is:

“As a <role>, I want <goal> [so that <benefit>].”

A good method for identifying and differentiating goals is to phrase the user story in terms of those roles. So instead of saying:

"As a player, I would like to have a bazooka so I can blow up tanks"

the story becomes:

"As an engineer, I would like to have a bazooka so I can blow up tanks".

What's the difference? It's mainly one of value and priorities. For a generic player, the bazooka is one of a host of weapons, many of which are more important to the game. However, for the engineer, the bazooka is probably the most valuable weapon. I wouldn't play the engineer without it. There's nothing more gratifying than taking out a tank with a well-placed shot.

Even if your game isn't going to have specialties like Battlefield's, there is a lot of value in brainstorming the various roles of players early in development. Who is buying your game? Are you going after a largely casual market? If you are, it would benefit you to identify the "casual player" role in some of your stories. It will lead to many small decisions, such as simplifying the controls or adding more checkpoints so the casual gamer doesn't become frustrated.
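As a small illustration of how the role changes value (hypothetical stories and priorities, not from a real backlog), the same feature can carry very different priorities depending on who it is for:

```python
# Minimal sketch with hypothetical data: the same bazooka feature phrased for
# different player roles, where the role drives its relative priority.

stories = [
    {"role": "player", "want": "a bazooka so I can blow up tanks", "priority": 7},
    {"role": "engineer", "want": "a bazooka so I can blow up tanks", "priority": 1},
    {"role": "casual player", "want": "frequent checkpoints so I don't replay long sections", "priority": 2},
]

# Sort by the product owner's priority; the engineer's bazooka rises to the top.
for s in sorted(stories, key=lambda s: s["priority"]):
    print(f'P{s["priority"]}: As a {s["role"]}, I want {s["want"]}.')
```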

Tuesday, September 02, 2008

How Pixar Fosters Collective Creativity

HBR is posting the article free for a short period of time.

What's equally tough, of course, is getting talented people to work effectively with one another. That takes trust and respect, which we as managers can't mandate; they must be earned over time. What we can do is construct an environment that nurtures trusting and respectful relationships and unleashes everyone's creativity. If we get that right, the result is a vibrant community where talented people are loyal to one another and their collective work, everyone feels that they are part of something extraordinary, and their passion and accomplishments make the community a magnet for talented people coming out of schools or working at other places. I know what I'm describing is the antithesis of the free-agency practices that prevail in the movie industry, but that's the point: I believe that community matters.

Thanks to Clarke Ching for pointing to the article.

Overtime works with waterfall?

Many Scrum teams have found that excessive overtime reduces productivity. The frequent inspection of work done on a daily basis makes measuring productivity far easier with Scrum. One of the reasons for this is that a Scrum team can adapt their practices and see what effect those changes have on their effectiveness.

Jeff Sutherland reports that one of the companies he coaches has measured the productivity of teams using Scrum and waterfall-like practices under different overtime conditions. They produced a graph they call the Maxwell curve:


This is hardly a scientific study (e.g. I'm really curious about how they measure story point velocity in a waterfall environment), but it is a very strong visual argument for what is intuitive about how people work:

- Teams of people who take ownership of their work and make a commitment are more productive, but this high level of productivity cannot be sustained for 60 hours a week.

- When people are treated like cogs in a machine (handed estimated tasks that have to be completed to a predetermined schedule), they can indeed produce more at 60 hours a week than at 40. However, the productivity of cog teams is not nearly as high as that of committed teams because their intensity is not nearly at the same level.

Think of a runner sprinting and a jogger. The sprinter will be faster, but cannot maintain that pace as long as the jogger.

The question is "who covers the greater distance?". Does the team that "jogs" go farther in 60 hours than the team that "sprints" for 40? Maybe, but which is sustainable? Which team would you rather be on? Also, is it the same progress? Consider Jeff's comment on overtime with waterfall:

Overtime doesn't work in waterfall. It introduces technical debt. It works short term for the project leader as long as no one discovers he is damaging the code base. Velocity gets slower and slower with overtime but it may be years before management realizes they have to pay for the technical debt. By then the project leader has been promoted.

Monday, September 01, 2008

Jidoka, TDD and asset validation

Jidoka
Jidoka is a Japanese term used at Toyota which means "automation with a human touch."

Its origins lie in the early philosophy of Toyota (at the time it was called Toyoda). Part of this philosophy was to minimize labor costs by reducing labor waste. An example is the creation of the Type-G automated loom in 1924. Before then, each loom was watched for thread breakage by a single operator. If a broken thread wasn't caught quickly, it would ruin an entire run of cloth. The entire process was very wasteful in operator time (90% waiting) and ruined cloth.

The innovation of the type-G loom is that it would automatically stop whenever the thread broke. This allowed a single operator to support dozens of machines and virtually eliminated bad production runs of cloth due to broken threads. Quality went up.

When Toyota started making cars, the philosophy of Jidoka was carried over to the manufacturing process. On the Toyota factory floor, a problem can potentially stop the entire line until it is fixed. Once the fix is identified, the standard process is improved to prevent a recurrence in the future.


TDD
Those of you using TDD (Test Driven Development) should recognize this flow. Unit tests are introduced for every function in the codebase. These unit tests validate those functions and are run whenever changes are integrated into the code base (by a continuous integration server).

When we discover a bug, we must solve it immediately or end up stopping the line (stopping the commits). When the bug is identified, a fix and a unit test to catch further recurrences of that bug are checked in, and work continues.
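As a minimal illustration of that flow (a hypothetical gameplay function, not from the original post), the fix and the regression test are checked in together so the continuous integration server catches any recurrence:

```python
# Minimal sketch: a hypothetical bug ("health could go negative") fixed along
# with a unit test that will fail if the bug ever comes back.

import unittest

def clamp_health(value, max_health=100):
    """Keep health within [0, max_health]; the original bug let it go negative."""
    return max(0, min(value, max_health))

class ClampHealthTest(unittest.TestCase):
    def test_health_never_negative(self):
        # Added when the negative-health bug was discovered.
        self.assertEqual(clamp_health(-25), 0)

    def test_health_capped_at_max(self):
        self.assertEqual(clamp_health(150), 100)

if __name__ == "__main__":
    unittest.main()
```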

Why solve bugs immediately?
  • Bugs can cause the entire team to lose work. A build that crashes can waste hours of work across the entire team.
  • Bugs cost the least to fix immediately after they are created. Bugs fixed months later in "alpha" can cost 10-100 times more.
The practices for TDD are well established. Tools like CruiseControl allow an easy integration of TDD into any development environment.

Asset Validation, Jidoka-style

What we need are similar tools and practices for a version of asset TDD.

Unit testing for assets should:
  • Catch assets which break the build
  • Catch assets with naming convention errors
  • Catch assets which violate budgets (texel density, memory footprints, poly counts, bone counts, etc).
  • Identify and track approved assets versus unapproved assets that need art lead approval
Bad assets are another form of debt, like bugs. They cost more to fix later. A more automated approach to checking assets will help keep this debt, which is waste, low.
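Here is a minimal sketch of what such a check might look like (the naming rules and budget numbers are hypothetical, not an existing tool):

```python
# Minimal sketch with hypothetical rules: automated asset checks that could run
# on commit so a bad asset "stops the line" instead of accumulating as debt.

import re

NAME_PATTERN = re.compile(r"^(char|env|prop)_[a-z0-9_]+\.(fbx|png)$")
BUDGETS = {"max_polys": 20000, "max_bones": 80, "max_texture_kb": 2048}

def validate_asset(name, polys, bones, texture_kb):
    """Return a list of problems; an empty list means the asset passes."""
    problems = []
    if not NAME_PATTERN.match(name):
        problems.append(f"{name}: violates naming convention")
    if polys > BUDGETS["max_polys"]:
        problems.append(f"{name}: {polys} polys exceeds budget")
    if bones > BUDGETS["max_bones"]:
        problems.append(f"{name}: {bones} bones exceeds budget")
    if texture_kb > BUDGETS["max_texture_kb"]:
        problems.append(f"{name}: texture size {texture_kb}KB exceeds budget")
    return problems

# Example: a check-in hook could reject the commit if any problems are reported.
print(validate_asset("char_hero.fbx", polys=25000, bones=60, texture_kb=1024))
```

Tracking approved versus unapproved assets could hang off the same hook by recording the art lead's sign-off as metadata.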


I've always been on teams that do some of these steps, but they weren't complete and they were always implemented later than they should have been. It would be great to have some more standardized tools. Hey Autodesk!.....