
Sunday, December 30, 2018

Improving Live Game Feature Flow

A studio I visited had a live mobile poker game that had been very successful. Money from in-game purchases of poker chips had been pouring in for years, but the competition had been heating up and revenues were slowly falling. The root cause of the problem was the success of the game. That might sound strange, but a successful game often hides problems. It’s human nature to tell yourself not to change anything when things are going well. Ironically, that’s the time when you should explore improvements: you make better decisions when it’s not an emergency.

The other reason success is so dangerous is that it leads us to splurge. For example, it’s easier to staff up by 50% when you’re making a lot of money. But all those new hires need something to do, and the extra work created to keep them busy often slows things down. This was the main problem for the poker game. They had 160 people on the team, and it was taking six months to implement major new features.

The six-month development cycle created problems:
  • Taking that long to determine if a major new feature will be successful in the market is a very expensive gamble. 
  • Competitors with shorter development cycles can beat you to market.
Although the team was using Scrum to implement features, there was a lot of waste outside of Sprints that had to be eliminated (and a few challenges for the Scrum teams as well).

The first thing to do was to map the flow an idea for a new feature followed from concept to player delivery. Doing this exposed the following:
  • New feature ideas were huge. They were captured in 20-30 page documents. 
  • Because features were so large, they went through numerous revisions and reviews, which took one to two months.
  • It was easier to shoot down a revision because of the risk and cost of the feature than to approve it.
  • It took time to gather all the principals for a review (reviews happened at best twice a month).
  • The Sprints were not addressing debt as well as they could have, which resulted in a lot of rework in subsequent Sprints after a QA team found problems while trying to validate builds for submission.
To address this, we implemented two main changes. One hard, one harder:

The hard change: Improving the “definition of done” with the Scrum teams to address debt. This required some automated testing, QA joining the teams, and improved practices such as Test-Driven Development. This was hard because it took time for developers to change how they worked and for teams to reorganize a bit. However, the benefits were easy to explain, which made the coaching easier.
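As an illustration of what test-first work inside the teams could look like, here is a minimal sketch in Python. The Wallet class, chip amounts, and error type are hypothetical stand-ins for the game’s real code; in practice the tests would be written first and the implementation added to make them pass.

```python
# A minimal, hypothetical sketch of a test-first "definition of done" check.
# The Wallet class stands in for real game code; in TDD the tests below
# would be written first and this implementation added to make them pass.
import unittest


class InsufficientChipsError(Exception):
    pass


class Wallet:
    def __init__(self, chips: int):
        self.chips = chips

    def buy_in(self, table_stake: int) -> None:
        # Reject buy-ins the player cannot afford before mutating state.
        if table_stake > self.chips:
            raise InsufficientChipsError(
                f"need {table_stake} chips, have {self.chips}")
        self.chips -= table_stake


class BuyInTest(unittest.TestCase):
    def test_buy_in_deducts_chips(self):
        wallet = Wallet(chips=1000)
        wallet.buy_in(table_stake=250)
        self.assertEqual(wallet.chips, 750)

    def test_buy_in_rejects_overdraft(self):
        wallet = Wallet(chips=100)
        with self.assertRaises(InsufficientChipsError):
            wallet.buy_in(table_stake=250)
        self.assertEqual(wallet.chips, 100)  # balance unchanged on failure


if __name__ == "__main__":
    unittest.main()
```

With tests like these running continuously, the teams could catch regressions inside the Sprint instead of waiting for a separate QA pass before submission.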

The harder change: Weaning management off their need to make big gambles with major features, which led to the big design documents. The new approach was to create smaller experiments that could be tested with players and inform product ownership enough to evolve better features.

An example of this would be a tournament feature for a single-player puzzle game. The original approach would be to design the entire tournament, with ladders and player ranking systems. This could take months and be subject to many design discussions before even the first UI element is created. The new experimental approach was to introduce a simple head-to-head mechanic so the team could experiment with how players would use the puzzle game competitively. This also greatly reduced risk: a tournament that took months to develop would likely fail if the core of tournament play, the head-to-head mode, wasn’t fun.
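The post doesn’t describe how these experiments were delivered, but one common approach is to bucket a small, deterministic slice of players into the prototype mode and watch their engagement metrics. The sketch below assumes that approach; the experiment name, rollout percentage, and function names are all hypothetical.

```python
# Hypothetical sketch of how a small experiment might be gated to a slice of
# players; the bucketing scheme and experiment name are illustrative only.
import hashlib


def in_experiment(player_id: str, experiment: str, rollout_percent: int) -> bool:
    """Deterministically assign a player to an experiment bucket (0-99)."""
    digest = hashlib.sha256(f"{experiment}:{player_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent


def start_session(player_id: str) -> str:
    # Only a small cohort sees the head-to-head prototype; everyone else
    # keeps the existing single-player flow while retention and session
    # metrics for the cohort are compared against the rest.
    if in_experiment(player_id, "head_to_head_v1", rollout_percent=5):
        return "head_to_head"
    return "single_player"


if __name__ == "__main__":
    modes = [start_session(f"player-{i}") for i in range(1000)]
    print("head_to_head cohort:", modes.count("head_to_head"), "of", len(modes))
```

Because the bucketing is deterministic per player, the same player always sees the same mode, and the cohort can be grown or rolled back by changing a single number.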

This change in design culture took a while to optimize, but seeing the metrics of the flow improve became a motivator for the leads, and they embraced being part of the solution.

Ultimately, the business metrics benefited from this new approach. The throughput of new features (albeit smaller, incremental features) increased from one every three to six months to one every four to six weeks. Revenues started increasing. A side effect of these smaller features was that the team could be smaller. Most of the team moved to other games, and the poker team was reduced to about 60 people.

Tuesday, February 13, 2018

Life Threatening Production Risk


I’ve been legitimately threatened with death by publishers twice in my career. These weren’t your run-of-the-mill “I’m going to kill you if you miss your milestone date” threats that publisher producers often make. These were Michael Corleone-style kiss-of-death promises from publisher CEOs who probably “know who to call”.

I obviously avoided death but I also learned a few valuable project management lessons.

Many games face a set of constraints:
1. Hard deadlines set by stakeholders
2. A minimum amount of gameplay content that must be produced
3. Only so many people on staff that can build this content

The combination of these constraints isn’t ideal, but an agile approach is the best way to tackle them. These are risks, and the definition of a risk is that we don’t fully know the answer up front. The key is not to create a detailed plan that fully answers these unknowns, but to execute in a way that resolves them as early as possible.

The First Lesson Learned
One of the games my life was threatened over was a console launch title. The initial plan was to ship with six cities. Since launch dates are fixed, we considered the production of those cities the primary risk. So we prioritized building a prototype city that came as close to “shippable quality” as possible. The goal was to understand how much it would cost to build the cities and deliver a game with at least 20 hours of gameplay.

This experiment told us we couldn’t build six good cities by launch.  We believed we could build three at best.

We were still a year away from the launch, but I still felt great fear when I called our publisher to inform them that we couldn’t ship six cities with quality, that we could only ship three “good cities”. To my surprise, they weren’t upset. They just wanted assurance that we could still deliver 20 hours of gameplay.

This was the first lesson: bad news delivered early is not so bad. Not only was it easier for the publisher to absorb, but knowing we had to fit 20 hours of gameplay into three cities guided our designers toward creating more options for gameplay in each city: more shortcuts, more branches, etc. The publisher wasn’t fixated on six cities. Many times these numbers are guesses made early that somehow become written in stone later.

The Second Lesson Learned
Although our first prototype city taught us a lot, it wasn’t enough. Six months later we faced another choice: we could ship three “crappy” cities or two good ones. Production costs and the difficulty of working with the actual console hardware turned out to be greater than we had assumed, even with a prototype city. Once again, I had to make the call to the publisher, this time a bit less fearful thanks to the experience from the last such call.

This time the reaction was violently worse.  We were mere months away from shipping.  The publisher had announced to the world that we were shipping with three cities.  They had also gone so far as to film scenes in these cities at great expense. For example, they had paid New York City to shut down Times Square in the middle of the night to film a scene for marketing. To say they were extremely upset is an understatement.  I briefly considered an escape across the nearby Mexican border.

We survived the threat and ultimately shipped a launch title that, despite suffering quality impacts from stuffing 10 hours of gameplay into each of the two remaining cities, was a hit.

That second lesson can be summed up as “bad news delivered late is less welcome”.

Summary of What I Learned

  • Execute uncertainty away. Don’t rely on a written plan to resolve risk. Building cities early revealed more truth than any design document could have.
  • Keep your options open. Decisions made later are often more informed.  Just don’t make them too late. By keeping the number of cities flexible, we could better balance quality and cost, but the late decision to cut the extra city was very costly.
  • Don’t confuse scope with quality. While six cities “sounds better” on paper than two, it came down to what you did as a player in the cities we shipped. I have no doubt that if we kept production fixed on six cities, the game quality and sales would have greatly suffered.
  • Death threats can be very motivating, but I don't recommend them.


Sunday, January 14, 2018

Continuous Delivery of Games

The traditional overhead of "getting a game ready to release" can take quite a bit of time and reduce how quickly you can release new features to your players and respond to what the market is telling you.

Take, for example, the steps a mobile game development team I recently visited goes through for a release every few months:

  1. Merge the development branches
  2. Fix the merge-caused bugs
  3. Optimize, polish and debug for release quality
  4. Send to QA
  5. QA does their regression tests
  6. If bugs are found, return to step 3
  7. If quality is good, submit
  8. If the submission is rejected, return to step 3 to fix it
This took a lot of time! As a result, the team could only release major features a few times a year, often behind competitors, who captured many of their impatient players.


Continuous delivery is a series of practices that ensures your code and assets can be rapidly and safely released to players by delivering every change to a production-like environment and ensuring that the game functions as expected through painstaking automated testing.
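In practice that means every change flows through the same automated gate. Here is a minimal sketch of such a gate in Python, assuming each stage is a callable that reports success; the stage names and placeholder lambdas stand in for a real build system, test runners, and deployment tooling.

```python
# A minimal sketch of a delivery pipeline gate. Each stage is a callable
# returning True on success; the placeholder lambdas below stand in for
# real build, test, and deployment commands.
from typing import Callable, List, Tuple


def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> bool:
    for name, stage in stages:
        print(f"running stage: {name}")
        if not stage():
            print(f"stage failed: {name}; change is not promoted")
            return False
    print("all stages passed; build is releasable")
    return True


if __name__ == "__main__":
    # Placeholder stages mirroring the idea of promoting every change
    # through a production-like environment.
    stages = [
        ("build", lambda: True),
        ("unit tests", lambda: True),
        ("deploy to staging", lambda: True),
        ("automated gameplay smoke test", lambda: True),
    ]
    run_pipeline(stages)
```

A change that fails any stage never reaches players, which is what makes releasing every few weeks (or days) safe rather than heroic.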

As more games move to continuous delivery, tools and practices that support deploying with less hassle become more valuable. Examples include:

  • Feature toggles
  • Continuous integration
  • Unit testing
  • Automated gameplay testing
  • Blessed build indicators
  • Stability teams
  • Integrated QA
  • Etc.
Most of these practices can be found in my latest book, Gear Up!
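As a small illustration of the first item on that list, a feature toggle can be as simple as a flag looked up before a code path runs. The toggle names and JSON payload below are hypothetical; in a live game the flags would usually come from a remote configuration service so they can change without a client resubmission.

```python
# Hypothetical feature-toggle sketch: toggles let unfinished work ship "dark"
# in the main branch, which is what makes frequent releases safe. The toggle
# names and config payload are illustrative only.
import json
from typing import Dict


class FeatureToggles:
    def __init__(self, flags: Dict[str, bool]):
        self._flags = flags

    @classmethod
    def from_json(cls, payload: str) -> "FeatureToggles":
        # In a live game this payload would typically come from a remote
        # config service rather than a hard-coded string.
        return cls(json.loads(payload))

    def enabled(self, name: str) -> bool:
        return self._flags.get(name, False)  # default: feature stays dark


if __name__ == "__main__":
    toggles = FeatureToggles.from_json(
        '{"new_tournament_ui": false, "daily_bonus_v2": true}')
    if toggles.enabled("daily_bonus_v2"):
        print("showing the new daily bonus flow")
    if not toggles.enabled("new_tournament_ui"):
        print("tournament UI still in development; players see the old flow")
```

Toggles like this let incomplete features live in the main branch without being visible to players, which removes the long-lived development branches and merge-caused bugs of steps 1 and 2 in the old release process.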