
Monday, February 08, 2016

Agile Game Development: The Essential Gems

In a week, I'll be launching my first online training course on Agile Game Development.  This first course is an overview of Scrum and Kanban for game development with a focus on the values and principles (gems) of *why* we do it.  My aim was to provide broad training to the many developers who don't get a chance to attend onsite or offsite training.

The training is hosted through FrontRowAgile.com, which hosts training for other areas of agile (such as agile estimating and planning training by Mike Cohn).

Members of the mailing list will receive discounts for training.  

Check out the free portions of the training below:

Friday, December 11, 2015

"Why doesn't planning poker work?"

Planning Poker is a very common practice on agile game teams, but it often struggles on teams that have a wide variety of disciplines.  A common question is "when a programmer estimates 4 and the artist estimates 11, how do you reconcile them when they are thinking of different work?"

There are a number of reasons that Planning Poker might not function well for this scenario.  Among them:
  1. The feature being estimated doesn’t have much uncertainty.  If we are in content production, there is more of a flow of hand-offs from one discipline to another and we should probably focus on the flow rather than a single size estimate.  When this is the case, both the programmer and the artist are correct for their parts.
  2. The work being estimated has been disaggregated to a discipline-centric level where planning poker doesn’t make sense.  If you've broken down backlog items to the task level, just estimate them as you normally would in sprint planning.  If you have backlog items such as "implement this function" or "add this model", you've probably broken down your backlog too far.
  3. The implementation has already been decided (too early perhaps?) so that the planning poker discussion is less meaningful.  I find planning poker leads to great design and goal discussions between disciplines.  If the decision of "how" a feature is going to be implemented has already been made, then planning poker falls into the trap of #2.
In the past 8 years of professional coaching and training, I’ve helped teams use not only planning poker, but also affinity sizing, t-shirt sizing and other techniques, and have even helped teams abandon backlog estimation altogether.
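
To make the "4 versus 11" spread concrete, here is a toy sketch of flagging a poker round for discussion rather than averaging it away.  The estimator names and the threshold are invented for illustration, not a standard rule:

```python
def needs_discussion(estimates, spread_ratio=2.0):
    """Flag a planning-poker round whose high/low spread suggests the
    estimators are picturing different work.  The 2.0 threshold is an
    arbitrary illustration, not a standard value."""
    values = list(estimates.values())
    return max(values) / min(values) >= spread_ratio

# The round from the question above: a wide spread means "talk it through",
# not "split the difference".
round_estimates = {"programmer": 4, "artist": 11}
print(needs_discussion(round_estimates))  # → True
```

The point of the check is the conversation it triggers, not the number it produces.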

The challenging part of agile is the “people over process” value.  Finding what makes sense for a team on their agile journey is key.   When I coach a team, I sit with them and we come to an agreement about what the ideal level of planning might look like.  We want to be able to respond to change, to have a plan that is not so detailed that it’s obsolete the week after we create it and not so burdensome that we ignore its maintenance.  Most game developers know that detailed plans are never accurate and are usually so optimistic that they result in a death march to hit a schedule.  We come to an agreement that we want to avoid a death march and we want to make a better game.

When we have this shared agreement, they become partners in exploring planning practices and eventually innovate what works best for them.

Monday, November 02, 2015

Device Research, the Agile Way

New device development doesn't always start out with a single clean vision of the final product.  Often the product is a bit fuzzy because the knowledge of all the capabilities is uncertain.  There can be different visions fragmented by the different domains.  For example, marketing's vision isn't the same as the vision from software engineering, which is different from electrical/mechanical engineering's vision.  There are unknown areas of overlap as well as areas of non-overlap.

Still, there are questions about the overall vision that have to guide the product's R&D.  At first they can be expressed as questions that research seeks to answer:
  • Can we put the capabilities into a small enough package to be marketable?
  • Will the necessary processing power keep our cost, battery drain and heat dissipation within acceptable levels?
  • Etc...
There are many mutually dependent questions that need to be answered, and some are critical.  How many products have you seen fail because, although they may have done most things right, they did a few crucial things wrong (like battery life)?

Unfortunately for hardware-based products, we usually can't iterate rapidly on the entire product from the start, at least not well enough to discover these issues.  We still want a cross-discipline approach to our vision, even if  development doesn't support it.

Consider a simple scenario: we have a set of new technology that we want to leverage into a new product, for example the first-generation iPod.  A key technical development that allowed the iPod was the famous "click wheel", which gave it a tactile, intuitive user interface.  It was a big part of the iPod's success, but not the only part.  The design aesthetic, storage space and battery life were all part of the device's success.

(note: although Apple has been a client of mine, I did not work with the iPod team or know anything about the iPod development.  This example is speculative or based on published descriptions from employees).

Before the iPod, the market for mp3 players was saturated by hard-to-use, cheap players.  The vision for the iPod started by addressing what the current market lacked.  So the team explored design aesthetics, batteries, small, high-capacity storage devices and interfaces.  All of these areas of exploration overlapped with the vision of a small, easy-to-use player that could store many songs and which the user would be proud to own.

There was a certain amount of research that went into exploring each area of the iPod.  1.8-inch hard drives had been out for a while, and newer 1-inch drives showed eventual promise.  Cost and capacity factors led to the 1.8-inch drives being chosen.  This had an impact on all the other areas.

So how do we work with separate groups researching separate areas of a new device, when it's too early to precisely define the device and impractical to iterate on a nearly-shippable version?

Can Agile/Lean Be Used?

Agile and lean practices are designed to explore emerging new products.  They aren't restricted to software, and their benefits apply to research as well.  However, implementations of agile for software development focus on a few areas that might not be available to most new device developers:
  • We can't have "potentially shippable" versions of the device every 1-3 weeks.
  • We often don't have a clear vision of the device we want to build until we do some research.
  • Stakeholders can be very nervous that research is open-ended and want detailed plans.
  • Researchers have trouble fitting their efforts into 1-3 week time-boxes that produce something that meets a "definition of done".
Concerns about applying agile come from stakeholders and researchers alike:
  • Stakeholders: We don't want to have open-ended research with no end in sight.  We want to use more traditional project management techniques to put limits on the cost and time spent.  We need more control on a day-to-day basis.
  • Researchers: We can't estimate iterations.  They are too short to produce any "valuable" result that meets any definition of done.
To overcome these limits and concerns, I list some proven tips for using agile for R&D work:

Align your vision with research goals

Research has to align with the ultimate product's vision.  But sometimes a single product's vision depends on the results of research.   How do we reconcile these mutually dependent things?

The vision for a new device can start with a set of capabilities in a concept that we assume will change as we learn more.  It's critical that the people in R&D have a shared broad vision of the product they are researching.  This is where chartering techniques can help.  These techniques help create a shared vision far more effectively than passing around a large document:
  • Building "look-and-feel" mock-up devices
  • Creating a hypothetical demo video of the future device
  • Short customer-oriented presentations
Check out some of the pitch videos made for crowdfunding campaigns.  Many of these show what the devices might look like and how they would be used, generating enough excitement to draw millions of dollars of funding.  Isn't that level of excitement just as valuable for the people making your product?

Use Spikes

Spikes are user stories that limit the amount of time spent working on them.  They are meant to produce knowledge whose scope can't be estimated up front.

An effective way of using spikes is called "Hypothesis-Driven Development".  One template for HDD spikes is:

We believe that [doing this] will result in [this outcome].  We will know we have succeeded when [we see this measurable signal].  We will give up when [this limit is reached].

An example of this is:

We believe that implementing algorithm X will result in sufficient accuracy.  We will know we succeeded when we get 95% accuracy from our test data in the lab.  We will give up when we spend a full sprint without seeing this accuracy improve beyond 50%.
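
The success and give-up criteria of a spike like this can even be written down as an executable check.  This is just a sketch; the function name and the use of per-run accuracy figures are invented for illustration:

```python
SUCCESS_THRESHOLD = 0.95  # "we will know we succeeded when..."
GIVE_UP_THRESHOLD = 0.50  # "we will give up when accuracy never improves beyond..."

def evaluate_spike(accuracy_per_run):
    """Classify one sprint's worth of accuracy results for a hypothetical
    algorithm-X spike: 'succeeded', 'give up', or 'continue'."""
    best = max(accuracy_per_run)
    if best >= SUCCESS_THRESHOLD:
        return "succeeded"
    if best <= GIVE_UP_THRESHOLD:
        return "give up"
    return "continue"

print(evaluate_spike([0.40, 0.62, 0.81]))  # → continue
print(evaluate_spike([0.88, 0.96]))        # → succeeded
print(evaluate_spike([0.31, 0.44, 0.48]))  # → give up
```

Writing the criteria this explicitly keeps the spike honest: the team agrees up front what "done" and "give up" look like.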

Set-Based Design

Set-based design is an approach for considering research across a wide range of separate domains that have overlapping and non-overlapping areas.

The approach is to explore an emerging product vision by exploring the range of separate domains and the areas where they overlap.  The idea is for research activities to refine the entire domain and converge on the best shared solution.  This is fundamentally different from "point-based design", where the solution is chosen up front and the domains are forced to implement that point.  In point-based design, knowledge of what works and what doesn't usually emerges as deviations from the plan and is considered a negative impact to cost or schedule.

For example, suppose the iPod team had decided that the first iPod would have a touch-screen driven interface with solid state memory.  That's a potentially better product than the first generation iPod (in fact it's what the iPod eventually became), but in 2001, due to the existing technology, the memory may have been limited and the touch screen too battery draining.  Having gone down the long path of designing this device, Apple might have released a compromised or much-delayed product.

The Cost of Set-based Design

Set-based design can cost more in the short term, but it can save your product in the long term.  To illustrate, if we had several contenders for a technical solution--all with various risks and costs associated--how would we work?  If each took a month to evaluate in sequence, we could be pushing out the ship date by many months.

The answer is to research the solutions in parallel and to focus on failing each fast.  For the example of a touch-screen vs. click wheel on the first iPod, we'd focus on the areas of risk first.  How does each feel?  What is the cost of implementing each?  What is the power consumption?  We'd try to get these answers addressed  before making any further decisions (an iPod example is the creation of dozens of prototype cases, which Steve Jobs would choose from).

This tactic of avoiding decisions made without sufficient knowledge is referred to as "deferring solutions until the last responsible moment".  We make better decisions when we know more, but we don't want to be in manufacturing when we decide to change the case.

These days, with on-demand rapid prototyping, 3D printing, emulation, etc., we can shorten the experimental cycle on hardware dramatically, allowing us to do set-based design far more effectively.

Aim to Learn by Failing Fast
Imagine we are playing the high/low game, where I have a secret number, between 0 and 100, that you must discover by guessing numbers and having me tell you whether your guess is higher or lower than my secret number.

What do you usually guess first?  '50'.  How many times is '50' the right answer?  Almost never!  So why do you guess it?  You guess '50' because it gives you the most information about what range my secret number is in.  It eliminates 50 numbers with a single guess.  No other guess eliminates as many numbers.  You didn't guess '50' because you thought it was the right answer.

The same goes for research.  We don't aim for the right solution, but the one that gives us the most knowledge about the solution domain.  We set up our experiments to give us this knowledge as quickly as possible.  When we aim for the right answer, it often takes longer to plan and execute.  In our game above, it would be like taking 10 minutes to analyze the likely correct answer to the secret number and then announcing that '38' is the correct choice.  It's just as likely to be wrong and gives us less knowledge than the '50' guess.
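
The halving strategy in the game above is just binary search, and a quick sketch shows why the '50'-style guess wins:

```python
def guesses_needed(secret, low=0, high=100):
    """Count the guesses the high/low game takes when we always guess
    the midpoint (the '50'-style, maximum-information guess)."""
    count = 0
    while low <= high:
        guess = (low + high) // 2
        count += 1
        if guess == secret:
            return count
        if guess < secret:
            low = guess + 1
        else:
            high = guess - 1
    return count

# Every secret from 0 to 100 falls in at most 7 guesses:
print(max(guesses_needed(s) for s in range(101)))  # → 7
```

Each guess halves the remaining range, which is exactly the "maximize knowledge per experiment" idea applied to research.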

Other Useful Practices

Stage Gates
A project to create and ship a new device will change states and practices as it progresses.  Work will transition from researching foundational forms and technologies, to prototyping the whole device, to designing the production flow and moving into production.  These stages can't be combined into each iteration as well as they can for software-only products, but the traditional problems that stage-gate development encounters can be mitigated with lean practices (this will be addressed in future articles).

Critical Path Management
The emerging vision of the device and the knowledge of what's possible will lead to the identification of a series of dependent activities or goals that need to occur before the device is ready for the prototype or production stage.  Identifying these paths and focusing efforts on improving flow through them starts early.

Parallel Experiments
When designing the Prius, Toyota didn't decide on a hybrid engine at the start.  They didn't know whether an all-electric, super-efficient gasoline or hybrid engine would be marketable (the "green market" wanted something environmentally friendly (efficient) and affordable).  So they started three parallel research projects.  The all-electric engine experiment showed that the cost of such a vehicle was too high.  The efficient gasoline engine was low cost, but didn't hit the efficiency that the market wanted. The hybrid engine experiment satisfied both aims.  As a result, the Prius, a revolutionary vehicle, was designed in half the time it took to design a conventional vehicle.
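
The Prius decision above can be pictured as a fail-fast filter over parallel experiments.  In this sketch, the option names and pass/fail results merely paraphrase the story; nothing here comes from Toyota's actual process:

```python
# Each experiment records whether it passed the market's two constraints:
# affordable cost and environmental efficiency.
experiments = [
    {"name": "all-electric",       "cost_ok": False, "efficiency_ok": True},
    {"name": "efficient gasoline", "cost_ok": True,  "efficiency_ok": False},
    {"name": "hybrid",             "cost_ok": True,  "efficiency_ok": True},
]

# Evaluate all options in parallel, then keep only those meeting every constraint.
viable = [e["name"] for e in experiments if e["cost_ok"] and e["efficiency_ok"]]
print(viable)  # → ['hybrid']
```

Running all three experiments at once costs more up front, but the two failures are discovered early instead of after a committed design.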

Having a number of independent fail-fast experiments can be a cultural challenge.  It can be hard to convince the bean-counters that it is actually cost-efficient.

Conclusion
Many advances in hardware development practices (such as 3D printing, field-programmable gate arrays, etc.) have allowed teams developing hardware to benefit from the practices that software developers have been exploring for over a decade.  While iterating on an electrical or mechanical feature isn't as rapid as recompiling code, it's allowing more and more iteration and exploration into making better products.

It's a challenge in some organizations to admit that "you don't know" the solution to a problem.  It can be easier to design a solution and, when it fails months later, blame it on the implementers.  Companies that embrace risk, learning and transparency have a better chance at creating revolutionary products.



Wednesday, September 30, 2015

The evil of tracking tools in the sprint

After 8 years of training and coaching teams, I've noticed some very obvious patterns and anti-patterns.  One of them is the impact of bringing tracking tools into the sprint.

These tools are primarily used to:
  • Build the sprint backlog
  • Update the estimates during the sprint
  • Display sprint backlog details during the daily scrum
  • Spit out the sprint burn down
I understand that some teams that cannot avoid being distributed want to use such tools, but I rarely see them as positive for collocated teams.  It's not the fault of the tools, but how they are used.

One of the primary aims of Scrum is the boost in productivity seen with teams that have a sense of ownership in how they run their sprints.  Teams with ownership are accountable for their commitment.  They enjoy their work more and they explore ways to improve.

Sprint tracking tools can get in the way of that by:
  • Not allowing an "all hands" approach to managing the sprint backlog: one mouse, one keyboard, one operator.
  • Hiding the data: it's brought out only once a day because there are not enough licenses, developers don't want to learn the tool, or the extra effort isn't worth it.
  • Limiting the team's ability to customize the artifacts (task board, burn down, etc.) to what the tool can do.
  • Exposing sprint metrics (sprint velocity, burn down), and even individual developer progress, to management monitoring.  Guess what happens when a developer or team is asked why their burn down is not diagonal enough?
  • Making the daily stand-up a status-reporting meeting by focusing on individuals.
Even if the tool is being used in benevolent ways, the team can suspect otherwise.  In organizations that are trying to grow trust, this can be a killer.

Also, teams need their sprint backlog to be radiated at them.  Tools refrigerate the backlog by storing it in some cloud.

When I bring this up, I often hear from the Scrum Master that the tool makes it easier to do their job of tracking the sprint and producing the burn down, or that there is no wall space.  It's often news to such Scrum Masters that their role is not to be efficient at creating the artifacts, but to help the team build ownership and trust...or just to track down solutions, like a portable task board.  I don't blame them necessarily.  Adopting a tracking tool for sprints is often an interpretation of how Scrum should work in an amber or orange organization.  Teams in these orgs who are new to Scrum might even want tools in the sprint so that the "Scrum fad" has minimal impact on them.

The sprint backlog is owned by the developers on a team to help them organize and manage their shared commitment and forecast for achieving the sprint goal.  It serves no purpose external to the team.  

Courage, commitment, focus, openness and respect aren't always easy to instill, but it can start with a bit of tape and some index cards.