Wednesday, April 30, 2014

Drowning in Scrum

This might have been the actual pier
When I was three years old, my father tried to teach me how to swim using the same method his father had used: He threw me into a local lake off the end of a long pier and expected me to learn.  Sadly, that day I wasn’t going to learn to swim before drowning and he had to rescue me.  I’m told that I avoided the water for the next few years.

I didn’t use this approach with my own sons.  Instead, my wife and I brought them to a swimming instructor, who started them off in shallow water, learning to put their faces in and blow out bubbles.  Later lessons built their skills while managing their fear.  I wanted my sons to have a healthy respect for the dangers of water, but I didn’t want fear to keep them from the joys of surfing, diving or just cooling off on a hot summer’s day.

Unfortunately, I didn’t apply this approach with my first Scrum team.  I applied dad’s approach of tossing them off the pier.  This involved handing them copies of a book and telling them to “follow the rules”.  Fortunately, the team didn’t “drown”, but they were terrified of all this “agile stuff”.  They clung to the rules, deathly afraid of what book I might read next.  They learned the equivalent of  “treading water” and that was all.

I’m a slow learner, but I eventually correct my bigger mistakes.  For the next team adopting Scrum, I behaved less like my father and more like the swimming instructor.  I had them attend some training to get the big picture, learn the vocabulary and to acquire a vision of what we hoped to achieve using Scrum.   We (management) worked on basic, “safe” practices, such as having leads sit with junior developers to help them estimate a sprint.  The team didn’t commit to a certain number of features for their first sprint, but accomplished what they could.  We measured those features that met a reasonable definition of done and used that metric as a basis for planning future sprints.  In carefully facilitated retrospectives, we listened to their feedback about what worked and what didn’t and agreed on some easy-to-achieve actions.  Slowly, we backed off our involvement and let them assume more responsibilities.

The results for this team were far different from those of the first.  The second team felt empowered, took ownership of their work and was less fearful of change.  This team frequently changed their environment and practices.  They even made demands on management, such as having someone from QA join their team.  The only intervention we found necessary was when, in deciding to tear down cubicles, they exposed live electrical wires and were about to fix the problem directly!

Too many leaders don’t understand how to approach fundamental change.  It has to be done with safety and respect.  I count myself as one of those who has, hopefully, mended his ways.  I don’t blame others who are stuck in a command-and-control mode because, like my father and me, they come from a long line of “doing things the way they learned how to do them”.  Breaking this cycle is hard, but rewarding.

Friday, April 11, 2014

Feature Cards

My previous article described Feature Boards, a variation of task boards that some agile game teams find more useful.  The main difference is that a by-the-book task board contains task cards, markers for the individual tasks the team foresees will be needed to complete a set of features, while a Feature Board contains Feature Cards, markers for the features themselves.  This article describes a typical Feature Card.


An example Feature Card

The above card shows some typical elements:

  • Card Color: Using a simple color priority scheme (e.g. red, yellow, green) can help teams decide the order of features to work on based on their priority in the backlog or any dependencies that exist between them.
  • Title:  In this case “Jump”.
  • Description: Very often written in a User Story format.  This briefly describes the feature from the point of view of the user, such as the player.
  • Acceptance criteria: While planning an iteration, teams will capture some unique aspects of the feature that will need to be tested.  For example, it might be important to the product owner that the blending of animations be smooth and without popping.  Someone, usually a tester, will write these criteria on the back of the card to remind the team to test the feature and ensure there is no popping while jumping.
  • Start and End Dates: It’s useful to capture the day when the team starts working on a feature and the day when it’s done.  As mentioned in the article about Feature Boards, it’s best not to work on too many features in parallel during an iteration.  Using the dates allows a team to measure how much work in progress is occurring during the iteration.  I’ll describe how this is used in an upcoming article on Feature Board Burndown Charts.
  • Estimates: As a part of iteration planning, each discipline will discuss the plan for implementing their portion of the feature and may want to capture some time estimates.  Some teams use hours, some days, while some teams will skip estimates altogether.  Capturing discipline estimates rather than individual estimates increases collaboration among members of the same discipline.  These estimates are used to track whether a discipline is at capacity while features are planned.  For example, if a team finds that the sum of all animation work across all features adds up to more time than their one full-time animator can handle, they’ll find another animator to help or they’ll change which features they commit to (a simple capacity check is sketched after this list).
  • Tasks: Tasks captured during planning or during the iteration can be annotated on the card with smaller post-its or, by using larger cards, be written on the cards themselves.  If you use post-its, make sure you use the brand name ones with the good glue!
  • Card Size: Feature cards are usually written on 3x5 or 4x6 index cards.
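
To make the Estimates element above concrete, here is a minimal sketch in Python (the card fields, discipline names and numbers are illustrative assumptions, not part of any prescribed format) of totaling per-discipline estimates across a sprint’s cards and flagging any discipline that is over capacity:

    # A rough model of a Feature Card and a per-discipline capacity check.
    # Field names, disciplines and numbers are illustrative, not prescriptive.
    from dataclasses import dataclass, field

    @dataclass
    class FeatureCard:
        title: str
        description: str              # very often a user story
        priority: str                 # card color: "red", "yellow" or "green"
        acceptance_criteria: list[str] = field(default_factory=list)
        estimates: dict[str, float] = field(default_factory=dict)  # days per discipline

    def over_capacity(cards, capacity_days):
        """Return any disciplines whose summed estimates exceed their capacity."""
        totals = {}
        for card in cards:
            for discipline, days in card.estimates.items():
                totals[discipline] = totals.get(discipline, 0) + days
        return {d: t for d, t in totals.items() if t > capacity_days.get(d, 0)}

    sprint = [
        FeatureCard("Jump", "As a player, I can jump over obstacles", "red",
                    ["No animation popping while jumping"],
                    {"code": 4, "animation": 6}),
        FeatureCard("Melee", "As a player, I can melee nearby enemies", "yellow",
                    estimates={"code": 5, "animation": 7}),
    ]

    # One full-time animator with 10 days in the sprint: 13 days of animation won't fit.
    print(over_capacity(sprint, {"code": 20, "animation": 10}))  # -> {'animation': 13}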

Variations

The sky is the limit on how to format these cards.  For example, some teams record story points or t-shirt sizes on the cards and don’t use time estimates.  Some teams will break down larger features into sub-features and track the completion of those.  The key is to collect meaningful metrics that help the team better plan, execute and deliver features at the quality they are aiming for.

Next

We’ll look at an optional burndown or throughput tracking chart.


Monday, April 07, 2014

Feature Boards

A challenge in agile game development is managing cross-discipline teamwork where a wide range of skills is required.  As a result, following some practices and artifacts by-the-book doesn’t work as well as it does for software-only teams.  One of these artifacts is the task board.  These boards are used for teamwork transparency during an iteration and are a tool for the team to manage their day-to-day progress.  However, for a highly diverse game team, task boards are often modified.  This article is about one such modification, called a “Feature Board”.

Task Boards


A strength of the physical task board is that the team owns it.  A physical task board is always visible and requires no licenses and little training to use.  The only limitations are that it requires wall space and team co-location.  The photo above shows some of the modifications made by the team using it, such as adding columns for verification or colored cards for tracking different types of work (e.g. emergent work, bugs or impediments).

A problem that boards like this encounter is the lack of visibility into cross-discipline dependencies.  For example, let’s say a team is working on a number of player mechanics in an iteration, such as jumping, crawling and ducking.  There are three programmers and one animator on the team.  Halfway through the iteration, the programmers have the code working, but can’t fully test it because the animator is backed up with too much work.  Perhaps the animator is trying to work on all three mechanics at once, or animating one mechanic is harder than they thought, or there is a problem exporting the animations.  Creating awareness of these problems is one of the functions of the daily stand-up meeting, and a mature team will address them, but sometimes a problem like this doesn’t surface until it’s too late in the iteration to do anything about it.  This is one instance where a Feature Board can help.

What is a Feature Board?

A simple Feature Board looks like this:

Instead of tracking tasks, a Feature Board visualizes feature development state.  The iteration (or Sprint) goal is on the left side and contains cards identifying features the team committed to implementing over the next 2-3 weeks.  The cards move around the board (not just left to right) during the iteration, depending on the work that is being done.  Each column on the board indicates the type of work being done on the feature at the moment.  The rows show which features are in progress and which features are waiting for work to be done.  As features are completed, they exit to the right of the board into the “done” column.  In this particular example, the features are prioritized by value: red features are the most important, green features are the least important and yellow are in-between.
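
As a rough illustration only (the column names, row names and example features below are assumptions, not a prescribed set), the board just described can be modeled as features assigned to a work-type column, an in-progress/waiting row and a priority color:

    # A rough model of the Feature Board described above; names are illustrative.
    WORK_COLUMNS = ["design", "code", "animation", "audio", "test", "done"]

    # Each entry: feature title -> (current work type, row, priority color)
    board = {
        "Jump":    ("code",      "in_progress", "red"),
        "Melee":   ("animation", "in_progress", "red"),
        "Crawl":   ("test",      "waiting",     "yellow"),
        "Duck":    ("test",      "waiting",     "green"),
        "Fix HUD": ("code",      "in_progress", "yellow"),
    }

    def column_counts(board):
        """Count the features sitting in each work-type column (a quick daily read)."""
        counts = {column: 0 for column in WORK_COLUMNS}
        for work_type, _row, _color in board.values():
            counts[work_type] += 1
        return counts

    print(column_counts(board))  # e.g. two features are queued up in "test"

Reading the model during the stand-up surfaces the same things the physical board does: where work is flowing and where it is starting to queue.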

How is a Feature Board used?

A Feature Board serves the team.  It shows them the state of each feature on a daily basis and leads to conversations that are necessary for a cross-discipline game team.

Take a look at the board above.  The board is telling us that the jump feature is having some coding done, while melee is getting some animation.  There are a few bug fixes that need to be done and one of them is being worked on by both a designer and programmer.
The board also raises questions.  For example:

  • Are the testers overwhelmed?  It looks like they have test work piling up.
  • What is the modeler doing?  Have they run out of work?
  • Why are lower priority features ready for test?  Why are they ahead of higher priority features?

The board won’t answer those questions, but it quickly highlights them for the team to answer.

What are the benefits of a Feature Board over the by-the-book Scrum task board?

Feature Boards focus on feature progress, not task progress.
Too often, teams focus on completing tasks, in no particular order, and don’t worry about features being “done” until the end of the iteration.  Postponing “done” means features don’t get enough polish and tuning time.  As a result, teams release lower quality features.  Feature Boards focus the team on completing prioritized features rather than merely completing tasks.

Feature Boards help balance cross-discipline teams.
Unlike software-only teams, a cross-discipline team of game developers can’t share as much of their work (e.g. you really wouldn’t want me creating a texture or a sound for your game).  So it’s important to quickly identify problems with the flow of work.  Task boards don’t do this well; they don’t visualize stalls and backups.  As a result, individuals end up solving problems by working on more things in parallel, leading to late integration, testing and tuning.  Feature Boards show, dramatically and frequently, where stalls are occurring, or are about to occur, which focuses a team’s response.

Feature Boards help limit Work-in-Progress (WiP).  
In order to complete features sooner in an iteration, a team needs to work on fewer features in parallel and address issues or workflow problems that are stalling work or wasting time.  By separating the “In Progress” features from the “Waiting/Blocked” features, a team will see where work is piling up quickly enough to be able to do something about it.  Teams can even institute “WiP limits” on the columns or rows to stop new features from being added when work is piling up too much.  This leads to different behaviors.  For example, when the team sees that testing work is piling up, as in the board above, others on the team can help out.  There is no law of nature that says “only testers can test”.  By making WiP limits visible, Feature Boards can be a tool for encouraging such “out of the box” thinking.
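
Here is a minimal sketch of how such a limit might be checked each day (the limits and counts are assumed examples; real limits are whatever the team agrees to):

    # A minimal WiP-limit check; the limits and counts are assumed examples.
    def wip_violations(column_counts, wip_limits):
        """Return the columns where work has piled up past the team's agreed limit."""
        return {col: count for col, count in column_counts.items()
                if count > wip_limits.get(col, float("inf"))}

    today = {"code": 2, "animation": 1, "test": 3}   # features per column right now
    limits = {"code": 2, "animation": 2, "test": 2}  # the team's agreed WiP limits

    print(wip_violations(today, limits))  # -> {'test': 3}: time for non-testers to help test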

“This looks like Kanban!  Why not just use Kanban?”
Some of these ideas come from Kanban (the WiP limits, for example), but it’s not Kanban by-the-book.  While Kanban by-the-book is great for some work (work that is less exploratory, or that follows a predictable flow of hand-offs from one discipline to the next), the Feature Board is better for teams working on unpredictable new features and “first-time” content.  Feature Boards fit best within a time-boxed iteration that has a lot of “swarming”: an unpredictable flow of work between disciplines, where a feature bounces around and iterates unpredictably before the fun is discovered.

Own It

The bottom line is that I don’t care what label you use (Kanban, Scrum, etc).  You need to find what works regardless of where it comes from and keep finding ways of making it work better.  That’s the agile mindset.

Friday, April 04, 2014

Situational Leadership and the ScrumMaster

After I recently posted an article called "An Introduction to Situational Leadership", I was contacted by Randy Baker.  Randy lived nearby and had worked for 20 years with Dr. Paul Hersey, the developer of the original Situational Leadership (SL) model.

In the original article, I shared an illustration of the levels of team maturity:

I described to Randy how the ScrumMaster role is meant to help teams reach higher levels of maturity by directing at first and then delegating more over time.  Randy then drew the following diagram (it was, literally, on a Starbucks napkin, so I've redrawn it and added it to the diagram above):

The diagram shows the relationship between a leader in the SL model and the team, based on the team's maturity.  This diagram resonates because it matches the formations you'd observe in a Daily Scrum based on the maturity of the Scrum Team.

Scrum attempts to bootstrap a Scrum Team into the "high supportive behavior" side (top half).  Very often a studio's culture will be more directive going in, and you'll initially find teams reporting to the ScrumMaster, who is running the meeting (the "Coaching" quadrant in the upper right) at the center of a group of developers.  Over time, a good ScrumMaster will improve their facilitation skills and support (upper left quadrant) the team as an equal member, standing as a part of the team's circle.

As a Scrum Team develops their maturity and becomes more self-organizing, the ScrumMaster will delegate more of their day-to-day duties, but always observe and support the team (lower left quadrant).

Any team that finds themselves in the lower right quadrant is not "doing Scrum" yet.

Thursday, April 03, 2014

The Stages of Agile Success

Achieving the full benefit of agile requires a long-term commitment to change.  Applying the basic practices, such as answering three questions for 15 minutes a day, is simple, but adopting the mindset of self-organization, continuous improvement and building a culture of engagement, trust and respect is challenging.

The Three Stages of Agile Adoption

Over the past decade, I’ve witnessed common patterns of successful agile adoption and have grouped them into three stages[1], shown below, which I call the apprentice, journeyman and mastery stages.


When speaking about agile, we're talking about the agile values and principles.  How these are embodied in your day-to-day practices depends on your culture and the framework for your process.  Scrum is one such popular framework, Kanban another.  These frameworks are great scripts for starting agile, but over time teams change the practices and even blend them.  In this article, I'll mainly use Scrum terms, but your terms may vary.

Apprentice

Teams that are newly formed and/or are new to agile aren't expected to apply agile practices very well at first.  They need constant coaching as they struggle to figure out who does what and when[2].

Apprentice teams learn how to deliver features at a regular cadence and apply new roles such as Product Owner and ScrumMaster.

I prefer to see apprentice teams follow the practices, such as those of Scrum, “by the book”, because changes at the start of Scrum adoption are usually done to conceal problems Scrum exposes.

Effective Daily Stand-ups
One example of abandoning a practice too early is when teams skip the daily stand-up meeting because they feel an electronic tool is better for reporting task status.  Reporting task status is a part of the stand-up, but it is not the primary purpose and so something is lost.  The daily stand-up is meant to align a cross-discipline team to their daily priorities, revisit their commitment to their sprint goal and recommit their accountability to delivering quality features.  Thinking that it's all about task reporting is a vestige of a task-assignment culture and it takes a while to change this mindset.

Cross-Discipline Teams
Forming cross-discipline teams that can commit to goals is critical to early adoption.  Departmental or siloed organizations will resist the formation of cross-discipline teams.  Not only do department managers fear losing control, but developers themselves will resist being teamed with those outside their own discipline.  Yet, if cross-discipline teams don't form, the weight of dependencies and the inability to deliver a demo-able game every iteration will prevent most of the benefits found with agile.

Well-formed cross-discipline teams can take more ownership of their goals, which leads to more effective problem-solving and higher productivity.

Established Roles
Roles such as Product Owner and ScrumMaster are meant to balance the need to build the right features, in the right order and in the right way.  The roles are separated because they often come into conflict with one another and need to be balanced: there is a fine line between building features faster through improvements to productivity and building features faster by dropping quality.

Realigning existing roles with the new ones takes a while, and it takes even longer for teams to understand how the new roles are meant to function.

Journeyman

When agile teams create a definition of done and start altering their practices to achieve it every iteration, it's a signal that they are entering the Journeyman stage.  The following areas of development maturity are common with Journeyman teams.

Leaner Designs
Through close collaboration, Journeyman teams reduce handoffs of written design documents and concept art.  For example, instead of a dozen storyboards for a level, a concept artist will draw half as many and spend more time working directly with the level artists and designers.  In this way, all disciplines can more effectively collaborate and iterate on more emergent and higher quality levels than a one-way handoff of concept art would allow.

Faster Integrations
As cross-discipline teams iterate more, the inefficiencies of practices such as branch-and-merge and hand-testing everything are replaced by practices such as continuous integration, test servers, unit testing, etc.  These practices allow more iterations on up-to-date assets and code throughout the day.  The ability to iterate more frequently contributes to higher quality.

Better Testing
As mentioned above, faster integration requires improved testing practices, such as test servers.  Additionally, Journeyman teams will increasingly integrate members of QA onto their team.  This will lead to further refinements of the definition of done.

Release Planning
As journeyman teams improve their definition of done, agile estimation techniques can be employed to produce a more reliable forecast about the schedule, cost and scope.

Mastery

Mastery is a stage occupied by so-called hyper-productive teams.  These teams hold themselves accountable to their results, practices and organization.  For teams to enter this stage, they must have an established level of trust within the organization and be allowed to achieve higher levels of self-organization.

100% Customization
Sometimes, when I visit a team that has been effectively inspecting and adapting agile and Scrum practices for a while, it’s hard to say “they're doing Scrum” anymore.  They completely own their work, and take much pride in it.

Self-organization
The goal of Scrum is to leverage the power of small, self-organizing teams to maximize the value they create.  When teams and individuals are given more freedom to make decisions about their work, along with structure, coaching and mentoring, their work becomes far more focused and meaningful.

Studio culture and structure are often barriers to this “low constraint” form of self-organization.  Fortunately, as more studios emerge that demonstrate the value of this approach, the acceptance and proof of it will continue to grow.

Continuous Improvement
Following a set of prescribed practices isn’t the goal.  The goal is continuous improvement.  Markets and technologies change.  Individuals change.  Processes have to change as well.  There are no "best practices", because the word "best" implies there is nothing better.

It's Not Easy, but it's Rewarding

The agile state of mind challenges our beliefs and assumptions and insists that we continue to challenge them.  It flat-out defies the persistent superstition that we can treat creative people with the same tools that make machinery more productive: that adding more people, more hours or bigger planning documents will result in more output.  People are far more complex.

It makes us coach, listen, challenge, inspire, care, respect, and collaborate in ways we were never trained to do.  It's a lifetime challenge with no finish line in sight.


[1] These are not unlike the three stages of martial art mastery called “ShuHaRi”
[2] This is not inherent to Scrum teams; all new teams go through a settling phase.  Google “Tuckman’s stages of group development” for more on this.

Tuesday, April 01, 2014

Agile & Levels of Planning

Recall the last time you took a long driving trip.  You probably came up with a plan for the trip.  You used Google Maps to explore various routes and distances.  If the trip took a couple of days, you reserved a decent motel along the way.  Before the trip began, you filled the gas tank and printed a map.  You might even have planned to leave your home earlier or later to avoid rush hour traffic along the way.

When the day to travel arrived, you reset the odometer, set up the GPS and off you went.  However, the job of planning wasn’t complete.  There might be detours along the way.  Frequent glances at the odometer or GPS might inform you that you are ahead of or behind schedule.  Also, the map and GPS aren’t enough.  You monitor your speedometer from time to time and constantly look out the windshield and at the mirrors in case another driver does something unexpected.

We use all these different measures and methods to maintain a plan and respond to change for something as predictable as driving.  We need similar tools for the far more complex and uncertain endeavor of making a game!

Figure 1 - Planning Onions for Driving and Games

The figure above shows “the planning onion”, which represents the different layers of planning and how frequently each is revisited.  The inner layers of the onion are the more frequent inspection tools/cycles, while the outer layers encompass tools applied less frequently.

Layers of Planning
Let’s examine the layers of planning from the inside out using the driving example for comparison with how a Scrum-developed game would plan:

  • Daily - The team will meet every day in a daily Scrum to discuss the progress and issues which are affecting their Sprint goal.  This includes conversations about bugs, impediments, dependencies, or merely points about the quality of the game.  This is like looking out the windshield of the car during your trip.
  • Sprint - The team, product owner and domain experts forecast what is possible to accomplish in the next one to three weeks.  The duration of the sprint largely depends on how much certainty the team has with the work and how long the stakeholders can go without seeing something.  A team will forecast and track the work in any way they want for the sprint.  Some teams use hours, others days, while some teams will come up with their own units[1].  This compares to using the speedometer in your car to measure your velocity.
  • Release - The team, stakeholders, marketing and leads identify stretch goals for the game that they hope will be demonstrated in a “shippable build” at the end of the release.  Product owners might alter these goals during the release as quality, cost and true velocity emerge.  Forecasting and tracking during a release is usually done using metrics that size the features on the backlog (e.g. story points); a simple forecast is sketched after this list.  This level of planning and tracking is comparable to using an odometer during your drive.  The odometer gives you a more long-term view of velocity when miles are measured against larger units of time, such as hours and days.
  • Project - For a cross-country drive, you’ll likely have a map and a rough idea of your progress and stops along the way.  You may have a gas budget and have looked online to see where major construction delays might occur on your path.  If asked where you expect to be tomorrow night, you’d have an answer like “Denver”.  If asked where you would be at 2:15 PM tomorrow, you couldn’t be much more precise.  Long-term project planning should be like this.  It focuses on the major cost, schedule and feature goals and identifies significant areas of risk.  Similarly, long-term planning will break out epics and define some constraints based on experience.  An example for “online multiplayer gameplay” might be:
    • Cost: 10 people for six months to implement and 20 people for six months to produce content.
    • Schedule: Starting in June, finishing in 12 months.
    • Risk: Online technology risks and core gameplay risks.
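
To illustrate the story-point forecasting mentioned in the Release layer above, here is a hedged sketch (the backlog size and velocities are made-up numbers, and many teams forecast with a velocity range rather than a single value):

    # A sketch of a velocity-based release forecast; all numbers are illustrative.
    import math

    def sprints_remaining(backlog_points, velocity_per_sprint):
        """How many sprints the remaining backlog represents at a given velocity."""
        return math.ceil(backlog_points / velocity_per_sprint)

    backlog = 240                # story points left on the release backlog
    observed = [18, 22, 20]      # points finished in each of the last few sprints

    low, high = min(observed), max(observed)
    print(f"Optimistic:  {sprints_remaining(backlog, high)} sprints")  # 240 / 22 -> 11
    print(f"Pessimistic: {sprints_remaining(backlog, low)} sprints")   # 240 / 18 -> 14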

Just as we don’t forecast our speed down every mile of road while planning a cross-country drive, a long project shouldn’t forecast what teams will be working on every day for a year or more[2].  Instead, as the planning horizon extends out, the metrics for planning become larger grained and less precise.  There are a few reasons for this:
Reason 1: The further out our plan goes, the more uncertainty there is with the details.
This is best illustrated with the cone of uncertainty shown in Figure 2:

Figure 2 - The Cone of Uncertainty

This figure illustrates that the further we are from shipping the game, the more uncertain its cost, scope and schedule.  Everyone who has shipped a game should recognize that the great concept art and ideas and plans expressed in the initial design documents usually don’t resemble the shipped game very closely.  This is not a bad thing.  This is the nature of creating complex games that have a subjective and emotional dimension.  We simply can’t “plan away” this complexity and uncertainty.  This doesn’t mean we can’t plan.  It means we need to plan at a level appropriate to the level of certainty that exists.

Reason 2: Simply breaking a long-term plan down into finer details won’t give us more certainty; it will give us less accuracy and waste our time in the process.
This is the hardest point to convince people of.  The assumption is that a 300-page design document creates twice as much certainty as a 150-page design document.  When I worked in the defense industry, this assumption led me to create a 40-pound specification document for an Air Force contract.  The Air Force wanted so much demonstrated certainty from their contractors that they received documents so big they couldn’t read them[3].

However, numerous studies have shown[4] that uncertainty is not reduced in proportion to planning effort and that, beyond a certain point, the attempt to plan away uncertainty with more effort (documents and spreadsheets) produces forecasts with less accuracy (see Figure 3).  The reason a bigger plan creates less accuracy comes down to human nature.  It’s in our nature to see a big document and turn off our critical thinking, just as the Air Force did when they saw our 40-pound document.  Had they read that document with a critical eye, they would have quickly found out that it was a rushed cut-and-paste job by a junior engineer.  Instead, they simply weighed it and awarded us the contract.


Figure 3 - The Accuracy of Planning Based on Effort

The tools that we choose, based on where we are in the planning onion, help us stay in the ideal range of precision, which gives us the best accuracy without wasted effort.

[1] I’ve seen some teams estimate in “NUTS”, which stands for “nebulous units of time”.
[2] Although I’ve seen some producers try to do this!
[3] Ever wonder why new fighter aircraft are years late and billions over budget?
[4] One of many sources: http://tinyurl.com/leasydl