Saturday, March 23, 2013

Sprint commitments and forecasts

There has been much debate about whether a team, at the start of a sprint, commits to a sprint goal, or merely forecasts what they currently understand will be completed by the end of a sprint.

It's both.  In sprint planning, the team creates an initial sprint backlog, which is a forecast of the tasks, or bits of work, that they feel represent the best path to achieving the sprint goal.  The form this takes is up to them (hours, days, thrown chicken bone patterns, etc.).  They will refine how they create the backlog over time to improve the value of their forecasts.

The commitment part is more about a commitment to do their best to achieve the goal while maintaining quality.

The problem is that very often this commitment comes into conflict with the initial forecast.  For example, one time I estimated it would take two days to implement drift-racing physics (with a handbrake control) into our vehicle dynamics model.  I was able to do this, but it took another week to make it "fun", much of that time sitting next to a designer.  This couldn't have been predicted and we could have stopped after two days and said "sprint goal achieved", but was it really?

At which point do we say, "it's good enough, time to move on"?  That can't come from sprint planning.  It has to come from the daily conversation with the team (including the product owner).  Sometimes this results in the forecast growing and the team delivering a part of the goal that meets the quality bar.

This definition can scare managers who first hear about it, and it's where they and teams struggle at first.  The struggle often comes from a culture that isn't prepared to trust developers to judge or achieve quality on their own, and from teams' inexperience with being given that control.  So the forecast becomes the commitment, and the teams focus on making the hours look good rather than the game.  It takes time to establish the balance.

A commitment to quality at the expense of the forecast is the correct choice.  It's very easy to cut quality to look good on paper, but it will bite you in the end.  This doesn't mean we pursue the highest possible quality at all costs.  Quality has to be arbitrated by execution and measurement, and balanced with the needs of the customer.

My favorite example of "quality gone wild" comes from another driving game.  As the prototypical product owner, I encountered an artist modeling window flower boxes throughout the city players were to race in.  These required thousands of polygons and detailed textures to render.  The flower boxes were beautiful and added much color, but given the cost of creating and rendering them, they couldn't be justified, especially from the point of view of the player, who would be passing them at over 90 MPH.

So, we killed the window boxes, but it was a good lesson on our team's path to learning how to build "95 MPH art".

Saturday, March 09, 2013

Agile in Embedded Software Development

In my spare time, I build various small devices using Arduino hardware or help my sons create small games.  I enjoy building devices because I had a background in hardware development as well as software development before I became a full-time game developer 20 years ago.

Embedded development benefits as much from agile practices as pure-software development does.  I recently shared some tips on a few of those practices:

  • Find ways to iterate the hardware as well as the software.  We found that reducing the turnaround time between hardware revisions and software bring-up paid dividends despite the cost of additional hardware development.  More breadboarding/prototyping was a big benefit.
  • Find ways to implement unit testing of the hardware as it's brought up and incrementally improved.  Using an example device that controls a light's brightness: have a test that sends brightness commands to the hardware and allows someone to verify that the hardware performs as expected at each level.  Automating this is nice, but not always possible.
  • Find ways to ensure that interfaces are established, communicated, and, if changed, easily and quickly re-communicated between hardware and software developers.  This is usually not a big problem on small teams, but it is with larger ones.  So, for example, if hardware changes the brightness control from an analog interface to a digital one, the change should be reflected in the code and tested quickly.
  • Encourage hardware and software engineers to overlap as much as possible.  I like the phrase "generalizing specialists".  One tip: don't play paintball as a team building exercise.  We did that.  The hardware engineers teamed up, figured out how to increase the shot velocity of their paintball markers, and gave all of us software engineers painful welts. ;)
  • If you have any sensors, motor controllers, transmitters, receivers, etc. that have to interface with the real, noisy world, test these subsystems as early as possible in the target environment.  One time we went out to sea for the first test of an underwater modem that we had only simulated in Matlab and an enclosed water tank.  The temperature inversion layer, multipath, and Doppler effects of the actual ocean environment showed that we were much farther from completion than we thought.  It was a bad day.