Saturday, May 06, 2006

Great Dr. Dobb's Article

I read this great article on selling XP to traditional (risk-cautious) organizations. It addresses some of the reality vs. rhetoric out there. What really caught my eye was the cost-of-change (CoC) curve on page 2. I've always thought of the XP CoC curve over the entire project cycle vs. the waterfall CoC curve. What he shows here simplifies that to a single curve whose x-axis is not the project timeframe but the "length of feedback cycle." Under the heading "Comparing the cost-of-change 'curves,'" he writes: "The agile and traditional cost-of-change curves are identical. The difference is that agilists follow techniques that keep us at the low-cost end of the curve."

I like that it applies the CoC curve to all of Agile, not just XP.
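To make that single-curve idea concrete, here is a minimal sketch (my own illustration, not from the article) that treats cost of change as a function of feedback-cycle length; the exponential growth rate is an assumption, and the only difference between the "agile" and "waterfall" data points is where they sit on the same curve:

    import math

    def cost_of_change(feedback_days, base_cost=1.0, growth=0.02):
        """Illustrative cost to fix a change, as a function of how long we wait
        for feedback on it. Exponential growth is an assumption, not a measurement."""
        return base_cost * math.exp(growth * feedback_days)

    # Same curve, different positions on it:
    print(f"1-day feedback cycle:   {cost_of_change(1):.1f}x base cost")    # short agile loop
    print(f"180-day feedback cycle: {cost_of_change(180):.1f}x base cost")  # long waterfall loop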

2 comments:

Anonymous said...

Hi Clinton,

I've found that another potential lever to use when attempting to "infect" a traditional organization is the earned value chart. Agile types often refer to this as a burn-up chart.

An earned value chart, if done honestly, really illustrates the differences between an agile and not-so-agile organization. In a waterfall project, earned value is flatlined during analysis and design and only begins to climb during implementation. That is, it only begins to climb if you ignore that most of the so-called completed work isn't really tested yet. When testing begins in earnest, you'll see earned value flatline again, or even drop, as defects are uncovered.

In contrast, an earned value chart for an agile project should show a consistent rate of earned value, and that value should start to show up after the first iteration. And since our earned value is running, tested functions (Ron Jeffries' term), it's not an illusion.
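A minimal sketch of the two burn-up shapes described above, with invented per-period numbers purely for illustration:

    from itertools import accumulate

    # Invented per-period earned value, just to illustrate the shapes.
    waterfall = [0, 0, 0, 0, 10, 25, 25, 20, -5, 15]  # flat through analysis/design, dips when testing finds defects
    agile     = [10] * 10                              # running, tested functions delivered every iteration

    for name, per_period in (("waterfall", waterfall), ("agile", agile)):
        burn_up = list(accumulate(per_period))         # cumulative earned value per period
        print(f"{name:>9}: {burn_up}")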

Clinton Keith said...

Interesting. I've heard of it and have always wanted to have one. The release burndowns (from Mike Cohn's planning book) are valuable as well.

The "ideal" burn-up would be "fun" or the dollar value you'd have to put on the game to sell a million, etc. These are difficult of course.

The main problem I have with the FDD-style approach of counting functions is that function counts often don't show game value. The game with the most functions isn't necessarily the best. Also, as with counting SLOC, people on the team end up gaming the system, even if unconsciously. As you point out, it has to be done honestly, but one of Ken's sayings is that "people act how they perceive they are measured." I want to find the best way to measure the commercial success potential of the game.