

Lean Tools: Measurements

08.06.2012

The last two Lean tools we will explore are related to Systems Thinking: concretely, a way of modeling your team or company as a whole, without the usual disaggregation into tasks and departments.

Systems thinking is more widespread in other kinds of enterprises, such as factories: in industrial process engineering you can even run simulations of how material moves through a plant and is transformed (there is plant simulation software that does this for you). These kinds of models are handy when thousands of euros (or dollars) are tied up in inventory and materials, and even improvements of a few percentage points are significant and justify the investment.

But we can think of a development team or a company as a whole system too: stories and tasks move from specification and analysis towards development, testing and deployment. Therefore we try to optimize the whole value stream and not just the development or requirements black box.

Patterns

The Poppendiecks cite a few (anti-)patterns that you can encounter in complex systems:

Limits to growth are common in a system that has simply evolved without control. Any team can only handle a few stories a week before defining a process (even an informal one) and starting to improve it.

The Theory of Constraints has as one of its goals identifying the bottleneck in the system that turns user stories into money. Do you have too few testers? Does a release take too much time? The theory and its applications are a large topic by themselves (a toy sketch of the bottleneck hunt follows this list).

Shifting the burden is an expression that stands for optimizing one's own work by sucking resources out of other people or by causing them problems (even with good intentions, and without noticing). Shifting the burden points to the presence of suboptimization: the team is maximizing the features coded or analyzed, but not the subset sold to a customer.

5 Whys sessions are group meetings where everyone involved in a failure or a problem is present: their goal is to find the root cause of the problem and eradicate it instead of making a local fix.
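As a toy illustration of the bottleneck hunt, here is a minimal sketch in Python (the stage names and throughput numbers are invented for illustration, not taken from the Poppendiecks):

```python
# Toy value stream: stories per week each stage can process.
# Stage names and numbers are made up for illustration.
stages = {
    "analysis": 10,
    "development": 6,
    "testing": 3,   # too few testers?
    "deployment": 8,
}

# The whole system can never deliver faster than its slowest stage.
bottleneck = min(stages, key=stages.get)
throughput = stages[bottleneck]

print(f"Bottleneck: {bottleneck} ({throughput} stories/week)")
print(f"System throughput is capped at {throughput} stories/week,")
print("no matter how much the other stages improve.")
```

The point the sketch makes is the classic one from the Theory of Constraints: improving any stage other than testing here buys you nothing until the constraint itself is elevated.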

Measurements

Adapting measurements to a high-enough context is a tool to avoid suboptimization. For example, suppose that I pay you a little for each line of code you write in my team; in no time you will manage (through local maximization) to flood the source code repository with duplicated code and commented lines.

We can try to improve the quality of the measurement by eliminating duplication from the count, or by not counting comments; yet it is already almost impossible for static analysis to find all dead code, let alone measure the importance of the lines you commit.
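To see how crude even the "improved" measurement is, here is a minimal sketch of a line counter that skips blank lines and comments (assuming Python-style # comments); note that duplicated and dead code still count just fine:

```python
def count_effective_lines(source: str) -> int:
    """Count lines that are neither blank nor pure # comments.

    A deliberately naive metric: duplicated code, dead code and
    trivially padded lines all still count, so it is easy to game.
    """
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

sample = """
# a comment does not count
x = 1
x = 1  # duplicated on purpose: still counts
"""
print(count_effective_lines(sample))  # -> 2
```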

Masonry-style measurements (counting lines of code like bricks laid) don't cut it here: to achieve global optimization you need to introduce higher-level measurements. Performance contracts tie bonuses in developers' salaries to operational results (how much money the product makes) or to higher-level goals like internal quality objectives (code coverage is raised to X%).

Plainly speaking, one measurement of quality can be the number of bugs opened after each release. The amount of rework needed for stories is also difficult to cheat, as reworked stories are visibly brought back from the dead on your Scrum board.

In fact, Scrum and Kanban boards try to visualize the whole system under the team's control, tracking stories from requirements gathering and estimation to release or deployment. If used honestly and consistently, these tools bring up problems (but they do not fix them: that's your job.) Their focus is on getting stories out by passing through all the stages; for example, in my team I am responsible for bringing a story to completion in the last deployment column. The story doesn't pass to someone else after coding or testing, and a cross-functional team is a prerequisite for this system to work.

Yet we have to be careful not to rely on measurements as the sole force that drives our team: I assure you that, with the right (actually wrong) incentives to write tests, code coverage can be raised almost without limit without actually testing anything. Every metric can be gamed.
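As a concrete illustration of that claim, here is a (deliberately bad) test in the style of Python's unittest: it executes the code under test, so a coverage tool reports every line as covered, yet it asserts nothing and can never fail:

```python
import unittest

def apply_discount(price, percent):
    # Code under test: contains an obvious bug (adds instead of subtracts).
    return price + price * percent / 100

class TestApplyDiscount(unittest.TestCase):
    def test_discount(self):
        # Executes every line, so coverage reports 100%...
        apply_discount(100, 20)
        # ...but there is no assertion, so the bug goes unnoticed.
        # An honest test would be:
        # self.assertEqual(80, apply_discount(100, 20))

if __name__ == "__main__":
    unittest.main()
```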

Information measurements

A simple way to avoid suboptimization is to aggregate performance measurements. As an example, the Poppendiecks suggest counting the bugs opened for a feature, not for the work of each person.

Responsibility for a story is empowering, but blame is not: if you shift the burden and make the others do the hard work or commit the errors, you should not be considered the winner of the game. You should only win if the whole team is winning: therefore, aggregate metrics to reflect at least the performance of a team and not of a person.
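A minimal sketch of the idea, using a hypothetical bug-tracker extract: the same raw data can be rolled up by author (an invitation to blame) or by feature (information about the product):

```python
from collections import Counter

# Hypothetical bug-tracker extract: (feature, author) per opened bug.
bugs = [
    ("checkout", "alice"),
    ("checkout", "bob"),
    ("checkout", "alice"),
    ("search", "carol"),
]

per_person = Counter(author for _, author in bugs)
per_feature = Counter(feature for feature, _ in bugs)

print(per_person)   # blame-oriented: {'alice': 2, 'bob': 1, 'carol': 1}
print(per_feature)  # team-oriented:  {'checkout': 3, 'search': 1}
```

The per-feature view tells you where the product hurts; the per-person view only tells you whom to shout at.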

Consider this example: if you incur technical debt and break the build by introducing a hack to make your feature work, you're slowing down the rest of the team at the same time (who have to figure out why the build isn't working, how to fix it, and whether their integration problems are the fault of their own new code or not.)

Aggregation instead makes the team's performance clear. The incentive is to do the right thing to lower the bug count or increase the value of the features that have been shipped, not to just write more lines of code or hack in static calls. Aggregation provides you with information to answer questions quite different from who authored a guilty commit: has our debt increased or decreased? What is our average lead time? Are we chronically underestimating the time it takes to deploy? I argue that these are more interesting issues than finding the culprit.
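Average lead time, for instance, falls straight out of the board's data; here is a minimal sketch with hypothetical start and done dates:

```python
from datetime import date

# Hypothetical board data: when each story entered and left the system.
stories = [
    (date(2012, 7, 2), date(2012, 7, 9)),
    (date(2012, 7, 3), date(2012, 7, 17)),
    (date(2012, 7, 10), date(2012, 7, 16)),
]

lead_times = [(done - started).days for started, done in stories]
average = sum(lead_times) / len(lead_times)
print(f"Average lead time: {average:.1f} days")  # -> 9.0 days
```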

Gaming measurements

Going back to cheating, consider for example story points: they measure the effort necessary to build a feature. In theory, you could use the story points completed to get a rough benchmark of productivity.

However, the moment you start to abuse a tool developed by the team for the team's internal usage, you lose its benefits. It's easy to game a measurement system and optimize for the metric alone rather than for the overall goals it was supposed to serve.

In our case, the team members would simply pad their estimates by inflating story points. Before you start using story points as a productivity metric, you can forecast delivery based on velocity; after that, the numbers mean nothing and you're back to square one.
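Velocity is just completed story points per sprint, which makes the gaming obvious: inflate the estimates and velocity "improves" without a single extra feature shipped. A minimal sketch with invented numbers:

```python
# Story points completed per sprint (invented numbers).
honest_sprints = [21, 19, 23]
# Same work, every estimate padded by ~50% once points became a KPI.
padded_sprints = [32, 29, 34]

def velocity(sprints):
    return sum(sprints) / len(sprints)

print(velocity(honest_sprints))  # 21.0
print(velocity(padded_sprints))  # ~31.7 -- "productivity" up, output unchanged
```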

Thus I think that the members of an Agile team must share the same objective (oriented towards delivering working software and providing value) and can't be trained into the right behavior with metrics alone. These are only Lean tools, not a recipe for a way of life...

