How to measure quality
Everyone has heard horror stories about pointy-haired bosses counting lines of source code to track the progress of a project. We roll our eyes and laugh at their stupidity. But before you laugh too much, you might want to find out whether you’re really any better.
Most of what software projects measure is not what we care about. Not what we really care about. Do you really care about the number of coding standard violations in your code? About compiler warnings? About test coverage? About cyclomatic complexity?
No, you only care about the presumed effects of these metrics. What we really care about are things like how much it costs to change the system, and how likely we are to introduce bugs. Hopefully, most of the numbers you collect with your fancy tool will help predict what you really care about. But tools can be fooled.
You can have 100% test coverage without asserting a single thing about your code. And you can have maintainable, bug-free code with super-high cyclomatic complexity and not a single line of code comments.
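For example, a hypothetical JUnit sketch like the one below (the PriceCalculator class and its numbers are made up for illustration) executes every line and branch of the code it touches, so a coverage tool reports 100% coverage, yet it never checks a single result:

```java
import org.junit.Test;

// Hypothetical class under test, just for illustration
class PriceCalculator {
    int calculatePrice(int basePrice, boolean discounted) {
        if (discounted) {
            return basePrice / 2;
        }
        return basePrice;
    }
}

public class PriceCalculatorTest {
    @Test
    public void coversEverythingButAssertsNothing() {
        PriceCalculator calculator = new PriceCalculator();
        // Both branches of calculatePrice are executed, so a coverage tool
        // reports 100% line and branch coverage for PriceCalculator...
        calculator.calculatePrice(10, true);
        calculator.calculatePrice(10, false);
        // ...but without a single assertion, any bug in calculatePrice
        // would pass this test unnoticed.
    }
}
```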
“Leading indicator” is a term investors use for a measurement that changes before the economy changes. A company’s stock price is an example of a leading indicator.
On the other hand, the term “lagging indicator” is used for a measurement that changes after the economic reality changes. A company’s reported earnings are an example of a lagging indicator.
Leading indicators, like stock prices and code coverage, are useful because you can get hold of them early. However, they are not the real thing we’re after. The real thing we’re after is often a lagging indicator, like reported earnings or code maintainability.
Use leading indicators on your project to identify trouble areas early. But don’t be religious about them. Use lagging indicators to prove that you’ve met your commitment to quality. If your commitment to quality is reduced to code coverage, cyclomatic complexity or compiler warnings, you’re no better than the pointy-haired boss counting lines of code.
Comments:
Anders Schau Knatten - Sep 24, 2010
Interesting post Johannes! I’m looking forward to your follow-up on which lagging indicators you have found useful, and how to set yourself up to measure them. :)
Johannes Brodwall - Sep 24, 2010
I agree totally: Leading indicators are useful insofar as they can give you advance warning of lagging indicators. They are only dangerous when they become a goal in themselves.
Johannes Brodwall - Sep 24, 2010
Lagging indicators are generally not measured directly from the code, but from the process around it.
A good example is the percentage of effort spent in an iteration fixing bugs from previous iterations. To measure this, you have to have people enter this time in the project’s time reporting software.
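As a rough sketch of that calculation (the categories and hours below are made up for illustration):

```java
import java.util.Map;

public class BugFixEffort {
    public static void main(String[] args) {
        // Hours reported for one iteration, by category (illustrative numbers)
        Map<String, Double> reportedHours = Map.of(
                "new features", 60.0,
                "bug fixes from previous iterations", 12.0,
                "meetings and other", 8.0);

        double total = reportedHours.values().stream()
                .mapToDouble(Double::doubleValue)
                .sum();
        double bugFixing = reportedHours.get("bug fixes from previous iterations");

        // 12 of 80 hours: 15% of the iteration went to fixing old bugs
        System.out.printf("Bug-fix effort: %.0f%%%n", 100 * bugFixing / total);
    }
}
```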
[Bjørn Nordlund] - Sep 24, 2010
Agree! A typical example I’ve seen a lot is goals to increase test coverage by some percentage during the next iterations.
Johannes Brodwall - Sep 24, 2010
Here is an interesting video:
http://blog.thecodewhisperer.com/post/1178516821/how-do-you-believe-mccabe-complexity-helps
McCabe Cyclomatic Complexity is a leading indicator. The experts interviewed on the video have experienced that it correlates very well with bug count (a lagging indicator).
If your experience matches theirs, cyclomatic complexity could be a good leading indicator for you to use. Otherwise, don’t use it.
Morten Andersen-Gott - Sep 24, 2010
A bit of an optimistic view. What about when lagging indicators prove that the commitment to quality has _not_ been met…? ;-)
Other than that, I did like the way you used the terms lagging and leading indicators on code quality.