I went to Simula Research Lab’s seminar on estimation today. My conclusion is that despite many years of practice and research, we don’t know how to make estimates for even moderately sized projects correct to within an order of magnitude. I think a new approach is needed!
I wish I could’ve said that I learned something fundamental, but instead, I got my preconceptions confirmed. I think I understood the underlying causes better, however.
I will start by talking about the last of the presentations, Magne Jørgensen’s presentation about choosing between intuitive (“gut-oriented”) estimation techniques and analytic (“head-oriented”) techniques.
What fascinated me most about Magne’s talk was when he pointed out that people have a strong dichotomy between their rational and their intuitive thinking processes. Evolutionary research seems to point to the intuitive part of our mind as being much older than the analytic part. The interesting thing is that we trust the methods of our analytic mind much more than the methods of our intuition. However: we trust the results of our intuitive thinking much more. A result of this is that when using a model for estimating work, we will often tweak the inputs to the “rational” process until the results match our intuitive expectations. Even when we’re being rational, we’re not!
I was reminded of Malcolm Gladwell’s book “Blink”, where he talks about the powers of intuitive “snap judgment” decisions, and how it seems like we’re pretty good at making quick, intuitive decisions when there’s a lot of confusing information. However, as Magne pointed out, our intuitive reasoning is vulnerable to certain biases. Some of these biases lead to systematic errors, and some of them we’re able to eliminate by being aware of them.
Some biases that are pretty common were covered by other talks or by the short survey that was given in the seminar. These were: the anchoring effect, preferring a close analogy over a more complete data set when looking for relevant comparisons, and the tendency to weight irrelevant or unimportant information much more strongly than we should.
Magne pointed out that in some fields, mathematical models have been developed to the point where they are more effective than nearly all experts. This holds true for a lot of diagnostics and financial forecasting. (I remember a “model” for predicting heart attacks that used only four of the nearly twenty variables that experts considered important, and that was considerably more reliable than nearly all experts.)
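To illustrate the kind of model this is, here is a minimal sketch in Python. The variables, thresholds, and scoring rule are entirely my own invention for illustration, not the actual heart-attack model: the point is only that such a model is a trivially simple rule over a handful of inputs, deliberately ignoring most of the information an expert would weigh.

```python
# Hypothetical decision rule in the spirit of simple predictive models:
# count how many of a few strong risk indicators are present, and flag
# the patient as high risk if two or more apply. All names and cutoffs
# here are made up for illustration.

def high_risk(ecg_abnormal, systolic_bp, fluid_in_lungs, unstable_angina):
    """Classify a patient as high risk from just four inputs."""
    score = sum([
        ecg_abnormal,           # abnormal ECG reading
        systolic_bp < 100,      # low blood pressure (hypothetical cutoff)
        fluid_in_lungs,         # fluid detected in the lungs
        unstable_angina,        # unstable angina present
    ])
    return score >= 2

print(high_risk(True, 95, False, False))   # two indicators -> True
print(high_risk(False, 130, False, True))  # one indicator  -> False
```

A model this simple is easy to audit and impossible to “tweak until it matches your gut”, which may be exactly why such models can beat expert intuition.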
It seems to me, however, that software project estimation will never be one of these areas. All the areas that have successfully developed mathematical predictive models operate with a fairly simple and closed set of inputs (e.g. symptoms) and outputs (e.g. which illness the patient has). The set of inputs that we know must be considered when we estimate a project is large and extremely hard to quantify. Two of the most important factors are the expertise of the project team and the size of the task, and we have no good objective measure of either. The output is a number of hours that can vary wildly, and indeed, we often misestimate by a factor of five or ten.
Most of the research material that Simula presented was Stein Grimstad’s research on the effect of distractions on estimates. The results were very disheartening: the same people will vary their estimates of the same task by 70% if asked to estimate it again a while later, and the variation can go either up or down. Irrelevant information has a large effect on estimates: two groups asked to estimate the same task, where one group is given extra irrelevant information, will give measurably different estimates, and the estimates of the group with the irrelevant information will show greater internal variation. This effect persists if the group is made aware of the irrelevant information. It persists if the group is asked to highlight the relevant information. There is just a small drop in the effect if they are asked to black out all irrelevant information. A similar effect was even present when the specification was formatted to take up more pages with the exact same text.
Stein’s research and that of Nils Christian Haugen also pointed to a few things that could be done to increase your odds of succeeding. Having a separate group black out irrelevant information in a specification before giving it to the estimators had a positive effect. Splitting up the task in a structured way seems to give more correct estimates. Some initial studies also show a good effect from structured group estimation (planning poker or wideband delphi), from estimating the relative size of tasks instead of the expected time, and from posting progress information on an information radiator.

However, these are all micro-estimation techniques. No solution was proposed for the macro problem of estimating the cost of a multi-person-year project. My experience of reading requirement specifications is that there is often more irrelevant than relevant information. And my experience is that most project estimates are grossly wrong (unless you take the Dilbert approach: the estimates were right, the project members were just too lazy).

I don’t think the solution lies in better estimation, but in realizing that estimates are always going to be extremely inaccurate. We need to use project management techniques that can survive in this setting. Personally, I would like to further pursue a process based around building to budget: “We don’t know what realizing this specification will cost. But we do know that we can make something interesting within a budget of X. And we can prove it by delivering real progress before we’ve spent more than 10% of that.” Researchers have been searching for the holy grail of how to improve estimation for years. Maybe they should answer another question: How can we manage projects despite poor estimates?
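The structured group estimation mentioned above (planning poker) can be sketched roughly as follows. This is a minimal Python sketch under my own assumptions: the card deck, the convergence rule (largest estimate at most twice the smallest), and all function names are illustrative, not anything presented at the seminar.

```python
import statistics

# Typical planning-poker deck: estimators pick a card independently,
# all cards are revealed at once, outliers explain their reasoning,
# and the group re-votes until the spread is acceptable.
CARDS = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_card(value):
    """Snap a value to the closest card in the deck."""
    return min(CARDS, key=lambda c: abs(c - value))

def poker_round(estimates):
    """One voting round: return the consensus card if the spread is
    small enough (hypothetical rule: max <= 2 * min), otherwise None
    to signal that discussion and a re-vote are needed."""
    if max(estimates) <= 2 * min(estimates):
        return nearest_card(statistics.median(estimates))
    return None

# Round 1: wide spread, no consensus -> the outlier explains, group re-votes
print(poker_round([3, 5, 20]))  # None
# Round 2: after discussion the outlier has moved -> consensus
print(poker_round([5, 8, 8]))   # 8
```

The interesting property is that the independent first vote happens before anyone speaks, which is precisely what blunts the anchoring effect discussed earlier.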
Copyright © 2007 Johannes Brodwall. All Rights Reserved.