Making range estimates to express uncertainty is an improvement over discrete point estimates: we stop hiding assumptions inside a single number and can more easily focus on what we’re most uncertain about. Paying attention to the 90% Confidence Interval goal keeps us honest about our uncertainty. All this is good to have under our belt, but we still need some techniques for validating our confidence in our estimates.
To reduce uncertainty, frame the question
A good grounding in measurement scales is essential to knowing how to leverage what we do know into useful estimates. For example, knowing when and how to use ordinal scales helps us avoid false comparisons, and knowing how to use nominal scales gives us a way to move beyond basic quantitative estimates to make qualitative evaluations as well. Understanding how the arbitrary zero of interval scales works helps us understand why it’s so easy to make a mess of Story Points.
Let’s walk through some techniques for validating your confidence in your stated range estimates. These mental exercises are useful in gauging your uncertainty.
The Absurdity Test
When you have no idea what estimate to give for a task, start with an absurdly wide range, something like “between a day and a year.” If it seems absurd, that’s the point. Reduce the range until you reach the widest range that no longer seems ridiculous; that gives you a starting point. The idea is not to impress someone else with your estimate, but to gauge your uncertainty. Ending up with a wide range that reflects what you don’t know about the work is better than committing to an overly optimistic target because that’s what you think someone else thinks you ought to be thinking.
Imagine that you're wrong
Once you’ve stated your estimate range, before committing to it, assume for a moment that you’re wrong; that the actual value falls outside your range. Explain to yourself why you were wrong. This is an exercise where you’ll list the potential problems you might run into or things you’ve overlooked. This exercise helps you develop a better understanding of the work, by exploring risks and unknowns.
An Equivalent Bet
Another technique for validating your Confidence Interval is to test it against an equivalent bet. There’s a lot of research indicating that betting with imaginary money has almost the same emotional impact as betting with real money, so imagine a carnival prize wheel with a 10% slice marked in red and the remaining 90% in green. We’re going to test your confidence in your stated range against a spin of the wheel. Given a choice, which would you prefer: to win $1000 if the correct answer falls within your stated estimate range, or to spin the wheel for a chance to win $1000 if it lands in the green? Choosing the wheel is an indication that your stated range is not your 90% Confidence Interval. Ideally, you should be indifferent between your stated range and the wheel.
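The equivalent-bet comparison can be simulated. This is a rough sketch of my own (the function name and simulation setup are illustrative, not from the text): if your stated range really has a 90% chance of containing the answer, the average payout of betting on the range should match the average payout of the wheel.

```python
import random

def expected_payouts(p_range_correct, p_wheel=0.90, prize=1000,
                     trials=100_000, seed=42):
    """Simulate many rounds of the equivalent-bet test.

    p_range_correct is your honest probability that the actual value
    falls inside your stated range. Returns the average payout of
    betting on your range vs. betting on the wheel.
    """
    rng = random.Random(seed)
    range_total = sum(prize for _ in range(trials)
                      if rng.random() < p_range_correct)
    rng = random.Random(seed + 1)
    wheel_total = sum(prize for _ in range(trials)
                      if rng.random() < p_wheel)
    return range_total / trials, wheel_total / trials
```

If you only believe your range 60% of the time, the wheel pays noticeably more on average, which is exactly the intuition behind preferring to spin it.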
Test for Mid-Point Anchoring
Sometimes when people do range estimation, they start with a point estimate, what they consider the most likely value, and then add and subtract to produce the range. If they think the most likely value is two days, they state a range of one to three days. This introduces the risk of anchoring: once a number has been stated, it’s psychologically harder to move off of it. To test for mid-point anchoring, examine the upper and lower bounds of the range independently, treating them as separate binary questions. A 90% CI means there is a 5% chance the actual value falls above the upper bound and a 5% chance it falls below the lower bound. Are you 95% confident that the actual value will not fall above your upper bound? Then do the same for the lower bound: are you 95% confident that the actual value will not fall below it?
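The two bound-checks generalize to a batch calibration test once you have actuals to compare against past estimates. A hypothetical sketch (function and data names are assumed for illustration):

```python
def calibration_check(ranges, actuals):
    """ranges: list of (low, high) stated 90% CI estimates.
    actuals: the observed values for the same items.

    For a well-calibrated estimator, roughly 5% of actuals should
    fall above the upper bounds and roughly 5% below the lower bounds.
    """
    n = len(actuals)
    above = sum(1 for (low, high), actual in zip(ranges, actuals)
                if actual > high)
    below = sum(1 for (low, high), actual in zip(ranges, actuals)
                if actual < low)
    return above / n, below / n
```

A lopsided result, say many misses above the upper bound and none below, is a hint that mid-point anchoring (or optimism) is squeezing one side of your ranges.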
Run Arbitrage on Your Estimates
Running arbitrage on your estimates is an exciting team sport that at least helps to illustrate what we’re trying to get at with a 90% Confidence Interval. Once you’ve stated your estimate range, you offer to buy or sell contracts with your teammates on whether the actual value will fall above or below your stated range. If the stated range really is your 90% Confidence Interval, then, like a currency trader, you should be indifferent to whether you are buying or selling contracts.
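The arithmetic behind that indifference can be sketched as follows (the contract price and payout values are assumptions for illustration, not part of the exercise above): a contract that pays out when the actual lands outside your range should be worth about ten cents on the dollar if your range is truly a 90% CI, so at that price neither buying nor selling has an edge.

```python
def contract_ev(true_p_outside, price=0.10, payout=1.00):
    """Expected profit per contract that pays `payout` when the
    actual value lands outside the stated range, traded at `price`.

    Buyer's expected value: true_p_outside * payout - price.
    The seller's expected value is the negation. At a true 90% CI
    (true_p_outside = 0.10) both sides break even.
    """
    buy = true_p_outside * payout - price
    sell = -buy
    return buy, sell
```

If your honest miss rate is higher than 10%, buying the contract is profitable and your teammates running this market will happily take the other side, which is the signal that your stated range was too narrow.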
If these techniques seem useful, then you’ll appreciate this book, which develops all of this material and more in greater detail, with extensively researched supporting material demonstrating the foundation of estimation as a science:
If your estimation process is one where domain experts read a story summary and offer up a guess at how long it will take, it is a bit worse than not estimating at all, because it lends the appearance of professional credibility to an unreliable and ultimately arbitrary process. Compared to everything a software engineer has to master to lead in the workplace, the amount of effort needed to learn a scientific approach to estimation is no big thing, but the benefits are outstanding.
Let's agree to define productivity in terms of throughput. We can debate the meaning of productivity in terms of additional measurements of the business value of delivered work, but as Eliyahu Goldratt pointed out in his critique of the Balanced Scorecard, there is a virtue in simplicity. Throughput doesn’t answer all our questions about business value, but it is a sufficient metric in the context of evaluating the relationship between practices and productivity.
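A minimal sketch of throughput as work items completed per week (the function name and dates are hypothetical, chosen only to make the definition concrete):

```python
from datetime import date

def weekly_throughput(completion_dates, start, end):
    """Throughput: count of work items completed per week
    over the half-open window [start, end)."""
    weeks = (end - start).days / 7
    completed = sum(1 for d in completion_dates if start <= d < end)
    return completed / weeks
```

Note that nothing here weighs items by size or business value; that simplicity is the point of using throughput as the productivity metric.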