random .NET and web development musings

We all suck at estimating, regardless of how experienced we are. This is a fact you should accept, yet most of us are either ignorant of it or in denial. There are many ways we try to hide our inadequacies, mostly revolving around mathematical transformations of the form:

E’ = mE + c

I.e. make an arbitrary estimate (E), multiply it by some factor (m) and add a constant (c).

I’m not denying that there is some sense to this: you can spend considerable time and effort refining your favourite m value by tracking your velocity, holding regular retrospectives, performing reflective analyses and so on.
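
To make the refinement concrete, here is a minimal sketch of fitting m and c from a history of (estimate, actual) pairs with a simple least-squares regression. All the numbers and the helper name are invented for illustration; this is one way to track your ratio, not a prescription.

```python
def fit_correction(history):
    """Least-squares fit of actual = m * estimate + c over past tasks."""
    n = len(history)
    mean_e = sum(e for e, _ in history) / n
    mean_a = sum(a for _, a in history) / n
    cov = sum((e - mean_e) * (a - mean_a) for e, a in history)
    var = sum((e - mean_e) ** 2 for e, _ in history)
    m = cov / var           # slope: how much you tend to underestimate
    c = mean_a - m * mean_e  # intercept: fixed overhead per task
    return m, c

# Invented (estimate, actual) pairs in days from previous sprints
history = [(2, 3), (5, 8), (1, 2), (8, 13), (3, 5)]

m, c = fit_correction(history)
adjusted = m * 4 + c  # apply E' = mE + c to a raw 4-day estimate
```

With a history of consistent underestimates like this one, m comes out above 1, so the corrected estimate is larger than the raw one.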

This method alone, however, ignores significant mental “quirks” which affect the way you think and reason.

The effects I am talking about are:

  • The “halo” effect
  • Framing effects
  • Overconfidence
  • Attribute Substitution
  • Base-rate neglect
  • Anchoring

The “halo” effect

The “halo” effect is defined as “the influence of a global evaluation on evaluations of individual attributes”. What this means in the realm of software development estimation is that you are likely to estimate the individual parts of a project with a bias towards how you feel about the overall project.

If you’ve formed an opinion that overall the project will be easy, all your estimates for the component parts are likely to be lower than if you viewed the project as difficult (known as the “devil” effect).

Pro Tips:

  • Ignore prejudices
  • Judge tasks independently
  • Don’t “do the easy ones first”

Framing effects

Framing effects refer to the way our minds perceive data differently depending on how it is presented. For example, food which is “90% fat free” sounds much better than food described as “10% fat”.
When estimating tasks, we are very likely to bias our judgement based on how the requirements are presented. For example, requirements which are positively worded/presented and which sound easy/appealing are much more likely to receive lower estimates.

Pro Tips:

  • Has the way the requirements are worded affected your interpretation?
  • Are your judgements of a specific problem being clouded by its surroundings?

Overconfidence and Substitution

Little weight is given to the quality or quantity of evidence when we form subjective confidence in our opinions. Instead, our confidence depends largely on the quality of the story we can tell ourselves about the situation. This means we are very likely to be confident in an estimate if we have convinced ourselves that we know what we’re talking about. That may sound obvious, but the devil lies in the detail: do we really know what we are talking about? Our brains do not like doubt and uncertainty; we are much happier answering questions positively than negatively. When estimating a task, we are very likely to jump to a conclusion (and underestimate) if the task is familiar to us. How many times have you said “oh yeah, that’s simple, it will take X hours” without _really_ thinking it through? This is known as the mere-exposure effect.

This is where another problem creeps in: attribute substitution. When our brains are faced with a complex question, our subconscious often substitutes a more familiar, easier problem, and this frequently happens without us realising. This leads to misunderstandings of the problem domain and therefore to inaccurate estimates.

Pro Tips:

  • Ask yourself why you are confident
  • Are you biasing because of familiarity?
  • Have you really understood the problem?

Base Rate Neglect

Base rate neglect, or base rate bias, is the error of assessing the probability of something while failing to take into account the prior probability. I use the term here partly in that strict sense and partly in a more general sense.
When we estimate tasks, we often fail to account for the “surrounding” or “prior” cost of the task, such as the complicated merge that will be required after the change, the reliance on a third party delivering on time, the API documentation being adequate, etc.

Pro Tips:

  • Consider all the implications
  • What assumptions have you made? – Are they sensible? Really?
  • Refuse to estimate unknowns

Anchoring

Anchoring is an effect that causes you to bias your estimate based on estimates you’ve already seen or produced. If two developers are discussing an estimate and the first says “10 days”, the second developer is more likely to produce a number closer to “10 days” than if they hadn’t spoken previously. This is one of the main benefits of using planning poker. By avoiding the influence of others until you have produced your estimates, you are much more likely to get a broader range of estimate values.

You may think broader means worse, but this is not necessarily the case. If one dev thinks a task is “1 day” but another thinks it’s “10 days” – you’ve identified a problem. Either you have a huge skill disparity, or there has been a fundamental misunderstanding by one or both parties!
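
The independent-then-compare idea can be sketched in a few lines: gather each developer’s estimate privately, then flag any task whose spread suggests a misunderstanding worth discussing. The helper name, threshold and data below are all invented for illustration.

```python
def flag_disparities(estimates, ratio=3.0):
    """Return task names whose max/min estimate ratio meets or exceeds `ratio`."""
    flagged = []
    for task, values in estimates.items():
        if max(values) / min(values) >= ratio:
            flagged.append(task)
    return flagged

# Invented data: each developer estimated privately, in days
estimates = {
    "login page": [1, 1.5, 2],
    "report export": [1, 10, 8],  # huge spread -> talk it through
}

print(flag_disparities(estimates))  # -> ['report export']
```

The point is not the arithmetic but the process: the numbers must be collected before anyone speaks, otherwise anchoring collapses the spread and hides the disagreement.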

Pro tips:

  • Try to view each estimate in isolation; don’t let previous numbers skew future ones
  • Don’t confer with other estimators until you all have your own values, then discuss why they differ

Summary

When producing estimates, be aware of these biases and make a conscious effort to spot when you might be falling for them. Being aware that you are likely to be biased is the first step towards producing more accurate estimates; actually counteracting the biases in practice can be much harder ;)

Remember:

  • Estimate alone at first
  • Get 2nd (or more) opinions on estimates, but be careful not to cause framing or anchoring biases
  • Make a conscious effort to not misinterpret a requirement based on its wording
  • Be sure you’ve not jumped to conclusions because of familiarity of the problem
  • Review the estimates you produced last. Are they biased by the estimates you produced first?
  • Estimate each task in isolation. Don’t let your opinions of other tasks or of the whole project affect the individual parts

If you are interested in learning more about the psychology of decision making and biases and how you can make personal improvements (not just in development estimation) then I highly recommend you get your hands on a copy of Thinking, Fast and Slow by Daniel Kahneman.

23 COMMENTS

… Estimation in software project management is often an issue. This blog post discusses some psychological aspects of estimating in software development. It explains the following effects that impact our estimation activity: the “halo” effect, …

Jackson B.
April 13, 2012

I find the overestimation effect more common. Inevitably followed by Parkinson’s law.

PM Hut
April 15, 2012

Hi Andrew,

I would like to republish this post on PM Hut – I’m sure that a lot of project managers will appreciate it (especially those who constantly miss their estimates).

Please email me or contact me through the contact us form on the PM Hut ( http://www.pmhut.com ) website in case you’re OK with this.

Sanju
April 15, 2012

It gives an idea of why estimates usually fail. However, multiplying by some number to get another estimate is also not a professional way to move on, for a few reasons:
- It may be unrealistic from the customer’s point of view.
- There could be a higher positive variance.
- You may find it difficult to justify the rationale behind the estimation.

April 15, 2012

Yeah feel free to republish it. Please include a link back here citing me :)

April 15, 2012

@Sanjeev Adjusting estimates by a factor is a helpful tool. If you notice that you always underestimate by a factor of 1.5, it would be wise to multiply future estimates up by this. You should continually review how you do this by tracking your estimated/actual ratio. My comments at the start of the post were rather facetious, mainly because whilst blind estimate adjusting is useful, it doesn’t address the root cause of your bad estimates :)

Mathew Bukowicz
April 16, 2012

Nice post. Thanks for these great ideas on task estimation.

James
April 17, 2012

Thinking, Fast and Slow was indeed a great book.

April 17, 2012

@James ha, you’re on to me! It is indeed excellent :)

Will
April 17, 2012

Was just about to mention Daniel Kahneman’s “Thinking, Fast & Slow”. Deserves a credit in this article don’t you think?

http://www.amazon.co.uk/Thinking-Fast-Slow-Daniel-Kahneman/dp/1846140552

Ken Parmalee
April 17, 2012

I think the real problem (and quite possibly an unsolvable one) lies in the fact that we’re trying to apply an industrial principle (i.e. time = project size / rate of production) to what is still an artisan craft. To borrow an Adam Smith metaphor, we’re still making pins by hand, one at a time.
Until we can break software development down into simpler steps, our estimates will never move beyond “touchy-feely” techniques.

April 17, 2012

@Will – It wasn’t my only source, but fair point. I’ll collate my sources later and update :)

April 17, 2012

@Ken – Completely agree. I tried to keep this post narrow to just the problem of estimating, I’ve got a few others lined up for dealing with the wider problem and risks in s/w dev. Stay tuned :)


[...] muonlab » Why you suck at estimating – a lesson in psychology. [...]


[...] You should also take a look at these biases we have when estimating: Why you suck at estimating. Tagged agile, [...]

Stein-Bjarne Johansen
April 19, 2012

Very good article. I like “refuse to estimate unknowns”; our team uses it whenever required, and it puts the onus on our surroundings/customers/leaders to give us the tools and information to estimate properly.

raul
April 19, 2012

I feel that most of us face the big issue of effort estimates versus schedule at some point in our estimating careers. The effort, and the commercialisation of that effort, considers only the changes requested by the client. Obviously there are more implicit efforts that cannot be charged to the client, or that come as part of licensing, but ultimately the development team has to do them.

This effort count is fed into all the project management systems, which churn out corresponding release dates. There is no way we can map the actual effort against the proposed schedule, causing inadvertent slippages.

April 19, 2012

@Raul “There is no way which we can map the actual effort versus the proposed schedule” Keeping track of how long things took vs your estimates will help you improve your m and c values for the equation at the top of this post, which should help you be more accurate in the future.



[...] it simple, stupid, You ain’t gonna need it, Fake it till you make it, Naming is everything, You suck at estimating, …), are no exception. Should my code be self-documenting or literate? Should I be succinct [...]


[...] humans: we suck at estimating timeframes. The farther down the line we estimate, the worse we are at it (i.e. we may be able to estimate how [...]

Peter
March 28, 2014

The blog has clearly explained how overconfidence can stop our work. Thanks for sharing this information.
