Convenient Overconfidence

We are more overconfident on tasks we don’t actually expect to perform, and when we don’t expect to have to explain our evaluation to others.  On expecting to perform:

Participants made predictions about performance on tasks that they did or did not expect to complete.  In three experiments, participants in task-unexpected conditions were unrealistically optimistic: They overestimated how well they would perform, often by a large margin, and their predictions were not correlated with their performance. By contrast, participants assigned to task-expected conditions made predictions that were not only less optimistic but strikingly accurate. Consistent with predictions from construal level theory, data from a fourth experiment suggest that it is the uncertainty associated with hypothetical tasks, and not a lack of cognitive processing, that frees people to make optimistic prediction errors.  Unrealistic optimism, when it occurs, may be truly unrealistic; however, it may be less ubiquitous than has been previously suggested.

On expecting to explain:

Accountability … [is] the expectation to explain, justify, and defend one’s self-evaluations (grades on an essay) to another person ("audience"). Experiment 1 showed that accountability curtails self-enhancement. Experiment 2 ruled out audience concreteness and status as explanations for this effect. Experiment 3 demonstrated that accountability-induced self-enhancement reduction is due to identifiability. Experiment 4 documented that identifiability decreases self-enhancement because of evaluation expectancy and an accompanying focus on one’s weaknesses.

It is almost as if we at some level realize that our overconfidence is unrealistic.

  • Observing myself, it seems that the unrealistic forecasts come from a part of the brain called “fun”, whereas the realistic forecasts come from a part of the brain called “work”. These are two different mindsets used in different circumstances. When one is not bound by external factors, it is fun to contemplate long-shot possibilities, but when it’s time for work, one shifts to boring but more accurate assessments because circumstances demand it.

    In essence, taking all the realistic factors into account is boring, so it isn’t normally done for fun.

  • Bo

    I think that on some level we realize a lot more about rationality than one might think. My experiences of learning anti-bias and rationality techniques on OB have felt less like revelations and more like Socratic unforgettings, making explicit what I already knew implicitly.

    And from the viewpoint of evolutionary psychology, wouldn’t it be odd if we didn’t implicitly know quite a lot about rationality, since being able to make use of rationality is so useful?

    Maybe biases can be classified into just two kinds:
    1) Those that make us say biased things for our own benefit, but not actually act as if what we say is true. Example: overconfidence, thinking that social judgements are made rationally.
    2) Adaptations that actually are misplaced in today’s world, and make us act in irrational ways.

  • HH

    Isn’t this basic signaling again? As this blog has said many times before, the ability to deceive is a useful tool. Why not use a situation with no accountability to signal our superior skills? At the same time, if we actually have to perform, it makes sense to try to lower expectations so that our eventual success is that much more impressive. Remember:

    “Nothing is impossible for the man who doesn’t have to do it himself.”

  • JimmyH

    This is exactly Caplan’s point. People are more irrational when it’s cheap.

    It’s not that it’s worth doing more cognitive work when the stakes are high, it’s that people *actually try* to get it right. From an Evolutionary Psychology viewpoint, it’s not surprising.

    One of my favorite tricks is to ask myself “Would you bet on that at those odds?”. The answer should never be obvious (if it is, shift your odds such that it isn’t), so if you catch yourself thinking “WOAH! I never said anything about *betting*!”, then you’ve made a mistake. The goal is to give odds such that you’re willing to bet on either side if someone calls you out (unless conditioning on their willingness to bet changes the odds significantly).
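    JimmyH's indifference criterion has a simple arithmetic core: if your stated probability is honest, the fair odds it implies give zero expected value to *both* sides of the bet, so you should be willing to take either. A minimal sketch (function names and stake are illustrative, not from the comment):

    ```python
    def fair_odds(p):
        """Odds against the event implied by probability p (payout ratio)."""
        return (1 - p) / p

    def expected_value(p, odds_against, stake=1.0):
        """EV of betting *for* an event with probability p:
        win stake * odds_against with probability p, lose the stake otherwise."""
        return p * stake * odds_against - (1 - p) * stake

    p = 0.75                                   # your stated confidence
    odds = fair_odds(p)                        # 1/3: risk 3 to win 1
    ev_for = expected_value(p, odds)           # betting that it happens
    ev_against = expected_value(1 - p, 1 / odds)  # betting that it doesn't

    # Both values are ~0 (up to floating-point error): at honestly stated
    # odds you are indifferent between the two sides of the bet.
    print(ev_for, ev_against)
    ```

    If one side looks clearly attractive at your stated odds, that asymmetry is the mistake the trick is designed to catch: your real belief differs from the number you announced.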

  • HH:

    It looks like they tested that: “Experiment 2 ruled out audience concreteness and status as explanations for this effect.”
    At least, that’s how I read it.
