All Are Skill Unaware

The blogosphere adores Kruger and Dunning’s ’99 paper "Unskilled and Unaware of It".  Google blog search lists ten blog mentions just in the last month.  For example:

Perhaps the single academic study most germane to the present election … In short, smart people tend to believe that everyone else "gets it." Incompetent people display both an increasing tendency to overestimate their cognitive abilities and a belief that they are smarter than the majority of those demonstrably sharper. 

This paper describes everyone’s favorite theory of those they disagree with: that they are hopelessly confused idiots, unable to see that they are idiots, so there is no point in listening to or reasoning with such fools.  However, many psychologists have noted that Kruger and Dunning’s main data is better explained by positing simply that we all have noisy estimates of our own ability and of task difficulty.  For example, Burson, Larrick, and Klayman’s ’06 paper "Skilled or Unskilled, but Still Unaware of It":


We replicated, eliminated, or reversed the association between task performance and judgment accuracy reported by Kruger and Dunning (1999) depending on task difficulty. On easy tasks, where there is a positive bias, the best performers are also the most accurate in estimating their standing, but on difficult tasks, where there is a negative bias, the worst performers are the most accurate. This pattern is consistent with a combination of noisy estimates and overall bias, with no need to invoke differences in metacognitive abilities. In this regard, our findings support Krueger and Mueller’s (2002) reinterpretation of Kruger and Dunning’s (1999) findings.  An association between task-related skills and metacognitive insight may indeed exist, and later we offer some additional tests using the current data. However, our analyses indicate that the primary drivers of miscalibration in judging percentile are general inaccuracy due to noise and overall biases that arise from task difficulty.
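
To see how mere noise plus bias could generate this pattern, consider a minimal simulation sketch (mine, with illustrative parameters rather than numbers from either paper): perceived percentile tracks actual percentile only weakly, and everyone’s estimate is shifted up a bit overall.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Actual standing: percentiles, uniform by construction.
actual = rng.uniform(0, 100, n)

# Perceived standing: only weakly correlated with actual standing (noise),
# shifted up overall (bias).  The noise level is the same for everyone,
# so no metacognitive differences are built into the model.
rho, bias = 0.3, 10
perceived = np.clip(50 + bias + rho * (actual - 50) + rng.normal(0, 15, n), 0, 100)

# Mean miscalibration by quartile of actual standing.
for q in range(4):
    lo, hi = 25 * q, 25 * (q + 1)
    sel = (actual >= lo) & (actual < hi)
    err = (perceived[sel] - actual[sel]).mean()
    print(f"actual percentile {lo:>2}-{hi:<3}: mean self-estimate error {err:+6.1f}")
```

In this toy model the bottom quartile grossly overestimates its standing while the top quartile modestly underestimates it, reproducing the Kruger and Dunning pattern even though every simulated person is equally (un)aware.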

So why does Google blog search find zero mentions of this refutation?  My guess: because under this theory you should listen to those you disagree with instead of writing them off as idiots. 

Now Kruger and Dunning do have a 2008 follow-up paper, and in their first paper they were able to construct one situation where more able people had lower errors in estimating their ability. Also, Burson et al. saw some weak tendencies like this: 

We regressed perceived percentile on actual percentile among bottom-half performers and among top half performers. A Chow test confirmed at a marginal level of significance that bottom-half performers were less sensitive to their actual percentile … than were top-half performers.

Oddly, none of the dozen papers I’ve read on this pursues the obvious way to settle the question: look at the variance of ability estimates as a function of ability.  But however that turns out, it seems clear that mostly what is going on is that we all misjudge our ability and task difficulty. 
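
That check would be easy to run on any of these datasets. Here is a sketch of what I have in mind, where the column names actual_pct and perceived_pct are just placeholders for a subject’s measured and self-estimated percentile:

```python
import pandas as pd

def self_estimate_spread(df: pd.DataFrame, bins: int = 4) -> pd.DataFrame:
    """Mean and spread of self-estimation error within bins of actual ability."""
    out = df.copy()
    # Bin subjects by their measured percentile (quartiles by default).
    out["ability_bin"] = pd.qcut(out["actual_pct"], bins, labels=False)
    # Signed miscalibration: self-estimated minus measured percentile.
    out["error"] = out["perceived_pct"] - out["actual_pct"]
    return out.groupby("ability_bin")["error"].agg(["mean", "std", "count"])
```

If the unskilled really are less aware, the spread of errors ("std"), and not just the mean error, should shrink as actual ability rises; under plain noise plus bias it should stay roughly flat across bins.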

  • http://maggiesfarm.anotherdotcom.com/archives/9926-All-are-Skill-Unaware.html Maggie’s Farm

    All are Skill Unaware

    Think you’re smart? That probably means that you are not. From a piece with the above title at Overcoming Bias: “…it seems clear that mostly what is going on is that we all misjudge our ability and task difficulty.” You can say that again.

  • Hal

    So why does Google blog search find zero mentions of this refutation?

    Um, because of confirmation bias? I don’t think one has to bring in any other theories to explain the zero references.

  • Cameron Taylor

    The ‘zero mentions’ link includes the following:

    Is That Juice On Your Face?
    19 Nov 2007 by Russell Ball
    You should also check out “Skilled or Unskilled, but Still Unaware of It” which suggests that ‘competent’ people can be victim to the same effect as the difficulty of the task increases. Left by Rob Crowther on Nov 30, 2007 1:11 PM …
    Caffeinated Coder – http://geekswithblogs.net/coredump/Default.aspx – References

    Perhaps google’s bot is smart enough to parse OB and realize it should look a bit harder?

  • Cameron Taylor

    Now that I’ve read the findings, how do I use the information to my advantage? Perhaps find specific quantifiable methods for evaluating task performance and use them to recalibrate metacognitive monitoring.

  • http://brudnopis.blogspot.com Zbigniew Lukasiak

    “My guess: because under this theory you should listen to those you disagree with instead of writing them off as idiots.” – yeah – I am sure people would use it this way. But actually it is a double-edged sword – it does not give anyone any assurance that they are not on the other side (i.e. that it is them who is unskilled and unaware).

  • frelkins

    Looks like more support for opening a prediction market on all your projects – because you’re going to underestimate how long they’ll take, how hard they are, and how good your team is.

  • http://pancrit.org Chris Hibbert

    How does this result argue for “listening to those you disagree with instead of writing them off as idiots?” It sounds like it argues for doubting your positions, but doesn’t tell you how to learn more. And even if your own evaluations are more suspect than you’d like, they’re still all you have to go on. Not trusting your own judgment leaves you exposed to the din without any indication where to start.

  • Doug S.

    I think one key thing the original paper mentioned was how people’s self-assessed percentile ratings changed when they saw other people’s performances: people with good test scores raised their self-assessments to a much more accurate level, while people with poor test scores didn’t change their assessments much.

    “What happens when you get feedback” was not addressed by the rebuttal paper (which is probably correct about the “noise plus bias” hypothesis), and only marginally addressed by the original paper.

  • http://occludedsun.wordpress.com Caledonian

    Trying to evaluate your own intellectual performance is a waste of time, because the only way you can do so is by using your intellect. That leads to massive problems with circularity.

    As the saying goes, the easiest person to fool is yourself.

    The only solution I am aware of is to find an objective measure of correctness, determine how that method rules on your position, and use the feedback to improve your thoughts. Repeat as necessary. (It’s always necessary.)

    This solution is otherwise known as ‘science’.

  • http://hanson.gmu.edu Robin Hanson

    Cameron, that ref still doesn’t appear when I click the link.

    Zbigniew, yup.

    Doug, yes but that sort of test situation is uncommon.

    Chris, a theory under which idiots are more common gives more support to the hypothesis that the specific person you are talking with is an idiot. And central to the concept of an idiot is that they aren’t worth talking to.

  • Cameron Taylor

    Robin: Fascinating. Still appearing as result 6 of 9 for me, now overshadowed by the several new additions that your own post encouraged. Perhaps personalised search history makes google searching even more subjective than it once was!

    “And central to the concept of an idiot is that they aren’t worth talking to.”

    Are my human biases leading me to conclude people are not worth talking to (idiots) when in fact I would benefit from such conversations? This particular bias suggests that there probably are more instances than I believe.

    More generally, however, I find I give the label ‘idiot’ not so much to people who lack skills but to those who reason less abstractly than I and are unwilling or unable to cross that gap when necessary. In those cases attempting to ‘reason’ leads to conflict independently of any actual difference in stimulus -> response pairings of the respective models. In those situations people can be legitimately ‘idiots who aren’t worth reasoning with’ yet simultaneously worth observing, talking to in a non-reasoning format and learning from.

    I’ll put it another way. It is easy to use ‘excuses’ like the earlier findings of “Unskilled and Unaware of It”. It is also easy to say “ha! I’m biased! I am just finding excuses to call people idiots and not talk to them! Reverse that conclusion!”. Yet it is far more difficult to arrive at an unbiased evaluation of which people are in fact worth talking to, independently of social signalling and ego.

    Just as I cannot be sure I am smarter than my ethics, I cannot be certain that I’m smarter than my biases either. For all I know my biases encourage me to make excuses not to talk to idiots because I’m just too dumb to comprehend the real reason that they’re a waste of my time. Uncovering the bias with the cause for the bias may be counterproductive.


  • WWWebb

    I’d respond but it’s juuuuuust toooooo haaaaard.

  • John Carter

    I suspect the reason why nobody has done the study you propose is that variance is a second-order effect, i.e., you need an order of magnitude more data and/or signal to detect a trend above the noise.

    Sad, but those are the hard mathematical facts of statistics.

  • Ralph S. Sturgen

    With one of Will Rogers’s aphorisms in mind [“We’re all ignorant, just about different things.”], I try very hard to remember that he’s talking about me.

    I’ve also learned that one cannot counter emotion with reason. One must start at the emotion and try to move the discussion to reason eventually.

    • Carl Malone

      You make the assumption that reason and emotion are mutually exclusive.

      You might have D.K. yourself.

  • http://jdc325.wordpress.com/ James Cole

    Those who have read Burson, Larrick and Klayman or Krueger and Mueller’s response to Kruger and Dunning might be interested in Ehrlinger et al.

    Links to all these papers can be found here: http://jdc325.wordpress.com/2009/06/09/incompetence-and-the-general-chiropractic-council/ (scroll down to the last couple of paras to see links to papers).

  • http://www.aSwedeInGermany.de Michael Eriksson

    I am currently considering writing a post on the topic of D-K myself (which is why I stumbled on this post).

    Looking at the above, I would caution that there are two big-picture issues to keep in mind, namely:

    1. Whether the less competent/intelligent tend to overestimate their abilities: My own observations (and those of many others) point very, very strongly in this direction. Notably, those really sure of themselves have very often turned out to be subaverage in ability, in my experience; while the very best have often been filled with self-doubt.

    2. Whether this is explained by low ability in X also leading to a low competence at judging ability in X.

    For my part, I was frustrated by 1. long before I encountered the D-K paper and speculated about the reverse causality, most notably that those who were too sure of their abilities failed to develop them further. (Obviously, there is unlikely to be a single explanation.)

    Now, if we look at e.g. the blogosphere and “You’re stupid! No, you’re stupid!” discussions, it really does not matter whether 2. is correct—all that is needed is 1. (Which is not to say that the application is typically justified: The “You disagree; ergo, you are stupid.” fallacy is quite common—and very hard to avoid when a debate becomes heated.)

  • Julia

    Thank you for this post. I was recently looking at the 1999 Dunning Kruger study (as part of a research project) and was astounded by its shoddy analysis. It’s good to see clearer heads have given the study a re-look.

    • Carl Malone

      The DK paper accurately describes so many people that I have encountered when they opine on science. They have no current knowledge, but they proudly display their ignorance.

      It seems to me that the DK effect is simply described as intellectual dishonesty.

  • Kai Frederking

    It seems the D-K effect has claimed another victim. The author of the above.
    The so-called “refutation” is in fact an extension of the Dunning-Kruger effect to extreme cases. It says that there is a bottom of the pit, where even the dumbest sees that he doesn’t succeed. Since a simple “I can’t” followed by failure is always accurate, while “I may have a chance” followed by occasional failure is less accurate, that’s pretty much a commonplace statement.
    Every theory has its space of validity. That the D-K effect’s space doesn’t encompass all possible tasks, only most of the relevant cases, doesn’t “refute” the theory.


  • http://8ofdiamonds.blogspot.com Leroy Smith

    I am no expert, and I’m sure some here know better than me, but I think the main point of the original paper is lost here. Or at least I look at it a different way. If a skilled person miscalculates their competence downward, there is far less harm than if an unskilled person overestimates their competence upward. Even if the skilled overestimate upward, there is still less harm compared to the unskilled doing the same.

  • E-money

    In one billion years no one will care about this.

    • J O

      E-money, as far as I can tell, your comment is vague (what precisely does “care” mean?).  It may also be massively redundant – it could apply to most of what is done today, even though what we do is still relevant now.  It might even be incorrect; it’s not clear to me that these results will stop being relevant in a billion years.  Maybe even an advanced Type 3 civilization decides to keep the information around for something.  Who knows? How would they?  Please clarify.

      • remeranAuthor

        You know you asked a guy about something two years ago, right?

        (I waited two years to ask you this… please find it funny…)
