New data question the claim that people tend to overestimate their abilities: A large body of literature purports to find that people are generally overconfident. In particular, a better-than-average effect in which a majority of people claim to be superior to the average person has been noted for a wide range of skills, from driving, to spoken expression, to the ability to get along with others, to test taking on simple tests. The literature generally accepts that this better-than-average effect is indicative of inflated self-assessments. However, [we] recently … show that the better-than-average data … does not indicate … people have made some kind of error in their self-evaluations. Because of this reason, almost none of the existing experimental literature on relative overconfidence can actually claim to have found overconfidence. … In this paper, we report on an experiment designed to provide a proper test of overconfidence. … As in much previous experimental work, we find a better-than-average effect among our subjects. … We find evidence that subjects are uncertain of their own types. Our experiment can be viewed as a test of the null hypothesis that people are behaving rationally (and are not overconfident). We cannot reject that hypothesis.
I meant to say that in Svenson's study, 82.5% of the people placed themselves in the top 30% (not the top half as I said in my previous post). So there was no reason to expect that only 52% would place themselves in the top 30%. So adding this fact, the self-selection recruitment, and the high motivation, we think we gave overconfidence a "fair shot" to reveal itself (as the person in the first post asks).
Hi to all, and thanks for your interest.
Regarding the first comment: first of all, we developed a theory of what would constitute a proper test of overconfidence. With this theory, we found that the two tests which were properly run found no overconfidence (tests based on "scales"; I know of no other tests that would be proper tests). Also, our theory allowed us to build the proper test that is referenced in the original post. We ran the two treatments referenced in the first comment because we sort of expected to find overconfidence: people took the original Svenson study quite seriously, and his data showed that 82.5% of the people placed themselves in the top half of the population; we selected people using a "self selection" advertisement (as in Camerer and Lovallo's AER paper) expecting to push people further into being more overconfident; finally, we had a treatment with "high motivation" which we expected to have the effect of making people more overconfident.
Regarding Chris's comment: we take care of that issue in the paper.
Regarding Zubon's comment: that is a good idea; you should look at Dunning's papers (the ones we reference in our work).
best to all,
Juan
Zubon echoes my interpretation. People are almost certainly not using consistent definitions for what "average X" even means.
Alice thinks she is a better than average driver. By this, she means that she drives safely and has never had a crash.
Bob thinks he is a better than average driver. By this, he means that he never speeds and obeys all the signs.
Carla thinks she is a better than average driver. By this, she means that she drives very quickly and efficiently, passing idiots like Alice and Bob who take forever to get anywhere.
David thinks he is a better than average driver. By this, he means he is a man, because you know how those women drivers are.
Yah, so what if a majority of people think they're better than average? If there are extreme outliers in the other direction, this could certainly happen. Shouldn't we be looking at whether people think they're better than the median?
Their theory questions the previous research on overconfidence, but their data don't say much. The basic idea behind their theory is that each person's expectations about their own performance (or beliefs about their abilities) have a probability distribution, and saying you're above average just means that you think there's more than a 50% chance that you're above average. So even if 98% of people say that they're going to place in the top half, that might not indicate overconfidence because some normatively permitted set of underlying probability distributions could produce those data (e.g. if 98% of people each think that they have a 51% chance of being in the top half and 2% think that they have a 1% chance).
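The arithmetic behind that 98%/2% example is easy to verify: rationality only constrains the population-average belief to equal the base rate (50% for the top half). A quick check in Python (my sketch, not the authors' code):

```python
# Rationality constraint: the average subjective probability of placing
# in the top half, taken across the whole population, must equal 0.5.
beliefs = [0.51] * 98 + [0.01] * 2  # 98% think "51% chance", 2% think "1% chance"
avg = sum(beliefs) / len(beliefs)

# 0.98 * 0.51 + 0.02 * 0.01 = 0.4998 + 0.0002 = 0.5,
# so this population is perfectly rational even though 98% say "above average".
assert abs(avg - 0.5) < 1e-9
```

So the better-than-average pattern alone, with the cutoff at 50/50, cannot distinguish overconfidence from rational uncertainty.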
With the cutoff at 50/50, the data are completely meaningless: any set of data (even 98% vs. 2%) is consistent with their theory. In their study, they shift a little bit away from the 50/50 point, so that rejecting their theory becomes possible, though only under extremely high levels of overconfidence. Unsurprisingly, they can't rule it out. Rationality permits up to 60% of people to think they are more likely than not to place in the top 30%, and only 52% did; it permits up to 83% of people to think they have more than a 60% chance of placing in the top half, and only 64% did. Thus they cannot reject the hypothesis that people are behaving rationally.
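Those cutoffs follow from a simple bound: if a fraction f of people each assign probability above p to an event whose population base rate is q, rationality requires f·p ≤ q, so f ≤ q/p. A small illustration (my numbers and function name, not from the paper):

```python
def max_rational_fraction(base_rate, threshold):
    """Largest share of people who can rationally assign more than
    `threshold` probability to an event with population rate `base_rate`."""
    return min(1.0, base_rate / threshold)

# "More likely than not to place in the top 30%": f * 0.5 <= 0.3, so f <= 60%.
bound_top30 = max_rational_fraction(0.30, 0.50)

# "More than a 60% chance of placing in the top half": f * 0.6 <= 0.5, so f <= 83.3%.
bound_tophalf = max_rational_fraction(0.50, 0.60)

# The observed shares (52% and 64%) sit comfortably inside both bounds,
# which is why the rational-behavior hypothesis survives.
assert 0.52 < bound_top30
assert 0.64 < bound_tophalf
```

The bounds are loose because the belief thresholds (50% and 60%) sit close to the base rates (30% and 50%); only an extreme pattern of answers could have fallen outside them.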
Next time, they should ask questions where there are narrower restrictions on the range of theory-consistent answers so that overconfidence has a fair shot to reveal itself. For instance, ask each person "Do you have at least an X% chance of placing in the top half?" for X = 20, 40, 60, and 80 (using incentive compatible gambles, as they do). I bet they'd find people putting too much probability mass on placing in the top half.
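Under the same f ≤ q/p bound, the proposed thresholds would tighten the test considerably at the high end. A sketch of the theory-consistent ceilings (the incentive-compatible gamble design itself is left open):

```python
base_rate = 0.5  # by definition, half the population places in the top half

# Ceiling on the share of "yes" answers to "Do you have at least an X% chance
# of placing in the top half?" that is still consistent with rationality.
bounds = {x: min(1.0, base_rate / x) for x in (0.20, 0.40, 0.60, 0.80)}

for x, b in bounds.items():
    print(f"'at least {x:.0%} chance of top half': at most {b:.1%} can rationally say yes")
```

At X = 20 and 40 the bound is vacuous (anything up to 100% is consistent), but at X = 60 and 80 the ceilings drop to about 83% and 62.5%, so widespread "yes" answers there would be direct evidence of overconfidence.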