Tag Archives: Overconfidence

Overconfidence From Moral Signaling

Tyler Cowen in Stubborn Attachments:

The real issue is that we don’t know whether our actions today will in fact give rise to a better future, even when it appears that they will. If you ponder these time travel conundrums enough, you’ll realize that the effects of our current actions are very hard to predict,

While I think we often have good ways to guess which action is more likely to produce better outcomes, I agree with Tyler that we face great uncertainty. Once our actions get mixed up with a big complex world, it becomes quite likely that, no matter what we choose, things would in fact have turned out better had we made a different choice.

But for actions that take on a moral flavor, most people are reluctant to admit this:

If you knew enough history you’d see >10% as the only reasonable answer, for most any big historical counterfactual. But giving that answer to the above risks making you seem pro-South or pro-slavery. So most people express far more confidence. In fact, more than half give the max possible confidence!

I initially asked a similar question on whether the world would have been better off overall if the Nazis had won WWII, and for the first day I got very similar answers to the above. But I ran the above survey on the South for one day, while I gave the Nazi survey two days. And in its second day my Nazi survey was retweeted ~100 times, apparently attracting many actual pro-Nazis:

Yes, in principle the survey could have attracted wise historians, but the text replies to my tweet don’t support that theory. My tweet survey also attracted many people who denounced me in rude and crude ways as personally racist and pro-Nazi for even asking this question. Some suggested I be fired. Sigh.

Added 13Dec: Many call my question ambiguous. Let’s use x to denote how well the world turns out. There is x0, how well the world actually turned out, and x|A, how well the world would have turned out given some counterfactual assumption A. Given this terminology, I’m asking for P(x>x0|A). You may feel sure you know x0, but you should not feel sure about x|A; for that you should have a probability distribution.
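In code, the difference between feeling sure about x0 and holding a distribution over x|A can be sketched as follows; the lognormal shape and its parameters are my illustrative assumptions, not anything from the post:

```python
import random

random.seed(0)

x0 = 1.0  # how well the world actually turned out (normalized; illustrative)

# Under counterfactual assumption A we don't know x, so we model x|A as a
# probability distribution rather than a point estimate. The lognormal
# shape and spread here are arbitrary stand-ins.
samples = [random.lognormvariate(0.0, 0.5) for _ in range(100_000)]

# P(x > x0 | A): the fraction of sampled counterfactual worlds that
# turn out better than the actual one.
p_better = sum(s > x0 for s in samples) / len(samples)
print(round(p_better, 2))  # near 0.5 here, since the median equals x0
```

With these stand-in numbers the answer is near one half by construction; the point is only that the question has a probabilistic answer, not the extreme certainty most survey respondents expressed.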


Student Status Puzzle

Grad students vary in their research autonomy. Some students are very willing to ask for advice and to listen to it carefully, while others put a high priority on generating and pursuing their own research ideas their own way. This varies with personality, in that more independent people pick more independent strategies. It varies over time, in that students tend to start out deferring at first, and then later in their career switch to making more independent choices. It also varies by topic; students defer more in more technical topics, and where topic choices need more supporting infrastructure, such as with lab experiments. It also varies by level of abstraction; students defer more on how to pursue a project than on which project ideas to pursue.

Many of these variations seem roughly explained by near-far theory, in that people defer more when near, and less when far. These variations seem at least plausibly justifiable, though doubts make sense too. Another kind of variation is more puzzling, however: students at top schools seem more deferential than those at lower rank schools.

Top students expect to get lots of advice, and they take it to heart. In contrast, students at lower ranked schools seem determined to generate their own research ideas from deep in their own “soul”. This happens not only for picking a Ph.D. thesis, but even just for picking topics of research papers assigned in classes. Students seem as averse to getting research topic advice as they would be to advice on with whom to fall in love. Not only are they wary of getting research ideas from professors, they even fear that reading academic journals will pollute the purity of their true vision. It seems a moral matter to them.

Of course any one student might be correct that they have a special insight into what topics are neglected by their local professors. But the overall pattern here seems perverse: people who hardly understand the basics of a field see themselves as better qualified to identify feasible interesting research topics than nearby higher-status people who have been in the field for decades.

One reason may be overconfidence; students think their profs deserve to be at a lower rank school more than they themselves do, and so estimate a smaller quality difference between themselves and their profs. More data supporting this is that students also seem to accept the relative status ranking of profs at their own school, and so focus most of their attention on the locally top status profs. It is as if each student thinks that they personally have so far been assigned too low a status, but that most others have been correctly assigned.

Another reason may be akin to our preferring potential to achievement; students try to fulfill the heroic researcher stories they’ve heard, wherein researchers get much credit for embracing ideas early that others only come to respect later. Which can make some sense. But these students try to do this far too early in their careers, and they take it much too far. Being smarter and knowing more, students at top schools understand this better.


Female Overconfidence

Men are famously more overconfident in war, in investments, in choosing firm projects, in their performance as managers (but not auditors), as math and econ students, and about their IQ. But these are traditional male areas (i.e., abilities expected more of men in traditional societies). I suspect, however, that women tend to be more overconfident in traditional female areas, such as parenting, housework, shopping, nurturing, and maintaining family relationships. Alas, though I found dozens of papers on overconfidence in traditional male areas, I couldn’t find any on traditional female areas. The closest I found was:

In both the lab and the field, female subjects tend to show greater confidence in their groups than in themselves, while male subjects show greater confidence in themselves than in their groups. (more)

This seems a nice opening for enterprising psych or econ experimentalists.


Overconfidence Explained

We seem close to a good account of overconfidence:

We study a large sample of 656 undergraduate students, tracking the evolution of their beliefs about their own relative performance on an IQ test as they receive noisy feedback. … Subjects (1) place approximately full weight on their priors, but (2) are asymmetric, over-weighting positive feedback relative to negative, and (3) conservative, updating too little in response to both positive and negative signals. These biases are substantially less pronounced in a placebo experiment where ego is not at stake. We also find that (4) a substantial portion of subjects are averse to receiving information about their ability, and that (5) less confident subjects are more likely to be averse. We unify these phenomena by showing that they all arise naturally in a simple model of optimally biased Bayesian information processing … [of] agents who derive utility directly from their beliefs (for example, ego or anticipatory utility). (more; HT Dan Houser)
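The first three findings can be illustrated with a toy update rule (the log-odds form and the specific weights are my assumptions for illustration, not the paper’s estimates): a rational Bayesian would add a signal’s full log-likelihood ratio to her log-odds, while the subjects described add less than that, and less still for bad news.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + math.exp(-z))

def biased_update(prior, good_news, llr=1.0, w_pos=0.6, w_neg=0.3):
    """Update a belief about one's own relative performance.

    A rational Bayesian would use w_pos = w_neg = 1.
    Conservatism: both weights below 1, so every update is too small.
    Asymmetry: w_pos > w_neg, so positive feedback moves beliefs more
    than equally informative negative feedback.
    """
    shift = w_pos * llr if good_news else -w_neg * llr
    return inv_logit(logit(prior) + shift)

p = 0.5
print(round(biased_update(p, True), 3))   # one positive signal
print(round(biased_update(p, False), 3))  # one negative signal

# After one positive and one negative signal a rational agent is back at
# 0.5; this agent has drifted upward -- overconfidence from balanced news.
print(round(biased_update(biased_update(p, True), False), 3))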

They also have results on how overconfidence relates to IQ and gender:

We show that agents who are of high ability according to our IQ quiz, and hence arguably cognitively more able, are just as conservative and asymmetric as those who score in the bottom half of the IQ quiz. … In our data women differ significantly in their priors, are significantly more conservative updaters than men while not significantly more asymmetric, and significantly more likely to be averse to feedback. These gender differences are consistent with our theoretical framework if a larger proportion of women than men value belief utility.


Nature Ignores Econ

A recent Nature article got lots of press:

We present an evolutionary model showing that, counterintuitively, overconfidence maximizes individual fitness and populations tend to become overconfident, as long as benefits from contested resources are sufficiently large compared with the cost of competition. In contrast, unbiased strategies are only stable under limited conditions. The fact that overconfident populations are evolutionarily stable in a wide range of environments may help to explain why overconfidence remains prevalent today.

They model pairwise contests over a discrete prize, won by the most able side. Each agent chooses whether to fight, after seeing the other agent’s ability with error. If both agents fight, they pay a conflict cost. Agents can win by committing to overestimate their own ability, as this makes them fight more, which induces opponents to fight less.
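A stripped-down simulation in the spirit of that model (prize, cost, noise, and bias values are all my illustrative choices, not the paper’s) shows the effect: when the prize is large relative to the conflict cost, an agent who inflates her self-estimate, and is seen to do so, both fights more and deters opponents, and earns more on average.

```python
import random

random.seed(1)

V, C = 10.0, 4.0   # prize value and conflict cost: benefits >> costs
NOISE = 1.0        # sd of the error in observing the other side

def avg_payoff(bias_self, bias_opp, trials=100_000):
    """Average payoff to an agent whose self-estimate is inflated by
    bias_self, in random pairwise contests against opponents inflated
    by bias_opp. Displayed confidence is what the other side observes,
    so overconfidence also works as a commitment to fight."""
    total = 0.0
    for _ in range(trials):
        a1, a2 = random.random(), random.random()   # true abilities
        f1 = a1 + bias_self > a2 + bias_opp + random.gauss(0, NOISE)
        f2 = a2 + bias_opp > a1 + bias_self + random.gauss(0, NOISE)
        if f1 and f2:                  # both fight: ablest wins, both pay
            total += (V if a1 > a2 else 0.0) - C
        elif f1:                       # opponent backs down
            total += V
    return total / trials

# An overconfident agent outperforms an unbiased one in this environment.
print(avg_payoff(0.5, 0.0) > avg_payoff(0.0, 0.0))
```

This is essentially the commitment effect described below: a few lines of game-theoretic simulation, with no evolutionary machinery needed.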

I found that strategic commitment effect in ’06, in a paper that seemed too simple to publish in econ theory journals:

In a simple model of conflict, two agents fight over a fixed prize, and how hard they fight depends on what they believe about their abilities. To this model I add “preagents,” representing parents, leaders, or natural selection, who choose each agent’s confidence in his ability. Depending on the reason for such confidence, I find five different patterns in how confidence varies with ability. Agents who estimate their ability with error have under-confidence when ability is high and over-confidence when ability is low, while strategic commitment incentives induce the opposite pattern. Agents who misjudge their value for the prize, relative to their cost of effort, induce an over or under-confidence that is independent of ability, while cooperating pre-agents choose extreme under-confidence. Agents who use confidence to signal ability have a relatively uniform over-confidence.

I doubt Nature would have published my paper either. My paper used a few lines of math of simple game theory, while the paper Nature published used lots of evolutionary simulations, which don’t seem to add much beyond simple game theory. Based on this and other cases I’ve seen, I conclude Nature doesn’t care what econ theorists think about the social science papers they publish.


Passion Vs. Doubt

Michael Gerson has doubts on doubt:

Without a doubt, doubt is useful and needed at the margins of any ideology. The world is too complex to know completely. Many of our judgments are, by nature, provisional. Those who are immune to evidence, who claim infallibility on debatable matters, are known as bores – or maybe columnists.

Doubt becomes destructive as it reaches the center of a belief and becomes its substitute. A systematic skepticism may keep us from bothering our neighbor. It does not motivate a passion to fight for his or her dignity and rights. How do ambiguity and agnosticism result in dreams of justice, in altruism and honor, in sacrifices for the common good? What great reformers of American history can be explained by their elegant ambivalence? (more)

Ask yourself this simple question: how confident would you need to be on a moral or political conclusion in order to work passionately for it? 99%? 90%? 75%?  If you have such an action-threshold, and this threshold is high, well then yes, to let your passion flower, you may need to lie to yourself about your confidence. So that you might actually do something.

Would your overconfidence then lead you to do too many things too enthusiastically?  If so, perhaps you’d do better to also allow yourself some other more graded psychological reluctance to passion, to counter this bias.

But it would of course be even better if you could see the nobility and glory in doing your best as a limited but well-meaning creature. You shouldn’t need to be absolutely sure of a conclusion to work sincerely and passionately for it.


The Wisdom Of Others

When we choose to act on our own private clues, or to infer the clues of others from their actions, we too often neglect the wisdom of others:

In situations where it is empirically optimal to follow others and contradict one’s own information, the players err in the majority of cases, forgoing substantial parts of earnings. The average player contradicts her own signal only if the empirical odds ratio of the own signal being wrong, conditional on all available information, is larger than 2:1, rather than 1:1 as would be implied by rational expectations. … In total, the meta dataset contains 29,923 decisions made by 2,813 participants in 13 studies. All participants observe a private signal and a (possibly empty) string of previous choices made by others in analogous situations. In all decision problems there are two actions and two possible payoffs, but the dataset nevertheless comprises a large variety in environments, instructions, players’ personal characteristics, and histories of other players’ choices. (more)
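The 2:1 versus 1:1 threshold can be made concrete with a stylized calculation (treating each observed prior choice as an independent signal of known reliability, which is a strong simplification of the actual experiments):

```python
def odds_own_signal_wrong(q, n_against):
    """Posterior odds that one's private signal points the wrong way,
    given n_against earlier players who chose the other option, with
    every signal independently correct with probability q."""
    return (q / (1 - q)) ** (n_against - 1)

q = 0.6  # illustrative signal reliability
for n_against in range(1, 5):
    odds = odds_own_signal_wrong(q, n_against)
    rational = odds > 1   # contradict own signal as soon as odds favor it
    observed = odds > 2   # what the meta-study's average player requires
    print(n_against, round(odds, 2), rational, observed)
```

With these numbers a rational player already defers to two opposing predecessors, while a player using the empirical 2:1 threshold waits for three; the gap is foregone earnings from neglecting the wisdom of others.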

Of course copying others’ acts sends a bad signal about our confidence in our own info and analysis abilities. So it can make sense to focus more on one’s own clues to the extent it is important to send a positive signal to observers. Just beware of assuming too easily that such gains are substantial.


Overconfidence Signals

We test three different theories about observed relative overconfidence. The first theory notes that simple statistical comparisons … are compatible with a Bayesian model. … Data on 1,016 individuals’ relative ability judgments about two cognitive tests rejects the Bayesian model. The second theory … We test an important specific prediction of [self-image concern] models: individuals with a higher belief will be less likely to search for further information about their skill, because this information might make this belief worse. Our data also reject this prediction. The third theory is that overconfidence is induced by the desire to send positive signals to others about one’s own skill. … Personality traits strongly affect relative ability judgments in a pattern that is consistent with this. … Overconfidence in statements is most likely to be induced by social concerns than by either of the other two factors. (more; HT Tyler)


Far is Overconfident

Since our minds are smaller than the world, the world tends to be more complicated than our mental models of it. Yes, sometimes we think things are more complex than they really are, but far more often reality is more complex than we appreciate. All else equal, since far mode makes us neglect detail, it tends to make us think things are even simpler, thus increasing our error. So far mode is a major source of human overconfidence. From the latest JPSP:

People generally tend to believe they are more competent than they actually are, and this effect is particularly pronounced among poor performers. … One striking demonstration, the illusion of explanatory depth (IOED), arises when people overestimate their ability to explain mechanical and natural processes. For example, people know that a zipper closes because it has teeth that somehow interlock, but they know very little about how the teeth actually interlock to enable the bridging mechanism. Similarly, many people know vaguely that an earthquake occurs because two geological plates collide and move relative to one another, but again they know little about the mechanism that initially produces these collisions. Nonetheless, people believe they understand these concepts quite deeply and are surprised by the shallowness of their own explanations when prompted to describe the concepts thoroughly. …

People who construe a ballpoint pen abstractly are more likely to focus on the pen’s function and perhaps its global appearance. In contrast, people who construe the pen concretely are more likely to focus on how well they understand how its parts work together to enable the pen to function—in this case, the appropriate metacognition. Accordingly, people are less likely to overestimate their understanding of how the pen works when their introspections focus appropriately on the pen’s concrete features rather than its abstract features. …

In six studies, we showed that IOEDs arise at least in part because people sometimes adopt an inappropriately broad or abstract construal style when evaluating their understanding of concrete processes. … Participants … experienced larger IOEDs the more abstractly they construed 13 basic human behaviors. … Participants rated their knowledge of how three mechanical devices worked more accurately when the devices were framed more narrowly according to their component parts. When asked to express how those devices worked, only participants in the broad construal condition were surprised by the incompleteness of their explanations. …

Participants were induced to adopt a concrete or an abstract mindset by expressing how (concrete) or why (abstract) they engage in certain everyday processes, like getting dressed in the morning. Again, participants in an abstract mindset tended to show a significantly greater IOED. … Participants … reported understanding their favored 2008 Presidential candidate’s policies better than they actually did when asked to express those policies in writing. … Participants who adopted a more abstract construal style showed a more pronounced illusion of political sophistication. …

Our findings suggest that … when adopting an abstract construal style, people might therefore be systematically overconfident about what the future holds and how well they understand themselves and others. … The IOED is both similar to and distinct from a range of overconfidence biases documented. … According to one account, egocentric over-confidence effects tend to emerge because people anchor on their own subjective experiences and fail to adequately account for the experiences and abilities of other people. … Other researchers have suggested that people are overconfident because … their memories tend to be overpopulated with successes rather than failures.


Arrogant Professionals

  • CEO – “We study a unique panel of over 11,600 probability distributions provided by top financial executives and spanning nearly a decade of stock market expectations. Our results show that financial executives are severely miscalibrated: realized market returns are within the executives’ 80% confidence intervals only 33% of the time. We show that miscalibration improves following poor market performance periods because forecasters extrapolate past returns when forming their lower forecast bound (“worst case scenario”), while they do not update the upper bound (“best case scenario”) as much. Finally, we link stock market miscalibration to miscalibration about own-firm project forecasts and increased corporate investment.” (more)
  • Doc – “A study led by the Harvard researcher Nicholas Christakis asked the doctors of almost five hundred terminally ill patients to estimate how long they thought their patient would survive, and then followed the patients. Sixty-three per cent of doctors overestimated survival time. Just seventeen per cent underestimated it. The average estimate was five hundred and thirty per cent too high. And, the better the doctors knew their patients, the more likely they were to err. … Studies find that although doctors usually tell patients when a cancer is not curable, most are reluctant to give a specific prognosis, even when pressed. More than forty per cent of oncologists report offering treatments that they believe are unlikely to work.” (more)
  • Lawyer – “[Consider] predictions by a sample of attorneys (n = 481) across the United States who specified a minimum goal to achieve in a case set for trial. … After the cases were resolved, case outcomes were compared with the predictions. Overall, lawyers were overconfident in their predictions, and calibration did not increase with years of legal experience. Female lawyers were slightly better calibrated … In an attempt to reduce overconfidence, some lawyers were asked to generate reasons why they might not achieve their stated goals. This manipulation did not improve calibration.” (more)
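The CEO result is a simple interval-coverage failure: well-calibrated 80% intervals should contain the realized return about 80% of the time, not 33%. A sketch of the check on simulated data (the return distribution and interval width are made-up numbers chosen to produce severe miscalibration, not figures from the study):

```python
import random

random.seed(2)

TRUE_MEAN, TRUE_SD = 0.06, 0.18   # assumed annual market returns
HALF_WIDTH = 0.06                 # stated "80%" interval: far too narrow
                                  # (an honest one would use ~1.28 * sd)

n = 50_000
hits = sum(
    TRUE_MEAN - HALF_WIDTH
    <= random.gauss(TRUE_MEAN, TRUE_SD)
    <= TRUE_MEAN + HALF_WIDTH
    for _ in range(n)
)
coverage = hits / n
print(round(coverage, 2))  # well below the stated 0.80
```

Here the forecaster even gets the mean right; the miscalibration comes entirely from intervals that are too narrow, i.e., from overconfidence about what could happen.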

I strongly suspect these patterns are driven mostly by customers, i.e., that more accurate professionals would be less successful at inspiring others’ confidence in them. If you are a successful professional, that is probably in part because of your unjustified arrogance.

Added: Carl reminds us of an ’06 post on overconfident software managers.
