The Smart Are MORE Biased To Think They Are LESS Biased

I seem to know a lot of smart contrarians who think that standard human biases justify their contrarian position. They argue:

Yes, my view on this subject is in contrast to a consensus among academic and other credentialed experts on this subject. But the fact is that human thoughts are subject to many standard biases, and those biases have misled most others to this mistaken consensus position. For example, biases A, B, and C would tend to make people think what they do on this subject, even if that view were not true. I, in contrast, have avoided these biases, both because I know about them (see, I can name them), and because I am so much smarter than these other folks. (Have you seen my test scores?) And this is why I can justifiably disagree with an expert consensus on this subject.

The problem is that, for many biases, smart folks are not less biased; if anything, smart folks more easily succumb to the bias of thinking that they are less biased than others:

The so-called bias blind spot arises when people report that thinking biases are more prevalent in others than in themselves. … We found that none of these bias blind spots were attenuated by measures of cognitive sophistication such as cognitive ability or thinking dispositions related to bias. If anything, a larger bias blind spot was associated with higher cognitive ability. Additional analyses indicated that being free of the bias blind spot does not help a person avoid the actual classic cognitive biases. …

Most cognitive biases in the heuristics and biases literature are negatively correlated with cognitive sophistication, whether the latter is indexed by development, by cognitive ability, or by thinking dispositions. This was not true for any of the bias blind spots studied here. As opposed to the social emphasis in past work on the bias blind spot, we examined bias blind spots connected to some of the most well-known effects from the heuristics and biases literature: outcome bias, base-rate neglect, framing bias, conjunction fallacy, anchoring bias, and myside bias. We found that none of these bias blind spot effects displayed a negative correlation with measures of cognitive ability (SAT total, CRT) or with measures of thinking dispositions (need for cognition, actively open-minded thinking). If anything, the correlations went in the other direction.

We explored the obvious explanation for the indications of a positive correlation between cognitive ability and the magnitude of the bias blind spot in our data. That explanation is the not unreasonable one that more cognitively sophisticated people might indeed show lower cognitive biases—so that it would be correct for them to view themselves as less biased than their peers. However, … we found very little evidence that these classic biases were attenuated by cognitive ability. More intelligent people were not actually less biased—a finding that would have justified their displaying a larger bias blind spot. …

Thus, the bias blind spot joins a small group of other effects such as myside bias and noncausal base-rate neglect in being unmitigated by increases in intelligence. That cognitive sophistication does not mitigate the bias blind spot is consistent with the idea that the mechanisms that cause the bias are quite fundamental and not easily controlled strategically— that they reflect what is termed Type 1 processing in dual-process theory. (more)

Added 12 June: The New Yorker talks about this paper:

The results were quite disturbing. For one thing, self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.” … All four of the measures showed positive correlations, “indicating that more cognitively sophisticated participants showed larger bias blind spots.”

Eventual Futures

I’ve noticed that recommendations for action based on a vision of the future often rest on the idea that something must “eventually” occur. For example, eventually:

  • We will run out of coal, so we’d better find replacements soon.
  • Earth will run out of stored energy of fossil fuels and radioactivity, so we’d better get ready to run only on sunlight.
  • Earth will run out of place for trash, so we must stop making trash.
  • The sun will die out, so we’d better get ready to move to another sun.
  • There will be a race to colonize other planets and stars, so our group should get out there first so we don’t lose this race.
  • Chips will use X instead of silicon, so our chip firms must use X now, to not be left behind.
  • There will be no privacy of any sort, so we might as well get used to it now.
  • Some races will win, so we’d best fight for ours before it’s too late.
  • Firms will be stronger than nations, unless we break their power soon.
  • There will be a stronger world government, so let’s start one now.
  • There will be conflict between China and the West, or Islam and the West, so we’d best strike first now.
  • Artificial intelligences will rule the world, so let’s figure out now how to make a good one.
  • We’ll invent all that is worth inventing, so let’s find a way now to live without innovation.
  • We’ll know all the physics there is, so let’s find something else interesting now.
  • There will be a huge deadly world war, so let’s stock some bunkers to hide in.
  • Nanobots will give everyone anything they want, so why work now?
  • The first nano-assembler’s owner will rule the world, so we best study nanotech now.
  • More fertile immigrants will outnumber us, so we’d best not let them in.
  • The more fertile stupid will make the world dumb, unless we stop them now.

The common pattern: project forward a current trend to an extreme, while assuming other things don’t change much, and then recommend an action which might make sense if this extreme change were to happen all at once soon.

This is usually a mistake. The trend may not continue indefinitely. Or, by the time a projected extreme is reached, other changes may have changed the appropriate response. Or, the best response may be to do nothing for a long time, until closer to big consequences. Or, the best response may be to do nothing, ever – not all negative changes can be profitably resisted.

It is just not enough to suspect that an extreme will be reached eventually – you usually need a good reason to think it will happen soon, and that you know a robust way to address it. In far mode it often feels like the far future is clearly visible, and that few obstacles stand in the way of planning paths to achieve far ends. But in fact, the world is much messier than far mode is willing to admit.

Honesty Via Distraction

From Trivers’ book Folly of Fools:

When a person is placed under cognitive load (by having to memorize a string of numbers while making a moral evaluation), the individual does not express the usual bias toward self. But when the same evaluation is made absent cognitive load, a strong bias emerges in favor of seeing oneself acting more fairly than another individual doing the identical action. This suggests that built deeply in us is a mechanism that tries to make universally just evaluations, but that after the fact, “higher” faculties paint the matter in our favor. (p.22)

This suggests an interesting way to avoid bias – make judgements fast under distracting cognitive load.

The Beauty Bias

As I write these words I’m riding a late night train, listening to some beautiful music and noticing a beautiful woman in the aisle opposite. And I can feel with unusual vividness my complete vulnerability to a beauty bias. The careful analytical thoughts I had hours before now seem, no matter what their care or basis, trivial and small by comparison.

If words and coherent thoughts came through this beauty channel, they would feel so much more compelling. If I had to choose between beauty and something plain or ugly, I would be so so eager to find excuses to choose beauty. If I needed to believe beauty was stronger or more moral or better for the world, reasons would be found, and it would feel easy to accept them.

This all horrifies the part of me that wants to believe what is true, based on some coherent and fair use of reasons and analysis. But I can see how very inadequate I am to resist it. The best I can do, it seems, is to not form beliefs or opinions while attending to beauty. Such as by avoiding music with non-trivial lyrics, and by being wary of opinions regarding a divide where one side is more beautiful. (Yes Tyler, this does question my taste for elegant theoretical simplicity.)

I have little useful advice here, alas, other than: know your limits. If you cannot help but fall into a ditch when you walk near one, then keep away, or accept that you’ll fall in.

Making Up Opinions

Perhaps the most devastating problem with subjective [survey] questions, however, is the possibility that attitudes may not “exist” in a coherent form. A first indication of such problems is that measured attitudes are quite unstable over time. For example, in two surveys spaced a few months apart, the same subjects were asked about their views on government spending. Amazingly, 55% of the subjects reported different answers. Such low correlations at high frequencies are quite representative.

Part of the problem comes from respondents’ reluctance to admit lack of an attitude. Simply because the surveyor is asking the question, respondents believe that they should have an opinion about it. For example, researchers have shown that large minorities would respond to questions about obscure or even fictitious issues, such as providing opinions on countries that don’t exist. (more; HT Tyler)

I’m not clear on just how far this effect goes, but one lesson is: you have fewer real opinions than you think. If you talk a lot, you probably end up expressing many opinions on many topics. But much, perhaps most, of that you just make up on the fly. You won’t give the same opinion later if the subject comes up again, and your opinion probably won’t affect your non-talk decisions.

So your decisions on charity donations, votes, and who or what to give verbal praise, may be a lot simpler than you think. Your decisions on where to live or work, and who to befriend or marry, may also be simpler. That is, you may consistently make similar decisions, but the reasons you give for them may matter less than you think.

Who Is Consistent?

Young, rich, well-educated men make more consistent choices. Family structure, risk tolerance, and personality type don’t matter:

We conduct a large-scale field experiment … to test subjects’ choices for consistency with utility maximization. … High-income and high-education subjects display greater levels of consistency …, men are more consistent than women, and young subjects are more consistent than older subjects. We also find that consistency with utility maximization is strongly related to wealth: a standard deviation increase in the consistency score is associated with 15-19 percent more wealth. This result conditions on socioeconomic variables including current income, education, and family structure, and is little changed when we add controls for past income, risk tolerance and the results of a standard personality test used by psychologists. (more)
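
For context, “consistency with utility maximization” is a revealed-preference notion: choices made from different budgets should not contradict one another. The sketch below is only an illustration of that idea, not the study’s actual procedure (which computes a continuous consistency score); it checks hypothetical two-good choice data against the Generalized Axiom of Revealed Preference (GARP), and every name and number in it is made up for illustration.

```python
# A minimal sketch, not the study's actual method: check a list of
# (prices, chosen bundle) observations for violations of the Generalized
# Axiom of Revealed Preference (GARP). All data below are hypothetical.
from itertools import product

def garp_violations(prices, bundles):
    """Return pairs (i, j) of observations that jointly violate GARP."""
    n = len(bundles)
    cost = lambda p, x: sum(pi * xi for pi, xi in zip(p, x))

    # Direct revealed preference: at observation i's prices, the chosen
    # bundle i cost at least as much as bundle j, yet i was chosen.
    R = [[cost(prices[i], bundles[i]) >= cost(prices[i], bundles[j])
          for j in range(n)] for i in range(n)]

    # Transitive closure of the revealed-preference relation.
    for k, i, j in product(range(n), repeat=3):
        R[i][j] = R[i][j] or (R[i][k] and R[k][j])

    # Violation: i is revealed preferred to j, but at j's own prices the
    # chosen bundle j was strictly more expensive than bundle i.
    return [(i, j) for i in range(n) for j in range(n)
            if R[i][j] and cost(prices[j], bundles[j]) > cost(prices[j], bundles[i])]

# A consistent subject: two budgets, two sensible choices, no violations.
prices  = [(1.0, 2.0), (2.0, 1.0)]
bundles = [(4.0, 1.0), (1.0, 4.0)]
print(garp_violations(prices, bundles))  # -> []
```

A subject who maximizes any well-behaved utility function will generate no such violations; roughly, the more violations in a subject’s data, the lower her consistency.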

Beware Big Bad Novelties

A central issue of this blog is: when, and how much, is it important to emphasize truth, relative to other belief functions? New data suggest that truth is more important in bad times than in good, and when problems are big rather than small. Specifically, rose-colored marriage glasses help in good times, but hurt in bad times:

Individuals in new marriages were interviewed separately about their ongoing stressful experiences, and their own appraisals of those experiences were compared with those of the interviewers. … Spouses’ tendencies to form positively biased appraisals of their stressful experiences predicted fewer depressive symptoms over the subsequent 4 years among individuals judged to be facing relatively mild experiences but more depressive symptoms among individuals judged to be facing relatively severe experiences. … These effects were mediated by changes in those experiences, such that the interaction between the tendency to form positively biased appraisals of stressful experiences and the objectively rated severity of initial levels of those experiences directly predicted changes in those experiences, which in turn accounted for changes in depressive symptoms. (more)

Truth should also be especially important for situations that are novel relative to our evolved intuitions. The more our current situation differs from situations where our ancestors evolved (genetically or culturally) their intuitions about when to be truth-oriented, the more we risk by following such intuitions. And this seems especially likely for “futuristic” issues, with few genetic or cultural precedents.

Put them together and it is especially important for humanity to be truth-oriented regarding big bad evolutionarily-novel problems. Beware rose-colored glasses when turning a new corner to the future.

Don’t “Believe”

Why do people say “I believe X” instead of just saying X? Or “I firmly believe in X”? Consider the last ten “believed” claims from featured essay abstracts at the This I Believe website:

  1. believes sci-fi gives him a way to connect with his father and sharpen his own intellect in the real world.
  2. believes those regular calls help strengthen the bonds between mother and daughter.
  3. believes it’s important to offer that refuge to her kids because her mother did the same for her.
  4. believes making time to embrace nature gives her the strength to face life’s challenges.
  5. believes we can reach our dreams by embracing our hungers with creativity and passion.
  6. believes the best opportunities for healing may come when no words are spoken at all.
  7. believes he must make time to fulfill more than just the medical needs of his patients.
  8. believes those [sound] waves [from the big bang] are a siren call connecting all of us to the mysteries of the universe.
  9. believes she has found a way to start her journey by focusing on this one moment in time.
  10. believes in the comfort and peace she gets from making bread with those she loves.

In my experience “I believe X” suggests that the speaker has chosen to affiliate with X, feeling loyal to it and making it part of his or her identity. The speaker is unlikely to offer much evidence for X, or to respond to criticism of X, and such criticism will likely be seen as a personal attack.

Feel the warm comfort inside you when you say “I believe” – recognize it and be ready to identify it in the future, even without those words. And then – flag that feeling as a dangerous bias. The “I believe” state of mind is quite far from being neutrally ready to adjust its opinions in the light of further evidence. Far better to instead say “I feel,” which directly warns listeners of the speaker’s attachment to an opinion.

Neglected Conflicts

We tend to neglect our advisors’ conflicts of interest, especially in immediate face-to-face interactions, and especially when such conflicts are disclosed to us:

Certain study participants were required to make an estimate — evaluating the prices of houses, for instance. Meanwhile, other participants were … given additional information with which to advise the estimators. When these experts were put in a conflicted situation — they were paid according to how high the estimator guessed — they gave worse advice than if they were paid according to the accuracy of the estimate. … When the researchers required the experts to disclose this conflict to the people they were advising. … It actually caused them to inflate their numbers even more.

[Other] experiments focused on doctor-patient interactions, in which a doctor prescribes a medication but discloses a financial interest in the company that makes the drug. As expected, most people said such a disclosure would decrease their trust in the advice. But in practice, oddly enough, people were actually more likely to comply with the advice when the doctor’s bias was disclosed. … [Perhaps] people feel an increased pressure to take the advice to avoid insinuating that they distrust their doctor. …

People who are prescribed medicines by personal doctors are less likely to recognize the potential dangers of their doctors’ conflict of interest. … People were more likely to discount biased advice from doctors if disclosures were made by a third party, if they were not made face-to-face, or if patients had a “cooling off” period to reconsider their decisions. … Even if these fixes make disclosure more effective, … transparency is not a blanket solution to problems of corruption. “Regulators should be looking harder at eliminating conflicts.” (more)

As with other products, we may care more about affiliating well with our advisors, especially high status ones, than we do about getting good deals from them.

Yes, regulators who want to help should push to better align advisor interests. But regulators, and the politicians to whom they report, also have conflicts of interest, and the above studies suggest that voters will also neglect such conflicts. Also, in a democracy regulators’ hands will be tied by voter perceptions of where the problems lie. For example, since voters are more concerned about for-profit insurance company conflicts than about high status doctor conflicts, voters push regulators to limit insurer abilities to counter doctor conflicts. And money-averse voters would probably oppose more extreme ways to use money to align doctor incentives more with patients.

Fear As Scapegoat

In November I said we malign fear because it shows low status:

Political pundits like to accuse opponents of a “politics of fear”, or of hate. … Why do we embrace and accept our own fears and hates, even as we suggest that others’ fears and hates are bad signs about them? One obvious explanation: relative to low status folks, high status folks have less occasion to fear or hate. … Complaining that your opponents have a “politics of fear” or hate is really just complaining about their low status. (more)

We also unfairly blame fear when people die in crowds:

In the literature on crowd disasters, there is a striking incongruity between the way these events are depicted in the press and how they actually occur. In popular accounts, they are almost invariably described as "panics." The crowd is portrayed as a single, unified entity, which acts according to "mob psychology" – a set of primitive instincts (fear, followed by flight) that favor self-preservation over the welfare of others, and cause "stampedes" and "tramplings." But most crowd disasters are caused by "crazes" – people are usually moving toward something they want, rather than away from something they fear, and, if you’re caught up in a crush, you’re just as likely to die on your feet as under the feet of others, squashed by the pressure of bodies smashing into you. In disasters not involving fire, panic is rarely the cause of fatalities, and even when fire is involved … research has shown that people continue to help one another, even at the cost of their own lives. (more)
