Tag Archives: Standard Biases

Yay Argument Orientation

Long ago I dove into science studies, which includes history, sociology, and philosophy of science. (Got a U. Chicago M.A. in it in 1983.) I concluded at the time that “science” doesn’t really have a coherent meaning, beyond the many diverse practices of many groups that called themselves “science”. But reflecting on my recent foray into astrophysics suggests to me that there may be a simple related core concept after all.

Imagine you are in an organization with a boss who announces a new initiative, together with supporting arguments. Also imagine that you are somehow forced to hear a counter-argument against this initiative, offered by a much lower status person, expressed in language and using methods that are not especially high status. In most organizations, most people would not be much tempted to support this counter-argument; they’d rather pretend that they never heard of it.

More generally, imagine there is a standard claim, which is relevant enough to important enough topics to be worth consideration. This claim is associated with some status markers, such as the status of its supporters and their institutions, and the status of the language and methods used to argue for it. And imagine further that a counter-claim is made, with an associated argument, and also associated status markers of its supporters, languages, and methods.

The degree to which (status-weighted) people in a community would be inclined to support this counter-claim (or even to listen to supporting arguments offered) would depend on the relative strengths of both the arguments and the status markers on both sides. (And on the counter-claim’s degree of informativeness and relevance regarding topics seen as important.) I’ll say that such a community is more “argument-oriented” to the degree that it gives priority to the arguments’ logical or Bayesian strengths over the claims’ status strengths.
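To make that definition a bit more concrete, here is a minimal sketch, in Python with made-up numbers, of one hypothetical way such community-level weighting might be modeled; the weight w is my own illustrative parameter standing in for a community’s degree of argument-orientation, not anything claimed above.

```python
# Illustrative sketch only: a hypothetical linear model of community support.
# "w" is a made-up parameter standing in for the community's argument-orientation:
# how much weight members give to argument strength relative to status markers.

def support_for_counter_claim(arg_edge, status_edge, w):
    """Toy score; positive means the community leans toward the counter-claim.

    arg_edge: the counter-claim's advantage in logical/Bayesian argument strength.
    status_edge: the counter-claim's advantage in status markers (often negative).
    """
    return w * arg_edge + (1 - w) * status_edge

# A strong argument (+0.6) offered by much lower-status backers (-0.8):
for w in (0.1, 0.5, 0.9):  # from status-driven to argument-oriented
    print(w, round(support_for_counter_claim(0.6, -0.8, w), 2))
# Prints -0.66, -0.1, 0.46: only the most argument-oriented community
# ends up leaning toward the counter-claim.
```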

Even though almost everyone in most all communities feels obligated to offer supporting arguments for their claims, very few communities are actually very argument-oriented. You usually don’t contradict the boss in public, unless you can find pretty high status allies for your challenge; you know that the strength of your argument doesn’t count for much as an ally. So it is remarkable, and noteworthy, that there are at least some communities that are unusually argument-oriented. These include big areas of math, and smaller areas of philosophy and physics. And, alas, they include even smaller areas of most human and social sciences. So there really is a sense in which some standard disciplines are more “scientific”.

Note that most people are especially averse to claims with especially low status markers. For example, when an argument made for a position is expressed using language that evokes in many people vague illicit associations, such as with racism, sexism, ghosts, or aliens. Or when the people who support a claim are thought to have had such associations on other topics. As such expressions are less likely to happen near topics in math, math is more intrinsically supportive of argument-oriented communities.

But even with supportive topic areas, argument-orientation is far from guaranteed. So let us try to identify and celebrate the communities and topic areas where it is more common, and perhaps find better ways to shame the others into becoming more argument-oriented. Such an orientation is plausibly a strong causal factor explaining variation in accuracy and progress across different communities and areas.

There are actually a few simple ways that academic fields could try to be and seem more argument-oriented. For example, while peer review is one of the main places where counter-arguments are now expressed, such reviews are usually private. Making peer review public might induce higher quality counter-arguments. Similarly, higher priority could be given to publishing articles that focus more on elaborating counter-arguments to other arguments. And communities might more strongly affirm their focus on the literal meanings of expressions, relative to drawing inferences from vague language associations.

(Note that being “argumentative” is not very related to being “argument-oriented”. You can bluster and fight without giving much weight to logical and Bayesian strengths of arguments. And you can collect and weigh arguments in a consensus style without focusing on who disagrees with whom.)

Opinion Entrenchment

How do and should we form and change opinions? Logic tells us to avoid inconsistencies and incoherences. Language tells us to attend to how meaning is inferred from ambiguous language. Decision theory says to distinguish values from fact opinion, and says exactly how decisions should respond to these. Regarding fact opinion, Bayesian theory says to distinguish priors from likelihoods, and says exactly how fact opinion should respond to evidence.
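As a small worked illustration of that last claim, Bayes’ rule in odds form keeps priors and likelihoods separate: posterior odds equal prior odds times the likelihood ratio. The numbers below are purely illustrative.

```python
# Minimal Bayes update in odds form, with purely illustrative numbers.

def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = 0.25        # a prior P(H) of 0.2, expressed as odds
likelihood_ratio = 4.0   # the evidence is 4x more likely if H is true
posterior_odds = prior_odds * likelihood_ratio
print(round(odds_to_prob(posterior_odds), 2))  # 0.5
# Changing the prior or the likelihood ratio changes the answer in the
# exact way the rule prescribes; nothing else is supposed to matter.
```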

Simple realism tells us to expect errors in actual opinions, relative to all of these standards. Computing theory says to expect larger errors on more complex topics, and opinions closer to easily computed heuristics. And many kinds of human and social sciences suggest that we see human beliefs as often like clothes, which in mild weather we use more to show our features to associates than to protect ourselves from the elements. Beliefs are especially useful for showing loyalty and morality.

There’s another powerful way to think about opinions that I’ve only recently appreciated: opinions get entrenched. In biology, natural selection picks genes that are adaptive, but adds error. These gene choices change as environments change, except that genes which are entangled with large complex and valued systems of genes change much less; they get entrenched.

We see entrenchment also all over our human systems. For example, at my university the faculty is divided into disciplines, the curricula into classes, and classes into assignments in ways that once made sense, but now mostly reflect inertia. Due to many interdependencies, it would be slow and expensive to change such choices, so they remain. Our legal system accumulates details that become precedents that many rely on, and which become hard to change. As our software systems accrue features, they get fragile and harder to change. And so on.

Beliefs also get entrenched. That is, we are often in the habit of building many analyses from the same standard sets of assumptions. And the more analyses that we have done using some set of assumptions, the more reluctant we are to give up that set. This attitude toward the set is not very sensitive to the evidential or logical support we see for each of its assumptions. In fact, we are often pretty certain that individual assumptions are wrong, but because they greatly simplify our analysis, we hope that they still enable a decent approximation from their set.

When we use such standard assumption sets, we usually haven’t thought much about the consequences of individually changing each assumption in the set. As long as we can see some plausible ways in which each assumption might change conclusions, we accept it as part of the set, and hold roughly the same reluctance to give it up as for all the other members.

For example, people often say “I just can’t believe Fred’s dead”, meaning not that the evidence of Fred’s death isn’t sufficient, but that it will take a lot of work to think through all the implications of this new fact. The existence of Fred had been a standard assumption in their analysis. A person tempted to have an affair is somewhat deterred from this because of their standard assumption that they were not the sort of person who has affairs; it would take a lot of work to think through their world under this new assumption. This similarly discourages people from considering that their spouses might be having affairs.

In academic theoretical analysis, each area tends to have standard assumptions, many of which are known to be wrong. But even so, there are strong pressures to continue using prior standard assumptions, to make one’s work comparable to that of others. The more different things that are seen to be explained or understood via an assumption set, the more credibility is assigned to each assumption in that set. Evidence directly undermining any one such assumption does little by itself to reduce use of the set.

In probability theory, the more different claims one adds to a bundle, the less likely is the conjunction of that bundle. However, the more analyses that one makes with an assumption set, the more entrenched it becomes. So by combining different assumption sets so that they all get credit for all of their analyses, one makes those sets more, not less, entrenched. Larger bundles get less probability but more entrenchment.
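A toy calculation, with illustrative numbers only, shows the tension: the probability that every assumption in a bundle holds falls multiplicatively as the bundle grows, even while the count of analyses built on the bundle, and hence its entrenchment, keeps rising.

```python
# Toy contrast between probability and entrenchment as an assumption bundle grows.
# Hypothetical numbers: each added assumption independently holds with
# probability 0.9, and each has been used in 20 analyses.

p_each, analyses_each = 0.9, 20
for n in (1, 5, 10, 20):
    p_all_hold = p_each ** n           # conjunction probability falls fast
    entrenchment = analyses_each * n   # crude proxy: analyses relying on the bundle
    print(n, round(p_all_hold, 3), entrenchment)
# 1: 0.9, 20 | 5: 0.59, 100 | 10: 0.349, 200 | 20: 0.122, 400
```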

Note that fictional worlds that specify maximal detail are maximally large assumption sets, which thus maximally entrench.

Most people feel it is quite reasonable to disagree, and that claim is a standard assumption in most reasoning about reasoning. But a philosophy literature did arise wherein some questioned that assumption, in the context of a certain standard disagreement scenario. I was able to derive some strong results, but in a different and to my mind more relevant scenario. But the fact of my using a different scenario, and being from a different discipline, meant my results got ignored.

Our book Elephant in the Brain says that social scientists have tended to assume the wrong motives re many common behaviors. While our alternate motives are about as plausible and easy to work with as the usual motives, the huge prior investment in analysis based on the usual motives means that few are interested in exploring our alternate motives. There is not just theory analysis investment, but also investment in feeling that we are good people, a claim which our alternate assumptions undermine.

Even though most automation today has little to do with AI, and has long followed steady trends, with almost no effect on overall employment, the favored assumption set among talking elites recently remains this: new AI techniques are causing a huge trend-deviating revolution in job automation, soon to push a big fraction of workers out of jobs, and within a few decades may totally surpass humans at most all jobs. Once many elites are talking in terms of this assumption set, others also want to join the same conversation, and so adopt the same set. And once each person has done a lot of analysis using that assumption set, they are reluctant to consider alternative sets. Challenging any particular item in the assumption set does little to discourage use of the set.

The key assumption of my book Age of Em, that human level robots will be first achieved via brain emulations, not AI, has a similar plausibility to AI being first. But this assumption gets far less attention. Within my book, I picked a set of standard assumptions to support my analysis, and for an assumption that has an X% chance of being wrong, my book gave far less than X% coverage to that possibility. That is, I entrenched my standard assumptions within my book.

Physicists have long taken one of their standard assumptions to be denial of all “paranormal” claims, taken together as a set. That is, they see physics as denying the reality of telepathy, ghosts, UFOs, etc., and see the great success (and status) of physics overall as clearly disproving such claims. Yes, they once mistakenly included meteorites in that paranormal set, but they’ve fixed that. Yet physicists don’t notice that even though many describe UFOs as “physics-defying”, they aren’t that at all; they only plausibly defy current human tech abilities. Yet the habit of treating all paranormal stuff as the same denied set leads physicists to continue to staunchly ridicule UFOs.

I can clearly feel my own reluctance to consider theories wherein the world is not as it appears, because we are being fooled by gods, simulation sysops, aliens, or a vast world elite conspiracy. Sometimes this is because those assumptions seem quite unlikely, but in other cases it is because I can see how much I’d have to rethink given such assumptions. I don’t want to be bothered; haven’t I already considered enough weird stuff for one person?

Life on Mars is treated as an “extraordinary” claim, even though the high rate of rock transfer between early Earth and early Mars makes it nearly as likely that life came from Mars to Earth as vice versa. This is plausibly because only life on Earth is the standard assumption used in many analyses, while life starting on Mars seems like a different conflicting assumption.

Across a wide range of contexts, our reluctance to consider contrarian claims is often less due to their lacking logical or empirical support, and more because accepting them would require reanalyzing a great many things that one had previously analyzed using non-contrarian alternatives.

In worlds of beliefs with strong central authorities, those authorities will tend to entrench a single standard set of assumptions, thus neglecting alternative assumptions via the processes outlined above. But in worlds of belief with many “schools of thought”, alternative assumptions will get more attention. It is a trope that “sophomores” tend to presume that most fields are split among different schools of thought, and are surprised to find that this is usually not true.

This entrenchment analysis makes me more sympathetic toward allowing and perhaps even encouraging different schools of thought in many fields. And as central funding sources are at risk of being taken over by a particular school, multiple independent sources of funding seem more likely to promote differing schools of thought.

The obvious big question here is: how can we best change our styles of thought, talk, and interaction to correct for the biases that entrenchment induces?

Too Much of a Good Thing

When people are especially eager to show allegiance to moral allies, they often let themselves be especially irrational. They try not to let this show, but most aren’t very good at hiding it. One cute way to watch this behavior is to ask people if it is possible to have too much of a good thing, or too little of a bad thing. The fully rational answer is of course yes, it is usually possible to go too far in most any direction. But many seem to fear seeming disloyal if they admit this.

For example, I recently gave this poll to my twitter followers:

One of my followers, who has many more followers than I, asked her followers a related poll:

While my and Aella’s followers similarly say we do too little on global warming, hers are far more likely to say that it isn’t possible to do too much. And my followers who think we do too much tend to be less reasonable, in that more of them think it isn’t possible to do too little.

(Note that the third option in Aella’s poll is a logical contradiction: if people actually do too much, surely it must be possible to do too much.)

This seems ripe for a larger more representative poll. Which side is more reasonable in admitting that one could go too far in their direction? And which other kinds of people are more reasonable? How does this can’t-have-too-much effect vary with the topic?

Added 3:30p: If you can understand the first question, on if we do too much or little, you should be able to understand the second question, on if such things are possible. I don’t get how you can be confused about the meaning of the second question, yet can easily answer the first question.

On Stossel Tonight

I should be on tonight’s (9pm EST) episode of Stossel, on Fox Business TV, talking about biases.

Added 8 Dec: I was wrong; the show should air Thursday Dec. 11

Added 28 Dec: Here is a video of the episode

Reason, Stories Tuned for Contests

Humans have a capacity to reason, i.e., to find and weigh reasons for and against conclusions. While one might expect this capacity to be designed to work well for a wide variety of types of conclusions and situations, our actual capacity seems to be tuned for more specific cases. Mercier and Sperber:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. … Poor performance in standard reasoning tasks is explained by the lack of argumentative context. … People turn out to be skilled arguers (more)

That is, our reasoning abilities are focused on contests where we already have conclusions that we want to support or oppose, and where particular rivals give conflicting reasons. I’d add that such abilities also seem tuned to win over contest audiences by impressing them, and by making them identify more with us than with our rivals. We also seem eager to visibly hear argument contests, in addition to participating in such contests, perhaps to gain exemplars to improve our own abilities, to signal our embrace of social norms, and to exert social influence as part of the audience who decides which arguments win.

Humans also have a capacity to tell stories, i.e., to summarize sets of related events. Such events might be real and past, or possible and future. One might expect this capacity to be designed to well-summarize a wide variety of event sets. But, as with reasoning, we might similarly find that our actual story abilities are tuned for the more specific case of contests, where the stories are about ourselves or our rivals, especially where either we or they are suspected of violating social norms. We might also be good at winning over audiences by impressing them and making them identify more with us, and we may also be eager to listen to gain exemplars, signal norms, and exert influence.

Consider some forager examples. You go out to find fire wood, and return two hours later, much later than your spouse expected. During a hunt someone shot an arrow that nearly killed you. You don’t want the band to move to new hunting grounds quite yet, as your mother is sick and hard to move. Someone says something that indirectly suggests that they are a better lover than you.

In such examples, you might want to present an interpretation of related events that persuades others to adopt your favored views, including that you are able and virtuous, and that your rivals are unable and ill-motivated. You might try to do this via direct arguments, or more indirectly via telling a story that includes those events. You might even work more indirectly, by telling a fantasy story where the hero and his rival have suspicious similarities to you and your rival.

This view may help explain some (though hardly all) puzzling features of fiction:

  • Most of our real life events, even the most important ones like marriages, funerals, and choices of jobs or spouses, seem too boring to be told as stories.
  • Compared to real events, even important ones, stories focus far more on direct conscious conflicts between people, and on violations of social norms.
  • Compared to real people, fictional characters have more extreme features, and stronger correlations between good features.
  • Compared to real events, fictional events are far more easily predicted by character motivations, and by assuming a just world.

Why Info Push Dominates

Some phenomena to ponder:

  1. Decades ago I gave talks about how the coming world wide web (which we then called “hypertext publishing”) could help people find more info. Academics would actually reply “I don’t need any info tools; my associates will personally tell me about any research worth knowing about.”
  2. Many said the internet would bring a revolution of info pull, where people pay to get the specific info they want, to supplant the info push of ads, where folks pay to get their messages heard. But even Google gets most revenue from info pushers, and our celebrated social media mainly push info too.
  3. Blog conversations put a huge premium on arguments that appear quickly after other arguments. Mostly, arguments that appear by themselves a few weeks later might as well not exist, for all they’ll influence future expressed opinions.
  4. When people hear negative rumors about others, they usually believe them, and rarely ask the accused directly for their side of the story. This makes it easy to slander folks who aren’t well connected enough to have friends who will tell them who said what about them.
  5. We usually don’t seem to correct well for “independent” confirming clues that actually come from the same source a few steps back. We also tolerate higher status folks dominating meetings and other communication channels, thereby counting their opinions more. So ad campaigns often have time-correlated channel-redundant bursts with high status associations.

Overall, we tend to wait for others to push info onto us, rather than taking the initiative to pull info in, and we tend to gullibly believe such pushed clues, especially when they come from high status folks, come redundantly, and come correlated in time.
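Item 5’s failure to discount redundant clues can be made concrete with a small Bayesian sketch, using hypothetical numbers: three “confirmations” that all trace back to one original source should count as one likelihood ratio, not three.

```python
# Sketch of double-counting: treating repeated clues from one source as independent.
# Hypothetical numbers; Bayes in odds form.

def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = 1.0     # start at 50/50
lr_per_clue = 3.0    # each clue, if genuinely independent, is 3:1 evidence

naive_odds = prior_odds * lr_per_clue ** 3   # three "independent" confirmations
honest_odds = prior_odds * lr_per_clue ** 1  # they all trace back to one source

print(round(odds_to_prob(naive_odds), 2), round(odds_to_prob(honest_odds), 2))
# 0.96 vs 0.75: counting redundant clues as independent inflates confidence.
```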

A simple explanation of all this is that our mental habits were designed to get us to accept the opinions of socially well-connected folks. Such opinions may be more likely to be true, but even if not they are more likely to be socially convenient. Pushed info tends to come with the meta clues of who said it when and via what channel. In contrast, pulled info tends to drop many such meta clues, making it harder to covertly adopt the opinions of the well-connected.

The Need To Believe

When a man loves a woman, …. if she is bad, he can’t see it. She can do no wrong. Turn his back on his best friend, if he puts her down. (Lyrics to “When a Man Loves A Woman”)

Kristeva analyzes our “incredible need to believe”–the inexorable push toward faith that … lies at the heart of the psyche and the history of society. … Human beings are formed by their need to believe, beginning with our first attempts at speech and following through to our adolescent search for identity and meaning. (more)

This “to believe” … is that of Montaigne … when he writes, “For Christians, recounting something incredible is an occasion for belief”; or the “to believe” of Pascal: “The mind naturally believes and the will naturally loves; so that if lacking true objects, they must attach themselves to false ones.” (more)

We often shake our heads at the gullibility of others. We hear a preacher’s sermon, a politician’s speech, a salesperson’s pitch, or a flatterer’s sweet talk, and we think:

Why do they fall for that? Can’t they see this advocate’s obvious vested interest, and transparent use of standard unfair rhetorical tricks? I must be more perceptive, thoughtful, rational, and reality-based than they. Guess that justifies my disagreeing with them.

Problem is, like the classic man who loves a woman, we find it hard to see flaws in what we love. That is, it is easier to see flaws when we aren’t attached. When we “buy” we more easily see the flaws in the products we reject, and when we “sell” we can often ignore criticisms by those who don’t buy.

Why? Because we have near and far reasons to like things. And while we might actually choose for near reasons, we want to believe that we choose for far reasons. We have a deep hunger to love some things, and to believe that we love them for the ideal reasons we most respect for loving things. This applies not only to other people, but also to politicians, writers, actors, and ideas.

For the options we reject, however, we can see more easily the near reasons that might induce others to choose them. We can see pandering and flimsy excuses that wouldn’t stand up to scrutiny. We can see forced smiles, implausible flattery, slavishly following fashion, and unthinking confirmation bias. We can see politicians who hold ambiguous positions on purpose.

Because of all this, we are the most vulnerable to not seeing the construction of and the low motives behind the stuff we most love. This can be functional in that we can gain from seeming to honestly, sincerely, and deeply love some things. This can make others that we love, or who love the same things, feel more bonded to us. But it also means we mistake why we love things. For example, academics are usually less interesting or insightful when researching topics where they feel the strongest; they do better on topics of only moderate interest to them.

This also explains why sellers tend to ignore critiques of their products as not idealistic enough. They know that if they can just get good enough on base features, we’ll suddenly forget our idealism critiques. For example, a movie maker can ignore criticisms that her movie is trite, unrealistic, and without social commentary. She knows that if she can make the actors pretty enough, or the action engaging enough, we may love the movie enough to tell ourselves it is realistic, or has important social commentary. Similarly, most actors don’t really need to learn how to express deep or realistic emotions. They know that if they can make their skin smooth enough, or their figure toned enough, we may want to believe their smile is sincere and their feelings deep.

Same for us academics. We can ignore critiques of our research not having important implications. We know that if we can include impressive enough techniques, clever enough data, and describe it all with a pompous enough tone, our audiences may be impressed enough to tell themselves that our trivial extensions of previous ideas are deep and original.

Beware your tendency to overlook flaws in things you love.

What Do We Know That We Can’t Say?

I’ve been vacationing with family this week, and (re-) noticed a few things. When we played miniature golf, the winners were treated as if shown to be better skilled than previously thought, even though score differences were statistically insignificant. Same for arcade shooter contests. We also picked which Mexican place to eat at based on one person saying they had eaten there once and it was ok, even though, given how random it is who likes what when, that was unlikely to be a statistically significant basis for estimating what the rest of us would like.

The general point is that we quite often collect and act on rather weak info clues. This could make good info sense. We might be slowly collecting small clues that eventually add up to big clues. Or if we know well which parameters matter the most, it can make sense to act on weak clues; over a lifetime this can add up to net good decisions. When this is what is going on, then people will tend to know many things they cannot explicitly justify. They might have seen a long history of related situations, and have slowly accumulated enough relevant clues to form useful judgments, but not be able to explicitly point to most of those weak clues which were the basis of that judgement.
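That “slow accumulation of weak clues” has a simple Bayesian reading, sketched below with illustrative numbers: many individually unimpressive likelihood ratios can multiply into strong evidence, even though no single clue could be pointed to as the justification.

```python
# Many weak clues can add up: each clue alone is weak evidence (likelihood
# ratio 1.2), but thirty of them, if roughly independent, are strong evidence.

lr_weak, n_clues = 1.2, 30
prior_odds = 1.0
posterior_odds = prior_odds * lr_weak ** n_clues
print(round(posterior_odds))  # about 237, i.e. roughly 237-to-1 odds
# No single clue justifies the judgement, yet the accumulated total can,
# provided the clues really are roughly independent of each other.
```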

Another thing I noticed on vacation is that a large fraction of my relatives age 50 or older think that they know that their lives were personally saved by medicine. They can tell of one or more specific episodes where a good doctor did the right thing, and they’d otherwise be dead. But people just can’t on average have this much evidence, since we usually find it hard to see effects of medicine on health even when we have datasets with thousands of people. (I didn’t point this out to them – these beliefs seemed among the ones they held most deeply and passionately.) So clearly this intuitive collection of weak clues stuff goes very wrong sometimes, even on topics where people suffer large personal consequences. It is not just that random errors can show up; there are topics on which our minds are systematically structured, probably on purpose, to greatly mislead us in big ways.

One of the biggest questions we face is thus: when are judgements trustworthy? When can we guess that the intuitive slow accumulation of weak clues by others or ourselves embodies sufficient evidence to be a useful guide to action? At one extreme, one could try to never act on anything less than explicitly well-founded reasoning, but this would usually go very badly; we mostly have no choice but to rely heavily on such intuition. At the other extreme, many people go their whole lives relying almost entirely on their intuitions, and they seem to mostly do okay.

In between, people often act like they rely on intuition except when good solid evidence is presented to the contrary, but they usually rely on their intuition to judge when to look for explicit evidence, and when that is solid enough. So when those intuitions fail the whole process fails.

Prediction markets seem a robust way to cut through this fog; just believe the market prices when available. But people are usually resistant to creating such market prices, probably exactly because doing so might force them to drop treasured intuitive judgements.

On this blog I often present weak clues, relevant to important topics, but by themselves not sufficient to draw strong conclusions. Usually commenters are eager to indignantly point out this fact. Each and every time. But on many topics we have little other choice; until many weak clues are systematically collected into strong clues, weak clues are what we have. And the topics of where our intuitive conclusions are most likely to be systematically biased tend to be those sort of topics. So I’ll continue to struggle to collect whatever clues I can find there.

Signaling bias in philosophical intuition

Intuitions are a major source of evidence in philosophy. Intuitions are also a significant source of evidence about the person having the intuitions. In most situations where onlookers are likely to read something into a person’s behavior, people adjust their behavior to look better. If philosophical intuitions are swayed in this way, this could be quite a source of bias.

One first step to judging whether signaling motives change intuitions is to determine whether people read personal characteristics into philosophical intuitions. It seems to me that they do, at least for many intuitions. If you claim to find libertarian arguments intuitive, I think people will expect you to have other libertarian personality traits, even if on consideration you aren’t a libertarian. If consciousness doesn’t seem intuitively mysterious to you, one can’t help wondering if you have a particularly unnoticeable internal life. If it seems intuitively correct to push the fat man in front of the train, you will seem like a cold, calculating sort of person. If it seems intuitively fine to kill children in societies with pro-children-killing norms, but you choose to condemn it for other reasons, you will have all kinds of problems maintaining relationships with people who learn this.

So I think people treat philosophical intuitions as evidence about personality traits. Is there evidence of people responding by changing their intuitions?

People are enthusiastic to show off their better looking intuitions. They identify with some intuitions and take pleasure in holding them. For instance, in my philosophy of science class the other morning, a classmate proudly dismissed some point, declaring, ‘my intuitions are very rigorous’. If his intuitions are different from most, and average intuitions actually indicate truth, then his are especially likely to be inaccurate. Yet he seems particularly keen to talk about them, and chooses positions based much more strongly on them than on others’ intuitions.

I see similar urges in myself sometimes. For instance, consistent answers to the Allais paradox are usually so intuitive to me that I forget which way one is supposed to err. This seems good to me. So when folks seek to change normative rationality to fit their more popular intuitions, I’m quick to snort at such a project. Really, they and I have the same evidence from intuitions, assuming we believe one another’s introspective reports. My guess is that we don’t feel like coming to agreement because they want to cheer for something like ‘human reason is complex and nuanced and can’t be captured by simplistic axioms’ and I want to cheer for something like ‘maximize expected utility in the face of all temptations’ (I don’t mean to endorse such behavior). People identify with their intuitions, so it appears they want their intuitions to be seen and associated with their identity. It is rare to hear a person claim to have an intuition that they are embarrassed by.

So it seems to me that intuitions are seen as a source of evidence about people, and that people respond at least by making their better looking intuitions more salient. Do they go further and change their stated intuitions? Introspection is an indistinct business. If there is room anywhere to unconsciously shade your beliefs one way or another, it’s in intuitions. So it’s hard to imagine there not being manipulation going on, unless you think people never change their beliefs in response to incentives other than accuracy.

Perhaps this isn’t so bad. If I say X seems intuitively correct, but only because I guess others will think seeing X as intuitively correct is morally right, then I am doing something like guessing what others find intuitively correct. Which might be a bit of a noisy way to read intuitions, but at least isn’t obviously biased. That is, if each person is biased in the direction of what others think, this shouldn’t obviously bias the consensus. But there is a difference between changing your answer toward what others would think is true, and changing your answer to what will cause others to think you are clever, impressive, virile, or moral. The latter will probably lead to bias.
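A small simulation sketch, with arbitrary numbers of my own choosing, illustrates that distinction: shading reports toward what others think leaves the average report roughly where it was, while shading toward whatever looks admirable shifts the whole consensus.

```python
# Sketch: two ways of shading reported intuitions, with arbitrary numbers.
import random

random.seed(0)
true_value = 0.0                  # the "right" answer the crowd is estimating
n, noise, w = 10_000, 1.0, 0.5    # population size, noise level, shading weight
raw = [random.gauss(true_value, noise) for _ in range(n)]
crowd_mean = sum(raw) / n

# (a) Shade toward what you guess others think: the average barely moves.
toward_others = [(1 - w) * x + w * crowd_mean for x in raw]

# (b) Shade toward the answer that makes you look kind or clever (say, +0.5).
flattering_shift = 0.5
toward_looking_good = [x + w * flattering_shift for x in raw]

print(round(sum(toward_others) / n, 3))        # ~0.0: consensus stays unbiased
print(round(sum(toward_looking_good) / n, 3))  # ~0.25: consensus shifts
```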

I’ll elaborate on an example, for concreteness. People ask if it’s ok to push a fat man in front of a trolley to stop it from killing some others. What would you think of me if I said that it at least feels intuitively right to push the fat man? Probably you lower your estimation of my kindness a bit, and maybe suspect that I’m some kind of sociopath. So if I do feel that way, I’m less likely to tell you than if I feel the opposite way. So our reported intuitions on this case are presumably biased in the direction of not pushing the fat man. So what we should really do is likely further in the direction of pushing the fat man than we think.

The Smart Are MORE Biased To Think They Are LESS Biased

I seem to know a lot of smart contrarians who think that standard human biases justify their contrarian position. They argue:

Yes, my view on this subject is in contrast to a consensus among academic and other credentialed experts on this subject. But the fact is that human thoughts are subject to many standard biases, and those biases have misled most others to this mistaken consensus position. For example biases A,B, and C would tend to make people think what they do on this subject, even if that view were not true. I, in contrast, have avoided these biases, both because I know about them (see, I can name them), and because I am so much smarter than these other folks. (Have you seen my test scores?) And this is why I can justifiably disagree with an expert consensus on this subject.

Problem is, not only are smart folks not less biased for many biases, if anything smart folks more easily succumb to the bias of thinking that they are less biased than others:

The so-called bias blind spot arises when people report that thinking biases are more prevalent in others than in themselves. … We found that none of these bias blind spots were attenuated by measures of cognitive sophistication such as cognitive ability or thinking dispositions related to bias. If anything, a larger bias blind spot was associated with higher cognitive ability. Additional analyses indicated that being free of the bias blind spot does not help a person avoid the actual classic cognitive biases. …

Most cognitive biases in the heuristics and biases literature are negatively correlated with cognitive sophistication, whether the latter is indexed by development, by cognitive ability, or by thinking dispositions. This was not true for any of the bias blind spots studied here. As opposed to the social emphasis in past work on the bias blind spot, we examined bias blind spots connected to some of the most well-known effects from the heuristics and biases literature: outcome bias, base-rate neglect, framing bias, conjunction fallacy, anchoring bias, and myside bias. We found that none of these bias blind spot effects displayed a negative correlation with measures of cognitive ability (SAT total, CRT) or with measures of thinking dispositions (need for cognition, actively open-minded thinking). If anything, the correlations went in the other direction.

We explored the obvious explanation for the indications of a positive correlation between cognitive ability and the magnitude of the bias blind spot in our data. That explanation is the not unreasonable one that more cognitively sophisticated people might indeed show lower cognitive biases—so that it would be correct for them to view themselves as less biased than their peers. However, … we found very little evidence that these classic biases were attenuated by cognitive ability. More intelligent people were not actually less biased—a finding that would have justified their displaying a larger bias blind spot. …

Thus, the bias blind spot joins a small group of other effects such as myside bias and noncausal base-rate neglect in being unmitigated by increases in intelligence. That cognitive sophistication does not mitigate the bias blind spot is consistent with the idea that the mechanisms that cause the bias are quite fundamental and not easily controlled strategically— that they reflect what is termed Type 1 processing in dual-process theory. (more)

Added 12 June: The New Yorker talks about this paper:

The results were quite disturbing. For one thing, self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.” … All four of the measures showed positive correlations, “indicating that more cognitively sophisticated participants showed larger bias blind spots.”
