Tag Archives: Standard Biases

Reason, Stories Tuned for Contests

Humans have a capacity to reason, i.e., to find and weigh reasons for and against conclusions. While one might expect this capacity to be designed to work well for a wide variety of types of conclusions and situations, our actual capacity seems to be tuned for more specific cases. Mercier and Sperber:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. … Poor performance in standard reasoning tasks is explained by the lack of argumentative context. … People turn out to be skilled arguers (more)

That is, our reasoning abilities are focused on contests where we already have conclusions that we want to support or oppose, and where particular rivals give conflicting reasons. I’d add that such abilities also seem tuned to win over contest audiences by impressing them, and by making them identify more with us than with our rivals. We also seem eager to visibly hear argument contests, in addition to participating in such contests, perhaps to gain exemplars to improve our own abilities, to signal our embrace of social norms, and to exert social influence as part of the audience who decides which arguments win.

Humans also have a capacity to tell stories, i.e., to summarize sets of related events. Such events might be real and past, or possible and future. One might expect this capacity to be designed to summarize a wide variety of event sets well. But, as with reasoning, we might similarly find that our actual story abilities are tuned for the more specific case of contests, where the stories are about ourselves or our rivals, especially where either we or they are suspected of violating social norms. We might also be good at winning over audiences by impressing them and making them identify more with us, and we may also be eager to listen to gain exemplars, signal norms, and exert influence.

Consider some forager examples. You go out to find firewood, and return two hours later, much later than your spouse expected. During a hunt someone shot an arrow that nearly killed you. You don’t want the band to move to new hunting grounds quite yet, as your mother is sick and hard to move. Someone says something that indirectly suggests that they are a better lover than you.

In such examples, you might want to present an interpretation of related events that persuades others to adopt your favored views, including that you are able and virtuous, and that your rivals are unable and ill-motivated. You might try to do this via direct arguments, or more indirectly via telling a story that includes those events. You might even work more indirectly, by telling a fantasy story where the hero and his rival have suspicious similarities to you and your rival.

This view may help explain some (though hardly all) puzzling features of fiction:

  • Most of our real life events, even the most important ones like marriages, funerals, and choices of jobs or spouses, seem too boring to be told as stories.
  • Compared to real events, even important ones, stories focus far more on direct conscious conflicts between people, and on violations of social norms.
  • Compared to real people, fictional characters have more extreme features, and stronger correlations among their good features.
  • Compared to real events, fictional events are far more easily predicted by character motivations, and by assuming a just world.

Why Info Push Dominates

Some phenomena to ponder:

  1. Decades ago I gave talks about how the coming world wide web (which we then called “hypertext publishing”) could help people find more info. Academics would actually reply “I don’t need any info tools; my associates will personally tell me about any research worth knowing about.”
  2. Many said the internet would bring a revolution of info pull, where people pay to get the specific info they want, to supplant the info push of ads, where folks pay to get their messages heard. But even Google gets most revenue from info pushers, and our celebrated social media mainly push info too.
  3. Blog conversations put a huge premium on arguments that appear quickly after other arguments. Mostly, arguments that appear by themselves a few weeks later might as well not exist, for all they’ll influence future expressed opinions.
  4. When people hear negative rumors about others, they usually believe them, and rarely ask the accused directly for their side of the story. This makes it easy to slander folks who aren’t well connected enough to have friends who will tell them who said what about them.
  5. We usually don’t seem to correct well for “independent” confirming clues that actually come from the same source a few steps back. We also tolerate higher status folks dominating meetings and other communication channels, thereby counting their opinions more. So ad campaigns often have time-correlated channel-redundant bursts with high status associations.

Overall, we tend to wait for others to push info onto us, rather than taking the initiative to pull info in, and we tend to gullibly believe such pushed clues, especially when they come from high status folks, come redundantly, and come correlated in time.
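
To make the redundancy point concrete, here is a minimal sketch in Python, with hypothetical numbers, of how treating redundant clues as independent inflates confidence: if three “confirmations” all trace back to one original source, the evidence should be counted roughly once, not three times.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by likelihood ratios treated as independent."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def to_prob(odds):
    return odds / (1 + odds)

prior = 1.0   # even odds on some claim (hypothetical)
lr = 3.0      # one moderately informative source: a 3:1 likelihood ratio

naive = posterior_odds(prior, [lr] * 3)    # counts three echoing channels as independent
careful = posterior_odds(prior, [lr])      # notices all three echo one original source

print(f"naive:   {to_prob(naive):.2f}")    # ~0.96
print(f"careful: {to_prob(careful):.2f}")  # 0.75
```

The gap between those two numbers is the extra confidence a time-correlated, channel-redundant burst of messages can buy without adding any new information.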

A simple explanation of all this is that our mental habits were designed to get us to accept the opinions of socially well-connected folks. Such opinions may be more likely to be true, but even if not they are more likely to be socially convenient. Pushed info tends to come with the meta clues of who said it when and via what channel. In contrast, pulled info tends to drop many such meta clues, making it harder to covertly adopt the opinions of the well-connected.


The Need To Believe

When a man loves a woman, …. if she is bad, he can’t see it. She can do no wrong. Turn his back on his best friend, if he puts her down. (Lyrics to “When a Man Loves A Woman”)

Kristeva analyzes our “incredible need to believe”–the inexorable push toward faith that … lies at the heart of the psyche and the history of society. … Human beings are formed by their need to believe, beginning with our first attempts at speech and following through to our adolescent search for identity and meaning. (more)

This “to believe” … is that of Montaigne … when he writes, “For Christians, recounting something incredible is an occasion for belief”; or the “to believe” of Pascal: “The mind naturally believes and the will naturally loves; so that if lacking true objects, they must attach themselves to false ones.” (more)

We often shake our heads at the gullibility of others. We hear a preacher’s sermon, a politician’s speech, a salesperson’s pitch, or a flatterer’s sweet talk, and we think:

Why do they fall for that? Can’t they see this advocate’s obvious vested interest, and transparent use of standard unfair rhetorical tricks? I must be more perceptive, thoughtful, rational, and reality-based than they. Guess that justifies my disagreeing with them.

Problem is, like the classic man who loves a woman, we find it hard to see flaws in what we love. That is, it is easier to see flaws when we aren’t attached. When we “buy” we more easily see the flaws in the products we reject, and when we “sell” we can often ignore criticisms by those who don’t buy.

Why? Because we have near and far reasons to like things. And while we might actually choose for near reasons, we want to believe that we choose for far reasons. We have a deep hunger to love some things, and to believe that we love them for the ideal reasons we most respect for loving things. This applies not only to other people, but also to politicians, writers, actors, and ideas.

For the options we reject, however, we can see more easily the near reasons that might induce others to choose them. We can see pandering and flimsy excuses that wouldn’t stand up to scrutiny. We can see forced smiles, implausible flattery, slavishly following fashion, and unthinking confirmation bias. We can see politicians who hold ambiguous positions on purpose.

Because of all this, we are most vulnerable to not seeing the construction of, and the low motives behind, the stuff we most love. This can be functional, in that we can gain from seeming to honestly, sincerely, and deeply love some things. This can make others that we love, or who love the same things, feel more bonded to us. But it also means we mistake why we love things. For example, academics are usually less interesting or insightful when researching the topics they feel most strongly about; they do better on topics of only moderate interest to them.

This also explains why sellers tend to ignore critiques of their products as not idealistic enough. They know that if they can just get good enough on base features, we’ll suddenly forget our idealism critiques. For example, a movie maker can ignore criticisms that her movie is trite, unrealistic, and without social commentary. She knows that if she can make the actors pretty enough, or the action engaging enough, we may love the movie enough to tell ourselves it is realistic, or has important social commentary. Similarly, most actors don’t really need to learn how to express deep or realistic emotions. They know that if they can make their skin smooth enough, or their figure toned enough, we may want to believe their smile is sincere and their feelings deep.

Same for us academics. We can ignore critiques that our research lacks important implications. We know that if we can include impressive enough techniques, clever enough data, and describe it all with a pompous enough tone, our audiences may be impressed enough to tell themselves that our trivial extensions of previous ideas are deep and original.

Beware your tendency to overlook flaws in things you love.


What Do We Know That We Can’t Say?

I’ve been vacationing with family this week, and (re-) noticed a few things. When we played miniature golf, the winners were treated as if shown to be better skilled than previously thought, even though the score differences were statistically insignificant. Same for arcade shooter contests. We also picked which Mexican place to eat at based on one person saying they had eaten there once and it was ok, even though, given how random it is who likes what when, that was unlikely to be a statistically significant clue about what the rest of us would like.
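
As a minimal sketch of how weak such clues are, here are some made-up per-hole scores in Python: a four-stroke gap over one eighteen-hole round is nowhere near statistically significant.

```python
import numpy as np
from scipy import stats

# Hypothetical per-hole scores for one 18-hole mini-golf round.
winner = np.array([2, 3, 2, 4, 3, 2, 3, 5, 2, 3, 2, 4, 3, 2, 3, 3, 2, 4])  # 52 strokes
loser  = np.array([3, 2, 4, 3, 2, 5, 3, 3, 4, 2, 3, 3, 4, 2, 3, 4, 3, 3])  # 56 strokes

t, p = stats.ttest_ind(winner, loser)
print(f"gap: {loser.sum() - winner.sum()} strokes, p = {p:.2f}")  # p comes out near 0.45
```

A p-value that size is exactly the kind of weak clue described in the next paragraph: far from useless, but nothing like a demonstration of skill.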

The general point is that we quite often collect and act on rather weak info clues. This could make good info sense. We might be slowly collecting small clues that eventually add up to big clues. Or if we know well which parameters matter the most, it can make sense to act on weak clues; over a lifetime this can add up to net good decisions. When this is what is going on, then people will tend to know many things they cannot explicitly justify. They might have seen a long history of related situations, and have slowly accumulated enough relevant clues to form useful judgments, but not be able to explicitly point to most of those weak clues which were the basis of that judgement.
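
Here is a minimal sketch, with a hypothetical clue strength, of how that accumulation works in log-odds terms: no single clue is worth citing, yet fifty of them move a judgment from even odds to near certainty.

```python
import math

def prob(logodds):
    return 1 / (1 + math.exp(-logodds))

weak_clue = math.log(1.1)   # each clue is a barely informative 1.1:1 likelihood ratio
for n in (1, 10, 50):       # starting from even odds (p = 0.5)
    print(f"clues={n:2d} -> p = {prob(n * weak_clue):.2f}")
# clues= 1 -> p = 0.52
# clues=10 -> p = 0.72
# clues=50 -> p = 0.99
```

Someone in this position can reasonably trust the conclusion without being able to point to any single clue that would justify it on its own.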

Another thing I noticed on vacation is that a large fraction of my relatives age 50 or older think that they know that their lives were personally saved by medicine. They can tell of one or more specific episodes where a good doctor did the right thing, and they’d otherwise be dead. But people just can’t on average have this much evidence, since we usually find it hard to see effects of medicine on health even when we have datasets with thousands of people. (I didn’t point this out to them – these beliefs seemed among the ones they held most deeply and passionately.) So clearly this intuitive collection of weak clues stuff goes very wrong sometimes, even on topics where people suffer large personal consequences. It is not just that random errors can show up; there are topics on which our minds are systematically structured, probably on purpose, to greatly mislead us in big ways.
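
To see why such effects are hard to detect, here is a minimal sketch with hypothetical effect sizes: suppose a treatment cuts five-year mortality from 2.0% to 1.6%, a genuinely life-saving effect, and we compare 2,000 treated to 2,000 untreated people.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical: 2.0% five-year mortality untreated, 1.6% treated, 2,000 people per arm.
p0, p1, n, alpha = 0.020, 0.016, 2000, 0.05

se = sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)   # std. error of the observed difference
power = norm.cdf((p0 - p1) / se - norm.ppf(1 - alpha / 2))
print(f"power to detect the effect: {power:.0%}")  # roughly 16%
```

With only about a one-in-six chance of detecting a real effect in a four-thousand-person study, one person’s sense that a doctor saved their life carries far less evidential weight than it feels like it does.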

One of the biggest questions we face is thus: when are judgements trustworthy? When can we guess that the intuitive slow accumulation of weak clues by others or ourselves embodies sufficient evidence to be a useful guide to action? At one extreme, one could try to never act on anything less than explicitly well-founded reasoning, but this would usually go very badly; we mostly have no choice but to rely heavily on such intuition. At the other extreme, many people go their whole lives relying almost entirely on their intuitions, and they seem to mostly do okay.

In between, people often act like they rely on intuition except when good solid evidence is presented to the contrary, but they usually rely on their intuition to judge when to look for explicit evidence, and when that is solid enough. So when those intuitions fail the whole process fails.

Prediction markets seem a robust way to cut through this fog; just believe the market prices when available. But people are usually resistant to creating such market prices, probably exactly because doing so might force them to drop treasured intuitive judgements.

On this blog I often present weak clues, relevant to important topics, but by themselves not sufficient to draw strong conclusions. Usually commenters are eager to indignantly point out this fact. Each and every time. But on many topics we have little other choice; until many weak clues are systematically collected into strong clues, weak clues are what we have. And the topics where our intuitive conclusions are most likely to be systematically biased tend to be just those sorts of topics. So I’ll continue to struggle to collect whatever clues I can find there.


Signaling bias in philosophical intuition

Intuitions are a major source of evidence in philosophy. Intuitions are also a significant source of evidence about the person having the intuitions. In most situations where onlookers are likely to read something into a person’s behavior, people adjust their behavior to look better. If philosophical intuitions are swayed in this way, this could be quite a source of bias.

One first step to judging whether signaling motives change intuitions is to determine whether people read personal characteristics into philosophical intuitions. It seems to me that they do, at least for many intuitions. If you claim to find libertarian arguments intuitive, I think people will expect you to have other libertarian personality traits, even if on consideration you aren’t a libertarian. If consciousness doesn’t seem intuitively mysterious to you, one can’t help but wonder if you have a particularly unnoticeable internal life. If it seems intuitively correct to push the fat man in front of the train, you will seem like a cold, calculating sort of person. If it seems intuitively fine to kill children in societies with pro-children-killing norms, but you choose to condemn it for other reasons, you will have all kinds of problems maintaining relationships with people who learn this.

So I think people treat philosophical intuitions as evidence about personality traits. Is there evidence of people responding by changing their intuitions?

People are enthusiastic to show off their better looking intuitions. They identify with some intuitions and take pleasure in holding them. For instance, in my philosophy of science class the other morning, a classmate proudly dismissed some point, declaring, ‘my intuitions are very rigorous’. If his intuitions are different from most, and average intuitions actually indicate truth, then his are especially likely to be inaccurate. Yet he seems particularly keen to talk about them, and chooses positions based much more strongly on his own intuitions than on others’.

I see similar urges in myself sometimes. For instance, consistent answers to the Allais paradox are usually so intuitive to me that I forget which way one is supposed to err. This seems good to me. So when folks seek to change normative rationality to fit their more popular intuitions, I’m quick to snort at such a project. Really, they and I have the same evidence from intuitions, assuming we believe one another’s introspective reports. My guess is that we don’t feel like coming to agreement because they want to cheer for something like ‘human reason is complex and nuanced and can’t be captured by simplistic axioms’ and I want to cheer for something like ‘maximize expected utility in the face of all temptations’ (I don’t mean to endorse such behavior). People identify with their intuitions, so it appears they want their intuitions to be seen and associated with their identity. It is rare to hear a person claim to have an intuition that they are embarrassed by.

So it seems to me that intuitions are seen as a source of evidence about people, and that people respond at least by making their better looking intuitions more salient. Do they go further and change their stated intuitions? Introspection is an indistinct business. If there is room anywhere to unconsciously shade your beliefs one way or another, it’s in intuitions. So it’s hard to imagine there not being manipulation going on, unless you think people never change their beliefs in response to incentives other than accuracy.

Perhaps this isn’t so bad. If I say X seems intuitively correct, but only because I guess others will think seeing X as intuitively correct is morally right, then I am doing something like guessing what others find intuitively correct. Which might be a bit of a noisy way to read intuitions, but at least isn’t obviously biased. That is, if each person is biased in the direction of what others think, this shouldn’t obviously bias the consensus. But there is a difference between changing your answer toward what others would think is true, and changing your answer to what will cause others to think you are clever, impressive, virile, or moral. The latter will probably lead to bias.
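
A small simulation sketch of that distinction, with all numbers hypothetical: shading one’s report toward the group’s average intuition leaves the consensus roughly unchanged, while shading toward whatever answer looks admirable shifts the consensus itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
truth = 0.0                                    # the "right" answer on some intuition scale
private = truth + rng.normal(0.0, 1.0, n)      # noisy private intuitions
flattering = 2.0                               # the answer that makes one look kind or clever

toward_others = 0.5 * private + 0.5 * private.mean()   # shade toward what others think
toward_image  = 0.5 * private + 0.5 * flattering       # shade toward looking good

print(f"honest consensus:        {private.mean():+.2f}")        # ~ 0.00
print(f"other-shaded consensus:  {toward_others.mean():+.2f}")  # ~ 0.00
print(f"image-shaded consensus:  {toward_image.mean():+.2f}")   # ~ +1.00
```

Only the second kind of shading biases the consensus, which is the worry in the trolley case below.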

I’ll elaborate on an example, for concreteness. People ask if it’s ok to push a fat man in front of a trolley to stop it from killing some others. What would you think of me if I said that it at least feels intuitively right to push the fat man? Probably you lower your estimation of my kindness a bit, and maybe suspect that I’m some kind of sociopath. So if I do feel that way, I’m less likely to tell you than if I feel the opposite way. So our reported intuitions on this case are presumably biased in the direction of not pushing the fat man. So what we should really do is likely further in the direction of pushing the fat man than we think.


The Smart Are MORE Biased To Think They Are LESS Biased

I seem to know a lot of smart contrarians who think that standard human biases justify their contrarian position. They argue:

Yes, my view on this subject is in contrast to a consensus among academic and other credentialed experts on this subject. But the fact is that human thoughts are subject to many standard biases, and those biases have misled most others to this mistaken consensus position. For example biases A,B, and C would tend to make people think what they do on this subject, even if that view were not true. I, in contrast, have avoided these biases, both because I know about them (see, I can name them), and because I am so much smarter than these other folks. (Have you seen my test scores?) And this is why I can justifiably disagree with an expert consensus on this subject.

Problem is, not only are smart folks not less biased for many biases; if anything, smart folks more easily succumb to the bias of thinking that they are less biased than others:

The so-called bias blind spot arises when people report that thinking biases are more prevalent in others than in themselves. … We found that none of these bias blind spots were attenuated by measures of cognitive sophistication such as cognitive ability or thinking dispositions related to bias. If anything, a larger bias blind spot was associated with higher cognitive ability. Additional analyses indicated that being free of the bias blind spot does not help a person avoid the actual classic cognitive biases. …

Most cognitive biases in the heuristics and biases literature are negatively correlated with cognitive sophistication, whether the latter is indexed by development, by cognitive ability, or by thinking dispositions. This was not true for any of the bias blind spots studied here. As opposed to the social emphasis in past work on the bias blind spot, we examined bias blind spots connected to some of the most well-known effects from the heuristics and biases literature: outcome bias, base-rate neglect, framing bias, conjunction fallacy, anchoring bias, and myside bias. We found that none of these bias blind spot effects displayed a negative correlation with measures of cognitive ability (SAT total, CRT) or with measures of thinking dispositions (need for cognition, actively open-minded thinking). If anything, the correlations went in the other direction.

We explored the obvious explanation for the indications of a positive correlation between cognitive ability and the magnitude of the bias blind spot in our data. That explanation is the not unreasonable one that more cognitively sophisticated people might indeed show lower cognitive biases—so that it would be correct for them to view themselves as less biased than their peers. However, … we found very little evidence that these classic biases were attenuated by cognitive ability. More intelligent people were not actually less biased—a finding that would have justified their displaying a larger bias blind spot. …

Thus, the bias blind spot joins a small group of other effects such as myside bias and noncausal base-rate neglect in being unmitigated by increases in intelligence. That cognitive sophistication does not mitigate the bias blind spot is consistent with the idea that the mechanisms that cause the bias are quite fundamental and not easily controlled strategically— that they reflect what is termed Type 1 processing in dual-process theory. (more)

Added 12June: The New Yorker talks about this paper:

The results were quite disturbing. For one thing, self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.” … All four of the measures showed positive correlations, “indicating that more cognitively sophisticated participants showed larger bias blind spots.”


Eventual Futures

I’ve noticed that recommendations for action based on a vision of the future often rest on the idea that something must “eventually” occur. For example, eventually:

  • We will run out of coal, so we’d better find replacements soon.
  • Earth will run out of the energy stored in fossil fuels and radioactive materials, so we’d better get ready to run only on sunlight.
  • Earth will run out of room for trash, so we must stop making trash.
  • The sun will die out, so we’d better get ready to move to another sun.
  • There will be a race to colonize other planets and stars, so our group should get out there first so we don’t lose this race.
  • Chips will use X instead of silicon, so our chip firms must use X now, to not be left behind.
  • There will be no privacy of any sort, so we might as well get used to it now.
  • Some races will win, so we’d best fight for ours before it’s too late.
  • Firms will be stronger than nations, unless we break their power soon.
  • There will be a stronger world government, so let’s start one now.
  • There will be conflict between China and the West, or Islam and the West, so we’d best strike first now.
  • Artificial intelligences will rule the world, so let’s figure out now how to make a good one.
  • We’ll invent all that is worth inventing, so let’s find a way now to live without innovation.
  • We’ll know all the physics there is, so let’s find something else interesting now.
  • There will be a huge deadly world war, so let’s stock some bunkers to hide in.
  • Nanobots will give everyone anything they want, so why work now?
  • The first nano-assembler’s owner will rule the world, so we best study nanotech now.
  • More fertile immigrants will outnumber us, so we’d best not let them in.
  • The more fertile stupid will make the world dumb, unless we stop them now.

The common pattern: project forward a current trend to an extreme, while assuming other things don’t change much, and then recommend an action which might make sense if this extreme change were to happen all at once soon.

This is usually a mistake. The trend may not continue indefinitely. Or, by the time a projected extreme is reached, other changes may have changed the appropriate response. Or, the best response may be to do nothing for a long time, until closer to big consequences. Or, the best response may be to do nothing, ever – not all negative changes can be profitably resisted.

It is just not enough to suspect that an extreme will be reached eventually – you usually need a good reason to think it will happen soon, and that you know a robust way to address it. In far mode it often feels like the far future is clearly visible, and that few obstacles stand in the way of planning paths to achieve far ends. But in fact, the world is much messier than far mode is willing to admit.


Honesty Via Distraction

From Trivers’ book The Folly of Fools:

When a person is placed under cognitive load (by having to memorize a string of numbers while making a moral evaluation), the individual does not express the usual bias toward self. But when the same evaluation is made absent cognitive load, a strong bias emerges in favor of seeing oneself acting more fairly than another individual doing the identical action. This suggests that built deeply in us is a mechanism that tries to make universally just evaluations, but that after the fact, “higher” faculties paint the matter in our favor. (p.22)

This suggests an interesting way to avoid bias – make judgements fast under distracting cognitive load.


The Beauty Bias

As I write these words I’m riding a late night train, listening to some beautiful music and noticing a beautiful woman in the aisle opposite. And I can feel with unusual vividness my complete vulnerability to a beauty bias. The careful analytical thoughts I had hours before now seem, no matter what their care or basis, trivial and small by comparison.

If words and coherent thoughts came through this beauty channel, they would feel so much more compelling. If I had to choose between beauty and something plain or ugly, I would be so so eager to find excuses to choose beauty. If I needed to believe beauty was stronger or more moral or better for the world, reasons would be found, and it would feel easy to accept them.

This all horrifies the part of me that wants to believe what is true, based on some coherent and fair use of reasons and analysis. But I can see how very inadequate I am to resist it. The best I can do, it seems, is to not form beliefs or opinions while attending to beauty. Such as by avoiding music with non-trivial lyrics. And by wariness of opinions regarding a divide where one side is more beautiful. (Yes Tyler, this does question my taste for elegant theoretical simplicity.)

I have little useful advice here, alas, other than: know your limits. If you cannot help but fall into a ditch when you walk near one, then keep away, or accept that you’ll fall in.


Making Up Opinions

Perhaps the most devastating problem with subjective [survey] questions, however, is the possibility that attitudes may not “exist” in a coherent form. A first indication of such problems is that measured attitudes are quite unstable over time. For example, in two surveys spaced a few months apart, the same subjects were asked about their views on government spending. Amazingly, 55% of the subjects reported different answers. Such low correlations at high frequencies are quite representative.

Part of the problem comes from respondents’ reluctance to admit lack of an attitude. Simply because the surveyor is asking the question, respondents believe that they should have an opinion about it. For example, researchers have shown that large minorities would respond to questions about obscure or even fictitious issues, such as providing opinions on countries that don’t exist. (more; HT Tyler)

I’m not clear on just how far this effect goes, but one lesson is: you have fewer real opinions than you think. If you talk a lot, you probably end up expressing many opinions on many topics. But much, perhaps most, of that you just make up on the fly. You won’t give the same opinion later if the subject comes up again, and your opinion probably won’t affect your non-talk decisions.

So your decisions on charity donations, votes, and who or what to give verbal praise, may be a lot simpler than you think. Your decisions on where to live or work, and who to befriend or marry, may also be simpler. That is, you may consistently make similar decisions, but the reasons you give for them may matter less than you think.
