In Bias, Meta is Max

A recent Science review notes our worst bias is meta – being more aware of biases makes us more willing to assume that others’ biases, and not ours, are responsible for our disagreement:

Because people often do not recognize when personal biases and idiosyncratic interpretations have shaped their judgments and preferences, they often take for granted that others will share those judgments and preferences. When others do not, people’s faith in their own objectivity often prompts them to view those others as biased. Indeed, people show a broad and pervasive tendency to see (and even exaggerate) the impact of bias on others’ judgments while denying its influence on their own.

For example, people think that others’ policy opinions are biased by self-interest, that others’ social judgments are biased by an inclination to rely on dispositional (rather than situational) explanations for behavior, and that others’ perceptions of interpersonal conflicts are biased by their personal allegiances. At the same time, people are blind to each of these biases in their own judgments.


Such divergent perceptions of bias are bolstered by the fact that people evaluate their own bias by introspecting about thoughts and motives but evaluate others’ bias by considering external behavior (e.g., "My motive was to be fair; his actions only helped himself."). People place less emphasis on others’ introspections even when those others proffer them – a finding that is perhaps unsurprising in light of people’s skepticism about the accuracy of others’ perceptions.

In the face of disagreement, beliefs in one’s own objectivity and the other side’s bias can produce and exacerbate conflict. For example, American students favor bombing terrorists after being led to view them as biased and irrational, whereas they favor negotiating with terrorists after being led to view them as objective and rational.

People also behave more conflictually toward those whom they suspect will be biased by self-interest.  Participants in one study were instructed to consider the perspective of their adversaries in a conflict over limited resources. That instruction had the ironic effect of leading them to expect that their adversaries would be biased by self-interest, which, in turn, led the participants themselves to act more competitively and selfishly. Acts of competitiveness and aggression are likely to engender a vicious cycle, as the recipients of those acts are likely to view them as unwarranted by the objective situation and, therefore, as signaling their perpetrators’ bias.

Without a way to overcome this bias, our other efforts are largely wasted. So everyone, please repeat after me: The fact that I can identify a particular bias in those I disagree with is only very weak evidence that I am more right than they. For example, if on average about twenty biases afflict each opinion, then identifying a bias in someone that afflicts half of opinions might move their expected bias count to 20.5 instead of 19.5, relative to my expected count of 20. (Seeing they have at least one of three half-common independently-distributed biases puts their expected bias count at about 20.2.)
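To make the arithmetic concrete, here is a small exact computation (a sketch, assuming the half-common biases are independent and counted among the twenty):

```python
from itertools import product

# Expected total biases per opinion (the post's assumption).
TOTAL = 20.0

# Case 1: one bias, present in half of opinions, is observed.
# Expected count of the *other* biases is 20 - 0.5 = 19.5,
# so confirming the bias gives 19.5 + 1 = 20.5; ruling it out gives 19.5.
other = TOTAL - 0.5
present = other + 1          # 20.5
absent = other               # 19.5

# Case 2: we only learn "at least one of three independent half-common
# biases is present". Enumerate the 8 equally likely presence patterns
# and condition on at least one bias being present.
rest = TOTAL - 3 * 0.5       # 18.5 expected from the remaining biases
patterns = [p for p in product([0, 1], repeat=3) if sum(p) >= 1]
conditional = rest + sum(sum(p) for p in patterns) / len(patterns)

print(present, absent, round(conditional, 2))  # 20.5 19.5 20.21
```

Either way, the gap between their expected bias count and mine stays well under one bias out of twenty.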

  • Unknown

    One of the main things preventing me from seeing my own biases is concern for my self-image: I want to be able to think that I am rational and objective, and therefore I am unwilling to admit that I am biased. My suspicion is that it is very similar for others. If it weren’t for this, it probably would not be a great deal harder to see our own biases than the biases of others. This is why I suggested (on Hal’s post on overcoming disagreement) that it could be profitable to make deals, where I accept something of your position and demand that you accept something of mine in return. To people like Eliezer this doesn’t seem to make sense, since it seems to be “dishonest” unless I already believed that the other person was right in some way. But in fact, at the start the main reason I thought I was right and he was wrong is that if I thought anything else, it would hurt my self-image. By making the deal, I make both myself and the other appear to be reasonable people, and thus take away much of the damage to our self-images. This allows me to see something biased in my own position, and therefore to accept whatever was right in the other’s position, while still seeing something biased in the other’s position, namely on the point where he accepts my position.

    These judgements of bias are likely enough to be accurate, and so both of us can end up improving our position, thus avoiding the danger Eliezer sees in this process, namely that the two move towards a middle position without any movement toward truth.

  • http://hanson.gmu.edu Robin Hanson

    Unknown, regarding trade barriers between nations many say we shouldn’t lower our barriers outside an agreement for others to lower their barriers as well. Economists argue that we are better off lowering our barriers even if others do not lower theirs. But something, perhaps the pride you mention, keeps nations from listening to economists. Similarly, you will be more accurate if you listen more to others even if not part of a deal where they will listen more to you.

  • Unknown

    Robin, that’s certainly true. I guess my idea about deals is that it is a way of becoming more accurate which is more practical, in the sense that it is more likely to actually happen, given human nature. I know I personally sometimes have disagreements where both of us are extremely stubborn and refuse to budge an inch, and others that come much closer to the ideal Bayesian conversation. There have actually been times (in the second case) when I think for a minute or so about someone’s point, originally opposed to my own, and then start arguing a position more extreme than his. In other words, in one case I am unwilling to listen to the other because he is unwilling to listen to me, and in the other I am willing to listen because he is willing to listen as well.

    So I guess my main point is that even though you can become more accurate by listening more to others even without a deal, it is very difficult to make ourselves do this, so if we can find conversational partners who are willing to make such deals, it will be easier in practice to overcome some of our errors and biases.

  • http://profile.typekey.com/halfinney/ Hal Finney

    This error reminds me of the “horizon effect” in software for games like chess. The program can look ahead only so many moves, and so anything beyond a certain number of moves is largely invisible to it. Some of the early chess programs would do really stupid things just to postpone an inevitable loss, by keeping the loss beyond the horizon. For example, a rook is inevitably lost, but the program would sacrifice a knight unnecessarily just to postpone the loss of the rook and keep it beyond the horizon and hence invisible.
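    The horizon effect is easy to reproduce in miniature. Below is a toy sketch (made-up evaluations, not a real chess engine): two forced lines of play, where a pointless knight sacrifice pushes an inevitable rook loss past the search horizon, so a shallow search prefers it:

```python
# Two forced lines, as static evaluations (in pawns, from the program's
# side) of the position reached after each ply.
# Line A: let the rook fall immediately (down 5 from ply 1 on).
# Line B: sacrifice a knight (down 3) purely to delay the rook loss,
#         which still happens at ply 4 (down 8 total).
LINE_A = [-5, -5, -5, -5, -5]
LINE_B = [-3, -3, -3, -8, -8]

def lookahead(line, depth):
    """Evaluate a forced line by looking at most `depth` plies ahead."""
    return line[min(depth, len(line)) - 1]

def best_move(depth):
    lines = {"A": LINE_A, "B": LINE_B}
    return max(lines, key=lambda m: lookahead(lines[m], depth))

print(best_move(3))  # B: the rook loss at ply 4 is past the horizon
print(best_move(5))  # A: deeper search sees the sacrifice only loses more
```

At depth 3 the sacrifice line looks two pawns better; only at depth 5 does the search see that it loses more material overall.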

    In the same way, full understanding of mutual rationality in a disagreement is limited by how deeply we can go in the recursive “I know that he knows that I know that he knows” pattern. People put themselves in the other person’s shoes, but it’s hard to then imagine the other person putting himself into our shoes. Going further and imagining what he pictures us to be thinking about his beliefs, and so on, becomes very confusing and ultimately degenerates into mental noise for most of us. So we truncate the recursion at a pretty shallow depth, in computer science terms. This then leads to a rather superficial evaluation of the other person’s likely motivations and accuracy that fails to take into account the whole picture.

    The classic manifestation of this error is the failure to appreciate that the other person’s rejection of your views implies that he has considered what your rejection of his views tells him. Just that simple level of recursion is missed by most people, I think. All they see is that the other person is being stubborn and not listening to their own well reasoned positions.

  • Grant

    Unknown,

    I think you are referring to a sort of disagreement where each party seems mostly interested in their respective self-images. In those cases, it seems possible for each compromising party to move towards a compromise because they do not want to be seen as radical or uncompromising. So I am not sure the existence of a compromise reveals anything about the biases (or lack thereof) of the parties involved in a disagreement.

    I have been involved in disagreements (usually in the context of engineering) where involved parties seemed to have very weak incentives to preserve their social images and much stronger incentives to find the truth of the matter. I can see why these sorts of disagreements could be rare in academia.

    My own pet method for maintaining objectivity is to pretend I am betting on the positions of a disagreement in a prediction market. It seems to shift my brain out of the “preserve social image” mode and into the “find the truth” mode, but then I suppose I could just have an irrational bias towards it (sigh).

  • Unknown

    Grant, when you say, “it seems possible for each compromising party to move towards a compromise because they do not want to be seen as radical or uncompromising”, if I understand you correctly, you seem to be saying that since the persons are concerned about their self-image, the compromise may be no better than their original positions, which they also held because of their concern for their self-image. This is much like Eliezer’s reasoning, arguing against this sort of deal.

    My response to this was that it may be true that the persons have an irrational motivation to move toward compromise. But when they address the question in practice, “What can I do in order to make a compromise,” they can distinguish between what they are really certain of, and what they are less certain of, and can give up the less certain part, maintaining the more certain part. So the resulting middle position in fact does end up being more probable than the two original positions.

  • Roland

    Person A believes in astrology.

    Person B believes that astrology is bullshit based on his understanding of science and human biases.

    After reading your article should B reconsider his point of view and assume that he is probably as wrong as A?

  • http://metaandmeta.typepad.com Q the Enchanter

    “[B]eing more aware of biases makes us more willing to assume that others’ biases, and not ours, are responsible for our disagreement…”

    Okay, sure, maybe other people are biased in this way. But not me.

  • John Maxwell

    “Without a way to overcome this bias, our other efforts are largely wasted.”

    This understates the problem. Without a way to overcome this bias, overcomingbias.com is actually making people more biased than they were before.

  • http://profile.typekey.com/halfinney/ Hal Finney

    Roland, I can give you an even simpler dilemma:

    Person A believes in astrology.

    Should people generally trust their own judgement on this matter? If so, doesn’t that mean that person A is right?

  • Joseph Buck

    Roland, Robin wrote, “The fact that I can identify a particular bias in those I disagree with is only very weak evidence that I am more right than they.” While the presence of bias in an astrologist might be weak evidence of their wrongness, the majority opinion against them, and the overwhelming majority against them from people who have expertise in such matters, is strong evidence that they are wrong. If your only evidence against their belief was that they are biased, that might not be such a strong case. That is not the only evidence, however.

    Hal, I don’t understand your dilemma. From where do you get the idea that people should “generally trust their own judgment on this matter”?

  • http://hanson.gmu.edu Robin Hanson

    What Joseph said. If you could identify some sloppiness and wishful thinking in astrologers, that would be only weak evidence against astrology. The fact that few experts have much good to say about astrology is much stronger evidence.

  • http://profile.typekey.com/halfinney/ Hal Finney

    Sorry, I was pretty unclear. I took Roland to be suggesting that the resolution to his dilemma is that person B should generally trust his own judgement on the matter and disregard A’s opinion. I was trying to suggest that that principle would leave A with an incorrect belief, so it does not really help. We have a symmetrical situation in Roland’s dilemma, so we have to break the symmetry somehow, such as by reference to other people’s opinions.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Traditional Rationality teaches us to focus on the object-level issues, not the people; it theoretically denigrates authority while sneaking it in around the sides.

    I seriously wonder if this whole notion of “Who do you trust?” and “How biased are they?” ends up being weaker than Traditional Rationality’s injunction to focus on the object-level arguments. Maybe people just end up constructing elaborate arguments for who they are trusting, based on their object-level impressions of the issues – in the best-case scenario.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, yes given this serious meta-bias one is tempted to set aside all meta-arguments, but focusing on object-level issues has similarly serious problems with asymmetries in attention to object-level issues. For example, when we know our own personal arguments and evidence much better than others’ we naturally find them more persuasive. And when one view is dominant its arguments and evidence are much better known than those for other views. (Which is a way of, as you say, sneaking authority in around the sides.)

  • http://profile.typekey.com/halfinney/ Hal Finney

    My problem with Eliezer’s notion of Traditional Rationality is whether most people are able to practice it. (This argument deserves a blog post but I will just summarize it here.) Which is going to be more practical and more successful for the average person, someone who is not a super-genius: to apply Bayesian style reasoning consistently, or to overcome their biases enough to accept the appropriate consensus? I would argue that we already have an instinct to accept consensus, and that this will improve most people’s accuracy over trying to think for themselves. The main problem IMO on issues where it matters (i.e. issues that would actually affect the typical person’s quality of life and which he has control over) is that people are sometimes not getting an accurate view of the informed consensus. Compared to this minor tweak, I see attempting to master Rationality as being far more difficult. Even Eliezer has had trouble with it.
