My problem with Eliezer's notion of Traditional Rationality is whether most people are able to practice it. (This argument deserves a blog post, but I will just summarize it here.) Which is going to be more practical and more successful for the average person, someone who is not a super-genius: to apply Bayesian-style reasoning consistently, or to overcome their biases enough to accept the appropriate consensus? I would argue that we already have an instinct to accept consensus, and that this will improve most people's accuracy over trying to think for themselves. The main problem, IMO, on the issues where it matters (i.e., issues that would actually affect the typical person's quality of life and over which he has some control) is that people sometimes do not get an accurate view of the informed consensus. Compared to this minor tweak, I see attempting to master Rationality as far more difficult. Even Eliezer has had trouble with it.


Eliezer, yes, given this serious meta-bias one is tempted to set aside all meta-arguments, but focusing on object-level issues has similarly serious problems, because of asymmetries in our attention to them. For example, since we know our own personal arguments and evidence much better than others', we naturally find them more persuasive. And when one view is dominant, its arguments and evidence are much better known than those for other views. (Which is a way of, as you say, sneaking authority in around the sides.)


Traditional Rationality teaches us to focus on the object-level issues, not the people; it theoretically denigrates authority while sneaking it in around the sides.

I seriously wonder if this whole notion of "Who do you trust?" and "How biased are they?" ends up being weaker than Traditional Rationality's injunction to focus on the object-level arguments. Maybe, in the best-case scenario, people just end up constructing elaborate arguments for whom they trust, based on their object-level impressions of the issues.


Sorry, I was pretty unclear. I took Roland to be suggesting that the resolution to his dilemma is that person B should generally trust his own judgement on the matter and disregard A's opinion. I was trying to suggest that that principle would leave A with an incorrect belief, so it does not really help. We have a symmetrical situation in Roland's dilemma, so we have to break the symmetry somehow, such as by reference to other people's opinions.


What Joseph said. If you could identify some sloppiness and wishful thinking in astrologers, that would be only weak evidence against astrology. The fact that few experts have much good to say about astrology is much stronger evidence.
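
To put that weak/strong distinction in rough Bayesian terms, here is a minimal sketch of my own; the likelihood ratios are invented for illustration, not measured quantities. Identified bias barely moves the odds, while near-unanimous expert opinion moves them a lot.

```python
# Toy odds-form Bayesian update: multiply prior odds by each piece of
# evidence's likelihood ratio. All numbers below are illustrative guesses.

def posterior_odds(prior_odds, likelihood_ratios):
    """Update prior odds by multiplying in each evidence likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1.0            # indifferent prior: 1:1 odds that astrology is false
bias_found = 1.5       # sloppiness spotted in astrologers: weak evidence
expert_consensus = 50  # few experts endorse astrology: strong evidence

print(posterior_odds(prior, [bias_found]))                   # 1.5
print(posterior_odds(prior, [bias_found, expert_consensus])) # 75.0
```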


Roland, Robin wrote, "The fact that I can identify a particular bias in those I disagree with is only very weak evidence that I am more right than they." While the presence of bias in an astrologer might be weak evidence of their wrongness, the majority opinion against them, and especially the overwhelming majority against them among people with expertise in such matters, is strong evidence that they are wrong. If your only evidence against their belief were that they are biased, that might not be such a strong case. That is not the only evidence, however.

Hal, I don't understand your dilemma. From where do you get the idea that people should "generally trust their own judgment on this matter"?


Roland, I can give you an even simpler dilemma:

Person A believes in astrology.

Should people generally trust their own judgement on this matter? If so, doesn't that mean that person A is right?


"Without a way to overcome this bias, our other efforts are largely wasted."

This understates the problem. Without a way to overcome this bias, overcomingbias.com is actually making people more biased than they were before.


"[B]eing more aware of biases makes us more willing to assume that others' biases, and not ours, are responsible for our disagreement..."

Okay, sure, maybe other people are biased in this way. But not me.


Person A believes in astrology.

Person B believes that astrology is bullshit based on his understanding of science and human biases.

After reading your article, should B reconsider his point of view and assume that he is probably as wrong as A?


Grant, when you say, "it seems possible for each compromising party to move towards a compromise because they do not want to be seen as radical or uncompromising," if I understand you correctly, you seem to be saying that since the parties are concerned about their self-images, the compromise may be no better than their original positions, which they also held out of concern for their self-image. This is much like Eliezer's reasoning against this sort of deal.

My response to this was that it may be true that the parties have an irrational motivation to move toward compromise. But when they address the practical question, "What can I do in order to make a compromise?", they can distinguish between what they are really certain of and what they are less certain of, and can give up the less certain part while maintaining the more certain part. So the resulting middle position does in fact end up being more probable than the two original positions.
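
As a toy check of that last claim, here is a sketch of my own, under one simplifying assumption I am adding: each side's position is the truth plus independent, unbiased noise. Even blind averaging helps under that assumption.

```python
# Compare the average error of each party's position with the error of
# their midpoint, when both positions are truth plus independent noise.
import random

random.seed(0)
truth, trials = 10.0, 100_000
err_a = err_b = err_mid = 0.0
for _ in range(trials):
    a = truth + random.gauss(0, 2)     # person A's stated position
    b = truth + random.gauss(0, 2)     # person B's stated position
    err_a += abs(a - truth)
    err_b += abs(b - truth)
    err_mid += abs((a + b) / 2 - truth)

print(err_a / trials, err_b / trials, err_mid / trials)
# The midpoint's average error is smaller by roughly a factor of sqrt(2).
# The comment's stronger mechanism, keeping only the parts each side is
# most certain of, should do even better than this blind averaging.
```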


Unknown,

I think you are referring to a sort of disagreement where each party seems mostly interested in their respective self-images. In those cases, it seems possible for each compromising party to move towards a compromise because they do not want to be seen as radical or uncompromising. So I am not sure the existence of a compromise reveals anything about the biases (or lack thereof) of the parties involved in a disagreement.

I have been involved in disagreements (usually in the context of engineering) where the parties involved seemed to have very weak incentives to preserve their social images and much stronger incentives to find the truth of the matter. I can see why these sorts of disagreements could be rare in academia.

My own pet method for maintaining objectivity is to pretend I am betting on the positions of a disagreement in a prediction market. It seems to shift my brain out of the "preserve social image" mode and into the "find the truth" mode, but then I suppose I could just have an irrational bias towards it (sigh).


This error reminds me of the "horizon effect" in software for games like chess. The program can look ahead only so many moves, so anything beyond a certain depth is largely invisible to it. Some of the early chess programs would do really stupid things just to postpone an inevitable loss, keeping the loss beyond the horizon. For example, if a rook was inevitably going to be lost, the program would sacrifice a knight unnecessarily just to postpone the rook's loss and push it beyond the horizon, where it became invisible.
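
To make the analogy concrete, here is a minimal sketch of my own, not code from any actual chess engine; the game tree, node names, and material values are all invented. A depth-limited search prefers the pointless knight sacrifice precisely because it pushes the rook's loss past the horizon.

```python
# Toy depth-limited game-tree search illustrating the horizon effect.
# Scores are from the program's point of view; deeper nodes are later moves.
TREE = {
    # Two options: do nothing and lose the rook soon, or sacrifice a
    # knight first, which merely postpones the rook's loss.
    "root": [("do_nothing", "A"), ("sac_knight", "B")],
    "A": [("opp_takes_rook", "A1")],
    "A1": -5,                    # rook lost at depth 2: inside the horizon
    "B": [("opp_takes_knight", "B1")],
    "B1": [("shuffle", "B2")],
    "B2": [("opp_takes_rook", "B3")],
    "B3": -8,                    # knight AND rook lost, but at depth 4
}

# Static material balance the engine falls back on at the horizon.
MATERIAL = {"A": 0, "B": 0, "B1": -3, "B2": -3}

def search(node, depth):
    """Depth-limited lookahead. Every reply in this toy tree is forced
    (one child per node), so a plain max stands in for full minimax."""
    children = TREE[node]
    if isinstance(children, int):        # terminal position
        return children
    if depth == 0:                       # the horizon: static guess only
        return MATERIAL[node]
    return max(search(child, depth - 1) for _, child in children)

for depth in (3, 5):
    scores = {move: search(child, depth - 1)
              for move, child in TREE["root"]}
    print(depth, scores)
# depth 3: {'do_nothing': -5, 'sac_knight': -3} -> sacrifices the knight,
#          because the rook's loss sits just beyond the horizon.
# depth 5: {'do_nothing': -5, 'sac_knight': -8} -> sees the whole line
#          and correctly declines the pointless sacrifice.
```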

In the same way, full understanding of mutual rationality in a disagreement is limited by how deeply we can go in the recursive "I know that he knows that I know that he knows" pattern. People put themselves in the other person's shoes, but it is hard to then imagine the other person putting himself into our shoes. Going further, and imagining what he pictures us believing about what he believes, and so on, becomes very confusing and ultimately degenerates into mental noise for most of us. So we truncate the recursion at a pretty shallow depth, in computer science terms. This leads to a rather superficial evaluation of the other person's likely motivations and accuracy, one that fails to take the whole picture into account.
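
A small sketch of that truncation, with an invented cutoff depth: each recursive call is one more level of putting yourself in the other person's shoes, until the chain collapses into noise.

```python
# Build the "I know that he knows that..." chain to a given depth,
# truncating where, for most of us, it stops being usable.

def mutual_model(depth, my_turn=True):
    """One level of perspective-taking per call, alternating sides."""
    if depth == 0:
        return "[...mental noise...]"    # the truncation point
    speaker = "I know that" if my_turn else "he knows that"
    return speaker + " " + mutual_model(depth - 1, not my_turn)

print(mutual_model(2))  # I know that he knows that [...mental noise...]
print(mutual_model(4))  # two levels deeper before the truncation
```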

The classic manifestation of this error is the failure to appreciate that the other person's rejection of your views implies that he has considered what your rejection of his views tells him. Just that simple level of recursion is missed by most people, I think. All they see is that the other person is being stubborn and not listening to their own well-reasoned positions.


Robin, that's certainly true. I guess my idea about deals is that they are a more practical way of becoming more accurate, in the sense that they are more likely to actually happen, given human nature. I know I personally sometimes have disagreements in which both of us are extremely stubborn and refuse to budge an inch, and others which come much closer to the ideal Bayesian conversation. There have actually been times (in the second case) when I think for a minute or so about someone's point, originally opposed to my own, and then start arguing a position more extreme than his. In other words, in one case I am unwilling to listen to the other because he is unwilling to listen to me, and in the other I am willing to listen because he is willing to listen as well.

So I guess my main point is that even though you can become more accurate by listening more to others even without a deal, it is very difficult to make ourselves do this, so if we can find conversational partners who are willing to make such deals, it will be easier in practice to overcome some of our errors and biases.


Unknown, regarding trade barriers between nations, many say we shouldn't lower our barriers outside an agreement in which others lower their barriers as well. Economists argue that we are better off lowering our barriers even if others do not lower theirs. But something, perhaps the pride you mention, keeps nations from listening to economists. Similarly, you will be more accurate if you listen more to others, even if it is not part of a deal where they listen more to you.


One of the main things preventing me from seeing my own biases is concern for my self-image: I want to be able to think that I am rational and objective, and therefore I am unwilling to admit that I am biased. My suspicion is that it is very similar for others. If it weren't for this, it probably would not be a great deal harder to see our own biases than the biases of others. This is why I suggested (on Hal's post on overcoming disagreement) that it could be profitable to make deals, where I accept something of your position and demand that you accept something of mine in return. To people like Eliezer this doesn't seem to make sense, since it seems "dishonest" unless I already believed that the other person was right in some way. But in fact, at the start, the main reason I thought I was right and he was wrong is that if I thought anything else, it would hurt my self-image. By making the deal, I make both myself and the other appear to be reasonable people, and thus remove much of the damage to our self-images. This allows me to see something biased in my own position, and therefore to accept whatever was right in the other's position, while still seeing something biased in the other's position, namely on the point where he accepts mine.

These judgements of bias are likely enough to be accurate, so both of us can end up improving our positions, thus avoiding the danger Eliezer sees in this process, namely that the two parties move towards a middle position without any movement toward truth.
