Modesty in a Disagreeable World

An interesting paradox arises if one tries to apply Aumann’s theorem in a world of widespread disagreement. The problem is especially apparent when considering the stronger principle which Eliezer calls the Modesty Argument, which claims not only that rational, mutually-respecting people should not agree to disagree, but that in general one should be agreeable and allow oneself to be easily persuaded in any argument.

I’ll introduce it with one of my favorite jokes:

This is a story about advice, and any story about advice becomes a story about a village rabbi. Two men came to see the rabbi of their village.

The first one said, "Rabbi, I have a pear tree in my yard. My father planted it, I keep it watered and don’t let the chickens peck on its roots. One of its branches hangs over my neighbor’s wall. So what do I see yesterday but my neighbor standing there eating one of my pears. This is theft, and I want him to pay me damages."

The rabbi nodded his head. "You’re right, you’re right."

The other neighbor said, "Rabbi, you know that I have seven children–without my garden to feed them, how would I manage? But that tree casts a shadow where nothing will grow. So yesterday, when I go out to dig some potatoes, a pear from his tree falls and hits me right on the head. How am I hurting my neighbor if I eat it? And doesn’t he owe me something for his tree’s blocking sunlight?"

The rabbi thought and then said, "You’re right, you’re right."

Meanwhile, the rabbi’s wife had been hearing all this. "How can you say, ‘You’re right’ to both of these men? Surely one of the men is right, and the other is wrong!"

The rabbi looked unhappy. "You’re right, you’re right."

Suppose that I am somewhat neutral on a given issue and I am going to argue with one of two partisans on the issue, who hold opposing views. Applying the Modesty Argument, it might seem that I should be prepared to be persuaded to the view of whichever one of them I interact with. It is true, in talking to one of them, that we both know that other people exist with opposing views, but since this person has not been persuaded by this knowledge, he must have good reason to hold to his particular side. There is no reason a priori to suppose that my beliefs on this are any better founded than his, especially since we stipulated that it was a position on which I was relatively ignorant. So it is plausible, given that we reach agreement, and given his long-standing commitment to his position, that our resulting agreement will generally be favorable to his side of the argument.

The problem is that I can equally well reason that if I were to interact instead with the person who takes the opposite view, I would be just as convinced to take that other side. This evokes a rather comical image of sequential interactions with the two partisans, in which I am first convinced of one side and then the other. It seems that someone following the Modesty Argument too literally will find himself in the position of the poor rabbi in the joke.

The problem, and the paradox, is that this behavior is irrational. You can’t be in a position where you expect that you can gain some information that will move your opinion in a predictable direction. Just knowing this fact about the situation should be enough to move your opinion already. Robin has what I think is a formal proof of this in his paper, Disagreement is Unpredictable (though I’m not sure I understood it correctly). For a rational person, every new piece of information is a surprise in the sense that he can’t predict which way it will change his opinion, on average. When I post these messages here I have no idea whether I will be more or less convinced of my opinion as a result of the feedback I receive. That is how it always must be.
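To see why, here is a minimal numeric sketch in Python (the probabilities are arbitrary, chosen only for illustration): whatever evidence a Bayesian expects to see, the probability-weighted average of the possible posteriors comes out equal to the prior, so no anticipated piece of information can be expected to push the opinion in a particular direction.

```python
# Conservation of expected evidence: the expected posterior equals the prior.
# All numbers below are arbitrary, for illustration only.

prior_h = 0.3              # P(H): prior belief in some hypothesis H
p_e_given_h = 0.8          # P(E | H): chance of seeing evidence E if H is true
p_e_given_not_h = 0.4      # P(E | not H)

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)     # P(E)

post_if_e = p_e_given_h * prior_h / p_e                           # P(H | E)
post_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)           # P(H | not E)

expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(post_if_e, post_if_not_e, expected_posterior)  # approx. 0.462, 0.125, 0.3

# Seeing E would raise the belief and seeing not-E would lower it, but the
# expectation over both outcomes is exactly the prior (0.3).
```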

Therefore I can’t be in a position where I know if I talk to person A I will be convinced that he is right, and if I talk to person B I will be convinced that A is wrong. A practitioner of the Modesty Argument who finds himself in this situation is doing something wrong.

So there are two questions: what is wrong with the argument above that puts us in the shoes of the rabbi; and what in fact should a rational and modest person do in a world full of disagreement?

First, I seem to have gone wrong when I concluded that if I argued with person A, I would come to agree with him. As I just pointed out, it is irrational to expect that gaining information will predictably change one’s opinion. Instead, it must be the case that I am completely uncertain about how my opinion will change if I talk with A. Given that we will agree, it must be my expectation that A is as likely to change his mind as to convince me. Now, I argued above that this is unlikely given A’s long-standing position as a partisan in the dispute. Therefore I have to conclude that A does not meet the conditions for Aumann’s result; he is not rational, or not honest, or at least he assumes that those he disagrees with lack these traits. In that case I cannot apply Aumann’s theorem and cannot act in accordance with the Modesty Argument in my discussions with A or B.

It’s interesting to consider what would happen if A and B are in fact both rational and honest, but each distrusts whether his counterpart shares these properties. This is a stable configuration, and it’s rational for A and B to disagree in this situation; but of course they lack the mutual respect which is generally associated with saying that they agree to disagree. Now suppose A and B both trust me as an "honest broker", and I trust them. Then I can’t disagree with A, and I can’t disagree with B, once I interact with them. What will happen is that when I interact with A, we two will agree; and when I interact with B, we two will agree. In this way, A and B will come to agreement by both agreeing with me (I might have to repeat the interaction a few times). This again illustrates how odd it is that disagreement persists in the world among people who claim to be rational and honest.
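The iterated broker scenario can be pictured with a toy simulation. Treating each pairwise agreement as a simple averaging of opinions is only a crude stand-in for genuine Aumann-style updating, and the starting numbers are invented, but it shows the qualitative convergence: after a few rounds of my agreeing alternately with A and with B, all three opinions end up close together.

```python
# Toy simulation of the "honest broker" scenario. Simple averaging is a crude
# stand-in for real Aumann agreement; the initial opinions are invented.
a, b, broker = 0.9, 0.1, 0.5   # probabilities each of us assigns to some claim

for step in range(8):
    broker = a = (a + broker) / 2   # I talk with A and we end up agreeing
    broker = b = (b + broker) / 2   # then I talk with B and we end up agreeing
    print(step, round(a, 3), round(b, 3))

# After a handful of rounds, a and b have converged to nearly the same value,
# even though A and B never talked to each other directly.
```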

But realistically, disagreement does persist, which leaves the question of how a rational person should behave. Unfortunately, I don’t think that Aumann’s theorem or the Modesty Argument offers much guidance. It’s not rational to adopt a stance where you are easily persuaded by everyone you talk to. There’s no particular reason, just because you are talking to a proponent of one side, to believe him, given that you know there are plenty of people around who believe the opposite. It’s not that you think you are better (smarter, more knowledgeable, more rational) than he is; it’s just that you assume there are many equally smart, knowledgeable and rational people on the other side. In the end, you have to make your decision on controversial issues using other grounds than the irrationality of disagreement.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    When you see a distribution of opinion among people, of which they are fully aware, you can conclude they are far from meta-rational. But this does not at all mean you should ignore their opinions and just think for yourself. You instead should estimate an error model, saying what sorts of people are likely to have how much error, and then use that model to estimate a best “middle” summary estimate. You should be very reluctant to disagree with this estimate.

  • http://profile.typekey.com/halfinney/ Hal Finney

    You’re right, you’re right.
    :)

    Your proposal about estimating errors makes sense, but do you mean this as a practical or a theoretical suggestion? It sounds very complicated and difficult to do in detail, but will a drastically simplified version work? Can you give any examples (perhaps in future blog posts) where you demonstrate the technique on an actual controversy?

    I certainly agree that once you’ve come up with your best guess at the truth, based on this technique or whatever other hopefully-unbiased method you can find, you should agree with it.

  • http://pdf23ds.net pdf23ds

    I don’t know that it’s necessarily unreasonable for someone to *expect* that a conversation with A will leave them closer to A’s position, and that a conversation with B would leave them closer to B’s position. There are two main factors here. First, A and B often base their arguments on different sets of intuitions (e.g. moral intuitions). (The intuitions are the same between them, but they disagree as to which intuitions apply or overrule.) So without knowing beforehand which intuitions can be applied to the situation, one can expect that both sets of intuitions will be persuasive, but not compelling.

    The second factor is that one can expect that A’s arguments will be flawed, but that one won’t be able to immediately see all of the flaws. Talking to B would raise some of those flaws to your awareness (but not all, and also some irrelevant or incorrect objections to A’s position). Talking to A again could raise some flaws in B’s objections, and so on. Are ideal Bayesian reasoners supposed to be perfectly able to see all logical inconsistencies?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Hal, a first cut would be to assume people have the same error rates, and look for a simple average. A second cut would be to weigh people according to their apparent expertise in the topic. It would not make sense to use people’s opinions on this topic to estimate their expertise. Once you have such an average of other people’s opinions, you would need a very good reason to move your opinion much away from it.
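    Concretely, the two cuts might look something like this (the opinions and expertise weights below are invented purely for illustration):

    ```python
    # Two rough cuts at a "middle" summary estimate of other people's opinions.
    # All numbers are invented for illustration.
    opinions = [0.9, 0.2, 0.6, 0.7]      # each person's probability estimate

    # First cut: assume equal error rates and take a simple average.
    first_cut = sum(opinions) / len(opinions)

    # Second cut: weigh by apparent expertise, judged on grounds other than
    # the opinions themselves (e.g. track record, credentials).
    expertise = [3.0, 1.0, 2.0, 2.0]
    second_cut = sum(w * p for w, p in zip(expertise, opinions)) / sum(expertise)

    print(first_cut, second_cut)         # approximately 0.6 and 0.69
    ```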

  • Daniel Greco

    Robin,

    I find your averaging suggestion very intuitively attractive, but I worry that it might be hard to cash it out formally. My worry is easy to explain using a simple, binary belief model, but from what I understand it carries over to more realistic models that use degrees of belief.

    Suppose A, B, and C are all equally expert in some domain, and they are considering three propositions within this domain: p, p->q, and ~q. A believes p and ~q, B believes p->q and ~q, and C believes p and p->q. If I, as a bystander with no expertise in the domain, decide to go with the majority of experts on these three questions, I end up believing p, p->q, and ~q, which is of course inconsistent. So, if I try to form binary beliefs based on going with the majority of experts in some domain, I can end up with inconsistent beliefs. Like I said before, my understanding is that one can also end up with probabilistically incoherent degrees of belief if one tries to do a weighted average of the degrees of belief of experts in some domain.
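    To spell out the proposition-by-proposition majority vote (the expert labels and truth values simply restate the example above):

    ```python
    # Discursive-dilemma example: each expert's view is internally consistent,
    # but the proposition-wise majority view is not.
    experts = {
        "A": {"p": True,  "p->q": False, "~q": True},   # A: p and ~q, so rejects p->q
        "B": {"p": False, "p->q": True,  "~q": True},   # B: p->q and ~q, so rejects p
        "C": {"p": True,  "p->q": True,  "~q": False},  # C: p and p->q, so accepts q
    }

    majority = {
        prop: sum(view[prop] for view in experts.values()) >= 2
        for prop in ["p", "p->q", "~q"]
    }
    print(majority)   # {'p': True, 'p->q': True, '~q': True}

    # The majority accepts p and p->q, which entail q, yet it also accepts ~q:
    # an inconsistent belief set built from three consistent ones.
    ```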

    This isn’t an original observation. I saw it in a presentation by Christian List, and lots of people who work on “judgment aggregation” are interested in issues like these. I’m not sure what the upshot should be. Clearly the weight of expert opinion about issues should affect our beliefs: if other people have thought about some question and weighed the various arguments and evidence, we’d be fools not to take their beliefs into account in forming our own beliefs about the question. But it’s far from clear that the way we should take others’ beliefs into account is by taking a weighted average of their degrees of belief.

  • http://yorkshire-ranter.blogspot.com Alex

    On the other hand, I think there’s fairly good evidence for the proposition that people actually do apply something like the agreement theorem, that is to say, they move towards what they take to be social consensus, going so far as to repress discrepant information even if it has an empirical basis – and we consider that a cognitive bias!

    The problem is that agreement would hold in a rational-expectations world, where errors are subject to random distribution – but they are not. Come to think of it, the answer to the paradox that you can’t know that you are the rational one, and therefore you should tack towards consensus, is that you *know* you aren’t perfectly rational because you know you are human and subject to cognitive bias – and so is your interlocutor, unless they are HAL9000.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Daniel, yes, it is well known that one can’t just do the same syntactic average of beliefs independent of the subject. See: Genest & Zidek, “Combining Probability Distributions: A Critique and an Annotated Bibliography”, Statistical Science 1986. So your model of error needs to guess enough about the process that produces beliefs to estimate which beliefs the errors show up at; the errors at other beliefs would be derivative.

    I should have mentioned that Carl Shulman is also right, that one needs to think about the correlations between different error sources: http://www.overcomingbias.com/2006/12/meme_lineages_a.html

  • Tom Crispin

    How does one determine who is expert in any particular field? Seems to me that itself is based on some weighted average, and we’re going to have turtles all the way down.

  • http://profile.typekey.com/halfinney/ Hal Finney

    pdf23ds – It may seem reasonable to think that way, but really it’s not. Consider a simpler case where there is someone so smart that whenever you interact with him, he can convince you of anything. In fact he can convince you that A is true and then turn around and convince you that A is false. Then it might seem that you can reasonably have an expectation that after you interact with him, your opinion on a subject will change in favor of whatever position you know in advance that he will advocate.

    But eventually you should realize that since he has this ability, even though he provides a convincing argument in favor of his position, you know that he could provide an equally convincing case for the other side. This knowledge should discredit his argumentation and cause you to reject his position, no matter how persuasive it seems to be on the surface. It’s not reasonable to be convinced of something when you know that there is just as compelling an argument against it, even though you don’t know what that argument is.

    To use the possible-world formulation, if you know that there is information that will cause you to have belief X on a matter, then you know that you exist in a set of possible worlds where belief X is reasonable, and therefore you should have belief X now.