Agreeing to Agree

It’s been mentioned a few times already, but I want to draw attention to what is IMO probably the most interesting, surprising and challenging result in the field of human bias: that mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s opinions. They cannot "agree to disagree", they can only agree to agree.

This result goes back to Nobel Prize winner Robert Aumann’s 1976 paper, Agreeing to Disagree. Unfortunately Aumann’s proof is quite static and formal, building on a possible-world semantics formalism so powerful that Aumann apologizes: "We publish this note with some diffidence, since once one has the appropriate framework, it is mathematically trivial." It’s ironic that a result so counter-intuitive and controversial can be described in such terms. This combination of elegance and parsimony in the proof with the totally unexpected nature of the result is part of what makes this area so fascinating to me.
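
Stated compactly (a sketch in modern notation, not Aumann’s own formalism; his paper gives the precise partition-theoretic conditions):

```latex
% Aumann (1976), informally: a common prior plus common knowledge of the
% posteriors of an event E forces those posteriors to be equal.
% Here I_i denotes agent i's private information at the state w.
\text{If } q_1 = P(E \mid \mathcal{I}_1(\omega)) \text{ and }
q_2 = P(E \mid \mathcal{I}_2(\omega)) \text{ are common knowledge at } \omega,
\text{ then } q_1 = q_2.
```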

Aumann’s proof, although elegant, is opaque unless you are familiar with the formalism. Tyler Cowen and Robin Hanson translate Aumann’s proof into English on pages 7-9 of their paper, Are Disagreements Honest? Some other papers that touch on the same result include Geanakoplos & Polemarchakis’ We Can’t Disagree Forever, which traces the sequence of exchanges by which two rational debaters come to agreement; and various no-trade theorems such as the classic by Milgrom & Stokey, showing that rational people will not participate in betting markets, since the mere fact that someone is willing to take your bet is evidence that you are wrong. Robin has several other papers in this area available from his web site.

There is much that can be said on this topic but I’ll focus on two aspects here. The result can be seen in either normative terms, telling us what we should do as rational thinkers, or positive terms, describing how people actually behave. In the positive sense, it is obvious that the theorem is not a good description of human behavior. People do disagree persistently, and when they "agree to disagree" it is taken as a sign of respect rather than mutual contempt. It’s possible that this is mere politeness, though, and that we recognize at some level that such failures to reach agreement indicate a certain lack of good faith among the participants. I’d be curious to hear how others perceive such situations.

Normatively what I find most striking is the variation in how people respond upon learning of this result. Many people have a strong intuitive opposition to it, and seek out loopholes and exceptions which will allow them to justify their persistent disagreements. Indeed, such loopholes do exist, the most notable being the assumption that the debaters are acting as Bayesian reasoners with common priors. However as Tyler and Robin note in their paper, a number of extensions and relaxations of Aumann’s original result over the years have increased its scope and made it harder to appeal to these exceptions as a justification for ignoring the results.

It’s odd, because many other kinds of bias in the literature seem to provoke less opposition. For example, overconfidence bias is often freely admitted, with a rueful acknowledgement that it is a human failing to rank oneself too highly. Overconfidence is probably a large part of the reason for persistent disagreement, each party ranking his own knowledge above that of the other. Only a rather complex chain of reasoning exposes the logical contradiction in this conclusion. But even once that flaw is exposed, people seem much more reluctant to admit that their conclusions are likely to be no better than average than to admit that their abilities are generally about average.

This bias is one I’ve found to be prevalent and influential in day-to-day life, more so than many others. Small disagreements are extremely common. For me, understanding the nature of Aumann’s result has been generally helpful in allowing me to be less committed to my positions and more willing to seriously consider that the other person may have good reasons for his beliefs. There are still times when I am unpersuaded, but I recognize now that I have to see the other person as irrational and biased in order to hold my position in the face of his disbelief. As I alluded to above, I suspect that many of us adopt such an attitude unconsciously when we disagree, and it is helpful to be more aware of what is going on in such a common situation.

  • http://www.Argument.com Monty Python

    We both look at a digit printed on a piece of paper. I say it is a 0. You say it is a 1. By your theory, I must agree that it is, in fact, a 1 because you have stated this as your belief, while you must discard your own irrationality and bias and agree it is a 0. As mutually respectful, honest, etc. people, we must agree with each other. Absent a God-like arbiter, how can this disagreement between two truly rational, honest, mutually respectful people be resolved when each cites the evidence of their own senses?

    While it is useful to take full account of another’s viewpoint, it does not follow that adoption of that viewpoint is the only possible conclusion. There will, in fact, be cases where one person is better informed or more intelligent than the other and has reached a ‘more correct’ conclusion on a subject or question. Of course, you stumble into the twin morasses of semantics and objectivism when you try to define ‘correct’, but welcome to the real world. Without objective reality, even one that is difficult to defend except as the result of mutual consensus, basic human functions such as communication become impossible.

    While I, as an honest, respectful, etc. guy have seriously considered your viewpoint on this argument about arguments, and thank you for the lesson in the wisdom of practising intellectual empathy, I do not agree with you.

    At this point, of course, you could claim that I do not fit the qualifying categories (mutually respectful, honest and rational) and that this is why I fail to reach agreement. Thus, a perfectly circular argument: rational people agree with this theory, therefore irrational people disagree, therefore if you disagree you are irrational, thereby proving the theory true through your irrational disagreement. You are assigning ‘rationality’ the meaning of ‘thinks the same way I do’.

    I do not have to believe that you are irrational or biased in order to disagree with you, nor do I have to assume that you have evil motives, a low IQ or worms in your pockets. You could just be mistaken.

  • http://pancrit.org Chris Hibbert

    I’m reasonably convinced that “respectful, honest and rational debaters cannot disagree on any factual matter”, but I have real trouble with the extension to “theorems such as the classic by Milgrom & Stokey, showing that rational people will not participate in betting markets [...]”.

    The betting markets are a form of trade, and there are many reasons other than disagreement about the facts that drive people to trade, the no-trade theorems notwithstanding. I know that some of the work applying these theorems to trade has adopted some of the needed limits, but many people neglect those limits, as Hal did above.

    When trading, people have differences in their budget, need for liquidity, trading strategy, time horizon, and different risk exposures that they may be trying to mitigate. These apply in prediction markets, real and play money, as well as in the stock market or anywhere else people trade. When offering or accepting a trade, you should expect that the trade encompasses more than just the information differences between you and the counterparty.
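
    To make the hedging motive concrete, here is a minimal sketch in Python (the utility function and all the numbers are hypothetical choices for illustration): a risk-averse agent takes the other side of an actuarially fair bet, with no informational edge at all, purely to flatten a pre-existing exposure.

    ```python
    import math

    # Hypothetical setup: an agent with log utility holds an asset worth
    # 100 if event E occurs and 20 if it does not, with P(E) = 0.5.
    p = 0.5

    def expected_log_utility(wealth_if_e, wealth_if_not_e):
        """Expected log utility over the two outcomes."""
        return p * math.log(wealth_if_e) + (1 - p) * math.log(wealth_if_not_e)

    # A fair bet: pays +35 if E fails, -35 if E occurs (zero expected value).
    unhedged = expected_log_utility(100, 20)
    hedged = expected_log_utility(100 - 35, 20 + 35)

    print(f"unhedged: {unhedged:.3f}, hedged: {hedged:.3f}")
    # hedged > unhedged: accepting the bet is rational and conveys no belief
    # that the counterparty is wrong about the facts.
    ```

    The bet has zero expected value, yet it raises expected utility, so seeing someone accept it is not by itself evidence that they think you are mistaken about the facts.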

    And in a market, the assumption of mutual rationality ought to be weaker than in a one-on-one rational discussion. On markets with open records like FX, you can see that people have different trading strategies (http://www.ideosphere.com/fx-bin/SimpleAccount?uid=2&profile=-3) and get different results (http://www.ideosphere.com/fx-bin/AllAccounts?sortby=networth).

  • http://profile.typekey.com/halfinney/ Hal Finney

    Chris, if we step back from betting markets, do you think rational people would engage in ordinary bets on factual matters? For example, betting $50 over who will win the weekend football game? I’m trying to understand whether you see the presence of a market as important in the situation, or whether you think people might make bets even if they don’t disagree about what they are betting on.

    Monty, if you see a 0 and I see a 1, if we both switch then we still disagree! And that could happen at first, but eventually we should come to agreement.

    Say for example that you got only a brief glimpse at the digit, so you were quite uncertain about it, but you kind of thought it was a 0. You don’t know how good a look I got, but you might suppose that it was probably better than yours, so when you hear that I saw a 1, you are willing to switch. But then when we both announce our new opinions (suppose we do so simultaneously), you are surprised to learn that I switched too! Now I say it’s 0 and you say it’s 1. So this tells you that apparently I didn’t get a very good look at it either. Maybe I saw it even worse than you, so now you switch back. But when we announce, suppose I switched back too. This tells you that I must not have seen it all that badly, because like you I switched back to my original impression.

    This kind of thing can in theory repeat for several rounds, but each round gives a rational and careful thinker new evidence about the quality of information the other party has, in comparison to his own. Eventually they will settle on a common agreement about which person saw it best, even without discussing it explicitly.
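
    This back-and-forth is essentially the process Geanakoplos & Polemarchakis formalize, and it is small enough to simulate. Below is a minimal sketch in Python; the state space, look qualities, and error rates are hypothetical choices of mine, not parameters from their paper. Each public announcement rules out every state in which the announcer’s posterior would have differed from the announced value, and the loop provably terminates with equal posteriors.

    ```python
    from fractions import Fraction
    from itertools import product

    # A state is (digit, qual_a, obs_a, qual_b, obs_b): the true digit, plus
    # each agent's look quality and what that agent saw. Each agent privately
    # knows only their own (quality, observation) pair.
    P_GOOD = Fraction(1, 2)          # prior probability that a look is "good"
    ACC = {"good": Fraction(9, 10),  # P(observation == digit | look quality)
           "poor": Fraction(3, 5)}

    STATES = list(product((0, 1), ("good", "poor"), (0, 1),
                          ("good", "poor"), (0, 1)))

    def prior(state):
        d, qa, oa, qb, ob = state
        p = Fraction(1, 2)  # uniform prior over the digit
        for q, o in ((qa, oa), (qb, ob)):
            p *= P_GOOD if q == "good" else 1 - P_GOOD
            p *= ACC[q] if o == d else 1 - ACC[q]
        return p

    def signal(state, agent):
        d, qa, oa, qb, ob = state
        return (qa, oa) if agent == "A" else (qb, ob)

    def posterior(live, state, agent):
        """P(digit == 1 | agent's private signal, public announcements so far)."""
        cell = [s for s in live if signal(s, agent) == signal(state, agent)]
        return (sum(prior(s) for s in cell if s[0] == 1)
                / sum(prior(s) for s in cell))

    # True state: the digit is 1; A got a good look and saw 1,
    # B got a poor look and saw 0.
    truth = (1, "good", 1, "poor", 0)
    live = STATES[:]  # states not yet ruled out by public announcements

    while True:
        announced = {}
        for agent in ("A", "B"):
            announced[agent] = posterior(live, truth, agent)
            # Announcing publicly eliminates states inconsistent with it.
            live = [s for s in live
                    if posterior(live, s, agent) == announced[agent]]
        print({a: str(q) for a, q in announced.items()})
        if announced["A"] == announced["B"]:
            break  # posteriors are common knowledge and equal: agreement
    ```

    With these particular numbers each announcement happens to fully reveal the announcer’s signal, so agreement arrives in two rounds; other signal structures can take many more rounds before converging, as in the story above.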

    The same thing should happen even if both parties get a good look. After all, the fact that they disagree even though both saw it well is pretty strange! Something weird must be going on here. Maybe one of them had a bizarre visual malfunction; perhaps a transient ischemic attack in the visual cortex caused him to see a false image. Given that you are considering that, there’s no particular reason to assume that the other person was more likely to have suffered this kind of undetectable hallucination than you. If after several rounds both you and I are unwilling to switch, the evidence for such a rare occurrence grows stronger, as it is the only thing that can explain this outcome (assuming both parties are honest and rational). Eventually each of us must become quite uncertain that our seemingly clear observations were in fact correct, and one of us will switch to agree with the other.
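
    The symmetry in that argument can be made exact with one application of Bayes’ rule (a sketch under two assumptions of mine: a uniform prior over the digit, and equal, independent error rates for the two observers):

    ```latex
    % If I read 0 and you read 1, and each reading errs independently with
    % probability eps, the disagreement alone says nothing about who erred:
    P(d = 0 \mid \text{I saw } 0,\ \text{you saw } 1)
      = \frac{(1-\epsilon)\,\epsilon}{(1-\epsilon)\,\epsilon + \epsilon\,(1-\epsilon)}
      = \frac{1}{2}.
    ```

    So two equally good looks that disagree should leave both parties at even odds until some asymmetry in the quality of the looks is established.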

  • eric

    I think the paper Fact-Free Learning explains Agreeing to Disagree pretty well (AER, Dec 2005). The idea is that most people disagree on ‘facts’ that are really statements about relations between data points. As there are an infinite number of relations, you can’t expect to know them all even if you know all the basic data points. So everyone knows only a different subset of the relations, which gives them different ‘facts’.

    http://ideas.repec.org/a/aea/aecrev/v95y2005i5p1355-1368.html
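
    Even a finite version of that counting point is crushing (my arithmetic, not the paper’s):

    ```latex
    % Restricting attention to binary relations alone, n data points admit
    \#\{\text{binary relations on } n \text{ data points}\} = 2^{n^2},
    % so for n = 20 there are already 2^400 (about 10^120) candidate "facts".
    ```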

  • http://profile.typekey.com/halfinney/ Hal Finney

    Eric – That is an interesting phenomenon – Socrates was good at inducing fact-free learning back in the day. However, if people recognize this limitation in their thinking, that they don’t necessarily know all the implications of the facts available to them, they should be more open to surprising statements by others. People should recognize the many gaps in their own understanding and accept that others will have their own information and interpretations that may be useful and relevant. So I don’t see this as giving grounds for the strong stubbornness that is Agreeing to Disagree.

  • eric

    OK, how about this. Nietzsche said ‘no one lies like the indignant’, by which he captures the fact that people often excuse, ignore, or downright lie about data contrary to their subjective ‘bigger picture’. So even Milton Friedman (God bless) in his Free to Choose video series (now free on the internet!) would not answer all counterarguments to school vouchers (‘you leave public schools with all those kicked out of private schools’), or to state intervention (some successful Asian Tigers had intrusive government sectors). He didn’t have an answer to these questions but didn’t think they falsified his views, even though I’m sure his intellectual opponents did.

    Propositions are, as Paul Feyerabend argued, ultimately a confluence of myriad sources hardly amenable to pure logic. So we must choose to weight supporting and confounding data subjectively; there is no totally objective way. Humans have finite lives and must act before sufficient data is available, and in any case there is the logical problem of induction in complex non-linear adaptive systems such as societies. As Richard Feynman noted, we develop theories and see how their implications fit the data, looking for prediction, generality and parsimony (even beauty), all judged relative to existing theory, which is quite different from a search for Truth.

    Humans are not merely discovering the truth; they are competing for it via different policy platforms, academic papers, and business models. At some level most arguments rest on assumptions that are hardly provable, such as whether people will work more efficiently with lower marginal tax rates, or whether the cost of crime prevention in terms of civil liberties is worth the crime prevented. A worldview, or theory, is both a lens and a filter: it amplifies and excludes various observations. But because the alternative is to have no theory at all, in which case the world is random, that’s the best we can do. Evolution makes us predisposed to develop theories and act on them, which makes us treat data points differently.

  • Greg Marsh

    A good friend of mine — and a world-class debater to boot — once said in frustration after a long discussion about politics, “we are both reasonable people; we are both intelligent; why is it that we can’t agree about these things? Surely it should just be a matter of exploring our assumptions fully.”

    I was stymied at the time, but have searched for a satisfactory explanation ever since. I hadn’t come across Aumann at the time, but am satisfied to learn that this is a well-explored problem at a formal level.

    Suffice it to say, the best explanation I’ve since come across comes from George Lakoff — a sometime student of Chomsky’s — in his book ‘Moral Politics’ (and more accessibly in his recent short political book, ‘Don’t Think of an Elephant’). He shares with Chomsky a journey from linguistics to politics. Loosely, his analysis is that someone’s ‘political persuasion’ is not just a set of postulates and logical inferences, but a Weltanschauung; Lakoff uses the term ‘deep metaphor’, though I don’t know if that is originally his coinage.

    His case is fairly persuasive, and can also be expressed in cost/benefit evolutionary psychology terms: if the brain is held to be a complex adaptive system, then thoroughly remodelling a worldview in the light of each new piece of evidence is, however ‘rational’, only possible if it is not unreasonably expensive. A radical re-evaluation would typically be unacceptably costly, involving as it would a period of tumultuous dissonance, with the potential for dangerous, even chaotic, disequilibrium states in the ensuing ‘logic cascades’, leading to a period of serious cognitive impairment. Sanity might be at risk, perhaps permanently. The brain, being resilient, avoids this danger, and so applies limits to its receptiveness even for the most compelling controverting evidence. We tolerate some degree of discrepancy where the cost of accommodation exceeds the cost of dissonance.

    Instead, picture the brain in a condition of constant unresolved inconsistency. Much like a lived-in house, where the latest interior design whimsy may affect one or two rooms, but the whole is never quite aligned. It’s just too expensive to whip out the Victorian fireplace just because minimalism is ‘in’ this season — and besides, we might miss it.

    (How else, incidentally, can you begin to explain the tenacity of latent religious beliefs in otherwise committed Dawkins-readers?)

    The underlying error, here, if there is one, is in applying the word ‘rational’ to humans (implied by the term ‘debaters’ in your first paragraph, Hal). Rationality — viz. the pursuit of consistency and deductive inference — might be said to be a process, or even a goal, but it can’t ever be an equilibrium state of a living brain. Brains as systems are just far too complex for that, the selection pressures on their operations too dynamic, the tumult perpetual.

    Tremendous blog, by the way.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Greg, the theory you like is for most people a better theory of other people than of themselves. People don’t just want to think of their beliefs as convenient comfortable furniture of their minds; they also want to think of them as their best estimates of reality.

  • James Dama

    Though it may be impossible for mutually respectful rational agents to disagree, that result invites a further exploration of the consequences of knowledge. In a world in which factual knowledge can cause pleasure and pain (for instance, the depression that theodicy addresses) and in which factual knowledge can frequently prove to have near-zero marginal utility, this result could be enough to give limited irrationality a positive marginal utility, or enough to rationalize avoidance of rational debate. It answers questions about why people avoid rational argument as much as it raises questions about why people don’t agree more often.

  • http://shagbark.livejournal.com Phil Goetz

    The theorem is not normative. Agreeing to agree may be a logical result of interaction between perfect Bayesian agents, but my simulation indicates that doing this decreases expected correctness.

    I also dispute that the theorem says what Aumann claimed it says, for two reasons.

    First, it requires agents to know each other’s information partitions. This is laughably impossible in the real world.

    Second, I believe that Aumann is wrong to claim that the formal statement “the meet at w of the partitions of X and Y is a subset of event E” means the same as the English phrase “X knows that Y knows event E”.
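
    For readers without the paper at hand, the definition in dispute, paraphrased from Aumann (1976), is:

    ```latex
    % With information partitions P_X and P_Y, write P_X /\ P_Y for their
    % meet (the finest common coarsening of the two partitions). Aumann
    % defines: an event E is common knowledge at state w iff the member of
    % the meet containing w lies inside E:
    E \text{ is common knowledge at } \omega
      \iff (\mathcal{P}_X \wedge \mathcal{P}_Y)(\omega) \subseteq E.
    ```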
