11 Comments

Imo, it's because beliefs can be rational and/or aesthetic at the same time. A good example of this is my inclination towards socialism. I have heard many compelling criticisms of socialism over the years, some of which have raised serious doubts in my mind as to the feasibility of socialism as a system, but because I have a greater investment in socialism as an aesthetic, I continue to hold this allegiance despite my awareness of these valid criticisms on a rational level. I often wonder if a similar phenomenon is occurring at a cognitive level for those who identify as religious in spite of the overwhelming lack of evidence they are presented with today.

The theorem is not normative. Agreeing to agree may be a logical result of interaction between perfect Bayesian agents; but my simulation indicates that doing this decreases expected correctness.

I also dispute that the theorem says what Aumann claimed it says, for two reasons.

First, it requires agents to know each other's partition functions. This is laughably impossible in the real world.

Second, I believe Aumann's attempt to justify the claim that "the meet at w of the partitions of X and Y is a subset of event E" means the same as the English phrase "X knows that Y knows event E" is incorrect.
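For reference, the 'meet' in Aumann's framework is the finest common coarsening of the two agents' information partitions: an event E is common knowledge at state w exactly when the meet cell containing w lies inside E. A minimal sketch of computing the meet (the four-state space and the two partitions are invented for illustration):

```python
# Compute the meet (finest common coarsening) of two partitions of a
# finite state space, as in Aumann's common-knowledge framework.
# The state space and partitions below are illustrative only.

def meet(p1, p2):
    """Merge any cells of p1 and p2 that overlap, until stable."""
    cells = [set(c) for c in p1 + p2]
    merged = True
    while merged:
        merged = False
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                if cells[i] & cells[j]:
                    cells[i] |= cells[j]
                    del cells[j]
                    merged = True
                    break
            if merged:
                break
    return {frozenset(c) for c in cells}  # dedupe identical cells

# X can tell {1,2} from {3,4}; Y can tell {1} from {2,3,4}.  Their meet
# is the trivial partition, so only the whole space is common knowledge.
print(meet([{1, 2}, {3, 4}], [{1}, {2, 3, 4}]))  # {frozenset({1, 2, 3, 4})}
```

Note that running this computation presupposes knowing each other's partitions, which is exactly the feasibility problem raised above.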

Though it may be impossible for mutually respectful rational agents to disagree, that result calls for further exploration of the consequences of knowledge. In a world in which factual knowledge can cause pleasure and pain (for instance, the depression that theodicy addresses), and in which factual knowledge can frequently prove to have near-zero marginal utility, this result could be enough to give limited irrationality a positive marginal utility, or enough to rationalize avoiding rational debate. The result answers questions about why people avoid rational argument as much as it raises questions about why people don't agree more often.

Greg, the theory you like is for most people a better theory of other people than of themselves. People don't just want to think of their beliefs as convenient comfortable furniture of their minds; they also want to think of them as their best estimates of reality.

A good friend of mine -- and a world-class debater to boot -- once said in frustration after a long discussion about politics, "we are both reasonable people; we are both intelligent; why is it that we can't agree about these things? Surely it should just be a matter of exploring our assumptions fully."

I was stymied at the time, but have searched for a satisfactory explanation ever since. I hadn't come across Aumann at the time, but am satisfied to learn that this is a well-explored problem at a formal level.

Suffice it to say, the best explanation I've since come across was characterised by George Lakoff -- a sometime student of Chomsky's -- in his book 'Moral Politics' (and more accessibly in his recent short political book, 'Don't Think of an Elephant'). He shares with Chomsky a journey from linguistics to politics. Loosely, his analysis is that someone's 'political persuasion' is not just a set of postulates and logical inferences, but a Weltanschauung; Lakoff uses the term 'deep metaphor', though I don't know if that is originally his coinage.

His case is fairly persuasive, and can also be expressed in cost/benefit evolutionary-psychology terms: where the brain is held to be a complex adaptive system, thoroughly remodelling a worldview in the light of each new piece of evidence is, however 'rational', only possible if it is not unreasonably expensive. A radical re-evaluation would typically be unacceptably costly, involving as it would a period of tumultuous dissonance, with the potential for dangerous, even chaotic, disequilibrium states in the ensuing 'logic cascades', leading to a period of serious cognitive impairment. Sanity might be at risk, perhaps permanently. The brain, being resilient, avoids this danger, and so limits its receptiveness to even the most compelling controverting evidence. We tolerate some degree of discrepancy where the cost of accommodation exceeds the cost of dissonance.

Instead, picture the brain in a condition of constant unresolved inconsistency. Much like a lived-in house, where the latest interior design whimsy may affect one or two rooms, but the whole is never quite aligned. It's just too expensive to whip out the Victorian fireplace just because minimalism is 'in' this season -- and besides, we might miss it.

(How else, incidentally, can you begin to explain the tenacity of latent religious beliefs in otherwise committed Dawkins-readers?)

The underlying error, here, if there is one, is in applying the word 'rational' to humans (implied by the term 'debaters' in your first paragraph, Hal). Rationality -- viz. the pursuit of consistency and deductive inference -- might be said to be a process, or even a goal, but it can't ever be an equilibrium state of a living brain. Brains as systems are just far too complex for that, the selection pressures on their operations too dynamic, the tumult perpetual.

Tremendous blog, by the way.

OK, how about this. Nietzsche said 'no one lies like the indignant', by which he captured the fact that people often excuse, ignore, or downright lie about data contrary to their subjective 'bigger picture'. So even Milton Friedman (God bless), in his Free to Choose video series (now free on the internet!), would not answer all counterarguments to school vouchers ('you leave public schools with all those kicked out of private schools') or to state intervention (some successful Asian Tigers had intrusive government sectors). He didn't have an answer to these questions, but he didn't think they falsified his views, even though I'm sure his intellectual opponents did.

Propositions are, as Paul Feyerabend argued, ultimately a confluence of myriad sources hardly amenable to pure logic. So we must weight supporting and confounding data subjectively; there is no totally objective way. Humans have finite lives and must act before sufficient data are available, and in any case there is the logical problem of induction in complex non-linear adaptive systems such as societies. As Richard Feynman noted, we develop theories and see how their implications fit the data, looking for prediction, generality, and parsimony (even beauty), all judged relative to existing theory; this is quite different from a search for Truth.

Humans are not merely discovering the truth; they are competing for it via different policy platforms, academic papers, and business models. At some level most arguments come to rest on assumptions that are hardly provable, such as whether people will work more efficiently with lower marginal tax rates, or whether the cost of crime prevention in terms of civil liberties is worth the crime prevented. A worldview, or theory, is both a lens and a filter: it amplifies some observations and excludes others. But because the alternative is to have no theory at all, under which the world is random, that is the best we can do. Evolution predisposes us to develop theories and act on them, which makes us treat data points differently.

Eric - That is an interesting phenomenon - Socrates was good at inducing fact-free learning back in the day. However, if people recognize this limitation in their thinking - that they don't necessarily know all the implications of the facts available to them - they should be more open to surprising statements by others. People should recognize the many gaps in their own understanding and accept that others will have their own information and interpretations that may be useful and relevant. So I don't see this as giving grounds for the strong stubbornness that is Agreeing to Disagree.

I think the paper Fact-Free Learning explains Agreeing to Disagree pretty well (AER, Dec 2005). The idea is that most people disagree on 'facts' that are really statements about relations between datapoints. As there are an infinite number of relations, you can't expect to know them all even if you know all the basic data points. So everyone knows only a different subset of the relations, which gives them different 'facts'.

http://ideas.repec.org/a/ae...
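The paper's idea can be sketched as a toy model: two agents hold identical raw data, but each has worked out only a small, different subset of the combinatorially many pairwise relations, so their stocks of relational 'facts' differ. The data values, agents, and sampling scheme below are invented for illustration, not taken from the paper:

```python
import itertools
import random

# Both agents share the same raw data points ...
data = {"a": 3, "b": 7, "c": 1, "d": 5}
all_pairs = list(itertools.combinations(sorted(data), 2))

def known_facts(seed, k):
    """The k pairwise-order 'facts' this agent happens to have derived."""
    rng = random.Random(seed)
    pairs = rng.sample(all_pairs, k)
    return {f"{x} < {y}" if data[x] < data[y] else f"{y} < {x}"
            for x, y in pairs}

# ... yet each knows a different subset of the true relations.
alice = known_facts(seed=1, k=3)
bob = known_facts(seed=2, k=3)
print(alice - bob)  # true facts Alice has derived that Bob hasn't
```

Every fact either agent holds is true and derivable from the shared data; the disagreement is only over which relations each has bothered to compute.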

Chris, if we step back from betting markets, do you think rational people would engage in ordinary bets on factual matters? For example, betting $50 over who will win the weekend football game? I'm trying to understand whether you see the presence of a market as important in the situation, or whether you think people might make bets even if they don't disagree about what they are betting on.

Monty, if you see a 0 and I see a 1, if we both switch then we still disagree! And that could happen at first, but eventually we should come to agreement. Say for example that you got only a brief glimpse at the digit, so you were quite uncertain about it, but you kind of thought it was a 0. You don't know how good a look I got, but you might suppose that it was probably better than yours, so when you hear that I saw a 1, you are willing to switch. But then when we both announce our new opinions (suppose we do so simultaneously), you are surprised to learn that I switched too! Now I say it's 0 and you say it's 1. So this tells you that apparently I didn't get a very good look at it either. Maybe I saw it even worse than you, so now you switch back. But when we announce, suppose I switched back too. This tells you that I must not have seen it all that badly, because like you I switched back to my original impression. This kind of thing can in theory repeat for several rounds, but each round gives a rational and careful thinker new evidence about the quality of information the other party has, in comparison to his own. Eventually they will settle on a common agreement about which person saw it best, even without discussing it explicitly.

The same thing should happen even if both parties get a good look. After all, the fact that they disagree even though both saw it well is pretty strange! Something weird must be going on here. Maybe one of them had a bizarre visual malfunction; perhaps a transient ischemic attack in the visual cortex caused him to see a false image. Given that you are considering that, there's no particular reason to assume that the other person was more likely to have suffered this kind of undetectable hallucination than you. If after several rounds both you and I are unwilling to switch, the evidence for such a rare occurrence grows stronger, as it is the only thing that can explain this outcome (assuming both parties are honest and rational). Eventually each of us must become quite uncertain that our seemingly clear observations were in fact correct, and one of us will switch to agree with the other.
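The multi-round dance described above can collapse to a single round if the two observers announce full posteriors rather than bare guesses: with a shared prior and glimpse errors that are independent given the digit, pooling log-odds gives both parties the same belief. A minimal sketch (the two accuracies and the independence assumption are mine, for illustration only):

```python
import math

def posterior_from_look(saw_one, accuracy):
    """P(digit is 1 | my glimpse), given how reliable the glimpse was."""
    return accuracy if saw_one else 1 - accuracy

def pool(p1, p2, prior=0.5):
    """Combine two conditionally independent posteriors via log-odds."""
    logit = lambda p: math.log(p / (1 - p))
    combined = logit(p1) + logit(p2) - logit(prior)
    return 1 / (1 + math.exp(-combined))

# You got a brief 60%-reliable glimpse and thought you saw a 0;
# I got a better 90%-reliable look and saw a 1.
yours = posterior_from_look(saw_one=False, accuracy=0.60)
mine = posterior_from_look(saw_one=True, accuracy=0.90)
print(round(pool(yours, mine), 3))  # 0.857: we both now lean toward 1
```

When only coarse announcements ('0' or '1') are exchanged, as in the story above, each round reveals only partial information about the other's accuracy, which is why agreement can take several rounds instead of one.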

I'm reasonably convinced that "respectful, honest and rational debaters cannot disagree on any factual matter", but I have real trouble with the extension to "theorems such as the classic by Milgrom & Stokey, showing that rational people will not participate in betting markets [...]".

The betting markets are a form of trade, and there are many reasons other than disagreement about the facts that drive people to trade, the no-trade theorems notwithstanding. I know that some of the work on the applications to trade has adopted some of the limits, but many people neglect these limits as Hal did above.

When trading, people have differences in their budget, need for liquidity, trading strategy, time horizon, and different risk exposures that they may be trying to mitigate. These apply in prediction markets, real and play money, as well as in the stock market or anywhere else people trade. When offering or accepting a trade, you should expect that the trade encompasses more than just the information differences between you and the counterparty.

And in a market, the assumption of mutual rationality ought to be weaker than in a one-on-one rational discussion. On markets with open records, like FX (http://www.ideosphere.com/f...), you can see that people have different trading strategies and get different results (http://www.ideosphere.com/f...).

We both look at a digit printed on a piece of paper. I say it is a 0. You say it is a 1. By your theory, I must agree that it is, in fact, a 1 because you have stated this as your belief, while you must discard your own irrationality and bias and agree it is a 0. As mutually respectful, honest, etc. people, we must agree with each other. Absent a God-like arbiter, how can this disagreement between two truly rational, honest, mutually respectful people be resolved when each cites the evidence of their own senses?

While it is useful to take full account of another's viewpoint, it does not follow that adoption of that viewpoint is the only possible conclusion. There will, in fact, be cases where one person is better informed or more intelligent than the other and has reached a 'more correct' conclusion on a subject or question. Of course, you stumble into the twin morasses of semantics and objectivism when you try to define 'correct', but welcome to the real world. Without an objective reality, even one that is difficult to defend except as the result of mutual consensus, basic human functions such as communication become impossible.

While I, as an honest, respectful, etc. guy have seriously considered your viewpoint on this argument about arguments, and thank you for the lesson in the wisdom of practising intellectual empathy, I do not agree with you.

At this point, of course, you could claim that I do not fit the qualifying categories (mutually respectful, honest and rational) and that this is why I fail to reach agreement. Thus - a perfectly circular argument: rational people agree with this theory; therefore irrational people disagree; therefore if you disagree you are irrational, thereby proving the theory true through your irrational disagreement. You are assigning 'rationality' the meaning of 'thinks the same way I do'.

I do not have to believe that you are irrational or biased in order to disagree with you, nor do I have to assume that you have evil motives, a low IQ or worms in your pockets. You could just be mistaken.
