We Can’t Foresee To Disagree

How much can Aumann-style "we can't agree to disagree" results say about real human disagreements? One reason for doubt is that Aumann required agents to have common knowledge of their current opinions, i.e., of what their next honest statements would be. But how often do our conversations reach an "end" where everyone is sure no one has changed their mind since last speaking?

A few years ago I published a more relevant variation: "we can't foresee to disagree." The setup is again two Bayesians with estimates on the same topic (i.e., of the expected value of a random variable), but here their estimates come at two different times. The first Bayesian could foresee a disagreement if he could estimate a non-zero direction in which the second Bayesian's estimate will differ from his own. And he could visibly foresee this disagreement if he could announce this direction so that it was clear (i.e., common knowledge) to them both that the second Bayesian heard it.
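
In the simplest case, where the second Bayesian will eventually know everything the first knows (and possibly more), the impossibility is just the law of iterated expectations (my notation here, not the paper's):

    \mathbb{E}\!\left[\, \mathbb{E}[x \mid I_2] - \mathbb{E}[x \mid I_1] \;\middle|\; I_1 \right] = 0 \qquad \text{whenever } I_1 \subseteq I_2 ,

that is, the first Bayesian's best estimate of how the second's later estimate will differ from his own current estimate is exactly zero, in every direction. The paper's result covers the harder general setting sketched above, where the direction must also be announced and commonly known to have been heard.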

For example, I would visibly foresee disagreeing with you if I said "I think it will probably rain tomorrow, but I'm guessing that in an hour you will think it probably won't rain." It turns out that such visible foreseeing of a disagreement is impossible for Bayesians with the same prior. Of course humans disagree this way all the time; if someone says it won't rain, and then you say it will rain, you can be pretty sure they won't next be even more sure than you were that it will rain. (Lab data confirms this.)
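
Here is a minimal numerical sketch of the simplest (nested-information) case, using a toy rain example of my own; the signal accuracies and variable names are illustrative assumptions, not from the paper. Agent 1 sees one signal now; agent 2 will later see that signal plus another. Agent 1's forecast of agent 2's future estimate matches agent 1's own current estimate, so no direction of disagreement can be foreseen.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy world (hypothetical numbers): a binary state "rain tomorrow" and two
    # noisy signals.  Agent 1 sees s1 now; agent 2 will later see both s1 and s2.
    # Both agents share the same prior over (rain, s1, s2).
    n = 400_000
    rain = rng.random(n) < 0.5                                     # prior P(rain) = 0.5
    s1 = np.where(rain, rng.random(n) < 0.7, rng.random(n) < 0.3)  # 70%-accurate signal
    s2 = np.where(rain, rng.random(n) < 0.8, rng.random(n) < 0.2)  # 80%-accurate signal

    for v1 in (True, False):
        mask1 = s1 == v1
        est1 = rain[mask1].mean()   # agent 1's current estimate of P(rain | s1)

        # Agent 1's forecast of agent 2's later estimate P(rain | s1, s2),
        # averaged over agent 1's beliefs about which s2 agent 2 will see.
        forecast2 = 0.0
        for v2 in (True, False):
            mask12 = mask1 & (s2 == v2)
            forecast2 += (mask12.sum() / mask1.sum()) * rain[mask12].mean()

        print(f"s1={v1}: agent 1 estimate = {est1:.3f}, "
              f"agent 1 forecast of agent 2 = {forecast2:.3f}")

    # The two numbers agree up to sampling noise: with a common prior, agent 1
    # cannot foresee any direction in which agent 2's later estimate will
    # differ from his own.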

This result holds for arbitrary (finite) info distributions that may improve with time. It is also easy to weaken the common knowledge requirement: they might make estimates conditional on the second Bayesian hearing the announcement, or, if they were only pretty sure the second Bayesian heard it, they could foresee only a small disagreement. It is also easy to allow cognitive errors: Bayesian wannabes could foresee disagreements only due to errors, and then only if they disagreed about topics where info is irrelevant.

Of course there still remain the issues of how relevant honest Bayesians are as a normative standard, and whether reasonable priors must be common.

  • http://www.sims.berkeley.edu/~hal Hal Varian

    I think that issue is very important, but I have to say I don’t have a very good resolution.

    Why it is important: if we were all Bayesians with a common prior there would be virtually no trade in financial markets.

The glass-is-half-full economist might say "Well, less than 1% of tradeable assets change hands in a given day, so the theory isn't bad."

    The glass-is-half-empty economist says “Yeah, but what does that add up to over a month? Surely that is much too much trade for common priors.”

    I wrote a paper once where I argued that trade could be due to different models. For example, Apple comes out with a new gadget and some people think it is great, while others think it is ho-hum. So they can trade on this difference in opinion.

    I think that a lot of what goes on in financial markets is trading on differences of opinion — i.e., trades not supported by factual evidence. So we are left with priors. I know they are supposed to be uninformative, but why should we think that they are any less genetic than hair color or eye color?

    Maybe one could argue that diversity in priors has evolutionary value? If we all had a common prior, we would have all died out a long time ago? Hmm. That’s an interesting angle…

  • http://profile.typekey.com/robinhanson/ Robin Hanson

Hal, yes, financial disagreement seems small compared to verbal disagreement. Different models only explain it if we don't realize that we have different models and that other people might have good models. Clearly our ancestors must have gained some evolutionary advantage from the tendencies that make us disagree; the challenge is to tease those out and decide which are still relevant today. On the rationality of priors, see the post Why Common Priors.

  • http://cob.jmu.edu/rosserjb Barkley Rosser

It has been well known since a famous paper by Diaconis and Freedman in the Annals of Statistics quite some time ago that if the game is infinite-dimensional and the basis is not continuous, then there may be no Bayesian convergence at all. A cyclical outcome is quite likely: the players simply bounce back and forth between two (or more) posteriors that never agree and are never correct (this of course assumes an "objective Bayesianism" in which there are "correct" probabilities that the posteriors are supposed to converge to).

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Barkley, yes, in a large space beliefs need not converge with evidence. But my result has nothing to do with whether beliefs converge with evidence; it should apply to the situations you describe as well.
