How much can Aumann style "we can’t agree to disagree" results say about real human disagreements? One reason for doubt is that Aumann required agents to have common knowledge of their current opinions, i.e., of what their next honest statements would be. But how often do our conversations reach an "end" where everyone is sure no one has changed their mind since last speaking?

I tried to follow your proof in the paper. I think I understand the math, but maybe there is some notation that I have misunderstood.

Firstly, what's up with not numbering all your equations? That is just rude to anyone trying to comment on your paper.

Secondly, what happens between the first and second equations in the proof of Theorem 1? I understand how you arrive at the first equation of the proof, but the next equation seems wrong to me.

If I start from the second equation of the proof of Theorem 1, take X out of the sum on the left-hand side (since X is a constant in that sum), and then divide by the remaining sum so that only X is left on the left-hand side (the same as how you got the previous equation, only run backwards), then I end up with:

E[V|I(w*)] = E[V|(I and J)(w*)]

and that does not look right to me.

What is going on here? What am I misunderstanding?
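To make my confusion concrete, I tried a toy example (the state space, partitions, and values below are entirely my own invention, not the paper's notation): E[V | I] is the I-cell *average* of E[V | I and J], but pulling X out of the sum as a constant seems to require X = E[V | (I and J)(w)] to be the same for every w in the cell, which in general it is not.

```python
from fractions import Fraction

# Toy state space with a uniform prior; partitions are a made-up
# illustration, not anything from the paper.
states = [0, 1, 2, 3]
prior = {w: Fraction(1, 4) for w in states}
V = {0: 0, 1: 1, 2: 1, 3: 0}

I_cells = [{0, 1}, {2, 3}]   # agent 1's information partition I
J_cells = [{0, 2}, {1, 3}]   # agent 2's information partition J

def cell_of(partition, w):
    """The cell of the partition containing state w."""
    return next(c for c in partition if w in c)

def cond_exp(cell):
    """E[V | cell] under the prior."""
    mass = sum(prior[w] for w in cell)
    return sum(prior[w] * V[w] for w in cell) / mass

w_star = 0
I_cell = cell_of(I_cells, w_star)
IJ_cell = I_cell & cell_of(J_cells, w_star)   # the joint cell (I and J)(w*)

lhs = cond_exp(I_cell)    # E[V | I(w*)] = 1/2
rhs = cond_exp(IJ_cell)   # E[V | (I and J)(w*)] = 0, so pointwise equality fails

# But the I(w*)-weighted average of E[V | (I and J)(w)] over w in I(w*)
# does recover E[V | I(w*)], by the law of iterated expectations:
avg = sum(prior[w] * cond_exp(cell_of(I_cells, w) & cell_of(J_cells, w))
          for w in I_cell) / sum(prior[w] for w in I_cell)
```

In this toy case E[V | I(w*)] = 1/2 while E[V | (I and J)(w*)] = 0, even though the cell-averaged identity holds exactly. So my guess is that the backwards step only goes through when something (common knowledge?) forces X to be constant on the cell, but I may be misreading the notation.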

Barkley, yes, in a large space beliefs need not converge with evidence. But my result has nothing to do with whether beliefs converge with evidence; it should apply to the situations you describe as well.

It has been well known since a famous paper by Diaconis and Freedman in the Annals of Statistics quite some time ago that if the game is infinite-dimensional and the basis is not continuous, then there may be no Bayesian convergence at all. A cyclical outcome is quite likely: the players simply bounce back and forth between two (or more) given priors that never agree and are never correct (this of course assumes an "objective Bayesianism" in which there are "correct" probabilities that the posteriors are supposed to converge to).

Hal, yes, financial disagreement seems small compared to verbal disagreement. Different models only explain it if we don't realize we have different models, and that other people might have good models. Clearly our ancestors must have gained some evolutionary advantage from the tendencies that make us disagree; the challenge is to tease those out and decide which are still relevant today. On the rationality of priors, see the post "Why Common Priors".

I think that issue is very important, but I have to say I don't have a very good resolution.

Why it is important: if we were all Bayesians with a common prior there would be virtually no trade in financial markets.

The glass-is-half-full economist might say "Well, less than 1% of tradeable assets change hands in a given day, so the theory isn't bad."

The glass-is-half-empty economist says "Yeah, but what does that add up to over a month? Surely that is much too much trade for common priors."

I wrote a paper once where I argued that trade could be due to different models. For example, Apple comes out with a new gadget and some people think it is great, while others think it is ho-hum. So they can trade on this difference in opinion.

I think that a lot of what goes on in financial markets is trading on differences of opinion --- i.e., trades not supported by factual evidence. So we are left with priors. I know they are supposed to be uninformative, but why should we think that they are any less genetic than hair color or eye color?

Maybe one could argue that diversity in priors has evolutionary value? If we all had a common prior, we would have all died out a long time ago? Hmm. That's an interesting angle...