14 Comments

Unknown, I agree that Eliezer probably wouldn't be the best person to do this with. Maybe not even Robin, although Robin would make a good overseer/ref of the conversation case study.

The ideal would be two people, comfortable with Bayesian reasoning, who have a demonstrated track record of updating their positions in response to new information from third parties.

I think Anders Sandberg clearly would be one good choice. I'll have to think more about the second. I would nominate TGGP as a smart person with a demonstrated capacity to update his beliefs as a result of new information from third parties, but I'm not sure of his facility with the Bayesian aspect.


Hal, you are assuming that the person who hears your message is acting rationally and reasonably in responding to it. Given this assumption, as you find, you can't really tell them much that is useful in this way. But if they are not responding reasonably, you can accurately tell them that their estimate will be lower than yours. And this is in fact what we find in lab experiments with humans, and in our ordinary experience with each other. We can in fact anticipate how others' future opinions will differ from our opinion now, even when we warn them of the direction of this difference.


Robin, yes, that was the result I was thinking of. I guess I interpreted it incorrectly. I've always struggled with that paper, actually. Here is an example. (Sorry about going off-topic!)

Let's suppose we throw a die such that only I can see the outcome. We are going to estimate the value that comes up. I know the value, but for you any of the 6 alternatives is equally likely, so your probability-weighted estimate is 3.5. Now, suppose I tell you, "My estimate of the die value is less than what I think yours will be after you hear this." (I think that is an accurate paraphrase of the paper's "P=1".) What are you going to do? What is your new estimate?

You might think: my estimate was 3.5; he knows the value and says it is lower than what my estimate will be, so it was probably a 1, 2, or 3, in which case my estimate would be 2. But he says the value is lower than what my estimate will be, so it would have to have been a 1. But then my estimate is 1, and his can't be lower. You reach a contradiction. There is no estimate you can pick that makes my statement true, because whatever number you pick, I'm saying that the actual die value is lower than that, so you would have to go lower.
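
Here is a minimal sketch (my own illustration, not from the paper) of that elimination reasoning: start from the prior mean of 3.5, repeatedly condition on "the true value is strictly below my current estimate", and watch the estimate collapse into a contradiction.

```python
# Iterated elimination for the die example: condition on "the true value
# is strictly below my current estimate" and re-estimate, until either a
# fixed point or a contradiction is reached.
values = [1, 2, 3, 4, 5, 6]
estimate = sum(values) / len(values)  # prior mean: 3.5

for step in range(10):
    consistent = [v for v in values if v < estimate]
    if not consistent:
        print(f"step {step}: no die value is below {estimate} -- contradiction")
        break
    new_estimate = sum(consistent) / len(consistent)
    print(f"step {step}: estimate {estimate} -> {new_estimate}")
    estimate = new_estimate
```

Running this prints 3.5 -> 2.0, then 2.0 -> 1.0, and then reports the contradiction, mirroring the chain of reasoning above.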

Because of this contradiction, then, I guess the conclusion is that I shouldn't have said that, even if the die really was a 1. I'm tempted to say that because of its self-contradictory nature, my statement gave you no information, so you would leave your estimate at 3.5, in which case my statement was actually true! But then we would have to take another turn on the merry-go-round and it would become contradictory again.

The paper seems to suggest that I can say, "My estimate of the die value is less than or equal to what I think yours will be after you hear this." (That would be "N=0" in the paper.) But I don't see how I can always say this. What is your estimate going to be? What if I rolled a 6? It doesn't seem to work much better.

In the end I'm not sure this result tells us much about the nature of disagreement; rather, it seems to be more of a logical paradox, similar to the Unexpected Hanging.


Hal, I would be interested in seeing a disagreement case study with Robin and Eliezer as well. But it wouldn't be very interesting if they simply gave their estimates. For example, if Robin gave one estimate, and then Eliezer gave another, Robin might update his estimate by some amount. But Eliezer would not subsequently update his estimate, not even by 0.00001%. Now Robin already knows that Eliezer would not update, so Eliezer's refusal to update would give no new information and would not lead Robin to change his estimate again. So there would be at most one update, on Robin's part, and from then on there would be persistent disagreement.


Stirling, there is a tradeoff in modeling between realism and understandability. Let me know when you work out a more realistic model.

Caledonian, it is usually extremely impractical to share all your data.

J, the first link in the post above is to a previous post discussing that very research.

Hal, I'm not sure what you have in mind about a complex path to convergence. I prefer to focus on this result, saying we can't foresee to disagree.


Hopefully Anonymous, Robin has occasionally posted "disagreement case studies" where he explores various aspects of disagreements he has had with various people. It would be interesting to see such a case study among contributors here. He and Eliezer would be good candidates for this, since both accept the Aumann results. Both have done work on AI, so perhaps they might find something to disagree on about that technology. For example, what is the probability of greater-than-human AI before the year 2030? Maybe they would have substantially different estimates about some such question.

Then we need to consider what would be the most informative (or entertaining!?) format for the argument. Tradition would call for them to explain their reasons to each other, periodically indicating when they have changed their minds somewhat, leading in the end either to convergence or to "agreeing to disagree". The Aumann theorem implies that agreement should be reached even with the much more restrictive communication channel modeled here by Robin, where the two simply take turns reciting their estimates regarding the disputed issue. Robin has a paper showing that the path to convergence in this scenario will often be rather more complicated than one might have expected. It would be exciting to come up with a case study that demonstrates this effect.
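
For a sense of how such a take-turns-reciting-estimates exchange could run in the simplest case, here is a toy sketch (my own, with made-up variances, and not the model from the post): two agents see independent Gaussian signals of a common truth, all variances are common knowledge, and they announce posterior means in turn. In this linear-Gaussian setting each announcement fully reveals the underlying signal, so they agree after a single exchange; presumably the more winding convergence paths Robin's paper describes arise in richer settings.

```python
import random

# Toy sketch: two agents observe independent Gaussian signals of a common
# truth t (prior mean 0, variance V_T), with all variances common knowledge,
# and announce posterior means in turn. Each announcement is invertible,
# so both recover both signals and agree after one exchange.
V_T = 4.0            # prior variance of the truth
V_A, V_B = 1.0, 2.0  # noise variances of the two private signals

random.seed(0)
t = random.gauss(0.0, V_T ** 0.5)
s_a = t + random.gauss(0.0, V_A ** 0.5)
s_b = t + random.gauss(0.0, V_B ** 0.5)

# A's first announcement: posterior mean of t given s_a alone.
m_a = (V_T / (V_T + V_A)) * s_a

# B inverts A's announcement to recover s_a, then combines both signals
# with the zero-mean prior by precision weighting.
s_a_recovered = m_a * (V_T + V_A) / V_T
precision = 1.0 / V_T + 1.0 / V_A + 1.0 / V_B
m_b = (s_a_recovered / V_A + s_b / V_B) / precision

# A can likewise recover s_b from B's announcement and recompute,
# getting exactly m_b -- so the two now agree.
s_b_recovered = V_B * (m_b * precision - s_a / V_A)
m_a_final = (s_a / V_A + s_b_recovered / V_B) / precision

print(f"truth={t:.3f}  B's announcement={m_b:.3f}  A's update={m_a_final:.3f}")
```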


What do you think of the argument in this paper? http://www.jstor.org/pss/22...

(In case the link doesn't work, look for "When Rational Disagreement is Impossible" by Keith Lehrer, in Noûs (1976)).


It would be good to see a model discussion done along these lines. Robin, maybe you and another contributor could discuss something you disagree about, and try to do so in a Bayesian way? We can analyze it in real time and/or afterwards.


Wouldn't the two Bayesians simply exchange all their data, including their priors and their reasons for holding those priors?

Given that, they should agree every time.


Is it just me, or does this analysis make a lot of assumptions that don't seem to ever pan out in reality? For instance:

1) I've only ever met one Bayesian yet, and we didn't end up disagreeing on anything...

2) The assumption that all of the X_i are independently and normally distributed with zero mean and a known variance V_i seems unlikely. When attempting to gather evidence, one takes all possible observations accessible to one, and they often, I would think, have internal correlations (see the small sketch after this list).

3) Every time I've ever had a serious discussion where I've tried to get to the root of a disagreement with someone logical (even if not Bayesian), we've traced it to a set of different priors. In practice, I've found, it can be extremely difficult to figure out the set of priors we agree on, and which we differ on, and how they interrelate (because real people don't have independent priors either).
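
A quick numeric aside on point (2), using the standard formula for the variance of the mean of equally correlated observations (my own illustration, not from the post): correlated clues carry less information than the same number of independent ones.

```python
# Variance of the mean of n observations, each with variance sigma2 and
# pairwise correlation rho: sigma2/n * (1 + (n - 1) * rho).
def var_of_mean(n, sigma2=1.0, rho=0.0):
    return sigma2 / n * (1 + (n - 1) * rho)

for rho in (0.0, 0.3, 0.7):
    print(f"rho={rho}: variance of the mean of 10 clues = "
          f"{var_of_mean(10, rho=rho):.2f}")
```

With rho = 0 the mean of 10 clues has variance 0.10; with rho = 0.7 it is 0.73, i.e. nearly as noisy as a single clue.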


Richard, among people in a situation close to the one modeled here, those who follow the weighting advice given should be more accurate.

Hal, the means don't matter - everyone knows to subtract them off. The weight on the last clue is less than one unless thinking noise is zero. I added the column you requested to the table.


Very interesting approach - many comments are possible. A few simple ones to start with.

The model considers variance but not mean. I guess means are assumed to be zero? And in a real argument, if we have non-zero (but known) means for some of the terms, they can be collected elsewhere, and the truth t we are estimating here is a correction factor to these summed means, which then has an expected value of zero?

It's not clear that this is a good model for most disagreements, because in most cases, new information is not coming in all the time. Estimates can be exchanged much more quickly than new information comes in. Perhaps we could model this by having think noise be high relative to info noise, which would correspond to the "1 4" case, but that leads to very non-Bayesian behavior. Could we set info noise to zero to get this case, after initial non-zero info-noise values for the first data point or two? I suspect the model collapses in that case though.

"weight on last clue" is not given, but I assume from context it is 1. This is in part due to the 3rd column in the 1st table, with Think noise of 0. That would be the perfect Bayesian case and indeed we see that he only uses the other person's estimate, consistent with the analysis earlier. That estimate gets added to the last clue, with weight 1, so my guess is that this weight applies throughout.

The 2nd table considers all pairings between 4/4, 4/1, 1/4 and 1/1 except for 4/4 vs 1/1, so it might be nice to see that for completeness (not clear what exactly it means though).

One pattern I see here is that if you're going up against the 4/1 guy, who is quite Bayesian in his thinking, at least relative to the quality of his information, then you are pretty much going to use .1/.8 to .2/.8 for weighting your and his most recent estimates, independent of your own situation. In other words, if you think your disputant is relatively Bayesian, you need to give his last estimate much higher weight than your last, regardless of your own degree of Bayesianism. That is presumably because, as a good Bayesian, he has already done a good job of incorporating the data from your last estimate into his.

In fact, in general it seems like your weights have more to do with the quality of the other person's thinking and data than with your own. If he thinks clearly, you rely on him; if not, you rely more on yourself.
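
As a loose illustration of that last point (my own sketch, and not the model from the post, since there each estimate already incorporates the other's previous announcements): if the two estimates were simply independent noisy reads of the truth, the optimal weight on the other person's estimate would be v_you / (v_you + v_them), which grows as their noise shrinks.

```python
# Precision weighting of two independent, unbiased estimates of the truth:
# the weight on the other person's estimate is v_you / (v_you + v_them),
# so the less noisy their estimate, the more you lean on it.
def weight_on_other(v_you, v_them):
    return v_you / (v_you + v_them)

for v_you in (1.0, 4.0):
    for v_them in (1.0, 4.0):
        print(f"your noise {v_you}, their noise {v_them}: "
              f"weight on theirs = {weight_on_other(v_you, v_them):.2f}")
```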

Much food for thought here!


To test the formalism, can you explain in words how this answers the question: "when you find that you and someone else disagree, how much weight should you give to your and their opinions in forming your new opinion?"


I'll study and think about this more before commenting substantively. But this is a post I don't think too many will criticize as being outside of the purview of this blog. Thanks for this, Robin!
