Agree with Yesterday’s Duplicate

A week ago I tried to offer a clear example of justified disagreement.   Today, I try to offer a clear example where disagreement is not justified.   Imagine:

On Star Trek, a "transporter" moves people by scanning the location of all their atoms, and then constructing a new version in another location, with all the same sort of atoms in the same relative locations.   Yesterday you entered such a transporter, except that this version did not destroy the original copy as did the one on Star Trek.  So a minute after you entered the transporter there were two of you, with identical personality, memories, and cognitive styles. 

You each spent the last day separately reviewing material supplied by supporters on opposite sides of some controversy, such as which side was most responsible for a particular war.  You have just sat down across a table from your duplicate, and you have just told each other that you found your material reasonably persuasive.  This surprised you; how could he think that?  Although you understand abstractly that he is your day-old duplicate, you have a hard time relating to him.  He does not sound or look or act the way you think you do.   Your initial intuition is to treat him like anyone else with views contrary to yours; he must be missing something you see.   How much should you follow this intuition, versus consciously forcing yourself to agree?

This seems to me a clear case where you should try as hard as possible to satisfy the rationality constraint "we can’t foresee to disagree."   While both of you have many biases, and are far from Bayesian, you have almost no good reason for thinking that his biases are worse than yours.   Yes, it is possible that he acquired a mental problem in the last day, and his taking on a strange view may be some evidence for that.  But it is at best only very weak evidence; the odds are pretty overwhelming that he is no less rational than you.

As before, it is interesting to think about which variations on this scenario would justify larger disagreements.

Added: I had in mind the case where you and he have not had time to review exactly all the same material; you must react instead to his opinion.       

  • If I find myself disagreeing with myself II after just one day of divergence, it is evidence that my (and his) conclusion is very easily swayed by some minor external factor or noise rather than just the facts and our previous knowledge. So we should both become very sceptical of the reliability of our conclusions. If we have both become very convinced about them, that would be further evidence that not only is there one or more strong biases hiding somewhere, but that these are easily acquirable biases. That might help us debug our reasoning.

  • Anders, after how many days of divergence would you find it reasonable to disagree with your duplicate?

  • Are we to assume that all the same facts about the issue are known by both copies? First I would try to make sure that we agree on all the facts regarding the issue. I would be surprised if any disagreement persisted after that point, in my own case. But it would be expected that many facts would be omitted from one or the other side’s accounts, which would explain the initial disagreement between the copies.

  • Pdf, assume they have “access” to all the same facts, but that there are too many facts to know them all. Imagine that a high school class who had been indoctrinated to favor the Allies in WWII was given access to material favoring the Axis (or vice versa); what percentage of them do you think would change their minds?

  • Timothy Underwood

    What determines which sets of facts each of the duplicates pursue? The disagreement almost certainly would derive from different presentations of the evidence and different arguments being given. I am fairly sure I would agree with my duplicate had we been given the same arguments.

  • Timothy, there were two sides to the war, and we had two duplicates. So we started them each out listening to the material presented by one side, after which they had the option to look into any further details they wanted.

  • Doug S.


    Person A reads argument X, and decides that Z is true.
    Person A’s duplicate reads argument Y, and decides that not-Z is true.
    Person A’s duplicate tells Person A that he read argument Y and found that it led him to believe not-Z.
    I see no problem with this. Presumably, both arguments were logically consistent and started from premises that seemed plausible. It would seem that they ought to be able to discuss their way to an agreement.

    If person A and his duplicate read identical materials and came to different conclusions, that would be rather strange.

  • I agree with Robin that this is an obvious case where Aumann should apply, in full force; when I am talking to my fellow Eliezer, I should not be able to predict the direction of disagreement after each exchange.

  • Stuart Armstrong

    after how many days of divergence would you find it reasonable to disagree with your duplicate?

    I would find it reasonable to disagree with my duplicate when our experiences have diverged to the point that we no longer agree on fundamental things. If his priors and his fundamental values have changed substantially, then I can perfectly believe his reasoning is flawed according to my own priors and values (however, I should never think that his biases are any bigger than mine).

    But I agree with Anders – the very next day, if we disagree, and cannot reconcile our disagreements, then our reasoning needs debugging.

    But the in-between period would be the most fruitful – as we drift apart through different experiences, we could continue to debate various issues, knowing that neither of us is more biased than the other. That experience would be quite something, and I would definitely go for it, if only Star Trek inc. would get to work and build that transporter.

  • Stuart Armstrong

    Extreme duplicate: Hundreds of copies of me are created, and “indoctrinated” by hundreds of different groups. Then we all come together to share our experiences. At the end of that, we take a vote.

    If I disagree with the result of the vote, how should I interpret my disagreement? Ditto if I agree.

  • Stuart, we are talking about disagreement over fact not values, and why would you allow yourself to change your priors with time if you thought only your current prior was unflawed?

  • Robin, in your example, you stated that the disagreement was over “which side was most responsible for a particular war” and then, in the comments, stated that the disagreement was over fact.
    In your duplicate scenario, I would be rather surprised if the two of us (both being me) would disagree over a fact. Of course, to me, an issue of responsibility isn’t really an issue of fact — rather, it is an issue of opinion.
    So, if we disagreed over who was responsible for a war, I would like to think that we would hash it out, each making his case for which side bore more responsibility.
    If it was a disagreement over an issue of fact (such as whether or not country C was involved in the war), I would be very dubious of the transporter’s reconstruction of me if I disagreed with myself. Unless, of course, my copy’s extraordinary claim was backed up with extraordinary evidence.

  • Ocmpoma, by “fact” I don’t mean only things you can look up in an almanac. I mean statements about the way things are or could be, as opposed to how they should be.

  • Stuart Armstrong

    “Responsibility” definitely doesn’t seem to be a fact:
    A young woman, angry with her violent father, storms out of her house early and is killed by an Iraqi sniper when the US soldier he was aiming for ducks out of the way.

    Assigning responsibility in such cases is as much a question of values as of fact.

    And I would never think my current priors were unflawed – whether through ignorance or undiscovered biases. I would hope to improve on them.

    But to get back to the double situation, assuming the disagreement is a question of fact, then it’s just a question of me and my double assessing different evidence. Normally I would just discuss the issue with the other person, ranking his evidence, since it is second hand, slightly below mine (NB: due to a certain confidence in myself, and the fact that the other person may be lying; of course, this is assuming he doesn’t have specific expertise). But two things are different here:

    1) I know my double is exactly as skilled at assessing evidence as I am.
    2) I can trust my double (since I would never try and deceive my double when I met him, the same will be true for him)

    Using these facts, I would argue with my double until we each reached an equilibrium. If these equilibria are not the same, then there is a bias, and I would take the compromise position between the two.

    (other alternative: figuring out the position that would be the most likely to cause us to end up with those equilibria; but mathematically intensive and depends on some knowledge of my own biases)

  • Michael Sullivan

    We’ve read different arguments, and neither of us is perfectly good at detecting falsehoods or slippery arguments. If we start out reasonably unbiased toward either side, it’s reasonable to think that we will tend to believe the side whose arguments we read, as long as those arguments do not display obvious (to me) bias or hackery.

    So on hearing that the other I has read a different set of polemics and come to a different conclusion, my first instinct (and so presumably also my alter-ego’s) would be to trade reading material and see if our conclusions still differed. We would each have first impression bias as well (I think), so there would still be some capacity to disagree, but I would guess we’d be able to get to the bottom of it and come to agreement, given enough communication time. If first impression bias alone was enough to separate our conclusions, they would likely have become tentative at best, and that would be a good indication that the real answer is closer to “a pox on both your houses” than to one side or the other being in the right.

    I think you are right about it being not justifiable to disagree given the same facts and priors in this case, but I’m also pretty certain that we wouldn’t disagree after having time to compare all of our facts and thoughts about them. In fact, since we both have been introduced to the logical issues inherent in disagreement, we will both immediately be very skeptical about our conclusions as soon as we see that we disagree.


  • Everyone, it is too easy to say you would agree after you considered all possible relevant material, or after you had made sure both of you had seen exactly the same material. That avoids the harder and more interesting question, since in practice we rarely have time to look at all relevant or exactly the same material as those we disagree with.

  • Carl Shulman

    I agree, you should come to agreement with your duplicate even when materials and reasons cannot be exchanged. However, if you are repeatedly exchanging conclusions, then their path should be a random walk until agreement is reached, while a powerful new bias is disproportionately likely to lead to no movement. If your duplicate’s opinion does not change at all over several exchanges of opinion in which you shift your estimates closer to his or hers, the probability of a mental problem becomes more worrisome. The evidentiary impact of this depends on the ratio between your prior probabilities of mental defect in your duplicate and of the other side possessing extraordinarily powerful evidence that would justify stability of opinion over a long exchange.
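Carl’s random-walk point is the martingale property of honest Bayesian updating: under your own predictive distribution, your expected next posterior equals your current posterior, so any predictable drift in one direction signals a bias. A minimal numerical sketch (the two coin biases and the flip counts here are invented purely for illustration):

```python
def posterior(prior, heads, tails, p_a=0.8, p_b=0.2):
    """Posterior that the coin is the p_a-biased one, given flip counts."""
    like_a = (p_a ** heads) * ((1 - p_a) ** tails)
    like_b = (p_b ** heads) * ((1 - p_b) ** tails)
    return prior * like_a / (prior * like_a + (1 - prior) * like_b)

def expected_next(prior, heads, tails):
    """Expected posterior after one more flip, taken under the agent's own
    predictive distribution -- this always equals the current posterior."""
    p = posterior(prior, heads, tails)
    pred_heads = p * 0.8 + (1 - p) * 0.2  # agent's probability of heads next
    return (pred_heads * posterior(prior, heads + 1, tails)
            + (1 - pred_heads) * posterior(prior, heads, tails + 1))
```

Since the expected movement is zero, a duplicate whose announced opinion never moves over several exchanges is behaving in a way an unbiased updater usually would not, which is Carl’s diagnostic.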

  • Dagon

    Why would you expect to have perfect communication and trust with your double? I wouldn’t expect to agree on all topics after some amount of divergent inputs any more than I would with any other human.

    He wants something from me, and I want something from him (at the very least, we want not to be attacked, and probably want cooperation on other fronts). This puts an upper bound on how much trust we can each give the others’ claims.

    For a recent double, it’s likely that I have more insight into his motivations and am somewhat better at determining his true beliefs than I am a random person, but it’s still not perfect, or even all that good.

    It seems perfectly reasonable to believe that my biases are equally likely to be truthful as his, and benefit me more than him, so I should weigh them more heavily (after doing what I can to identify and discard biases on both sides, and to share evidence in both directions).

  • Dagon, follow the link “we can’t foresee to disagree”; perfect communication and trust are not required. And since you are estimating the same matter of fact, I don’t see how a bias could be good for him and bad for you.

  • Douglas Knight

    Michael Sullivan:
    it’s reasonable to think that we will tend to believe the side whose arguments we read

    It may be reasonable self-knowledge, but it is not knowledge of a reasonable self. Acting as you predict is an error and we should try to avoid it. Looking for “obvious bias or hackery” is not enough.

    If you’re saying “reasonable self-knowledge,” that it’s difficult to evaluate polemics, then I agree with you, but it sounds like you’re giving up too soon.

  • To expand on Douglas’s comment, to expect that one will be convinced by a certain argument more than the argument will actually warrant (for example, to expect that after hearing the opposite side’s argument one will be less certain about the position) is the same as saying that one is not discounting the evidence of the argument enough to compensate for its polemical nature. If one can predict now that one’s opinion will be changed in a certain direction in the future, one should change it in that direction right now. If I have read one side’s argument and estimate that my opinion will shift by X amount after reading the other side’s, I should go ahead and shift it by X. I can still expect that my opinion will change after reading the other side’s argument, but not in which direction my opinion would shift. If I understand it correctly, this follows directly from Aumann’s theorem.

    (I actually made this same mistake on an earlier thread.)

    Ideally, my duplicate and I would both be mostly neutral about the dispute, since we recognized the one-sided nature of the information we learned during the day and withheld judgment for lack of information from the other point of view. Then, upon meeting, we’d both defend what we saw the previous day, but only as an efficient way of exchanging information about arguments, not because we actually held the respective positions.

    I think that some (but not much) real life discourse is actually like this. People will defend a position directly that they don’t actually espouse in order to get the most information quickly out of someone with an opposing position, without actually believing most of what they’re defending. They might even talk about it as if it were their actual beliefs. How common is this? Probably not very.
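Pdf’s “shift it by X right now” rule is conservation of expected evidence: averaged over what you expect the other side’s argument to be, your posterior must equal your current opinion. A toy check, where the prior and both likelihood numbers are invented for illustration:

```python
def bayes(prior, like_true, like_false):
    """Posterior for Z after evidence with the given likelihoods."""
    num = prior * like_true
    return num / (num + (1 - prior) * like_false)

prior = 0.7                                  # current belief in Z
# Your model of the argument you will hear: "strong" or "weak",
# with different likelihoods under Z and not-Z.
p_strong = prior * 0.2 + (1 - prior) * 0.6   # P(hearing a strong argument)
post_strong = bayes(prior, 0.2, 0.6)         # belief if it turns out strong
post_weak = bayes(prior, 0.8, 0.4)           # belief if it turns out weak
expected_post = p_strong * post_strong + (1 - p_strong) * post_weak
# expected_post equals the prior: no predictable direction of shift
```

You can still expect your opinion to move after hearing the argument, just not in any direction you can name in advance.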

  • Pdf, I never said you and he were told that you would meet in a day to compare notes; that could have been a surprise.

  • Well, (ideally, as I said,) the neutrality would be there regardless of our knowledge of either the duplication itself or the future meeting. The neutrality is something we should practice all the time.

  • Dagon

    I think the term “fact” is confusing when discussing this topic. Almost none of the topics of disagreement that I have with respected peers, which my double would be, are over verifiable, simple facts in the vernacular sense of the word.

    I get the impression that technical use of the word “fact” is much broader, and includes probability estimates of future events, recommended strategies, etc. I _DO_ disagree with peers over some of these, and I expect I’d disagree with my double. I further expect that the reasons for disagreement would be the same: I cannot trust the communication of beliefs. There are belief biases, which can be taken to be the same in me and my double, but there are also signalling and communication biases, which affect claims made by non-me differently than those made by me.

    I don’t understand how trust is not required. All the models I’ve seen use phrases like “reliably informed” or otherwise imply that knowledge about someone’s contradictory belief is actual knowledge about the contradiction, rather than suspect knowledge inferred from misleading signals.

  • Dagon, all we need is that it be clear that someone said something, clear what it meant, and clear that they meant it.

  • Dagon

    It’s the “clear what it meant” and “clear that they meant it” that’s tripping me up. I know of very few persistent disagreements where those hold. For non-trivial claims, it’s impossible to be clear exactly what someone means, and even more impossible to be clear that they mean it.

    Something as simple as “I think it will rain tomorrow” is questionable. Even if you try to be insanely explicit: “I think there’s more than a 42% chance that there will be measurable precipitation somewhere in the area”, you STILL don’t know if that’s what the speaker really means. What he really means may be “I want credit for warning you about whatever weather occurs”.

    For more complicated claims, like which side was responsible for a war, it’s completely impossible to know what anyone means with enough exactness and trust that one should give their opinion as much weight as your own. Both sides are likely biased, but I have more data about the extent and honesty of my own claim.

    This is not to say that contradiction isn’t strong evidence that I should examine my position and in many cases change my convictions. All I’m saying is there’s an upper bound to the information quality coming from other people, and this leaves us open to rational disagreement.

  • Dagon, if you think there is a small chance they didn’t understand you, or that you didn’t understand them, that justifies only a small disagreement – the effect is linear.
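Robin’s linearity claim can be made concrete with a simple mixture model (the two-outcome setup and the numbers are mine, not his): with probability p the exchange was garbled and your own estimate stands; otherwise the common posterior applies, so the residual gap scales linearly with p.

```python
def blended_estimate(p_misread, common_posterior, own_estimate):
    """Belief after an exchange that failed with probability p_misread."""
    return (1 - p_misread) * common_posterior + p_misread * own_estimate

# Residual disagreement from the common posterior grows linearly with
# the chance of misunderstanding:
gaps = [blended_estimate(p, 0.5, 0.9) - 0.5 for p in (0.01, 0.05, 0.10)]
```

Doubling the chance of misunderstanding doubles the justified disagreement; a small chance of garbled communication licenses only a correspondingly small gap.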

  • Michael Sullivan

    “Everyone, it is too easy to say you would agree after you considered all possible relevant material, or after you had made sure both of you had seen exactly the same material. That avoids the harder and more interesting question, since in practice we rarely have time to look at all relevant or exactly the same material as those we disagree with.”

    If all I have to go on is the claim, I have to reconsider my estimates of my conclusion. I don’t have good reasons to believe that my alter-ego has *better* evidence than I do, so my new estimate goes back to no opinion with regard to the question, and presumably my alter-ego will make the same decision. No matter how strong the evidence that I have seen, it is possible for it to be filtered and for him to have an equivalent body of countervailing evidence. I suppose if my evidence had been relatively weak, then I could be convinced by my alter-ego simply because of his conviction, assuming I could be reasonably assured that he hadn’t been brainwashed or altered in some way to make him more susceptible to poor argument.

    If I am strongly convinced by the evidence that I’ve seen, given that I should already (as Douglas K and Carl S. point out) have discounted the evidence somewhat given its purview and the lack of counterevidence, then the farthest I can probably get is back to tabula rasa, so we’ll probably end up there, unless one of us feels his set of evidence is significantly weaker than the other does, or one of us shows the other signs of suspect mental alteration.

  • I’ve been out of touch and I’m sorry to come late to this discussion. I proposed a similar thought experiment a couple of years ago, and here is a version of what I said then:

    Here’s a thought experiment. Suppose you were duplicated, and you and your copy went out and independently worked and did research on X. You come back together after a year, and you simultaneously report your estimates on whether a given X-related project will succeed. One of you says the odds are 1/10, and the other says the odds are 9/10.

    I would think, in this situation, that at least one of you would be extremely surprised and shocked. You would be forced to accept the fact that your copy had accumulated evidence during that year which led him to a very different estimate than your own. And the mere knowledge of that difference, even before you begin talking about the details of what you both learned, will be enough to sharply change your internal estimate of the probability.

    You might even be struck sufficiently by the symmetry of the situation to get past the automatic assumption that you are more likely to be right than your copy. You would understand that it was really only an accident of chance which copy your consciousness was in, that you could just as easily have been the other one, in which case you would hold the opposite view about probability.

    Suppose your experience was that the strength of the evidence you accumulated during the year had not been as strong as you hoped and expected it would be, and therefore it is likely that your copy had better luck than you did in terms of the strength of his evidence. Given the symmetry which otherwise exists, you would then prefer to adopt his position as your new estimate. OTOH if the year had been unusually productive and strong in the quality of evidence it gave you for your beliefs, you would choose not to switch. Since you know nothing about what the copy’s experiences were along these lines, you can’t predict whether he will switch or not; it will depend solely on whether the strength and quality of the evidence he accumulated during the year was above or below average, which is not known to you.

    Even if you do follow these predicted behaviors, you would still be right to argue (or at least vigorously discuss) the reasons why your copy came to such a dramatically different result than you did. You want to pool your information and come up with the best quality estimate based on everything you two learned during the year. And it may well be that the best way to do that is for the one to defend the 0.1 estimate while the other defends 0.9, each based on what they learned.

    While doing this, though, I think mentally you would each be somewhat in the frame of mind of playing devil’s advocate. You’d be pushing a strong position while privately believing that the other side may well be right. As long as you’re honest about it, that seems fine. I think this is the mental stance that Bayesians would have to hold while arguing and disagreeing with each other, and such a practice seems plausible to be the optimal method for consolidating information.

  • Hal Finney: You’d be pushing a strong position while privately believing that the other side may well be right. As long as you’re honest about it, that seems fine.

    Honest as the term is used colloquially, but not as required for Aumann’s theorem. In general people don’t express their beliefs as honestly assessed probabilities; the natural human tendency is to argue for the belief which you want the other person to believe more strongly in, while hiding doubts. Note that it is rational to discount the expressed opinions of someone who may be “playing devil’s advocate”. I suspect that in practice this may be as much a barrier to the real world applicability of Aumann’s theorem as human irrationality.

    Of course, I myself do play devil’s advocate sometimes, and you should be less convinced by my opinions because of this.

  • The honesty of Hal’s scenario depends on whether “privately believing” means that you haven’t also publicly indicated that belief.