Agree With Young Duplicate

On the last two Sundays I gave examples where disagreement is justified, and where it is unjustified.  Today let me try to give a harder example where disagreement is not justified:

Again we have a Star Trek-style transporter, modified to preserve the original while creating an atom-by-atom copy with identical personality, memories, and cognitive style.  But this time the information to create this copy was stored for twenty years, and you find yourself standing outside the transporter as your duplicate emerges. 

You explain to your younger self that over the last twenty years you have reconsidered and rejected some of your most precious beliefs.   Following common aging trends, you are now more politically conservative and believe less in true love, your future fame, and the nobility of your profession.  Your younger duplicate suspects you are biased by self-interest and a need to explain your career failure, while you suspect he suffers youthful overconfidence.   How hard should you try to agree with him?

I say you should try hard to satisfy the rationality constraint "we can’t foresee to disagree."  You started out with exactly his biases, and you both know you may have overcome some, and some may have gotten worse.  He should on average defer to you more than you to him, as he accepts that your belief changes come partly from more information and analysis.  But when he hesitates to fully adopt your views, you should accept that he may have good reasons to do so.  You should random walk your way to common estimates about your relative biases. 
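That "random walk" claim can be made concrete with a toy simulation (entirely illustrative; the function name, deference weight, and jitter level are my own assumptions, not anything from the post): two parties repeatedly announce estimates and each partially defers to the other, with the degree of deference itself jittering as each re-estimates the other's bias. Any single step can move either way, but the disagreement gap shrinks geometrically.

```python
import random

def iterated_exchange(estimate_a, estimate_b, rounds=50, weight=0.5, jitter=0.05):
    """Toy sketch of two parties 'random walking' to a common estimate.

    Each round, both move partway toward the other's announced estimate.
    The deference weight is perturbed each round to stand in for the
    parties re-estimating their relative biases; despite the noise, the
    gap between the two estimates shrinks by a factor of (1 - w) per round.
    """
    a, b = estimate_a, estimate_b
    for _ in range(rounds):
        w = min(max(weight + random.gauss(0, jitter), 0.0), 1.0)
        # Simultaneous symmetric update: each steps toward the other.
        a, b = a + w * (b - a) / 2, b + w * (a - b) / 2
    return a, b
```

Because the updates here are symmetric, the midpoint of the two estimates is preserved and both converge toward it; a real disagreement would of course weight the better-informed party more heavily, as the post suggests the younger self should do.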

  • Carl Shulman

    “you are now more politically conservative and believe less in true love, your future fame, and the nobility of your profession”
    I should apply disagreement theory to all of the empirical questions, but I would categorize some proportion of conservatism and ‘belief in true love’ (specifically, belief that it is valuable, rather than beliefs about its prevalence) as a raw preference, about which there is no fact of the matter. I might shift values somewhat as a counterweight to insufficient fact-sensitivity, but not to the point of full agreement.

  • Robin, my younger self would shut up and listen to my older self. At least any younger self past the age of fourteen. You see, back when we were young enough to believe time travel might be possible, we already thought through this scenario.

  • Eliezer, I’m sure you would evaluate your older self to some degree. If your older self turned out to be a cocaine junkie (as unlikely as that might be), you might not really be disposed to model yourself after them.

    I suppose, though, that we can draw a bright line between the sort of thing that interferes with your future self’s rationality and input that simply changes beliefs without compromising the rational mechanism itself. That is, unless you take an expansive view of the power of memes.

    Eliezer, what if you found that your future self had become religious? (Or is that question the same as “What if you woke up one day with a blue tentacle for an arm?”)

  • Zhong Lu

    I haven’t rejected or reconsidered any of my precious beliefs. By definition a precious belief is something that can’t be rejected or reconsidered. I don’t understand your point.

  • Zhong, you can be emotionally invested in pretty much any belief, and thus it can be said to be “precious” to you. I don’t think the definition of “precious” is standard enough to support what you assert in your comment. But emotional investment in itself doesn’t mean that you can’t, and definitely doesn’t mean that you shouldn’t, be critical of that belief.

  • Eliezer and pdf, you guys addressed the question of how much the young version should defer to the old one, but the question I asked, which I think is harder, is how much the old one should defer to the younger, when that younger one chooses to disagree.

  • If he were, and had always been, a Bayesian, then he would have only updated his beliefs based on incoming information and would have always fully considered the knowledge available to his younger self. In that situation, the younger self should immediately adopt all the beliefs of the older, and the older should not update at all.

    If they are imperfect Bayesians but nevertheless “wannabes”, it is still hard for me to see the younger self stubbornly clinging to his factual beliefs given that his older self calmly states that with additional experience he was forced to update and alter his opinions. The younger self knows that he does plan and intend to update on new information, and does not cling to any beliefs for purely emotional or sentimental reasons. So he is not inherently surprised to find that his beliefs have changed, and should be easily persuaded, as long as the older self does in fact seem to have maintained his commitment to approximate Bayesianism.

    If we go a step further and consider people who don’t take Bayesian reasoning very seriously, who cling to their biases and defend them vigorously, who passionately commit themselves to their positions and hope never to change them, well, such people will probably disagree with themselves persistently and readily.

  • Hal, your conditional answers avoid the hard unconditional questions, and avoid the question as I asked it, about how the old reacts to the young. If the old you finds that the young you is not very convinced that the old you is acting like a Bayesian wannabe, how should the old you react?

  • Robin, I’m tempted to say it can’t happen, but I suppose that would count as avoiding the question…

    Okay, so if it does happen, it means that you are not such a good Bayesian as you thought. Either your younger self is much less Bayesian than you remembered, or you have fallen into the traps of bias yourself without realizing it. It’s kind of hard to say where you go from there.

    In principle you should try to estimate the probability of each of these possible sources of the disagreement, but given that one possibility is that your own mental processes are suffering from bias, you can’t have confidence in your ability to judge the situation. It could provoke an existential crisis similar to discovering that two of your most deeply held beliefs were in fact completely contradictory.

    I’m not sure what I would do in this situation. It would not be as simple as merely coming to agreement. Some difficult soul-searching is in order. Probably as far as resolving the issues between us, it would be best to introduce a third party to act as a mediator and help to give us an objective view as to where the bias lies.

    But if we want to stick to a strict version of the scenario, where we can’t leave the room without giving our best estimates on some factual matter where we disagree (such as the longevity of true love), I would probably reduce my certainty levels on all beliefs substantially and greatly expand my uncertainty intervals since I could no longer have confidence in my mental judgements. I would hope that my younger self would be equally discomfited by this bizarrely unexpected situation and would share my response, so that we would in fact agree.

    But even if he did not, and he persisted in his strong beliefs even given my contrary statements, I would not take that as convincing evidence that he was right. My own uncertainty about my rationality must extend to serious doubt about the rationality of my younger self, hence I can no longer view us as mutually rational and the premises of Aumann’s theorem do not apply. His persistent certainty in this situation could be because he is right and I am senile, but it could also be because the flaw of significant irrationality has lurked unknown within me all these years. So I would disagree with him if that was his view in this situation.

  • Hal, if your young self is stubborn, your question is how much to take this as a sign of irrational bias, versus a sign of strong relevant info. To evaluate how reasonable this is, what we need are some likelihood ratios. I’d like to see a concrete simple model in which we can explore these issues.
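One minimal version of the likelihood-ratio model asked for here (my own sketch, with made-up numbers and a hypothetical function name): treat "my younger self is irrationally biased" and "my younger self has strong relevant info" as competing hypotheses, and update on the observation that he stays stubborn.

```python
def posterior_bias(prior_bias, p_stubborn_if_biased, p_stubborn_if_informed):
    """Bayes' rule on the hypothesis that the stubborn party is biased.

    prior_bias: P(he is irrationally biased) before observing stubbornness.
    p_stubborn_if_*: how likely he is to refuse to defer under each
    hypothesis; their ratio is the likelihood ratio doing the work.
    """
    joint_biased = prior_bias * p_stubborn_if_biased
    joint_informed = (1 - prior_bias) * p_stubborn_if_informed
    return joint_biased / (joint_biased + joint_informed)

# Illustrative numbers: if stubbornness is common under bias (0.8) but
# rarer when driven by genuine info (0.3), a 50/50 prior moves to ~0.73.
print(posterior_bias(0.5, 0.8, 0.3))
```

The point of writing it out is that the answer turns entirely on the two conditional probabilities, which is exactly where a concrete model of "how stubborn you each act under various irrationality scenarios" would plug in.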

  • Robin, the problem is not just that your young self is stubborn, it’s that he is unexpectedly stubborn. I thought I knew him, and (assuming I thought I was rational back then) I expect him to be convinced by my superior knowledge and experience. I thought I fully took into consideration his views as I absorbed new information, and I expected him to understand and accept that. Rationality would seem to demand this of him.

    The fact that he is refusing to do so means that irrationality is present, either in him or me (or both). So it cannot just be a matter of “strong relevant info”. It is a question of irrational bias, and a priori there is probably a better than 50-50 chance that I am the one who is irrational, since at a minimum my memory of my younger self is wrong (if he is the one who is irrational) and it is possible that the very foundations of my reason are corrupted.

  • Hal, yes, the question is who is how irrational; so we want a model of how stubborn you each act under various irrationality scenarios.

  • Consider a modification based on your example from last week. Let your younger self be from just one day ago, a day which you spent researching some issue. Your “younger self” is revived and you tell him what you learned. I see no rational reason why he should refuse to believe what you say and accept your conclusions as his own.

    And if he does irrationally refuse, from your perspective as his (one-day) future self, that does not exactly reflect well on you, does it? You can hardly accept his failure to think rationally without casting doubt on your own rationality.

    I don’t see how models of your own irrationality are going to be that much help in these scenarios, because once you have called into question your rational abilities, you can’t be confident that you are reasoning correctly about your models. Instead, I think you need to seek help from third parties who can hopefully give you objective guidance.

  • Hal, don’t give up so easy! Models can be informative even when all their assumptions aren’t completely realistic. Of course that will be easier to believe in this context once one of us comes up with a plausible specific model.

  • zhong lu

    pdf, if a person gives up one of their “precious beliefs” then by definition that belief wasn’t “precious” in the first place.

    Common example: X had faith in God, but lost it later in life. By definition, he never had true faith in God in the first place. (I’m not using a personal example here)

    We should be very careful about what to believe in. But once we’ve chosen a belief, we should stick to it.

  • Zhong Lu

    Haha, I reread your post and I still don’t understand the specifics of what you’re trying to say. The whole “Bayesian” thingymajiggy is like totally over my head.

    But I think I get the general gist of what you’re saying: what you’re trying to say is that “the young dude stepping out of the time machine should listen to the old dude with the more experience.” Is that right?

    If that is what you’re trying to say, then I have to disagree. I define rational as “beneficial to your personal well-being” and the young dude’s views are just as “rational” as the old dude’s views.

    For example: “Tommy at age 15 believed in God. Old, future Tommy at age 34 does not.”

    The young Tommy’s belief is rational because his faith got him through the drudgery of teenage life- i.e. his faith was beneficial to his well-being and therefore his faith was “rational.” The old Tommy’s lack of belief in God is also rational because he no longer needs God for his well-being and hence there is no rational reason why he should keep his faith.

    Thus neither side is being “irrational” and so they should agree to disagree.

  • Robin, the scenario of my younger self not updating is strange enough that I would need to know the specifics in order to decide how to react. It’s not quite in blue-tentacle territory, but it would be very surprising. Both of us have had repeated experience with racing far ahead of our younger selves, and both of us have thought through the scenario in advance.

  • PS: I point out that the older version of myself should have strictly more evidence. What reasoned basis could my younger self have for disagreement? What could he possibly know that I don’t? And he knows that. Sixteen-year-old Eliezer may not be able to prove Aumann’s Agreement Theorem, but he’s still a damned sharp rationalist by ordinary standards.

  • rcriii

    I say you should try hard to satisfy the rationality constraint “we can’t foresee to disagree.” … You should random walk your way to common estimates about your relative biases.

    Since you are positing that there should be agreement, aren’t we really trying to work out the mechanism by which we do the “random walk”? In this case I suggest that it comes down to the practicalities of the matter. Which disagreements are most expensive for him to concede, versus me? I cannot make him stop dating GMU students without alienating him, so I may let it go. He cannot make me like the Bee Gees again, so he agrees to stop listening in my presence. And so on.

    Yes, this is probably a sneaky way of ‘agreeing to disagree’, but it seems efficient.

  • Stuart Armstrong

    I see no reason at all to prefer my older self’s interpretation to my younger self’s – unless we believe that people’s bias decrease as we get older (it’s not a question of knowledge, but of bias – a very different thing. Specialists often accumulate knowledge and bias equally).

    My older self will gain more from the interaction, though – he knows much more about how trustworthy and reliable his younger self is, than vice versa.

    So they should set up a random walk to a common estimate, but the older self should put a narrow confidence value on that estimate, and the younger self a much wider one (using a “probabilities of probabilities” approach here).
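This asymmetric-confidence suggestion can be phrased as inverse-variance weighting (a standard statistical trick; the sketch and its numbers are mine, not the commenter's): each side reports a mean and a variance over the common estimate, and the merge weights each mean by its precision, so the narrow interval dominates while the wide one still nudges the result.

```python
def precision_weighted_merge(mean_old, var_old, mean_young, var_young):
    """Combine two estimates by precision (inverse-variance) weighting.

    The party with the smaller variance (narrower confidence interval)
    gets proportionally more weight; the merged variance is smaller than
    either input, reflecting the pooled information.
    """
    w_old, w_young = 1.0 / var_old, 1.0 / var_young
    merged_mean = (w_old * mean_old + w_young * mean_young) / (w_old + w_young)
    merged_var = 1.0 / (w_old + w_young)
    return merged_mean, merged_var
```

For example, an older self at 0.7 with variance 0.01 and a younger self at 0.3 with variance 0.04 merge to 0.62, much closer to the more confident party.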

  • Tom Crispin

    Given that older brains have degraded functionality relative to younger ones, even if both younger and older selves are perfect Bayesians and as rational as Robin wants ’em, we have insufficient data to answer the question.

    As a concrete example, master chess players get stronger with increased knowledge and experience, and weaker (at some point) as they age. Even if the question is as simple as whether white or black stands better in a particular position, who defers? And speaking from experience, you aren’t aware of your decreased ability until that arrogant young bastard blows you off the board.

    FWIW, the experience with chess software is that raw calculating power (the ability to calculate more completely and accurately) is usually more important than better evaluation functions (more knowledge, less bias, better priors). But not always.

  • TGGP

    Being young myself (the me of twenty years ago was only a few months old), I’m not in a position to answer the question Robin posed. However, people have told me that I seem “pre-maturely old” or “born too late” (as Saint Vitus put it) in my beliefs. Part of this has been due to my dislike of the young (a psychologist might say this is my unhappiness with my own youthful ignorance projected onto my peers). But the end result is that I do not believe in “true love”, do not expect any future fame, and do not believe that what I plan on doing for the rest of my life (computer programming), or most other professions for that matter, is noble. My political beliefs might be described as conservative (if the choice were between that and progressivism), except that I have far too cynical a view toward government to be anything but libertarian, and at the same time lack the optimism/hope necessary for a “joyful” or “activist” (i.e. not defeatist) libertarian. So the interesting question to me is: how would I react if the me from twenty years in the future resembled not the rational, more experienced adults I had tried to imitate, but the foolish youths I looked down on? Would I be more inclined to believe that utopian socialism will work if we all really believe in it and work together? Would I think that I could accomplish anything I set my mind to and really make a difference for the common good (though accepting the existence of such a thing as the “common good” would take some persuading)? I can’t honestly say. It would be something like Eliezer’s tentacle.

  • Many of you seem to find it hard to believe that you would in fact disagree with an older/younger version of yourself, even though you all constantly disagree with the many people around you. I suspect this is wishful thinking, and that the same cognitive processes that produce ordinary disagreements would be hardly diminished in this context. But of course to see if I am right we will have to wait for this technology to be developed.

  • Stuart Armstrong

    But of course to see if I am right we will have to wait for this technology to be developed.

    Working right on it! No, seriously, I agree that there is no reason that my older and younger selves should get along at all. The benefit of the experiment is that it sets up a situation (like the first Star Trek clone) where it is less easy to reject the other person’s opinions out of hand, and really forces us to accept that their points of view must be as objectively valid as our own. We can’t rationalise them away as we can do in general conversation.

  • TGGP

    Disagreeing with your future self is different from your younger self because you have some memory of what you were like when you were young, whereas the future is an open question we can only speculate about with certain traits Robin mentioned being probable. I feel more confident in disagreeing with my older self though now. I came to my beliefs because (although I likely didn’t have the kind of data from studies that Robin has seen and might be referencing) it was apparent to me that younger and dumber people tended to lean a certain way and older, more experienced ones leaned the other way. Certainly there are some young, green-behind-the-ears types that think more like older types than some of the older types themselves, but the correlation is persuasive enough for me. The older me could just be a fluke, and for all I know the me of even later could realize what a wrong turn I’d taken and go back to a more sensible position.

  • Those of you who are declaring that you would be comfortable to disagree do not seem to appreciate the force of the apparent symmetry here. It is not enough to say “well he might be wrong and I might be right.” It is not even enough to point to some superior feature you might have, as the other you might well grant that superior feature.

  • Daniel Janzon

    Original poster: You should random walk your way to common estimates about your relative biases.

    If this has any resemblance with negotiation, it might not be a good idea. A highly regarded book in negotiation theory, Getting To Yes by Fisher & Ury, strongly advises not to haggle. Instead one should agree on general principles for evaluating what’s fair (or what’s true in this case). And then work from there.

    The idea is that haggling tends to lead to random results, or favor the best manipulator.

  • Daniel, we are not talking about negotiation here, but perhaps the lack of random walks in such situations is in part due to people treating it like a negotiation.

  • rcriii

    So what is the mechanism for the random walk? Say for example I tell my younger self that he should do more of activity X, and he replies that even after evaluating my 20 years of history he is unwilling to put more time or effort into X. How do we resolve this?

    Do we roll dice? Flip coins? Bid with bias (“I say you have a 20% stubborness bias” “I’ll see your 20% and raise you 10% for your obvious cynicism” …)?

    With a negotiation, if there is bias to the process (I’m more manipulative, or you have better info, etc), won’t Bayesians try to account for that?

  • Rcriii, we are just talking about inference; each person listens and then concludes that it is rational to change his mind.