On the last two Sundays I gave examples where disagreement is justified, and where it is unjustified. Today let me try to give a harder example where disagreement is not justified: again we have a Star Trek-style transporter, modified to preserve the original while creating an atom-by-atom copy with identical personality, memories, and cognitive styles. But this time the information to create this copy was stored for twenty years, and you find yourself standing outside the transporter as your duplicate emerges.
Rcriii, we are just talking about inference; each person listens and then concludes that it is rational to change his mind.
So what is the mechanism for the random walk? Say for example I tell my younger self that he should do more of activity X, and he replies that even after evaluating my 20 years of history he is unwilling to put more time or effort into X. How do we resolve this?
Do we roll dice? Flip coins? Bid with bias ("I say you have a 20% stubbornness bias." "I'll see your 20% and raise you 10% for your obvious cynicism." ...)?
With a negotiation, if there is bias to the process (I'm more manipulative, or you have better info, etc), won't Bayesians try to account for that?
Daniel, we are not talking about negotiation here, but perhaps the lack of random walks in such situations is in part due to people treating it like a negotiation.
Original poster: You should random walk your way to common estimates about your relative biases.
If this has any resemblance with negotiation, it might not be a good idea. A highly regarded book in negotiation theory, Getting To Yes by Fisher & Ury, strongly advises not to haggle. Instead one should agree on general principles for evaluating what's fair (or what's true in this case). And then work from there.
The idea is that haggling tends to lead to random results, or favor the best manipulator.
Those of you who are declaring that you would be comfortable disagreeing do not seem to appreciate the force of the apparent symmetry here. It is not enough to say "well he might be wrong and I might be right." It is not even enough to point to some superior feature you might have, as the other you might well grant that superior feature.
Disagreeing with your future self is different from disagreeing with your younger self, because you have some memory of what you were like when you were young, whereas the future is an open question we can only speculate about, with certain traits Robin mentioned being probable. Still, I now feel more confident in disagreeing with my older self. I came to my beliefs because (although I likely didn't have the kind of data from studies that Robin has seen and might be referencing) it was apparent to me that younger and dumber people tended to lean a certain way while older, more experienced ones leaned the other way. Certainly there are some young, wet-behind-the-ears types who think more like older types than some of the older types themselves, but the correlation is persuasive enough for me. The older me could just be a fluke, and for all I know the me of even later could realize what a wrong turn I'd taken and go back to a more sensible position.
Working right on it! No, seriously, I agree that there is no reason that my older and younger selves should get along at all. The benefit of the experiment is that it sets up a situation (like the first Star Trek clone) where it is less easy to reject the other person's opinions out of hand, and really forces us to accept that their points of view must be as objectively valid as our own. We can't rationalise them away as we can do in general conversation.
Many of you seem to find it hard to believe that you would in fact disagree with an older/younger version of yourself, even though you all constantly disagree with the many people around you. I suspect this is wishful thinking, and that the same cognitive processes that produce ordinary disagreements would be hardly diminished in this context. But of course to see if I am right we will have to wait for this technology to be developed.
Being young myself (the me of twenty years ago was only a few months old), I'm not in a position to answer the question Robin posed. However, people have told me that I seem "prematurely old" or "born too late" (as Saint Vitus put it) in my beliefs. Part of this has been due to my dislike of the young (a psychologist might say this is my unhappiness with my own youthful ignorance projected onto my peers), but the end result is that I do not believe in "true love", do not expect any future fame, and do not believe that what I plan on doing for the rest of my life (computer programming), or most other professions for that matter, is noble. My political beliefs might be described as conservative (if the choice were between that and progressivism), except that I have far too cynical a view of government to be anything but libertarian, and at the same time I lack the optimism/hope necessary to be a "joyful" or "activist" (i.e., not defeatist) libertarian. So the interesting question to me is: how would I react if the me from twenty years in the future resembled not the rational, more experienced adults I had tried to imitate, but the foolish youths I looked down on? Would I be more inclined to believe that utopian socialism will work if we all really believe in it and work together? Would I think that I could accomplish anything I set my mind to and really make a difference for the common good (although accepting the existence of such a thing as the "common good" would take some persuading)? I can't honestly say. It would be something like Eliezer's tentacle.
Given that older brains have degraded functionality relative to younger ones, even if both younger and older selves are perfect Bayesians and as rational as Robin wants 'em, we have insufficient data to answer the question.
As a concrete example, master chess players get stronger with increased knowledge and experience and weaker (at some point) as they age. Even if the question is as simple as whether white or black stands better in a particular position, who defers? And speaking from experience, you aren't aware of your decreased ability until that arrogant young bastard blows you off the board.
FWIW, the experience with chess software is that raw calculating power (the ability to calculate more completely and accurately) is usually more important than better evaluation functions (more knowledge, less bias, better priors). But not always.
I see no reason at all to prefer my older self's interpretation to my younger self's - unless we believe that people's biases decrease as they get older (it's not a question of knowledge, but of bias - a very different thing. Specialists often accumulate knowledge and bias equally).
My older self will gain more from the interaction, though - he knows much more about how trustworthy and reliable his younger self is, than vice versa.
So they should set up a random walk to a common estimate, but the older self should put a narrow confidence interval around that estimate, and the younger self a much wider one (using a "probabilities of probabilities" approach here).
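The mechanism sketched above can be simulated. A minimal toy model (my own assumptions, not anything specified in the thread): each self reports a point estimate plus a precision (inverse variance), and each round both move a fraction of the way toward the precision-weighted average of the two current estimates. That weighted average is a fixed point of the update, so both walkers converge to it, and it lands closer to the more confident party. The function names and the numbers in the example are illustrative only.

```python
def pooled_estimate(mu_a, prec_a, mu_b, prec_b):
    """Precision-weighted average of two estimates."""
    return (prec_a * mu_a + prec_b * mu_b) / (prec_a + prec_b)

def walk_to_agreement(mu_old, prec_old, mu_young, prec_young,
                      step=0.3, rounds=40):
    """Pull both estimates toward their common pooled value.

    The pooled value is invariant under each round's update, so the
    gap between the two estimates shrinks by (1 - step) per round.
    """
    for _ in range(rounds):
        target = pooled_estimate(mu_old, prec_old, mu_young, prec_young)
        mu_old += step * (target - mu_old)
        mu_young += step * (target - mu_young)
    return mu_old, mu_young

# Older self: estimate 0.7 with tight confidence (precision 9).
# Younger self: estimate 0.3 with loose confidence (precision 1).
old, young = walk_to_agreement(0.7, 9.0, 0.3, 1.0)
# Both end near 0.66, the pooled value: the consensus sits close to
# the more confident (older) party, matching the asymmetry suggested above.
```

Note the design choice: because the pooled value never moves, this "walk" is deterministic rather than random, but it captures the key asymmetry that the party with the wider confidence interval concedes more ground.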
I say you should try hard to satisfy the rationality constraint "we can't foresee to disagree." ... You should random walk your way to common estimates about your relative biases.
Since you are positing that there should be agreement, aren't we really trying to work out the mechanism by which we do the "random walk"? In this case I suggest that it comes down to the practicalities of the matter. Which disagreements are most expensive for him to concede, versus me? I cannot make him stop dating GMU students without alienating him, so I may let it go. He cannot make me like the Bee Gees again, so he agrees to stop listening in my presence. And so on.
Yes, this is probably a sneaky way of 'agreeing to disagree', but it seems efficient.
PS: I point out that the older version of myself should have strictly more evidence. What reasoned basis could my younger self have for disagreement? What could he possibly know that I don't? And he knows that. Sixteen-year-old Eliezer may not be able to prove Aumann's Agreement Theorem, but he's still a damned sharp rationalist by ordinary standards.
Robin, the scenario of my younger self not updating is strange enough that I would need to know the specifics in order to decide how to react. It's not quite in blue-tentacle territory, but it would be very surprising. Both of us have had repeated experience with racing far ahead of our younger selves, and both of us have thought through the scenario in advance.
Haha, I reread your post and I still don't understand the specifics of what you're trying to say. The whole "Bayesian" thingymajiggy is like totally over my head.
But I think I get the general gist of what you're saying: what you're trying to say is that "the young dude stepping out of the time machine should listen to the old dude with the more experience." Is that right?
If that is what you're trying to say, then I have to disagree. I define rational as "beneficial to your personal well-being," and the young dude's views are just as "rational" as the old dude's views.
For example: "Tommy at age 15 believed in God. Old, future Tommy at age 34 does not."
The young Tommy's belief is rational because his faith got him through the drudgery of teenage life, i.e., his faith was beneficial to his well-being and therefore his faith was "rational." The old Tommy's lack of belief in God is also rational, because he no longer needs God for his well-being and hence there is no rational reason why he should keep his faith.
Thus neither side is being "irrational" and so they should agree to disagree.
pdf, if a person gives up one of their "precious beliefs" then by definition that belief wasn't "precious" in the first place.
Common example: X had faith in God, but lost it later in life. By definition, he never had true faith in God in the first place. (I'm not using a personal example here)
We should be very careful about what to believe in. But once we've chosen a belief, we should stick to it.