Philosophy Kills

Philosophy is often presented as a rather useless, if perhaps interesting, type of thought.  Arguably, however, defective philosophies of mind are a leading cause of death today!  Exhibit one, Bryan Caplan:

What disturbed me was when I realized how low he set his threshold for [cryonics] success.  Robin didn’t care about biological survival.  He didn’t need his brain implanted in a cloned body.  He just wanted his neurons preserved well enough to “upload himself” into a computer.  To my mind, it was ridiculously easy to prove that “uploading yourself” isn’t life extension.  “An upload is merely a simulation.  It wouldn’t be you,” I remarked.  …

“Suppose we uploaded you while you were still alive.  Are you saying that if someone blew your biological head off with a shotgun, you’d still be alive?!” Robin didn’t even blink: “I’d say that I just got smaller.” … I’d like to think that Robin’s an outlier among cryonics advocates, but in my experience, he’s perfectly typical.  Fascination with technology crowds out not just philosophy of mind, but common sense.

Bryan, you are the sum of your parts and their relations.  We know where you are and what you are made of; you are in your head, and you are made out of the signals that your brain cells send each other.  Humans evolved to think differently about minds versus other stuff, and while that is a useful category of thought, really we can see that minds are made out of the same parts, just arranged differently.  Yes, you “feel,” but that just tells you that stuff feels, it doesn’t say you are made of anything besides the stuff you see around and inside you.

The parts you are made of are constantly being swapped for those in the world around you, and we can even send in unusual parts, like odd isotopes.  You usually don’t notice the difference when your parts are swapped, because your mind was not designed to notice most changes; your mind was only designed to notice a few changes, such as new outside sights and sounds and internal signals.  Yes, you can feel some changed parts, such as certain drugs, but we see that those change how your cells talk to each other.  (For some kinds of parts, such as electrons, there really is no sense in which you contain different electrons.  All electrons are a pattern in the very same electron field.)

We could change your parts even more radically and your mind would still not notice.  As long as the new parts sent the same signals to each other, preserving the patterns your mind was designed to notice, why should you care about this change any more than the other changes you now don’t notice?  Perhaps minds could be built that are very sensitive to their parts, but you are not one of them; you are built not to notice or care about most of your part details.

Your mind is huge, composed of many many parts.  It is even composed of two halves, your right and left brain, which would continue to feel separately if we broke their connection. Both halves would also feel they are you.  It is an illusion that there is only “one” of you in your head that feels; all your mind parts feel, and synchronize their feelings to create your useful illusion of being singular.  We might be able to add even more synchronized parts and have you still feel singular.

We could also completely stop cell signaling in your mind, and then start it up again, and you would have no memory or feelings of that time.  Afterward, you would still think you were the same you, because your head would have saved all the info needed to start your brain cells talking to each other again.  If we instead moved that info to a new set of parts that then talked to each other the same way, why should you care?  You will still feel, just as you feel when we leave your parts alone, because you never feel your parts!  You have never felt anything other than the signals sent between your cells.  So what could possibly make these new parts only a “simulation,” and your current parts the “real” you?
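
To see the point in miniature, here is a toy sketch in Python (the three-cell “mind”, its update rule, and all numbers are invented purely for illustration, not a model of real neurons): pause a running pattern, copy its state into brand-new parts, resume, and nothing within the pattern can tell the difference.

```python
class Cell:
    """A toy 'brain cell' whose whole state is one number."""
    def __init__(self, state):
        self.state = state

    def signal(self):
        return self.state + 1

def step(cells):
    """One round of signaling: each cell's new state sums the others' signals."""
    signals = [c.signal() for c in cells]
    for i, c in enumerate(cells):
        c.state = sum(s for j, s in enumerate(signals) if j != i)

mind = [Cell(i) for i in range(3)]    # run a tiny 'mind' for a while
for _ in range(10):
    step(mind)

saved = [c.state for c in mind]       # pause: save the info needed to restart
new_parts = [Cell(s) for s in saved]  # move that info into brand-new parts

for _ in range(5):                    # resume both copies of the pattern
    step(mind)
    step(new_parts)

# The signal patterns are identical, so nothing observable from the inside
# distinguishes the 'original' parts from the new ones.
assert [c.state for c in mind] == [c.state for c in new_parts]
```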

What if we moved your info into two different sets of parts?  You could declare one to be the “real” you via some arbitrary definition like the one closest in space to where you were before.  But each of them would feel like they were you, as much as you feel like yesterday’s you, and more than your two disconnected brain halves would feel they are you.  If tightly linked, these two new yous might feel singular, like the two halves of your brain do now.

We have taken apart people like you, Bryan, and seen what they are made of.  We don’t understand the detailed significance of all signals your brain cells send each other, but we are pretty sure that is all that is going on in your head.  There is no mysterious other stuff there.  And even if we found such other stuff, it would still just be more stuff that could send signals to and from the stuff we see.  You’d still just be feeling the signals sent, because that is the kind of mind you are.

Accept it and grab a precious chance to live longer, or reject it and die.  Consider: if your “common sense” had been better trained via a hard science education, you’d be less likely to find this all “obviously” wrong.  What does that tell you about how much you can trust your initial intuitions?

Added: Tyler giggles and Bryan responds.

  • http://sophia.smith.edu/~jdmiller/resume.pdf James Miller

    To promote cryonics it might be best to downplay the downloading possibility and instead talk about how in 20-100 years nanotechnology will be able to restore a frozen body.

    This way you could argue to people like Bryan that if he is frozen and then revived in 30 years he will be more the same person in 30 years than he would be if he continued to live for the 30 years.

    Cryonics is strange enough without adding downloading.

    • http://hanson.gmu.edu Robin Hanson

      My main role here is not promotion, it is truth-telling. You’ll have to wait a lot longer for tech like that, and face much more risk of never getting there.

      • Carl Shulman

        Wait a lot longer? If you think that a world with uploading technology is going to see explosively rapid development (doubling the economy every couple weeks), then the chronological gap between uploading and brain-repair shouldn’t be that long.

      • http://hanson.gmu.edu Robin Hanson

        With faster growth comes a more rapid rate of disruptive risky events.

      • Carl Shulman

        If the distribution of disruptions that wipe out weak helpless legacy beings like cryonics patients and fleshy humans is that bad, being a late upload would seem almost as bad as waiting for brain repair.

        Are you talking about the hope that destructive uploading is developed first, and that you could be one of the first uploads by donating your brain to risky uploading experimentation likely to destroy your brain (if it was refined, there would presumably already be other uploads in play)?

    • Julian Morrison

      Downloading is far simpler. It can be explained in terms of contemporary tech: we slice, dice, stain, and use an electron microscope and image recognition to reconstruct a map of what goes where, and use that to lay out a neural network which we run on a computer. We would need incremental improvements in neuroscience and compute power, plus a lot of money.
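
      As a rough sketch of the last step, assuming the scan pipeline has already produced a weighted connectivity matrix (the random stand-in connectome, the leaky integrate-and-fire update, and every parameter below are illustrative only, not claims about the fidelity actually required):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 100                                 # stand-in: a 100-neuron connectome
      # W[i, j] = synaptic weight from neuron j onto neuron i; in the real
      # pipeline this map would come from microscopy plus image recognition.
      W = rng.normal(0.0, 0.5, size=(n, n)) * (rng.random((n, n)) < 0.1)

      v = np.zeros(n)                         # membrane potentials
      threshold, leak = 1.0, 0.9
      for t in range(1000):
          spikes = v >= threshold             # which neurons fire this step
          v[spikes] = 0.0                     # reset the neurons that fired
          sensory = rng.normal(0.0, 0.3, n)   # stand-in external input
          v = leak * v + W @ spikes + sensory # leaky integrate-and-fire update
      ```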

      Nanotech requires fictional levels of materials technology – and even if we could build it, it would take us decades to learn the intricacies of what worked, and build up a library of patterns that would let us scale the abstraction-level up to “medical nanobot”. And then we’d need to know what to fix, and how, which is another Hard Problem.

      Likely in any reasonable timeframe it’s upload or stay frozen.

  • Grant

    Why didn’t Robin or Bryan mention consciousness? They both seem to be skirting around it. Robin seems to be claiming that our brains are just designed to look at non-brains differently, incorrectly assuming they are not conscious. Without an understanding of what consciousness is or isn’t, this all seems like a bunch of hand-waving.

    Biological organisms exist because they reproduce. Electronic organisms may exist primarily because they can copy themselves. But no one would say an organism’s offspring makes it immortal, so why would we say the same about a copy? Wouldn’t electronic organisms evolve to view their own destruction as a final death, even if they had made numerous copies of themselves?

    I would think the most evolved strategy would be to try to live forever and try to make as many copies of yourself as possible.

    • komponisto

      Why didn’t Robin or Bryan mention consciousness?

      When Robin used the word “feel”, that’s what he was talking about.

      • Grant

        Then the obvious question is, why do some things seem to “feel” but not others? What will happen to Bryan’s “feeling” when his brain is copied and he is shot dead?

        Unless we can answer this, the discussion seems unlikely to reach a resolution (hasn’t this always been the case with arguments over the nature of the mind?). What seems most likely to me is that Bryan’s consciousness is destroyed, then a copy of that consciousness is created, but aren’t we all just guessing at this?

    • michael vassar

      No, think about it, they would evolve to maximize their inclusive fitness, just as biological organisms do, and biological organisms DO evolve to value the survival of their offspring like their own, divided by a conversion factor.

    • Eric Johnson

      > I would think the most evolved strategy would be to try to live forever and try to make as many copies of yourself as possible.

      Grant, one of the most elegant theories about why organisms die is that they can expect with probability f(t) to be killed by another organism by time t. It could be a lion, a sparrow, a virus, or a member of your own species that kills you, but one day it will happen if you don’t die of accident or senescence first. Or at least, it always has been that way so far. Therefore, even if organisms can in principle evolve to repair themselves and so never age — which is highly likely since bacteria do it — it is not profitable to waste energy on doing that. Better to save a little of the self-repair energy by slowly allowing some of the damage to accumulate unrepaired — put the savings into reproduction.
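
      In symbols (a sketch of the standard argument, assuming for simplicity a constant extrinsic hazard rate \mu, so that f(t) = 1 - e^{-\mu t}): survival to age t is e^{-\mu t} no matter how good self-repair is, so expected lifetime reproduction is

      W = \int_0^\infty e^{-\mu t} \, m(t) \, dt,

      where m(t) is reproductive output at age t. The weight e^{-\mu t} discounts late ages toward zero, so repair whose payoff comes only late buys almost no fitness, and the energy is better diverted into raising m(t) early.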

  • Constant

    We have taken apart people like you, Bryan, and seen what they are made of.

    I don’t think threats are any way to persuade Bryan Caplan!

  • http://pendorwright.com Elf Sternberg

    I’m not entirely sure why Mr. Caplan’s blog decided not to post my response.

    Basically, I get the impression that Caplan has either an incoherent notion of personal identity, or one that’s rooted in some nebulous spiritual essence independent of either matter or agency. (If you’re curious about your bent, I recommend Staying Alive, a little quiz that’ll root out your biases about identity.)

    James: I strongly suspect that uploading will become viable much sooner than cryonic revival. I say so because I suspect brain systems must be observed in action in order to be emulatable, in much the same way that weather systems must be observed in action in order to make prediction about them. A successful upload system must make reasonable, bounded predictions about the kind of person you would be like and discard, id-like, the emergent impulses that would not be “like you”; such bounds are most easily determined by observing an operant system and not a static snapshot of one.

    As I said to Mr. Caplan, individuality is a technological limitation, not an edict necessarily enforced by the regularities of nature.

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      It seems to me to be a myth to claim we currently have a coherent notion of personal identity, if by that you mean the subjective conscious experience. I’m not coming at this from a mystical or anti-materialist perspective — but from a perspective of skepticism of overexpressions of certainty. I think some of y’all are seeking Caplan-types as foils because you have difficulty accepting the proper level of uncertainty given our limited current scientific knowledge about the subjective conscious experience.

  • http://yama-mayaa.livejournal.com/16284.html Anton Tykhyy

    I agree with most of Robin’s argument, but does it really answer Bryan’s questions? I feel both of them are right, although in different ways.

    The tricky thing here is the subjective feeling of continuity of self. We experience a break of continuity when we fall asleep in a normal way:

    I lay me down and slumber
    And every morn revive.
    Whose is the night-long breathing
    That keeps a man alive?
    A.E. Housman (1859 – 1936)

    and also when we die in a normal way; because continuity of self is a feeling, it is subject to reaction times, so arguably a person who happens to be at the epicenter of a nuclear explosion won’t notice anything, no subjective break of continuity will take place — their brain will just cease to exist. I think this is the key to the whole argument: if you stop a person’s mind so fast that the mind doesn’t have time to notice the fact that it’s stopping, then both the mind (after it is restarted, possibly in a different medium) and any external observers can subjectively agree that continuity has been preserved. I understand that the current cryonics implementations don’t work this way, which is what Bryan argues.

    However, is continuity such a big deal? After all, we are all used to breaks of self-continuity, even if we don’t usually regard them as such. When I wake up every day, “I” am a different subjective continuity, and it is by sheer force of habit and by the fact that the new “I” has rather detailed “inside” memories of my previous continuities that I identify with them. Suppose a person falls asleep normally and, while asleep, dies of carbon monoxide poisoning or heart failure — the person’s mind will never be aware of this, because its sense of self-continuity is not operational. It just ceases to exist. If it can be somehow restarted, how is this different from waking up? Essentially, a new person (self-continuity) will wake up and say, “Where’s my coffee?”

  • http://yama-mayaa.livejournal.com/16284.html Anton Tykhyy

    Also, if I woke up without any memories of “my” previous continuities (supposing further that they cannot be restored), what would it mean to say that I am me, as commenter Blackadder says in the other thread? In a trivial sense, I would be me because I definitely wouldn’t be you or any other person, but that’s about as far as it goes.

  • Peter Twieg

    Aren’t you using a loaded conception of “death” here? I don’t see any intuitive reason to use it to refer to the termination of all copies of a person instead of referring to each separate termination as a “death”.

    I think my underlying concern is that if you try to refute intuitive reasons for why we should care about psychological continuity, you’ll just have no reasons to care about psychological continuity at all. I could see that argument being made, but I don’t think that’s the direction that you’re headed. Why do you think it’s important that Robin Hanson continues to exist in some form? Because you simply find this outcome desirable? Is this not a function of your own arbitrary intuitions?

    • http://hanson.gmu.edu Robin Hanson

      I can be interpreted as using your preferred definition of “death” in the above. And I don’t see myself as trying to refute caring about psychological continuity.

      • hm

        “Accept it and grab a precious chance to live longer, or reject it and die.”

        no offence, but what can be worse than a wacko with a PhD?

        You think philosophy kills? My inner intuition tells me that science kills too….

  • Jason Malloy

    What would make these new parts only a “simulation,” and your current parts the “real” you?

    The fact that they are two entirely separate interior persons, in the exact same way that monozygotic twins are two entirely separate interior persons.

    When people seek life extension, they are seeking to extend their own lives, not a simulation of themselves. How do we know the hypothetical future simulation isn’t us? Simple. The fact that we could hypothetically upload this same hypothetical simulation of ourselves right now, and yet we wouldn’t share the same consciousness. We would be two entirely different people with two entirely separate private internal experiences of self. When I clone Robin Hanson, neither clone will want to die, because they are two different people. The existence of one clone does nothing to extend the life of or prevent the other from dying.

    Nothing has been solved by the clone solution. The existence of a Robin Hanson clone does not make Robin Hanson live longer. Both Robin Hanson and his clone will now want to live forever. Making yet more Robin Hanson clones, and clones of clones, only multiplies the problem.

    • Zack M. Davis

      > The fact that they are two entirely separate interior persons, in the exact same way that monozygotic twins are two entirely separate interior persons.

      Jason, I’m surprised at you. Monozygotic twins are different people because they’ve had different life experiences. Whereas a perfect whole brain emulation of Robin Hanson would by hypothesis duplicate Robin’s memories and patterns of thought—not at all the same as having an identical twin.

      • Jason Malloy

        Zack, you are missing the point. No matter how similar they are, they are two different people.

        If I create an exact clone of you, including all your memories, we’re still left with two separate people, with two separate conscious selves. If your clone proceeds to push you off a cliff so he can have his/your wife, then you aren’t still alive. You are dead, and he is alive.

        Your conscious self is as completely independent of his conscious self as two identical twins are from each other, or as you are from me.

      • Psy-Kosh

        Yes. But neither one has a claim to being “the one true original”.

        If right now I was paused, my mind state was copied, and then I was resumed (and the copy embodied and activated), then I think it wouldn’t be accurate to say that one is the original and one is the copy. Rather, I’d say the “me” from before the mind scan was performed had a future that split into two branches.

        As soon as we begin diverging in experience/etc, then we’re different.

        Until then… “which one am ‘I’?”

      • Jason Malloy

        Psy-Kosh,

        Even granting perfect physical duplication (which goes considerably beyond the claims being made, much less whatever best-case plausible realities one can posit), one Robin Hanson still experiences death as Robin Hanson, and the existence of the clone does nothing to change that in any way. Simulation has solved nothing.

        According to the Many-Worlds interpretation this kind of multiplicity of self is already the case and there are infinite Jason branches. This means there are already an infinite number of universes where Robin Hanson has achieved immortality. But even assuming this interpretation is true, this hardly helps the Robin Hanson of our reality who, at best, will only live another 50 years.

        This is entirely analogous to the clone non-solution. An immortal Robin Hanson computer simulation no more extends the life of the Robin Hanson participating in this thread, than an immortal Robin Hanson in multiverse #3454546666848848.

        If you were going to pay money and waste your time, just to try and ensure that an alternate version of your conscious self could have a better life somewhere out there, then don’t bother. Just believe in the Many-worlds interpretation instead. It’s the same thing only it’s free.

      • Constant

        Jason,

        Even granting perfect physical duplication (which goes considerably beyond the claims being made, much less whatever best-case plausible realities one can posit), one Robin Hanson still experiences death as Robin Hanson, and the existence of the clone does nothing to change that in any way.

        No, he does not. It is exactly as if Robin Hanson had a soul and the soul were somehow moved to the duplicate. The “soul” is of course simply the religious name for the personal identity of Robin Hanson, and the idea that there is a “true” personal identity is incorrect.

        We consider Robin Hanson the “one true” Robin Hanson simply because there aren’t any duplicates challenging him for the title. If Robin Hanson is duplicated, then the only reason the duplicate is not the “one true” Robin Hanson is that there is another Robin Hanson, i.e. the original. But by the same token, neither is the original one the “one true” Robin Hanson any more. If Robin Hanson is paused, duplicated, and the original destroyed while paused, and the duplicate unpaused, then Robin Hanson has survived just as really and truly as if his soul (to use the obsolete concept to explain my point) had been transplanted into the new body. Robin Hanson is not at that point dead with a mere other person, a mere duplicate, deludedly thinking he is Robin Hanson. No – that is Robin Hanson.

        Just believe in the Many-worlds interpretation instead. It’s the same thing only it’s free.

        It’s similar, but it’s not the same thing, because there is much less control over the conditions probably faced by your surviving versions if you simply leave your survival in the hands of manyworlds, which does not, after all, care about you. Imagine how much pain many people are in when they are at death’s doorstep, and now imagine being immortal and always at death’s doorstep, constantly dying but constantly surviving because in some universe you manage to live another second. Eventually you might get mangled to nonexistence anyway a la mangled universes.

      • Jason Malloy

        Constant,

        The “soul” is of course simply the religious name for the personal identity of Robin Hanson, and the idea that there is a “true” personal identity is incorrect.

        Again, which one is the “true” Robin Hanson is irrelevant. They have two entirely separate conscious experiences, and one of them dies right on schedule. The Robin Hanson that gets that experience just happens to be the one participating in this thread, and not the robo-clone.

        “If Robin Hanson is paused, duplicated, and the original destroyed while paused, and the duplicate unpaused, then Robin Hanson has survived just as really and truly as if his soul (to use the obsolete concept to explain my point) had been transplanted into the new body.”

        If you have already duplicated Hanson, then the clone experiences the continuity of being Hanson whether you delete the original or not. That does not mean you didn’t terminate the original Hanson and his personal conscious existence.

        Dying in your sleep is not the equivalent of not dying!

        “It’s similar, but it’s not the same thing, because there is much less control over the conditions probably faced by your surviving versions if you simply leave your survival in the hands of manyworlds, which does not, after all, care about you.”

        What’s care have to do with it? Many worlds = all possibilities. There is already an alterniverse where Robin Hanson takes a drug and gets to live 10,000 years. The immortal multiverse Robin Hanson is just as authentically Hanson as the immortal robo-clone Robin Hanson (not to mention the immortal multiverse robo-clone Hanson).

        Why waste money buying a separate conscious upgrade of oneself, when such simulacra are already naturally abundant? Either way, the duplicates don’t, won’t, and can’t improve your personally experienced quality of life. So it’s just a waste of money, which, of course, does detract from your own quality of life. You could use that wasted money to, say, take a vacation in Brazil.

      • Constant

        Jason,

        If you have already duplicated Hanson, then the clone experiences the continuity of being Hanson whether you delete the original or not. That does not mean you didn’t terminate the original Hanson and his personal conscious existence.

        You are assuming that there are separate entities, calling one “Hanson” and the other “the clone”. There aren’t two separate conscious entities. In the scenario I described, at any given time there is only one conscious entity, and that conscious entity is Hanson. You did not terminate Hanson, you left Hanson alive. All you did was to prevent Hanson from splitting into two entities.

        If you unpause both the duplicate and the original, then Hanson becomes two people.

        If you create the duplicate and then destroy the duplicate without ever unpausing it, then no second conscious entity ever comes into existence. But the same thing is true if you create the duplicate and then destroy the original without ever unpausing it. Once you create the duplicate, then it no longer matters which one you destroy before unpausing the other one: the outcome is the same.

      • Jason Malloy

        In the scenario I described, at any given time there is only one conscious entity, and that conscious entity is Hanson. You did not terminate Hanson, you left Hanson alive. All you did was to prevent Hanson from splitting into two entities.

        No, you duplicated Robin Hanson and then murdered the original. You are trying to hide the murder behind the duplication, but you can’t do that. Even if their experiences are simulated to be identical, they are two entirely separate conscious entities. Introduce a novel stimulus to one, and it will not be perceived by the other. Their internal senses of self are therefore as completely shut off from one another as yours and mine. There is no Quantum Leap (starring Scott Bakula) of conscious self from one body to the other, just because you equalize their experiences, or dispose of one before their experiences diverge. Killing one of the Robin Hansons (be it the original or the duplicate) ends the independent experiential existence of this Robin Hanson, whether you duplicate his mind or not. Cloning Robin Hanson in the year 3000 means another similar entity gets to live as Robin Hanson in the future, but this has no further implications for the lived experience of the Robin Hanson in this thread. He is as completely cut off from the mind of the future Robin Hanson as he is from my mind, and will receive no tangible benefits from this arrangement. Not even a postcard from the future.

      • Constant

        Jason,

        No, you duplicated Robin Hanson and then murdered the original. You are trying to hide the murder behind the duplication, but you can’t do that.

        I know what your interpretation is. You don’t need to explain it to me. It is a very standard, though wrong, interpretation, and repeating it is unnecessary. I know what it is.

        Meanwhile, what I am trying to do is not prove that my interpretation (which is, I am very sure, Robin’s interpretation – or something close to it, at least closer than yours) is correct (which it is), but simply to make you aware of it. I am doing this because your responses previously seemed not merely to be a comprehending denial of that interpretation, but a complete failure to grasp that there is any such interpretation, since so far all you have done is to keep hammering your (wrong) interpretation, as if it were merely something that Robin had not thought of. You wrote originally:

        If I create an exact clone of you, including all your memories, we’re still left with two separate people, with two separate conscious selves.

        and variations thereof, over and over and over again. Now, if you were aware that there is a different interpretation of the same events, an interpretation known to many and believed by many (such as, maybe most famously, Derek Parfit, but by no means just him), then I think a better response for you to make would be something like, “I am aware that you believe this other thing, but I find it absurd” – at the very least. Instead you are writing as if your interpretation had not occurred to Robin, that it had slipped his mind (that the clone is and always will be a separate person from Robin) and that all you need do is keep repeating it until it finally occurs to him. Which is not the situation, I am very sure.

      • Jason Malloy

        Constant,

        “I am doing this because your responses previously seemed not merely to be a comprehending denial of that interpretation, but a complete failure to grasp that there is any such interpretation, since so far all you have done is to keep hammering your (wrong) interpretation”

        My first comment and all subsequent comments have clearly offered arguments and thought experiments explicitly designed to challenge a single consciousness interpretation of self and clone, e.g.:

        “How do we know the hypothetical future simulation isn’t us? Simple. The fact that we could hypothetically upload this same hypothetical simulation of ourselves right now, and yet we wouldn’t share the same consciousness. We would be two entirely different people with two entirely separate private internal experiences of self.”

        To this you added the argument that the two clones are different conscious actors in my illustration only because they were allowed to be conscious simultaneously, and their differing experiences split their subjective states.

        This is incorrect. Neither the order nor the overlap of their conscious states, nor the differences in their experience, takes away the independence of their conscious experience. It only changes how similar they are as people.

        Let’s say I take Robin Hanson (RH) and hook him up to a Matrix machine where I control all his sensory input. Then I make an exact clone of Robin Hanson (RH-C), complete with all his memories, and hook him up to the same machine and feed him identical sensory input. RH is lying down in bed 1 and is hooked up to Matrix helmet 1. RH-C is lying down in bed 2 and is hooked up to Matrix helmet 2.

        Here we have two Robin Hansons who are identical in every way – their thoughts are perfectly synchronized – but they are lying down in two different beds and are hooked up to two different Matrix helmets.

        According to your theory they are two identical people with one shared conscious mind. According to my theory they are two identical people with two completely independent conscious minds.

        Here is how we show my theory is correct, and yours is incorrect: At any time I can diverge their experiences (and thus reduce their similarity) by introducing any kind of dissimilarity into their controlled sensory environments. The minute I add the woman in the red dress to Matrix helmet 1, RH has experienced something that RH-C has not. According to your theory this is when their shared conscious mind diverges. But your theory violates causality. Obviously they were two separate conscious minds before I introduced the woman in the red dress, otherwise how did I feed the sensory input into one mind, but not the other? Your theory makes this logically impossible. The ability to send one brain private sensory information is predicated on the fact that it is consciously independent from the other brain.
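
        A toy version of this causality point in code (the deterministic “mind” below and its inputs are invented purely for illustration):

        ```python
        class Mind:
            """A toy deterministic mind: its state folds in everything perceived."""
            def __init__(self, state):
                self.state = state

            def perceive(self, stimulus):
                self.state = hash((self.state, stimulus))

        RH = Mind(state=42)    # the original, in bed 1
        RH_C = Mind(state=42)  # the clone, in bed 2

        # Identical sensory feeds keep their thoughts perfectly synchronized.
        for stimulus in ["hallway", "agent", "blue dress"]:
            RH.perceive(stimulus)
            RH_C.perceive(stimulus)
        assert RH.state == RH_C.state

        # But every input is addressed to one mind or the other. The very fact
        # that the red dress *can* be shown to RH alone presupposes two separate
        # receivers, even while their contents were identical.
        RH.perceive("woman in red dress")
        assert RH.state != RH_C.state  # divergence starts now; the twoness was always there
        ```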

        Therefore there can be no Quantum Leap (starring Scott Bakula) between the two brains. If they were two independently conscious beings before I introduced the woman in the red dress, then terminating RH before I introduce the woman in the red dress does not allow Robin Hanson to “live on” in RH-C. It simply murders Robin Hanson, but preserves a being very, very similar to Robin Hanson. It does not matter if I “delete” RH before or after I introduce the woman in the red dress; I’ve already established that they were two independently conscious entities before I introduced dissimilarity.

        Of course, the cloned Robin Hanson feels every bit as Hanson-y as the deleted original, but this has no bearing on the original, who is just as dead as if his clone had never existed. If the independent and unperceivable existence of identical beings who outlive us is “immortality,” then cloned robot versions of ourselves are superfluous. The many-worlds interpretation already predicts an infinite number of these beings– who are equally useless to us, but don’t require any money or effort.

      • hm

        “Monozygotic twins are different people because they’ve had different life experiences.”

        This is untrue. From the moment of cell division in the womb they are already two different persons with no life experience yet whatsoever.

    • ChrisA

      Jason is correct, there will be many many very close physical copies of Robin in existence (either in the multi-verse and/or in the infinite universe), and Robin knows this, so really the desire of Robin to invest in cryonics so that “a Robin Hanson might exist in the future” is illogical.

      Obviously a fear of death is an evolutionary artifact; entities that did not fear death did not tend to reproduce. So I think, in reality, the desire of Robin to have copies of himself around is an emotional desire, embedded in the human psyche by evolution. I bet that if I offered Robin the choice of an immortality pill or cryonics followed by certain resurrection as a clone, he would not be indifferent between the two choices, supporting the emotional driver, since his logical argument is that both are the same. Emotions are not logical by definition.

      • mjgeddes

        If MWI is true there is even more reason to invest in cryonics, because without life extension virtually all of these copies of Hanson in the other branches will die as well. If the loss of one Hanson is a tragedy, the loss of many is even worse, so all the more reason for cryonics!

        For the identity puzzles, you have to remember that there is no sharp line between different branches or copies, and *all* the ‘copies’ are Hanson. Even when there is little *causal* interaction between branches due to QM decoherence, abstract things can still be transferred across worlds without violating the laws of physics.

        Example:

        It is possible to effect bank transfers across QM branches – you can shift money back and forth between copies of yourself (here is a simple proof):

        Simply buy a lottery ticket that you know is run on a QM randomizer (most big lotteries are nowadays); you are in effect coordinating all the losing copies of yourself to transfer a bit of money across the multiverse to a ‘winner’ you in alternative branches.

        Money can also be moved back and forth through time (and hence across QM branches), taking out a loan is equivalent to ‘withdrawing’ money from the bank account of your future selves.

  • Thomas M. Hermann

    You underestimate the amount of information necessary to capture *you*. There is the obvious set of information contained in the physical patterns in your brain. But, there is also the set of information that is contained in your DNA. Furthermore, *you* are designed and acclimated to operating in the medium of your brain. The medium of *you* is significant. The interplay between your DNA programming, the medium of your brain and the experiences that shape your pattern are significant. There is more to *you*, Robin, than you think.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    typo: elections -> electrons

  • Adam C Forni

    I am in your camp, Robin, but certainly the technology has a way to go. The “Hanson Equation” of cryonics that Caplan mentions – is this available?

  • Jay

    “If we instead moved that info to a new set of parts that then talked to each other the same way, why should you care?”

    Here’s where you lost me. A computer simulation doesn’t contain parts that talk to each other in the same way as the parts of a brain do. It contains modulated electric fields (usually interpreted as numbers) that interact by a very different mechanism in a very different environment. You may be able to structure the electric fields in some way that, through some ingenious but nontrivial mapping, corresponds to the activity of a brain. It is far from obvious that that simulated brain would be equivalent to a human brain in feeling or identity.

    • http://hanson.gmu.edu Robin Hanson

      Of course something that’s different isn’t exactly the same. The question is what reason do you have to think that those differences matter?

      • Jay

        Well, for starters:

        * Human beings can’t be edited. Software can.
        * Human beings can’t be exactly copied. Software can.
        * Human beings are individuals, with unique memories. This peculiarity is not likely to last long in software.
        * A human brain is only functional when wired into human sense organs and muscles. The changes necessary to use the senses and capabilities of a computer network directly are unprecedented.
        * Human brains evolved to manage human bodies in a struggle for reproduction. Relatively few humans have any facility with abstract reasoning or math. If simulated brains did feel, they would likely find their circumstances extremely frustrating. Their major drives (food, sex, sleep, etc.) would be completely irrelevant to their digital circumstances.

      • Jay

        Forgot a big one. Digitized humans, if they existed (whether as “real people” or as philosophical zombies), would share the world with ordinary humans. Ordinary humans don’t like the thought that something is both different from them and smarter than them. Ordinary people (not all, but enough) would hate them, and probably destroy them.

      • Jay

        One more. Real people’s moods and behavior are influenced by all sorts of biological factors, including but not limited to diet, exercise, external temperature, fatigue, satiety, hormones, oxygen levels in the bloodstream, etc. A simulated brain will not experience these factors (but may have its own anomalies associated with, for example, power supply fluctuations). My guess is that the present models do not even try to model the neuron as a living system, but reduce its role to that of an electrical switch.

  • http://lesswrong.com/lw/r9/quantum_mechanics_and_p Eliezer Yudkowsky

    Discussions about cryonics often disintegrate into the Personal Identity Wars Part XXIV, but it’s not even all that large of a factor in the cryonics version of the Drake Equation. If you’re that attached to your “particular atoms”, then the kind of advanced molecular nanotechnology required to do cryonics revival at all, could easily enough rebuild each neuron out of the “original atoms” that went into it.

    I’m kind of wistfully sad about the fact that no one remembered or linked to the huge sequence and discussion previously on Overcoming Bias (now at Less Wrong) wherein I tried to discuss these issues in sufficient thoroughness to finally lay the Personal Identity Wars to rest – at least that part of them dealing with the physically impossible notions of “original atoms” or “copies versus originals”.

    It seems that while some “philosophical” issues are indeed resolvable (especially if you’re lucky enough to live in a universe that fortuitously happens to run on configuration spaces instead of tiny billiard balls), the discussion required to resolve them is so long that people will just go on offering their same local intuitions and watching them clash, over and over. Even those who know about the long discussion won’t bother to link to it or bring it up in conversation, because they expect that the other parties won’t be interested enough in “philosophy” to follow long arguments – or will perhaps be offended at the suggestion that there is knowledge they don’t have about the subject, since everyone knows that philosophical questions are matters where personal opinion is sovereign.

    (The same complaint would also apply to the classic discussion of personal identity in Parfit’s “Reasons and Persons”, if anyone is going to complain about my own work not being sufficiently mainstream.)

    Still, let’s at least link to the Cryonics page on the Less Wrong Wiki, which links to posts at both Overcoming Bias and Less Wrong. Mightn’t that save us all at least a little effort?

    • http://hanson.gmu.edu Robin Hanson

      The parenthetical comment was an allusion to your QM and identity sequence; I’ve now added a link. Honestly I don’t think most folks like Bryan have a “same atoms” concept of identity, and grokking QM is also a bit much for such folks.

      • http://yudkowsky.net/ Eliezer Yudkowsky

        Fair enough. I guess I’m depressed at how much these conversations just end up restarting over and over without building incrementally. I guess I have no right to complain about that to someone who teaches economics.

    • Jason Malloy

      Yudkowsky: “If you’re that attached to your “particular atoms”, then the kind of advanced molecular nanotechnology required to do cryonics revival at all, could easily enough rebuild each neuron out of the “original atoms” that went into it.”

      Hanson: “Bryan, you are the sum of your parts and their relations. We know where you are and what you are made of; you are in your head, and you are made out of brain cells that send signals to each other.”

      Yudkowsky and Hanson are seriously tilting at their own windmills. Nothing about Bryan Caplan’s response whatsoever indicated skepticism of materialism.

      He certainly didn’t make some juvenile claim about “original atoms,” or ghosts in the machine.

      • michael vassar

        Caplan does make claims about non-deterministic ‘free will’.

      • Carl Shulman

        Caplan explicitly defends non-materialist dualism about consciousness, libertarian free will, and moral realism. Robin and Eliezer both know this.

      • Jason Malloy

        Caplan explicitly defends non-materialist dualism about consciousness, libertarian free will, and moral realism. Robin and Eliezer both know this.

        Ugh, Caplan’s follow-up post makes this more explicit. I was not aware Caplan had such an annoyingly mystical world-view.

        Regardless, the arguments in his original post are correct within a materialist framework, and do not require mind-body dualism or essentialism.

    • anon

      This is wrong. Since quantum mechanics is linear, quantum states cannot be copied, only teleported. What if what we call “consciousness” is just an extremely complex entangled state? Then you could “upload” onto a quantum computer, but only by moving your consciousness, not by cloning it.
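
      For reference, a sketch of the standard no-cloning argument: suppose a single unitary U copied arbitrary states,

      U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle.

      Applying this to two states \psi and \phi, and using the fact that unitaries preserve inner products, gives

      \langle\psi|\phi\rangle = \langle\psi|\phi\rangle^2,

      so \langle\psi|\phi\rangle must be 0 or 1: only mutually orthogonal states can share a cloner, and no single device can copy all states.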

      • mitchell porter

        anon, the no-cloning theorem in quantum mechanics only forbids a process which can exactly copy every state in some Hilbert space. Exact copies of some states, or approximate copies of all states (with a known bound on the error), are both possible. So multiple quantum copies of a quantum mind-state might be possible – they’ll just have some small changes. It’s impossible to say at present whether such changes would be insignificant, small but consequential, or identity-destroying, since quantum-mind hypotheses remain so vague when it comes to relating quantum properties to cognitive properties.

    • mitchell porter

      Eliezer, even if I were to agree with you about electrons lacking identity, the relevance of that to discussions like this seems rather remote. When people talk about mind uploading, physical copies, Moravec transfer, etc, they are talking about simulation, replacement, or duplication of mesoscopic entities like neurons and transistors. Are you going to say that there is no sense in which we can track the identity through time even of big objects like those? And if you think that we can track their identity, despite being unable to do so for the very smallest objects, isn’t it this mesoscopic sense of persistent identity which would be relevant here?

      • http://yudkowsky.net/ Eliezer Yudkowsky

        How can a high-level pattern have an ontologically real, consciousness-relevant persistent physical identity when it is decomposable into smaller parts which are known not to have such identities?

        Our universe runs on patternist physics: identical configurations add amplitudes. Non-patternist theories of identity are refuted at all known levels of organization by being refuted at the lowest known level of organization.

      • http://www.rationalmechanisms.com Richard Silliker

        It is a complex mechanism. Complex Mechanism: any given machine that implements its acquisition through its expression and implements its expression through its acquisition – metabolism.

      • http://silasx.blogspot.com Silas Barta

        How can a high-level pattern have an ontologically real, consciousness-relevant persistent physical identity when it is decomposable into smaller parts which are known not to have such identities?

        By being color.

        (Incidentally, I’ve criticized this exact argument when Mitchell Porter makes it.)

      • mitchell porter

        Forget about consciousness for a moment. Suppose you manage to make an atomically precise replica of the Mona Lisa in your basement. Most of us would say that the one hanging in the Louvre is the original. Would you?

      • http://www.rationalmechanisms.com Richard Silliker

        “By being color.”

        Colour is just one of the attributes of constraint on the flow of mass.

      • http://silasx.blogspot.com Silas Barta

        @Richard_Silliker: Yes, color is that. The purpose of my post was to identify color (either phenomenal or detectable) as being a case of “a high-level pattern” that has “an ontologically real, consciousness-relevant persistent physical identity when it is decomposable into smaller parts which are known not to have such identities” in response to Eliezer_Yudkowsky’s question.

      • http://silasx.blogspot.com Silas Barta

        Wait, never mind, ignore the color example; it violates the persistence criterion.

      • mitchell porter

        I suggest that discussion of Eliezer’s specific argument move to LessWrong.

  • http://transhumanism-russia.ru/ Matvey

    Excellent explanation of uploading for the common-sense thinker! But I still have doubts.

    For example, uploading may require not only the connectome of one’s brain, but also the exact position of every spike in flight, which means that we would need to capture every ion in every ion channel and every neurotransmitter molecule in every synapse to make a perfect copy. Is cryonics capable of saving this level of detail today?

    Of course, a lossy frozen copy is much better than digital data and DNA, but as far as I understand we don’t yet know what level of detail is enough.

  • alex

    “What if we moved your info into two different sets of parts? You could declare one to be the “real” you via some arbitrary definition like the one closest in space to where you were before. But each of them would feel like they were you, as much as you feel like yesterday’s you, and more than your two disconnected brain halves”

    I can’t believe that it makes a difference whether someone “feels” they are you – else the madman who thinks, completely sincerely, that he is Napoleon, and truly remembers being Napoleon, would have a claim on actually being Napoleon.

    Moreover, the argument in this paragraph is somewhat at odds with the argument offered in the rest of the post – that “you” are essentially a pattern, and the exact parts this pattern is composed of are unimportant. Your feelings have nothing to do with anything.

    Anyway, I find it quite unsettling that you offer a theory that allows for the simultaneous existence of two “you”s – and that it doesn’t seem to bother you in the least. If you’ve reached a notion of self-identity that allows this, perhaps it’s time to pause and consider if you’ve gone too far.

    I wonder if you would make as many copies of yourself as possible when instant cloning technology becomes available – you know, to maximize your chance of survival.

    • http://hanson.gmu.edu Robin Hanson

      If the madman really did remember everything Napoleon did, and really would respond to others just as Napoleon would, I’d call him Napoleon.

      • Anton Tykhyy

        But would others respond to him as to Napoleon? Probably not.

      • Constant

        They would be mistaken.

      • michael vassar

        They would eventually if he had Napoleon’s human capital.

      • Jay

        Chaos theory strongly suggests that you would never be in a position to judge whether the madman, or the simulation, reacts to every stimulus the way Napoleon would.

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      Alex,
      I share your dumbfoundedness.
      I’ve narrowed it down to three possibilities.
      (1) Not everybody has a subjective conscious experience. Their life is the equivalent of sleep-walking and sleep-talking, or perhaps the black-out periods of drunks. So the distinction we find so crucial isn’t salient for them.
      (2) For whatever reason they like the easy answer. Weird for folks like Prof. Hanson who are otherwise capable of complex thought, but there it is.
      (3) The paranoid solipsistic hypothesis. They’re trying to kill me (I suppose you can insert yourself) by advancing the arbitrarily cloned mind approach over the conservative, preserve-hopefully-anonymous’-subjective-conscious-experience approach.

  • Simon

    I wonder where you all stand on ontology and the personal identity debate: psychological continuity as a person/brain, or biological continuity as a body/organism/animal?

    That would seem a good place to start. At least on this subject I’m reasonably well acquainted with the lit.

  • alex

    Here is another question I have for Robin Hanson.

    Suppose a time traveler from the future proposes a deal for you. He has taken some scans of you and he claims to be able to reconstruct you precisely in the future – molecule for molecule. You believe him. Furthermore, in the future advances in medicine have made eternal youth possible, so that what you get is not just more life but eternal life.

    In return, he will use a shotgun to kill you right now – he needs your body to fuel his time machine, which apparently runs on humans.

    Do you accept?

    • http://hanson.gmu.edu Robin Hanson

      Well I’d want to ask about a few more details, but if those check out, yes, I’m quite tempted by the offer.

      • alex

        All right, here’s another one.

        I am a brilliant psychologist, who happens to be a billionaire. I have kidnapped a random person off the street – he is in a cage in my lab right now. I propose that:

        i. I kill you right now.
        ii. I will convince the person in the lab that he is Robin Hanson.

        I have a copy of your complete diary for the entirety of your life. I will fool that person into really believing that he actually did all of those things. I will make sure that he has your personality, tastes, desires, beliefs, everything – down to the last bit.

        iii. Once I’m done, I will set him free and give him some quite large amount of money (say a billion dollars).

        You believe I will do all of these things. Do you accept?

      • http://hanson.gmu.edu Robin Hanson

        Sounds like the setup of “The Button”; will I kill a stranger if that will get me lots of money? I’ll pass on that for now.

      • alex

        I don’t understand. You will be killing yourself, not a stranger. Surely you have the right to make that decision?

      • alex

        Or do you mean the person kidnapped? If so, I’ll stipulate that I already “killed” him by making him forget all his memories and confusing him about who he is.

      • http://yudkowsky.net/ Eliezer Yudkowsky

        The person in the lab dies in the “convincing” process.

      • alex

        These kinds of “moral” considerations take us away from what I was asking here, which is how things look from RH’s “selfish” perspective.

        If you want, just stipulate any number of things that make this morally OK for the other person: perhaps they agreed to it voluntarily (a way of committing suicide); perhaps I will kill them anyway, so if you have some sort of consequentialist ethics, you should be OK with agreeing; perhaps I have made a separate deal with this person to “resurrect” them later when humanity figures out how to “grow” bodies. Etc etc etc.

      • http://yudkowsky.net/ Eliezer Yudkowsky

        But Earth needs a Hanson now more than it needs a Hanson in fifty years. Surely we must apply a discount rate here!

      • http://hanson.gmu.edu Robin Hanson

        Indeed, but I am human and can be tempted by selfish gains.

  • http://www.sportstwo.com MikeDC

    Robin,
    Would you agree or disagree with the idea that having (for lack of a better term) a “chain-of-consciousness” is essential for life?

    My apologies if this has already been dealt with, but I look at it as follows. Suppose I were cloned or uploaded, but I, MikeDC, remain alive as I’ve always been within this body. I wouldn’t consider my clones or uploads to be me. We may share unique memories and capabilities up to the moment of upload, but after that we grow apart. We’re separate instances of the same program.

    My instance of the program is the only one I care about as “my life”. I might think it’d be nice to have other instances of myself out there, but it’d be in the same way it’d be nice to have an identical twin. At the end of the day, a copy would be another conscious, sentient life. Not my own.

    Thus, if immortality were a movement of my conscious mind from my living (but perhaps soon to no longer be) body to a computer simulation, I’d certainly consider myself to still be alive.

    Perhaps problematically from a philosophical perspective, I think I must consider myself dead if (as in your hypothetical above) the power were totally shut off to my brain, and I was then totally “restarted”. My understanding is that much of what is “me” is stored in “volatile RAM”, and even if my body were brought back to life, or a way were found to access the “non-volatile RAM” in my cryonically frozen brain, much of that would be lost. Thus, the new life created when I’m unfrozen and uploaded would be based on me, but not me.

    • Nick Tarleton

      My understanding is that much of what is “me” is stored in “volatile RAM”

      Actually, long-term memory is stored in stable neural structures.

      • michael vassar

        Probably.

      • http://www.sportstwo.com MikeDC

        But I am more than long-term memory. I’m also stored in short-term and “working” memory.

      • Nick Tarleton

        Not very much of you is. Would you say a concussion, anesthetic, etc. that erases short-term memory kills you and creates a different person?

      • http://www.sportstwo.com MikeDC

        Nick,
        Define “not very much”. Again, I’m not a doctor, but my understanding is that a concussion or most anesthetics don’t completely “turn off” our short-term memory and experience-generating systems. Just as going to sleep doesn’t.

        As a practical matter, I don’t know where I’d place “death”, but some level of continuity in neural activity and experience is surely important. Internally, are you the same person if everything is shut off and turned on 500 years later? I tend to think not. And conversely to your question, where you ask if that “kills you and creates a different person”, would you call yourself “alive” for the 500 years you were turned off? The same when you wake up?

  • Simon

    So Robin, the work on embodied cognition that basically – if I have it right – says the mind is embodied in the whole organic system doesn’t impress you at all?

    Also, mereologically speaking, one could say the brain is just one cognitive subsystem, and if you were to transplant my brain it is my brain in another body, not me.

    • http://hanson.gmu.edu Robin Hanson

      There is of course a sense in which I change every time I move to a new physical or social environment. But I’m still me there.

  • Eric Johnson

    Do all y’all with pro-cryonics pro-materialism intuitions *also* believe that you could be revivified on a “dumb” but sufficiently large computer, such as a vacuum-tube computer, or a computer made of rocks bouncing off each other in a room lined with springs or rubber, orbiting the earth in zero gravity (if that’s possible)? This computer would also be large enough to emulate your virtual world, I guess.

    I’m agnostic on mind-body, but I find panpsychism to actually be the least philosophically irritating form of “materialism.” Obviously it’s not really materialism, properly speaking, but it does comport with materialism in virtually every way. I don’t think this appeals to me because it’s comforting, since it suggests I will still die, without cryonics at least. My atoms would contain the basis of qualia — a trait not yet discovered which must interact with the known traits of matter in some way. But no consciousness need result unless the atoms are arranged the right way — and certainly not my consciousness or my level of consciousness.

    The main problem is that this awareness property of matter needs to interact with the already-understood, already-canonical material properties of matter, in order to allow us the mind-brain connection we appear to have. And physics seems to already work fairly well without this, I guess — I’m not certain.

    • Eric Johnson

      > Do all y’all

      Specifically, I intuit that this thought experiment will mess up your intuitions.

    • Nick Tarleton

      Do all y’all with pro-cryonics pro-materialism intuitions *also* believe that you could be revivified on a “dumb” but sufficiently large computer

      Yes. (I don’t even think I find it intuitively absurd.)

      • Eric Johnson

        Far out

      • Eric Johnson

        I think I could sort of intuit it your way for a second, then I snapped back.

        I once had an epiphany, in the morning after not sleeping, consisting of just being able to intuit more deeply than usual the possible true reality of the “machine world”. But it’s just an intuition of course.

      • Eric Johnson

        > But it’s just an intuition of course.

        Either way, of course, is what I meant

  • http://www.merkle.com Ralph C. Merkle

    The 4th quarter 2008 issue of Cryonics describes a scenario using molecular nanotechnology (MNT) to revive someone who has been cryopreserved (see http://www.alcor.org/cryonics/cryonics0804.pdf).

    This scenario involves analysis and repair of the existing structure, not uploading.

  • http://depravitydepravity.wordpress.com/ holmegm

    Consider: if your “common sense” had been better trained via a hard science education, you’d be less likely to find this all “obviously” wrong. What does that tell you about how much you can trust your initial intuitions?

    It tells me that people with similar narrow talents of symbol manipulation tend to arrive at similar philosophical errors.

    Or perhaps that people who are sorted into groups tend to have, you know, cultures, with commonalities.

    • http://hanson.gmu.edu Robin Hanson

      And what does that tell you?

  • Blackadder

    Prof. Hanson’s post reminds me of the story of the guy who was taken on a tour of Yale, and at the end said he was disappointed because he had wanted to see the university, whereas all he had seen were classrooms and students and teachers and dorms, etc.

  • mjgeddes

    A lot of the philosophical debate is simply not relevant to the issues at hand. Confusions such as the ideas of Searle, Penrose, Caplan, etc. come from viewing the mind as sensitive to *physical* characteristics (the *structural* properties of the system, e.g. melting points, chemical signal speeds, etc.), whereas in fact the mind is obviously about *informational* characteristics (the *functional* properties of the system, e.g. complexity, entropy, etc.).

    The confusion can be cleared up simply by noting that the latter properties are not sensitive to the former (the *same* information processing can occur in things with many *different* physical properties, e.g., brains, computers, water or even beer cans).

    Acceptance of the above two points is all that is needed to secure the case for cryonics.
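
    To make the multiple-realizability point concrete, here is a toy sketch in Python (an illustration only, with made-up names): the same Boolean function realized by two mechanisms with entirely different structural properties. Anything that reproduces the input-output mapping computes XOR, whatever it is physically made of.

        # Two different "substrates" computing the same function:
        # one derives XOR from logic operators, the other looks it up in a table.
        def xor_logic(a, b):
            return (a or b) and not (a and b)

        XOR_TABLE = {
            (False, False): False,
            (False, True): True,
            (True, False): True,
            (True, True): False,
        }

        def xor_lookup(a, b):
            return XOR_TABLE[(a, b)]

        # Functionally identical, structurally nothing alike.
        for a in (False, True):
            for b in (False, True):
                assert xor_logic(a, b) == xor_lookup(a, b)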

    Nonetheless, more complicated philosophical issues such as reductionism, materialism, identity, etc. are still under debate. But really, there is no need for a general ‘philosophy’ – anything of value should fall under a specific, clearly defined subject area (e.g. epistemology is really about symbolic logic and probability). If the discussion is on the ‘philosophical level’ it just means there are ill-defined terms and little can be said that is sensible. Words such as ‘consciousness’ and ‘intelligence’ are merely place-holders for lack of understanding.

    Consciousness is obviously about Information Theory, and the science paper that finally explains it will likely be filled with terms such as ‘Kolmogorov Complexity’, ‘Information Entropy’, ‘Minimum Message Length’ and ‘Mutual Information’, most probably centred around some definition of coordination of multiple sub-agents (as Robin says, we are not a single entity but are likely composed of multiple sub-agents, and consciousness is the concurrency/coordination system of all these different agents – an internal communication/signalling system).

    There are some problems with the informational approach. For instance, it’s not clear why information needs to be correlated with physical matter at all; in mathematics there are things like infinite sets which contain information but don’t appear to have a physical interpretation. So it’s not clear that consciousness is in fact reducible to merely physical parts, but as noted, this is not really of practical relevance.

    Incidentally, if the informational approach is correct, and the laws of physics in principle allow the retrieval of all the information relevant to minds, then cryonics may not be necessary; a super-intelligence (SAI) in the future might simply be able to retrieve all the information about all the people that have died and reconstruct them anyway (i.e. resurrection!).

    PS: Readers should notice how quickly and simply I was able to cut through to the truth in a few paragraphs (note I explained consciousness in one sentence!). ‘Truth telling’ is very different from ‘looking impressive’, isn’t it ;)

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    I think you’re overconfident, Prof. Hanson, about our state of knowledge regarding the subjective conscious experience. I prefer Prof. Koch of Caltech’s formulation regarding the state of our knowledge to yours.

    I think the more honest answer is: we’re not sure what happens to one’s discrete subjective conscious experience as we leave its more normal arc of occurrence and existence. A twin consciousness performer (or even a twin subjective consciousness experiencer) will not necessarily be an experience I share. To claim authoritatively or absolutely that it is (or that it isn’t) seems to me to go beyond our best data and models. I suspect that it may be a tricky problem to get right to preserve my subjective conscious experience. To treat what could be a tricky problem as a no-brainer (sad pun) could be a very harmful approach, particularly for first-generationers as we hope to be.

    Shame on you, Prof. Hanson. Because you really should know better.

    • http://hanson.gmu.edu Robin Hanson

      I disagree. We also have no absolute proof that the stuff in my room is still here when no one is in the room and no convenient measuring device probes it. Shall we refuse to have an opinion on that topic either?

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        I feel here like I’m arguing color with the colorblind. I accept that it’s not necessary to have an observer subjective conscious experience to be an interactive human, which may be completely determined anyway (free will being an illusion for the observers).

        Do you think it’s possible that, somewhat arbitrarily, a minority of us have this discrete, observer, subjective conscious experience? If I went back through this thread, I think I could separate out those of us who have this experience from those of us who don’t. It’s the ones who (irrationally to the rest of you) make a distinction between the doppelganger who can fool the world into thinking they’re us vs. the vessel or algorithm that our observer experience can actually perpetuate in.

        Other than that, we seem to me to be at an impasse. I find it hard to believe you’re unable to engage, or afraid of engaging, in complex thought on this topic. It seems somewhat more believable to me that I’m experiencing the frustration of describing color to someone who’s colorblind.

        What odds would you attribute to the possibility that the observer subjective conscious experience isn’t universal in functioning, social humans, and that you and others who are confident it exists in any doppelganger that can fool detection technology of its time don’t yourselves have a subjective conscious experience?

      • Constant

        Do you think it’s possible that, somewhat arbitrarily, a minority of us have this discrete, observer, subjective conscious experience?

        We all do, I’m sure. Not just a minority.

        If I went back through this thread, I think I could separate out those of us who have this experience from those of us who don’t. It’s the ones who (irrationally to the rest of you) make a distinction between the doppelganger who can fool the world into thinking they’re us vs. the vessel or algorithm that our observer experience can actually perpetuate in.

        No, you are not arguing with philosophical zombies. You are arguing with people who have exactly the same subjective experiences as you. What’s different isn’t their subjective experiences, but their conceptual apparatus, their theory, and their willingness to bite certain bullets.

        What odds would you attribute to the possibility that the observer subjective conscious experience isn’t universal in functioning, social humans, and that you and others who are confident it exists in any doppelganger that can fool detection technology of its time don’t yourselves have a subjective conscious experience?

        Zero. It’s not a question of subjective experience, it’s a question of accepting certain views about reality and about biting certain bullets as a consequence of that acceptance, however counterintuitive the conclusions are.

      • http://www.rationalmechanisms.com Richard Silliker

        Gentlemen: we sacrifice the whole truth of any given experience for the value to which we are constrained.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        Constant, I’m intentionally not using the term “philosophical zombie” because folks like Eliezer and TGGP (and perhaps you, I’m not as familiar with your opinions on this topic) like to load the term with strawmen, in my opinion.

        My concern is more with detection technology, and the possibility that social, interactive people may not be completely (or even highly) correlated with the observer aspect of the subjective conscious experience. You seem to me to be certain that the correlation is perfect, and that currently imagined detection technology is sufficient, but that certainty doesn’t seem to me to be justified or well-defended.

        This may be more a problem for the minority of us actually experiencing the observer aspect of subjective conscious experience. I don’t expect the colorblind to have much sympathy for people who see color.

  • mike

    Accepting that one’s entire being can be digitally recorded, wouldn’t the act of learning and experiencing be, in effect, killing your past self: deleting parts of yourself and replacing them with new information?

    For you to truly live forever, you would have to make sure that your original data is never modified, and new experiences are only appended to old ones. Your quest for immortality will transform into a perpetual hunger for more informational capacity. Either that, or you’ll need to stop experiencing / existing in time, by shutting yourself off.
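
    A toy sketch of that arrangement in Python (the names are made up for illustration): the original snapshot is never mutated, experience only appends, and the storage cost grows without bound.

        class AppendOnlySelf:
            """Toy model: the original data is frozen; experience only appends."""

            def __init__(self, snapshot):
                self._log = [snapshot]      # the original you, never modified

            def experience(self, event):
                self._log.append(event)     # new information is appended, never overwritten

            @property
            def original(self):
                return self._log[0]         # still intact, at ever-growing storage cost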

    If you can accept that new atoms replacing old atoms in your body doesn’t mean that you are no longer you, then you should also accept that new information replacing old information doesn’t mean that you are no longer you. In other words, you are not the information that you are made of.

  • Daniel Burfoot

    Thought experiment: say instead of cryogenically preserving yourself, you had the option of cloning yourself and raising the clone. Thus, you would have the ability to closely monitor the clone’s development, teach the clone everything you know, tell him the key narratives and principles that guide your life. You also have the option of leaving out all the “junk food memories”, like the stupid, petty things you did in college.

    This route is actually quite attractive to me. This is because I value myself as a collection of ideals, experiences, memories, thoughts, and so on – but some of these are more valuable than others, and some are definite liabilities. By raising a clone of myself, I could emphasize the valuable ideals while dampening out the junk food memories.

    • mike

      Have a kid, or write a memoir. Better yet, do both!

  • http://reason.wikia.com/wiki/User:Rimfax Rimfax

    You guys are arguing about the Chinese Room, only more so. You say that it “understands” Chinese and he says that it doesn’t. Moreover, you say that it can be considered an extension of the identity of the author of the manuals, while Bryan would say that the author’s identity is locked in his meat brain which becomes worm food when his body dies, regardless of what happens with the Room.

    I am somewhere between the two of you. I see no requirement for the meat substrate to define identity, but I do see the need for a process of integration of new thinking material to maintain identity. Merely copying a thinking machine does not extend that identity.

    Just as a brain integrates new material for cells and retasks cells, a brain can task new thinking resources, regardless of their nature. A brain that assimilates semiconductor thinking resources and gradually adjusts to the eventual loss of biological thinking resources would continue to be identifiable as the person whose brain it is.

    • Constant

      Actually, the Chinese Room is about whether there is a mind, not about preservation of an individual identity. There are two separate questions:

      1) If, at death, your brain is perfectly copied into a Chinese room, does the Chinese room have consciousness?

      2) If, at death, your brain is perfectly copied into a Chinese room, and if the Chinese room then has consciousness, is that you?

      As it happens, I think these two questions are very closely linked. I think puzzles about consciousness are very closely tied to puzzles about personal identity, and that fully solving either one is the key to solving the other. This, however, is more a strong suspicion than something I have proven.

  • http://blog.efnx.com Schell Scivally

    Forgive me if I’m mistaken, but I think Robin and Bryan are arguing over the definition of self. I like to think about the scenario you two are talking about in terms of source code. Imagine a brain as a code repository. Interaction with the outside world (a brain’s experience) alters the code slightly, and now we have new source code. Through these incremental changes we fine-tune our program (brain) and carry out this process until death.

    Now if an outside party checks out our repository onto some other machine (body), we would have two separate instances of the same program running in two separate places. Once the two instances have interacted with their respective local systems, they will have become branches of the same trunk – similar yet different code.

    At this point, if the original repository is deleted, the new one lives on. In this regard Robin is correct. The trunk of the original code is active in the new instance and, when running, it is a valid instance of the original. From the point of view of the original repository, however, interaction has stopped and it ceases to be an evolving body of code. Yes, the new instance carries on, but the original knows nothing of this new instance and is completely _deleted_. In this regard Bryan is correct.
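
    A rough sketch of the analogy in Python (the names are stand-ins, since no real version control is involved): copying the “repository” yields two instances that share a trunk and then diverge as each interacts with its own environment.

        import copy

        class Mind:
            """Toy 'code repository': state is the accumulated commit history."""

            def __init__(self):
                self.history = []

            def experience(self, event):
                self.history.append(event)  # each interaction is a new commit

        original = Mind()
        original.experience("life up to the moment of upload")

        upload = copy.deepcopy(original)    # checking out the repo on another machine

        original.experience("keeps interacting with the body")
        upload.experience("keeps interacting with the new host")

        print(original.history == upload.history)  # False: the branches have diverged
        print(original is upload)                  # False: two instances throughout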

    • Anton Tykhyy

      Nice analogy, but source code is not self-aware (so far). Using again the copy-and-destroy-the-original example, how does the destroyed original subjectively differ from the copy? Since I am speaking of subjective difference, it cannot be anything which is not experienced by the original, so it seems that the process of killing — as experienced by the original — makes the difference. Similarly, in the case of the twin paradox in special relativity, it is the period of acceleration which distinguishes the twins and determines which one of them will become older than the other. However, if the original is destroyed so that it does not become aware of its destruction, the subjective difference vanishes, and Bryan’s point evaporates. The original might not know anything about the new instance, but neither does the original know that it ceased to exist.

      • Schell Scivally

        In this thought experiment we can assume that the coded brain is self-aware. If the two repos are modified AFTER the branching, then each repo would have a different state, so they would be subjectively different. If the original were killed before any modifications happened, they could be considered subjectively equal. What I’m saying is that I think Robin is arguing that the two instances ARE subjectively (or structurally) equal, while Bryan is arguing that the two instances ARE NOT objectively (or physically) equal. Essentially it seems they might be arguing past each other over something akin to this: http://en.wikipedia.org/wiki/Relational_operator#Object_identity_vs._Content_equality and could possibly come to an agreement in that they are both supporting valid points.
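
        The linked distinction fits in a few lines of Python: == tests content equality (roughly Robin’s sense of “the same”), while is tests object identity (roughly Bryan’s).

            a = ["memories", "beliefs", "desires"]
            b = list(a)      # a faithful copy of the contents

            print(a == b)    # True: content equality; the copy matches structurally
            print(a is b)    # False: object identity; the copy is a different instance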

  • Anton Tykhyy

    I am flabbergasted. Every single one of us experiences a cessation of consciousness at least 300 times per year, and some people here are arguing as if tomorrow’s you is somehow more you than a faithfully reconstructed/revived/uploaded/whatever you, when actually you can’t even be certain when you go to sleep that your body will eventually wake up! Reasonably sure, yes, by (incomplete) induction; but that’s it.

  • Bill

    Would someone who unplugged an uploaded electronic brain of a dead person be guilty of murder?

    • Doug S.

      If the unplugged upload could be restarted roughly where it left off, then the unplugger is probably only guilty of assault, not murder.

  • toto

    The Star Trek paradox: Teleportation is murder.

    Proof:

    When Scotty beams you down from the ship, you are dematerialised from your original location and rematerialised in another one. Good. No kid ever got nightmares watching that (I think).

    Now imagine that, due to some glitch, there is a short delay in the dematerialisation process – a copy of you is rematerialised somewhere (maybe even in the same room), but you are not dematerialised. So you stare at your own copy for half a second, and then…

    You: “Now wait a sec…”
    Scotty: “Sowwy captan” – *click*
    You: *puff*

    Now how would that not constitute murder of the most blatant kind?

    And yet, if you think about it, the only difference between these two scenarios (one apparently innocuous, the other plainly criminal) is a few seconds’ delay.

    That’s the paradox.

    • Constant

      toto,

      You can alter the Star Trek paradox to prove that you are ceasing to exist constantly and being replaced by duplicates – pod people, if you like. The Star Trek transporter creates a copy of you all at once while simultaneously destroying the original. But you yourself are gradually (via ordinary biological processes) creating a copy while gradually shedding the original molecule by molecule. A similar paradox can be created from this situation. The key to the Star Trek transporter paradox is to introduce a delay into the destruction of the original body so that it remains around even after the copy has been formed; similarly, the key to creating this new paradox is to introduce a delay into the piecemeal destruction of your old body as your old molecules are gradually replaced by new ones.

      We shed our skin one flake at a time, but snakes shed their skin all at once, and the shed skin sometimes looks like a separate snake; this old skin is gradually destroyed by the elements (if it is not eaten by the snake). So there is a delay in the destruction of the snake’s old skin which does not exist for us, since our old skin is destroyed flake by flake as it sheds.

      Imagine that we started shedding intact skin like the snake, but imagine more than this: imagine we started shedding fully intact bodies. Imagine, that is to say, that there is a delay in the destruction of our old body. Now, assuming there is truly a delay in the destruction of our old body, the old body continues to function for a while after we have shed it, and in particular, it continues to be conscious.

      There’s your paradox.

      By the way, the Star Trek paradox was explored in an Outer Limits episode called “Think Like a Dinosaur”. But don’t expect much enlightenment from the episode.

  • Jackson

    This is bizarre stuff… I’ve often pondered this sort of thing, if I’m not going too far off track: being teleported only to find that you’ve not gone anywhere, but there is another you who has indeed ended up in a different location. I daresay there is a film to this effect; I’ve considered it as an idea for a short film. The two persons live out their new separate lives, somehow ignorant of their counterpart, and the film literally bisects (not a new technique), following the two characters… and so on.
    Crazy stuff.

    • http://www.cmp.uea.ac.uk/~jrk Richard Kennaway

      This happens in an episode of Star Trek: TNG. During an emergency beam-up, a teleporter malfunction leaves a duplicate of Riker on the planet. Neither is aware of the other’s existence: the one on the planet thinks he just didn’t get beamed up, and no-one on the spaceship knows about the duplicate. A few years later, the duplicate Riker is discovered and rescued. Everyone considers them to be two different people, and at the end of the episode the rescued one leaves the Enterprise to make a separate career for himself in Starfleet.

      See also many stories by Greg Egan.

      But what threads like this really show is that a lot of thinking simply falls over when contemplating the subject. The debate typically consists of people setting up their favoured intuitions like so many cannons, and declaring the opposite side to be destroyed. It’s worse than Monty Hall.

      • http://www.rationalmechanisms.com Richard Silliker

        You are in surrealism when you find yourself arguing. What else would you expect in this case?

      • http://www.cmp.uea.ac.uk/~jrk Richard Kennaway

        I think you just set up another line of cannons and declared the enemy destroyed.

      • http://www.rationalmechanisms.com Richard Silliker

        Suicide?

        Pogo; “We have met the enemy and he is us.”

        Pogo Possum: a friendly, personable, philosophical everyman opossum. The wisest (and probably sanest) resident of the swamp, he is one of the few major characters with sense enough to avoid trouble.

      • Jackson

        Thanks for your reply, though you may not be aware of this late response… Now that you mention the Star Trek episode, it does sound familiar; I may have seen some of it. I was pretty sure it was hardly an original idea, not that you were suggesting as much.

        thanks

  • mattmc

    Every single one of us experiences a cessation of consciousness at least 300 times per year, and some people here are arguing as if tomorrow’s you is somehow more you than a faithfully reconstructed/revived/uploaded/whatever you, when actually you can’t even be certain when you go to sleep that your body will eventually wake up! Reasonably sure, yes, by (incomplete) induction; but that’s it.

    I agree. There exists the possibility that this is the only day I will be alive and a new consciousness will arise tomorrow. And that tomorrow me will feel like he is the one that has been alive all along. Carpe Diem.

    On the other hand, the brain doesn’t completely shut down or shut off. If it did, I really don’t think it would be me in there the next day, not that any of the new yous would be able to tell. I am the continuity, because that is all I can be.

    • Constant

      If it did, I really don’t think it would be me in there the next day, not that any of the new yous would be able to tell.

      It would be you.

      I am the continuity, because that is all I can be.

      Why would you not be continuously dying and being replaced by a continuum of duplicates? I don’t think continuity gets you the unity you’re looking for; it merely gets you an infinite supply of new people, each thinking they are you because their memories tell them so.

      I don’t think positing continuity gets you out of the puzzle of the persistence of personal identity.

  • http://www.rationalmechanisms.com Richard Silliker

    “What does that tell you about how much you can trust your initial intuitions?”

    Nothing.

  • Pingback: Accelerating Future » Hanson: Philosophy Kills

  • Aron

    A fear of death is encoded into our nature. It would be plausible to not have this concern, but in practice, that would lower the chances of gene propagation. I suspect the wiring that’s been put in place to give us this fear is responsible for much of the complexity of the conversation.

  • http://modeledbehavior.com Karl Smith

    Robin,

    As a general matter I think you’re right. The problem that I see is trying to understand what it is about the physical interactions of the brain that produces “us.”

    Perhaps you have a better sense of the science, but I am not sure what would suggest that a really sophisticated computer is conscious like us, as opposed to supposing that a really unsophisticated set of neurons is conscious.

    That is, how do we know that consciousness arises from the sophistication of the arrangement of materials and not something about the particular compounds that we are dealing with?

    What I would like to see, and perhaps this has been done, is an experiment in which parts of a brain are replaced by electronics and the mind doesn’t notice.

  • A dude

    The assumption that your copy can be “perfect” is quite likely false. Reality is not “80 thousand polygons”. Reality is infinite. You can digitize it onto a Blu-ray disc to fool the eye, but it is still not a perfect copy.

    Technology is likely to arrive at a point where you can start reproducing yourself in some, but not perfect, resolution.

    So the interesting practical question is what level of signal degradation Robin will accept as still being himself.

    The question to ponder if you have already had your second beer (as some commenters seem to have): humans have physical barriers to how much they can abuse their endorphin-induced highs. With a digital self, it is likely to be very easy to get a permanent high with no damaging side effects. Do you have to castrate your digital self to avoid becoming an immortal hedonist?

    • Jake

      I think that your point regarding signal degradation is a really good one. Even if we could make the amount of signal degradation negligible, in the strictest sense the uploaded Robin would not be an exact duplicate.

      A less clever point that I’ve been pondering is the sheer possibility of being a conscious, “human” entity in an electronic medium. The human brain has evolved very specifically to exist inside a human body. For example, more of the brain is dedicated simply to processing light patterns captured by the retina than to any other single function. Massive portions are also concerned with manipulating the physical apparatus that is the human body in various ways. In fact, the share of the brain that we think is responsible for what most people would consider our “personalities” is actually quite unimpressive. And, importantly, there can be little doubt that the workings of this conscious portion are hugely affected by the physical medium in which it is implemented. An uploaded “Robin Hanson” would probably bear little resemblance to its human counterpart.

    • http://entitledtoanopinion.wordpress.com TGGP

      Reality is quantized. If we live in an infinite universe, then reality is infinite, but a subset with finite volume contains finite information.
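
      One standard way to make “finite volume, finite information” precise (an aside; this is the Bekenstein bound, and whether it settles the metaphysics is another question) is that a region of radius R containing energy E can hold at most I ≤ 2πRE/(ħc ln 2) bits, so no bounded physical system encodes infinitely many digits of anything, irrational lengths included.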

      • Constant

        I’m reminded of a science fiction story in which there was a technology that stored vast amounts of information in the exact length of an item. All you had to do was measure the item to many trillion decimal places and the vast information would be right there in the digital representation of the length. A clever idea but not very credible, of course.

      • Eric Johnson

        T, is that really true? Space is not quantized, right? So what if two particles are separated by a distance described by a non-repeating irrational number — won’t that number contain infinite information?

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Could someone with the same point of view as me sort posters in this thread into two categories: (1) seems to experience the observer aspect of the subjective conscious experience, and (2) doesn’t have that experience (is the equivalent of a sleepwalker/sleep-talker mistaken for an awake person).

    I’ll try to do it if I get a chance, but I don’t have the time now.
    I think the acid test is whether they accept something that merely fools another person into thinking it is them as in fact being themselves. If they ipso facto accept that, then I would sort them as not having the observer aspect of the subjective conscious experience.

    • Eric Johnson

      Everyone here makes self-observational statements — so I think you are going to end up using the word qualia.

      • http://www.rationalmechanisms.com Richard Silliker

        Subjective idealism?

    • http://entitledtoanopinion.wordpress.com TGGP

      Do you think there might be a subjective-experience gene? What evolutionary pressures might have given rise to it, and why might it not have reached fixation? It’s not impossible that the mutation first appeared in you, but such a scenario seems unlikely a priori.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        TGGP,
        Well, first of all, we’re starting with the minority of the population interested in discussing solving the problem of mortality through science. Within that population it’s not clear to me that only a minority makes a distinction between immortality for a doppelganger and immortality for whatever vessel can maintain their subjective conscious experience. The portion of the population that makes that distinction would, I intuit, share that “subjective-experience gene”.

        I trust I will not have to add a hundred footnotes explaining every short-hand term or allusion here for third parties.

      • http://entitledtoanopinion.wordpress.com TGGP

        You’re discussing the minority of the population with a certain interest; I’m discussing the entire population, because I have no a priori reason to think that subset is unusual in any particular way when it comes to frequencies for a hypothetical subjective-experience gene. If large numbers of people have it, I would expect that the gene began its march to fixation long before there was anything like a feasible science of radical life extension. I had not made explicit that my question was partly intended to ask how reasonable it is to believe there is such a gene corresponding well to our hypothetical, but we can also stay in the hypothetical, assume it does exist, and make guesses about its nature.

      • mitchell porter

        Within your population of interest (scientific immortalists), I think these differences may have more of a conceptual than a phenomenological origin.

        Let’s consider three differences of opinion that have been seen on this site and elsewhere:
        - willingness to believe that the number of persons in a given physical situation is not an objective fact
        - willingness to believe that a replication of the causal structure of one’s cognitive system (or whatever) in some new physical medium will produce a consciousness like one’s own
        - willingness to identify with such a copy of oneself, to the point that the elimination of the physical original is no longer regarded as death

        The first opinion is a response to the “conscious sorites paradox”, the inability to come up with an objective physical criterion as to whether a particular physical configuration contains a conscious mind. It is not a tenable position, and to reject it as untenable you have to take your own subjectively revealed existence seriously, and reason a little on this basis; but for someone to hold such an opinion does not mean they are completely lacking in introspective self-awareness, it just means they haven’t taken those further steps, or refuse to do so, perhaps because they believe very strongly in a certain physical ontology.

        The second opinion is a theory about the nature of the physical correlate of consciousness, namely, that it consists of a modular causal structure of some sort. The problems with that view have been enough in the end to make me advocate a monadic interpretation of quantum physics in which the self is a single bundle of entanglement rather than a cluster of disjoint parts, but that’s a fairly intricate debate. The first opinion is much easier to rebut, since you only have to admit that if a stream of conscious experience exists, it exists definitely, for that opinion to be falsified. Rebutting the second opinion requires a lot of wrangling about “subjective unity” and “binding relations”, it’s extremely hazy in places, and so the second opinion remains, with some reason, materialist orthodoxy and far more common than the first opinion.

        The third opinion is something else again – a willingness to identify with one’s copies. As seen elsewhere on this thread, some people say that consciousness is already periodically terminated by sleep, and the elements that give you your individuality, like memories, beliefs, and desires, also change during your life, and so it’s not such a big thing to regard a copy as another you, to regard the capacity for restoration from a backup as a form of immortality, and so on. Most of those people would be willing to admit that each copy has its own stream of consciousness, that copies will increasingly diverge as they experience different things, that when one copy dies it is indeed the local end of consciousness, and so on – they are just making a different value judgment from the person who only cares about the continuation of the original stream of consciousness. There is no evidence here of an impaired faculty of self-awareness (though I guess that would help).

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        Mitchell,
        Interesting comment, my response to a couple salient parts.

        “some people say that consciousness is already periodically terminated by sleep, and the elements that give you your individuality, like memories, beliefs, and desires, also change during your life,”

        Only some people? I think that’s the widespread consensus, of which I’m a part. That’s an important element in thinking carefully about consciousness, not a fact that leads inexorably, or even strongly, to “copy X or substrate jump Y must retain one’s (highly punctuated) conscious stream”.

        As we depart from areas where the punctuated conscious stream seems to persist (something closer to normal human life arcs), it seems to me we’re being riskier and less conservative about preserving it.

        I’ve had these discussions in great detail on my blog, and the natural end point (until scientific insight changes the details) seemed to be with me acknowledging Carl Shulman’s point that discussion of solving mortality becomes messy under sustained analysis, but that I’m still motivated to maintain my earthy (not referring to the planet here) life despite the shitstorm of paradoxes and impossibilities that stand in the way after the little things like heart disease and dementia are solved.

        “a willingness to identify with one’s copies.”

        That seems supremely arbitrary to me. I sort persistence desires into two basic categories. The first is the persistence of one’s observer subjective conscious experience (call it theatre of consciousness, threshold minimum of qualia, or something else that captures that experience). The second is a desire for any other arbitrary thing to persist. It could be copies of oneself to a particular degree of accuracy, it could be one’s genetic line, it could be the concept of “social justice” or one’s nation-state. For people like me in the first category it’s all fucking arbitrary if I’m not around to observe it. The strangeness of the fact that there are a bunch of people in the second category who conflate the first category with a doppelganger, without careful thought about the problem of sufficiently good detection technology, leads me to think that the observer subjective conscious experience is an artifact that doesn’t correlate 100% with social, interactive humans.

        At this stage, I feel like I’m repeating myself to people with very different intuitions, who are perhaps neuroanatomically different in some way our technology can’t yet identify. The two different perspectives are worth intense study as natural phenomena, in my opinion. And my life (in the sense of the word meaningful to me) may depend on it.

        And TGGP, I don’t have time to answer you in depth; no, I’m not endorsing a single-gene hypothesis. I accepted your shorthand for it, but you seem to be latching on too strongly to that one, god knows why, you’re a GNXP reader. I’m interested in exploring an as-yet-undetected biological difference, but I’m not endorsing a narrow explanation like “theatre of consciousness is a single gene mutation”, so I don’t see the point of us using that limiting framework to discuss an interesting topic.

      • Xplat

        Hopefully_Anonymous:

        Although I found the premise that exponents of a Parfit-like theory of identity lack a theater of consciousness to be absurd, I decided to humor you by reflecting on the experiences engendered by several sensory stimuli immediately available to me, including:

        - the redness of a red plastic Glad container lid (a little off from true red, similar to the color of watermelon flesh)
        - the bumpiness of a textured doorknob as I ran my finger over it
        - the sharp, but very minor pain of poking my thumbtip with a piece of metal
        - the coolness of water as I drank it

        I also spent some time attempting to experience qualia unrelated to me personally, including qualia of other people and of my future selves, although, having no idea how one would go about this, it fell more under the heading of ‘sitting around being confused’ than ‘an effort’.

        My view on personal identity allows for my identifying with
        - my brain restarted after a prolonged period of inactivity
        - a sufficiently accurate copy of a future version of my brain (if my original brain remained active, or there was more than one copy made, my *present* self would identify with all of them, although some of them might eventually come not to identify with each other very much)
        - probably an upload of me, although there remains a slight doubt over whether there might be a meaningful difference between a simulation of a conscious being and a conscious being. However, if a (sufficiently high-quality) upload of me would be a mind at all, I am quite certain that it would be *my* mind.

        I am also certain that if being uploaded were ever found to be a mistake it would be a reversible one, at least for a while. While perhaps I would have missed out on some conscious experiences during my time as an upload, a reembodiment (or other adaptation to a suitable substrate) would be able to begin having *my* experiences again, and could even (re?)experience what I missed as memories. I evaluate the chance of there being a difference as less than 5% anyway.

        I am fairly certain that had I participated upthread you would have judged me as likely to be lacking in conscious experience, especially since I would have been unlikely to have mentioned a 5% doubt earlier. Yet nonetheless my qualitative experiences exist and are particular to me.

        If I had needed any convincing to begin with, I would probably be at least somewhat convinced by now that your theory was false. However, I have no idea how/if any of this will convince you.

        I can only assure you that you are being paranoid.

  • John Maxwell IV

    What if I have a conjoined identical twin brother who I share everything with, who shares everything with me, and who agrees with me on everything? Should I treat this twin brother as part of me? I’m inclined to think not. If we’ve been conjoined all our lives and someone disjoins us and my twin walks into another room and sees a picture that I can’t see, I won’t know what the picture was of. So we’re two different minds.

    A collection of parts is a mind to the extent that it’s got internal communication going on. If you separated someone’s right brain from their left brain, to my way of thinking they would very quickly qualify as two separate minds. If you brought them together they would likely learn to communicate with one another again and become one mind once more.

    So I’m only concerned with an uploaded version of myself being present in the future to the extent that I am interesting to others. Although I might be very interested in a gradual transition between a biological brain and a digital one: maybe a button I could push that would replace 1% of my brain’s mass with digital circuitry over a period of 5 minutes, and another button that would reverse this change, so I could play with it and convince myself that the digital me was still the real me. Probably it would be most useful to tell me what component of me had been made digital, so I could try out thinking thoughts with that component and ensure that things felt normal.

    • Cyan

      So I’m only concerned with an uploaded version of myself being present in the future to the extent that I am interesting to others

      You are failing to anticipate being the upload, an anticipation you would have just as much right to as the anticipation of continuing in your organic body.

      • Aaron Denney

        To make this point more colorfully, suppose you’re an epileptic, and the doctors want to cut your corpus callosum, which should alleviate the seizures. When you wake up after the operation will you be the left brain, or the right? And yes, there’s plenty of evidence that describing them as two separate entities is more appropriate in those with severed corpus callosums.

  • Pingback: ShrinkWrapped

  • Pingback: Weekend Readings - Ross Douthat Blog - NYTimes.com

  • Pingback: How Many of Me Are There? » Death Is Bad

  • Pingback: New York Times Features Robin Hanson and the “Hostile Wife Phenomenon” in Cryonics | Accelerating Future