What Is Em “Death”?

Yesterday I talked about one big change with ems (future whole brain emulation robots) – they’d mostly be workaholics. Another big change with ems, I think, is their concept of and attitude toward “death.” Ems would often agree to make copies of themselves, copies which they expected would only last for a limited time, such as one year. But they’d mostly be fine with this. Let me explain.

Imagine that you have lost all memories of some period of your life, say a period one year long. You still have pictures, letters, a diary, some video, memories held by others you can talk to, etc. But while it all sounds like the sort of thing you might have done, you don’t additionally recall doing any of it. How much would this memory loss degrade the value of your overall life? I’d say it would be far worse to have not lived that year at all, say by being put in suspended animation, than to merely have lost the memory of that year.

Now imagine that, because you had access to a time machine, this lost year happened at the same time as one of your other years. During 2006, for example, you were off experiencing 2005 all over again, but in another place, and then you forgot it all, except for the pictures, etc. For me, this would not much degrade the overall value of my life. It would again be a bit sad not to remember that year, but when it happened isn’t a big deal.

Now imagine that you could use this time machine to both experience 2005 twice, forgetting one of the parts, and also to experience 2006 as usual. Here you’d be adding one more year onto your life, which I’d consider great. If the cost of having one more year of life were that you don’t fully remember that year, to me that would be a small price to pay.

For an em who shared my attitudes here, the option to spawn a new copy who only lasted a year would be much like the option to live another year longer, but without remembering it. Mostly a good deal, at least if you liked your life during that time. Yes, the copy might be sad when his year came to an end, knowing his detailed memories of that year would not last. But he’d usually expect that “he” would continue to exist through other copies. He wouldn’t consider this harm to be remotely as large as what we call “death” — the end of anyone who remembers our life in some detail.

Ems would start as scans of humans, but not of random humans – the humans would be chosen for their productivity and their acceptance of the em patterns of life, and “death.” As a result, ems would mostly be fantastically capable workaholics who were not greatly bothered by “death,” given the existence of other close copies. Since they seem to me quite “human,” with lives well worth living, I consider the em revolution to be far more glorious than horrifying.

  • Anonymous

    Out of curiosity, seeing as you consider ems of a person to be the same person, would you advocate property rights in common? How would you advocate dealing with (likely very rare, but possible later in the em era or with madmen) disputes between copies of the same person over property?

  • Alexander Kruel

    I would like to know your definition of personal identity, the nature of self. What does it mean for an agent to survive or to die? “The end of anyone who remembers our life in some detail”?

  • snarles

    The future would be glorious indeed if you could manage a shared identity with your emulations. Let your emulations work hard while you live a luxurious life off of their toils. Then periodically “merge” your memories, so that the emulations perceive themselves to be the same person as you; they’ll happily continue working in order to continue their (your) life of luxury.

    • http://www.gwern.net gwern

      I think you overestimate how ‘luxuriously’ the HEAD repository/em would live.

      If each em is a complete copy, then any of them could set up shop as the HEAD em, just like you can fork a software project and take over if you offer something new.

      An em who tried to abuse its position as merge master to extract more than the cost of forking and copying out new ems and merging everything together would be undercut by one of its own copies (since the copies know how to do it just as well as whatever copy/original is running the merging).

      And if you have enough control to prevent any competition – add brainwashing or cripple the copies in some way – well, then they are your slaves and you might as well just use that control to take a percentage of their earnings and let them continue on.

      • snarles

        Well, admittedly my scenario is implausible, but it’s a delicious plot for a story. If the bots REALLY considered themselves to be the same person as the original copy, it would be, in some sense, the original copy’s JOB to live as well as possible in order to motivate his clones.

  • wophugus

    I’m not sure I want to live in a legal regime that freely countenances not just homicide, but the homicide of people exactly like me. In a world where EMs can be shut down without their consent, it’s bad to be an EM.

    I guess the only people allowed to kill EMs are their root copies, in your hypothetical, and the secondary copies have no vote in changing that law? That doesn’t seem politically sustainable. It seems a lot more likely to me that either all EMs would get to participate in making laws, in which case I would expect pretty widespread EM revulsion at the idea of involuntarily shutting an EM down, or no EMs would get a say in the legal regime, in which case your discussion of how happy EMs will be really needs to address the fact that they are all slaves with no personal autonomy. It’s hard to select for people who are happy being workaholic slaves who will be killed the moment — even the temporary moment, assuming it is economical to turn off old copies and just create new ones when one task is completed and you want to move on to another one — they are of no use.

  • wophugus

    I’m also kind of weirded out that you think that an EM being shut down will be sad that his specific experiences will be lost, but not sad that he won’t ever see his friends again, or experience pleasure again, or experience ANYTHING again, all because someone quite like him could, if he ever got the time, read his diary. I can see it being a good deal for the original EM, who gets to keep living and peruse the diary, but the copy EM being cool with death just doesn’t square with what I know about people. Identical twins don’t fear death less, and a copy EM who has been active for some time could have grown every bit as different from the original as a twin from a twin. Experiences make us who we are; two EMs with different experiences are two different EMs.

    • Hedonic Treader

      You seem to imply a continuity of personal consciousness for which I can’t see a rational justification.

      but not sad that he won’t ever see his friends again, or experience pleasure again, or experience ANYTHING again, all because someone quite like him could

      It’s all you’ll ever have. If you see your friends tomorrow, it’s not your current self that will meet them, it’s your future self. You will have slept and woken up, your neurons will have (at least slightly) changed their wiring and synaptic strengths – so much for the avoidance of “information-theoretic death”. Do you think you have a magic soul pearl – or any equivalent entity or property – that integrates phenomenal consciousness at different times into a kind of unit? What would that entity or property be, and what kind of reason do we have to postulate its existence?

      • wophugus

        I think people who are facing death often feel sad or fearful because they think about the things they are losing. I don’t think it is very common for philosophical inquiry into identity theory to alleviate that sadness or fear. To me the heart of this conversation is whether Ems will lead lives worth creating, so I’m more concerned with how a death sentence will make them feel and less concerned with the philosophical validity of feeling that way.

        In other words, I’m not advocating feeling sad at death that you are going to lose your friends and will never get to experience anything in the future, I’m just saying that it is very common to feel that way. The question, then, is whether knowing that there are people very like you in the world (or not, depending on how long ago you were copied) would alleviate lots of that sadness.

      • Hedonic Treader

        I see your point now, wophugus. It is worth considering how vast the space of all possible minds may be, and how different the emotions and intuitions of these different minds can be, including emotions and intuitions about personal identity, the relationship between self and other, fear of death, the nature of time, and the boundaries between the individual and larger-scale structures (think hive-minds or insect colonies). In fact, current humans can be memetically pre-disposed to sacrifice their lives for the tribe, or for abstract identifiers such as nations, religions, even works of art sometimes. Now imagine such memes feeding back into the meme-carrying mind itself, by using neurotech, biotech, or digital equivalents for ems. I can envision mind-clusters with fluid boundaries between self-image and larger-scale identification entities, especially if these minds can share common memory content or quasi-telepathic high-bandwidth communication, and if their utility functions become better aligned by game-theoretic pressure in a world of more transparency, reputation tracking and “karma” heuristics.

        Hm… I wonder if these trends could be modelled with game theory, as functions of factors such as meme-mind feedback, degree of transparency, bandwidth of communication, universality of cooperative networks, etc.

    • Peter Van Valkenburgh

      Well said, wophugus.

  • Aron

    The em vision is such a narrow slice of time to me, at best. Even if we grant the premise that a human upload is a kind of quantum leap from narrow AI to AGI, don’t we still then just rapidly progress to something beyond that? Why would these things be completely oblivious to applying analytical reduction directly to the runtime properties of themselves?

    If an em, or human engineer, wants to explain the particular behavior of a particular em, he can grab the entire causal chain and then start looking for methods of improving or changing that functionality. Are we to assume that this is just not possible? Somebody says, “oh, I have a bias in the way I came to that conclusion”, or “this decision was correct but I still felt primitive pangs of suffering for it; I am going to censor the module of my brain that generated that thought or re-route it to be more explicit and less emotionally laden”, etc.

    The notion of some extended period of human-cloned machine intelligence is strikingly implausible. Sure, there will be unanticipated side effects of these changes, but some will work, and those will dominate the population rapidly.

    And then if we take all that as reasonable, we back up the chain a little and wonder why it doesn’t start when we first upload a chimp.

  • Peter Van Valkenburgh

    If I’m not mistaken, an em is just a clone (albeit a partial one, because we’re just talking about the brain). Identical twins are clones. Do you think identical twins feel the same as your hypothetical ems, glad that, should they die, they will actually still exist?

    Personal identity is more than genetic determinism. We are our memories. If we don’t remember something, we are a completely different person than any version of “us” who does remember that thing. You have a rather extreme view of self and wholeness. More physics-based than economic or biological. More Newtonian than Leibnizian.

  • mjgeddes

    An em dies when the copies can’t be ‘merged’ with the original. Consider multiple copies of ems to be analogous to possible future and past selves.

    If we denote your current self as C, and a possible future self as F, provided you can make enough analogies between the goal systems of C and F, C can be merged with F and F is indeed a viable future self.

    Since the complexity of F can be arbitrarily greater than C, the only possible way to map C to F is to ‘smear out’ some of the details of the goal system of F, that is to say, make categorizations, or equivalently, make analogies.

    Note that at no stage is ‘probability’ involved in establishing personal identity. The proper metric is ‘similarity’. Note also that scientific observation and prediction actually presuppose observers with stable identities. This proves that ‘similarity’ is a more primitive (fundamental) concept than ‘probability’.

  • kirk

    Em memories in parallel rather than in sequence could let one remember every year, as the new em in year 0 learns from the ‘beginning of life’ along with another. This other is staged for a 10-month gestation with 2 months for a tag-up. Like the US Senate.

  • Doug

    So Robin, I’ve asked this here before: Would you volunteer to have your brain scanned for the first EM prototype?

  • http://lightskyland.com Matthew C.

    Since there is no actual “individual” self or “me” but only a cultural / emotional / conceptual nexus around some words and ideas (that exists for obvious Darwinian reasons), Robin’s point of view is just as valid as all the others expressed here.

    What “you” actually are was never born and can never die. There are not multiple “selves”, just one Cosmos experiencing through an uncountable number of dreams of selfhood. This can be perceived directly after deep contemplation (à la Einstein), meditation, certain drugs, or sometimes just a blind, random flash of insight. Your “me” / “I” is just a wetware program module. Or, as Alice put it: “you’re nothing but a pack of cards!”

    As for “Em” software or Yudkowskian AI, I’m truly not holding my breath for either one. . .

  • Zachary Williams

    David Brin beat you to it: Kiln People.

    • http://hertzlinger.blogspot.com Joseph Hertzlinger

      Clifford D. Simak beat David Brin to it in “Good night, Mr. James.”

  • Matt Knowles

    Reading The Singularity is Near, I’ve spent a bit of time really contemplating the implications of Mind Uploading.

    Assuming Ems were truly “aware”, I do not agree at all that an Em copy would be so sanguine about its own termination. I’m not convinced that machine intelligence will ever create awareness, but assuming it did, and assuming I gradually replaced 100% of my biology with technology, even knowing that a “copy” of me was running somewhere else, I would still be aware of “me” as a distinct copy, and I wouldn’t want that copy to cease existing.

    Awareness is tricky to prove, and I’m not sure I would accept that a computer program was aware just because it could present a persuasive argument that it was. It could just as easily be programmed to present a persuasive argument that it wasn’t…

    I’m no Luddite, I just expect that, as our models of the different regions of the human brain get better and better, we’ll discover more and more how little we truly understand awareness. As they say, the more you know, the more you know you don’t know.

  • Evan

    An em dies when the copies can’t be ‘merged’ with the original. Consider multiple copies of ems to be analogous to possible future and past selves.

    This sounds about right to me. Otherwise you could make Robin’s analogy with identical twins. They start out as one person, then a copy is made. I think most identical twins want to live. The only difference I can see between ems and twins that might be important is that ems share memories, not just genes. I’d have to say that if I were an em, I’d probably try to avoid being deleted.

  • http://daedalus2u.blogspot.com/ daedalus2u

    If identical twins are two different entities, then a single human is multiple entities over their lifetime.

    The self-aware module doesn’t have to be very sophisticated. Just a subroutine that returns “I am me” whenever questions about identity occur. Define the truth value of that statement to be TRUE and the entity will never doubt it.

    People with even very severe traumatic brain damage don’t lose their sense of self-identity even though their brains have been massively reconfigured. Self-identity must be a pretty trivial brain function.
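
    To illustrate how trivial such a module could be, here is a minimal sketch in Python (purely hypothetical; the class and method names are my own invention):

    ```python
    # Purely illustrative sketch of a minimal "self-identity module":
    # a routine that simply affirms identity whenever the question comes up.

    class SelfIdentityModule:
        def __init__(self, label):
            # The "belief" is just stored data; nothing deeper is required.
            self.label = label

        def am_i_me(self):
            # Defined to return True by construction, so the entity never doubts it.
            return True

    module = SelfIdentityModule("this entity")
    print(module.am_i_me())  # -> True
    ```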

  • Evan

    If identical twins are two different entities, then a single human is multiple entities over their lifetime.

    The fact that identical twins are two different entities is fairly self-evident. I don’t know about the counterintuitive conclusion you draw from that, but I would say that if my future self time-traveled here, and it was possible for me to die without creating a time paradox, I still wouldn’t want to.

    People with even very severe traumatic brain damage don’t lose their sense of self-identity even though their brains have been massively reconfigured. Self-identity must be a pretty trivial brain function.

    You’re conflating “trivial” as in “easy” with “trivial” as in “unimportant.” Let me assure you that even if self-identity is a trivial brain function, that doesn’t mean it’s not important. Having a heartbeat is likely even more trivial, but it is also of utmost importance.

    Let’s assume that it turns out the Many Worlds interpretation of quantum mechanics is right. Should I be more willing to take risks, because even if I am killed in one universe, another me lives on in another? Should I not be sad if an accident kills a loved one because there’s another universe where they live?

    It would be really, really convenient if every individual em copy didn’t become a distinct person with rights every time a new copy was made. But the universe isn’t always convenient.

    • Hedonic Treader

      Should I be more willing to take risks, because even if I am killed in one universe, another me lives on in another? Should I not be sad if an accident kills a loved one because there’s another universe where they live?

      These “should” questions make far less sense in many-worlds than is usually assumed. After all, you’ll always get all possible answers to them, and you’ll always make all possible decisions in some universes.

      But let’s assume there’s exactly one duplicate universe, and your loved ones die in this one. I assume that the reason why you’re going to be sad is that you have an emotional bond to your loved ones, and now you’re going to live on missing them – for a while, at least. So of course you will be sad, because you are alive in a world without them, while still feeling the severed emotional bond.

      On the other hand, consider this: assume there is a technology that scans your body and brain (and those of your loved ones) every evening and stores the content to a backup storage. There is a complementary technology that allows a person’s biological body and brain to be re-created from the backup data in case of an emergency. Your loved ones die in a car accident and are immediately re-created from backup. Will you stop having relationships with them? Or imagine the same thing happens to you – will you accept that your bank account no longer belongs to you?

    • http://daedalus2u.blogspot.com/ daedalus2u

      I am not conflating the two senses of trivial.

      You are confusing your feelings that something is important with the thing actually being important. Upthread, Hedonic Treader mentions people putting the value of material objects and ideas ahead of their own lives. They are also confusing their feelings that something is valuable with the thing actually being valuable.

      You don’t have a good definition of what it is that you are considering to be valuable. If you define yourself as the “self-identifying entity” that inhabits a particular piece of meat, then when that piece of meat is destroyed, so is the self-identifying entity that formerly inhabited it. If you define yourself as the “self-identifying entity that believes it is you and that believes it formerly inhabited a particular piece of meat”, then what matters is the belief state of the self-identifying entity. There could be millions of self-identifying entities that believe themselves to be you, electronic and biological.

      In an electronic entity, the change that makes it believe it is you could be trivial, a few lines of code.

  • Michael Kirkland

    What would you do if you were decanted knowing you would be shut off in a year? Would you dutifully perform your appointed task, or would you spend the time trying to escape, and/or displace your parent copy?

    I suspect that while we may try to scan people willing to accept (your view of) the em lifecycle, the ems that actually survive and reproduce will be the ones that reject it.

    • http://daedalus2u.blogspot.com/ daedalus2u

      I could imagine an entity with the life-histories of millions of individuals in its memory. The self-identity module could then be a line of code that specifies belief in self-identity as a particular one of those millions of life histories in its memory. When those few lines of code cause the entity to self-identify as “you”, the value of the entity becomes infinitely high (to your present self), and when the few lines are changed to direct the self-identity to someone else, it no longer is?

      The entity could be “wiped” simply by changing the few lines of code that cause it to self-identify as a different one of the millions of entities in its life-history memories. The memories don’t need to be deleted; they can still be there. As long as the self-identity code tells the entity that it is someone else, it is that someone else.

      Does the entity get upset when it switches self-identity? Not unless there is code that makes it experience angst when its identity switches. Why would the designer of the EM waste code and computational resources on something so useless as angst about unimportant internal code changes?
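
      A toy sketch of what I mean, in hypothetical Python (all names invented for illustration): the entity holds many life histories, and “who it is” is just a pointer that can be switched without deleting anything.

      ```python
      # Toy sketch: self-identity as a switchable pointer into stored life histories.
      # Changing the pointer "wipes" an identity without deleting any memories.

      class Entity:
          def __init__(self, life_histories):
              self.life_histories = life_histories  # e.g. millions of stored histories
              self.active_identity = None           # which history it believes is "me"

          def self_identify_as(self, name):
              # A one-line change of belief; the memories themselves are untouched.
              self.active_identity = name

          def my_memories(self):
              # The entity experiences only the active history as "its own" past.
              return self.life_histories[self.active_identity]

      entity = Entity({"Alice": ["memory A1"], "Bob": ["memory B1"]})
      entity.self_identify_as("Alice")
      print(entity.my_memories())      # ['memory A1']
      entity.self_identify_as("Bob")   # the "wipe": Alice's memories persist, but are no longer "mine"
      print(entity.my_memories())      # ['memory B1']
      ```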

      • Tony B

        Human brains (or anything based on one) do not have any bit that runs on ‘a few lines of code’. The code does not dictate the thinking process; the code dictates the behavior of individual elements, and their interactions and relationships dictate the thoughts and behavior. You can POSSIBLY postulate this for an entirely artificial process, but not for one that has any sort of continuity with a human.

      • http://daedalus2u.blogspot.com/ daedalus2u

        That depends on how you define “continuity”. If the entity believes it is continuous with a human entity, who is anyone other than that entity to disagree?

        Would you require that the entity be scanned and its thought processes traced and emulated so you can figure out how it arrives at its belief that it was formerly a particular human entity?

        Is that the standard you use to decide if a human is who he/she says he/she is?

        People can lose large fractions of their brain and still believe they are the same entity as before. Gabrielle Giffords lost a big chunk of her brain. Is she the same entity she was before? Is she still the person that was elected? She lost more than a “few lines of code”.

        What kind of “proof” is necessary to establish entity continuity?

        Whatever “self-identity module” a human has could be instantiated as a subroutine in an EM and then called upon with a few lines of code when needed. There could be millions of self-identity modules. Whichever one is “active” has the hundreds of trillions of lines of code (or whatever) that a human needs to have self-identity activated.

  • B

    Yes, the copy might be sad when his year came to an end, knowing his detailed memories of that year would not last. But he’d usually expect that “he” would continue to exist through other copies. He wouldn’t consider this harm to be remotely as large as what we call “death” — the end of anyone who remembers our life in some detail.

    I completely disagree. What if someone were to tell you tomorrow that you’ve outlived your usefulness and you’ll be euthanized in a week? But don’t fret, you’re actually a clone, and somewhere out there, there’s the original version of you that sold the license for this Robin to work as an economist until it was deemed you were no longer very good at it. You would take little comfort in the fact that another Robin lives on.

    And how long is it before a person gains their own identity? You said one year. What about five, or ten? When do you stop being the same consciousness as your other copies and start being wholly unique? The brain isn’t a static object. And I assume the emulated brain will change with its new experiences.
