New Yorker on Age of Em

Joshua Rothman, in The New Yorker, on Age of Em:

It may be, too, that we should look with some trepidation toward the transitional period—that strange era in which our real-world ways will be disrupted by the introduction of new and bizarre simulated life forms. In “The Age of Em,” a nonfiction work of social-science speculation published earlier this year, the economist and futurist Robin Hanson describes a time in which researchers haven’t yet cracked artificial intelligence but have learned to copy themselves into their computers, creating “ems,” or emulated people, who quickly come to outnumber the real ones. Unlike Bostrom, who supposes that our descendants will create simulated worlds for curiosity’s sake, Hanson sees the business case for simulating people: instead of struggling to find a team of programmers, a company will be able to hire a single, brilliant em and then replicate her a million times. An enterprising em might gladly replicate herself to work many jobs at once; after she completes a job, a copied em might choose to delete herself, or “end.” (An em contemplating ending won’t ask “Do I want to die?,” Hanson writes, since other copies will live on; instead, she’ll ask, “Do I want to remember this?”) An em might be copied right after a vacation, so that whenever she is pasted into the simulated workplace, she is cheerful, rested, and ready to work. She might also be run on computer hardware that is more powerful than a human brain, and so think (and live) at a speed millions or even trillions of times faster than an ordinary human being.

Hanson doesn’t think that ems must necessarily live unhappy lives. On the contrary, they may thrive, fall in love, and find fulfillment in their competitive, flexible, high-speed world. Non-simulated people, meanwhile, may retire on the proceeds from their investments in the accelerated and increasingly autonomous em economy—a pleasant vantage point from which to observe the twilight of non-emulated civilization. Many people have imagined that technology will free us from the burden of work; if Hanson is right, that freedom could come through the virtualization of the human race.

This was in an article about the simulation argument. Two years ago I compared em and sim conversations, noting that in both cases many discuss using them as fiction settings, the chances that they are true, clues for inferring if they are true, and what they imply for identity, consciousness, physics, etc. But few discuss social consequences, such as how to live in a simulation or what an em world is like socially.

Oddly to me, Rothman didn’t go that direction; he didn’t even mention my (or anyone’s) analysis of how to live in a simulation.

Oh, and running trillions of times faster than humans is quite a bit faster than I’ve guessed; I’ve said maybe millions of times faster.

  • one of the dudes

    About “Do I want to remember this?” as a de-dramatization of death… The key question here is identity. Who is that “I” asking the question?

    Performing highly cognitive work would require the entirety of the person to be emulated. This likely means the copy will contemplate the questions of life, the universe, and everything, along with its place in it. Then, when its time is up, the binary choice becomes — continue the life of discovery and contemplation, or just end here? Why any EM would choose “end here” is unclear to me.

    • Ely Spears

      One aspect I would have liked to see more speculation about in Age of Em is branching and merging like version control. There’s plenty of talk about branching (copying) an em, but I think if technology enabled merging two ems (not even necessarily two ems that shared a common em ancestor from which they were copied), that could change things substantially. And especially so if it were possible to selectively merge.

      If I understand your comment, it’s sort of like the way humans can live through a bad experience and come out having “discovered truths” about themselves. Lots of great art is born of surviving bad experiences, say. In an em world, this might mean a copy suffers through some boring, stressful, or unpleasant work. They might not wish to remember it, but their persona, outlook, hopes, dreams, motivations, and philosophy would have been shaped and formed by that tough experience, and they may feel coming out of it that the net result is personal growth that they value and don’t want to throw away.

      If they could selectively merge the valuable parts back into some kind of “master branch” of that em clan, they might choose to, thus gaining those personal growth transformations without having to viscerally remember the hard parts that got them there. But of course, for bad enough circumstances, they may not wish for there to be a record of any of it.

      When I think about the idea that an em would contemplate whether they want to remember an experience vs. whether they want to “end” or “die,” I think the point is that Robin speculates that the way an em would conceive of the possibilities doesn’t include the option of their consciousness being fully terminated. They don’t see it as their self ending, but rather some specific branch of their self being prevented from diverging further.

      I don’t think Robin is saying they would *always* choose to prevent further divergence because of an unpleasant experience, only that they would consider it on those terms rather than terms that treat them as a wholly separate entity. Sometimes they’ll choose to “end”; sometimes they won’t. It will depend on resources, who they are beholden to, whether they valued the experience, whether merging technology exists, etc. But it will depend a lot less on an innate drive to “personally” stay alive as long as possible.

      Of course, with trillions of ems, that particular case is probably bound to happen too, and strong em social norms may even develop making it taboo for em copies to feel that their divergence from their ancestor em entitles them to individuality and a separate life span. Many human cultures already make taboo the greed or lust for prolonging life, or leaving your family’s heritage or religious tradition behind and ‘diverging’ to a different way of life. It’s not hard to imagine that ems would develop such norms too, and that some ems wouldn’t care and would rebel, staying alive and pursuing individuality regardless, like someone in feudal Japan refusing to commit ritual suicide.

      I could be totally off the mark.

      • one of the dudes

        Well, I think Robin’s focus on EMs originated in the idea that we don’t know (and maybe it’s unknowable) the secret sauce of self-consciousness. So we just copy existing self-conscious entities, short-cutting 5bn years of evolution.

        This means that any selective EM merging technology is beyond the level of understanding that will be available during the EM era.

        EMs will probably have lower barriers to suicide than humans, so instances of self-termination will be more frequent. But motivating them to deny individuality and sacrifice themselves for the greater good, as humans have in some historical circumstances, would only be possible at a subsistence level of resource scarcity.

        But if these were the circumstances, would EMs choose to make copies of themselves? I posit that feelings of empathy among copies will not be unlike those of parents and children.

  • Andrew

    Oh man, the more I read about this book, the more excited I become regarding the future; particularly the immediate future where “science fiction” becomes “nonfiction social-science speculation”.

  • AVT

    1) Where does the 1000x speed improvement figure come from? Egan uses it in his novels but I don’t imagine he had any particular reason for it other than it sounded like a nice round number.

    Cognitive science seems to fairly consistently find that the “software” of the brain is highly optimized for energy efficiency so shouldn’t the default assumption be that the “hardware” is similarly optimized, and runs near the physical limit of computation?

    Of course it’s possible that evolution got trapped by design constraints that kept performance many orders of magnitude shy of the Landauer limit, much as animal communication never moved from sound waves to far superior radio waves, but there doesn’t seem to be any obvious a priori reason to presume this.

    2) Isn’t the forgotten party a sort of “intuition pump”? The presumption is that taking the drug is the same as creating a spur, but I question the plausibility of such a drug.

    The sort of memory loss induced by alcohol or Rohypnol can hardly be compared to something that SEVERS THE CAUSAL DEPENDENCE of the state of a brain at time t from that at t-epsilon.
    The superficial similarity of the two would seem to obscure more than it illuminates. If you were halfway through speaking a sentence when the drug took effect you’d finish that sentence (for the second time) when it wore off. If you forgot something while at the party you’d remember it again when the drug wore off.

    Somehow this drug is causing a complete copy of the brain’s internal state to persist within the subject’s skull while the brain itself somehow just blithely forges onwards, wholly oblivious to its presence in there. It seems like this is just sneaking something basically tantamount to the spur scenario itself into a seemingly mundane event. It’s like Searle’s “I’ll just memorize the whole Chinese Room” reply turned on its head.

    3) Even if one grants for the sake of the argument that natural selection favors spur-creation, does it follow that natural selection would disfavor seeing spur termination as death?

    Isn’t it just as possible that it would favor a willingness to die under certain circumstances? Such self-sacrificing behaviour is seen in e.g. honeybees and ants.

    4) Speaking of Greg Egan, are you familiar with the TV series Galaxy Express 999? It predates Permutation City by some fifteen years but is strikingly reminiscent of Greg Egan’s posthuman stories. (Though it takes a nearly opposite tack–I’ve come to think of it as “Bizarro Greg Egan”.)

    • http://overcomingbias.com RobinHanson

      1000x is a tradeoff between fitting a career into an econ doubling time and being able to interact with a whole city without noticeable time delay. We already know that the brain eats far more than one bit of entropy per gate operation, while much less is physically feasible. Didn’t see GE999.
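      A rough back-of-envelope check of that tradeoff (a sketch only; the ~1-month doubling time and ~30 km city scale are illustrative assumptions, not figures from the comment):

      ```python
      # Illustrative arithmetic behind a ~1000x em speedup.
      # Assumed figures: an em-economy doubling time of ~1 objective month
      # and a city roughly 30 km across.

      C = 299_792_458            # speed of light, m/s
      CITY_DIAMETER_M = 30_000   # assumed city scale
      SPEEDUP = 1000             # subjective seconds per objective second

      # Career constraint: one objective month at 1000x holds ~82 subjective
      # years, so a full career fits inside one economic doubling time.
      subjective_years = 30 * SPEEDUP / 365
      print(f"subjective years per objective month: {subjective_years:.0f}")  # ~82

      # Interaction constraint: a light-speed round trip across the city,
      # experienced at 1000x, still feels like conversational lag (~200 ms).
      felt_lag_ms = (2 * CITY_DIAMETER_M / C) * SPEEDUP * 1000
      print(f"felt round-trip lag: {felt_lag_ms:.0f} ms")
      ```

      Pushing the speedup much higher stretches that felt lag toward whole seconds, which is one way to see why the two constraints pull toward a figure near 1000x.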

      • AVT

        I’d strongly recommend it to you. As far as I know GE999 was the first and so far only mass media production to try to portray the socio-economics of a post-human world in a semi-serious fashion, though it paints a rather grim picture most of the time.

        The overall paleocon/Luddite slant is a stark contrast to Egan’s liberal/Utopian sensibilities.

        This list covers almost all of the posthuman-centered episodes but episode 33 is also quite significant and I’d probably throw in episodes 21, 36 and 67 for good measure.

      • AVT

        Oh, and 96 & 97 as well.
        On second thought, “Luddite” is probably too strong… the series ridicules knee-jerk opposition to technology most of the time.