Hurry Or Delay Ems?

My best guess for the next big enormous thing, on the scale of the arrival of humans, farming, or industry, is the arrival of whole brain emulations, or “ems.” This raises the obvious question of whether we should try to hurry or delay the techs that would enable this change.

I see seven relevant considerations:

  1. Some think subsistence-wage ems an abomination, and so prefer to delay or prevent them. Conversely, others think that vast em numbers times lives worth living makes the em world a good well worth hurrying.
  2. Some want to delay the em transition, to give more time for its serious consideration. Others want visible em efforts to start sooner, fearing that serious consideration won’t start before then, and expect an earlier start to give a better total discussion. Still others think that, as with nanotech, early public anticipation of such events tends to make them go worse.
  3. The richer and more capable our civilization gets, the lower its chance seems of being extinguished by most disasters. Ems would make us richer faster, and ems survive biological disasters especially well.
  4. During the em transition our civilization is especially vulnerable to collapse, or to a central power grab. This transition is less disruptive when the last tech to mature is computing power, and most disruptive when that last tech is cell-modeling. This argues for hurrying scan and cell-model tech, relative to computing tech.
  5. Many fear that a single self-improving AI will suddenly grow vastly in power and take over the world. Some want to delay this event until they see how to pre-provably control such an AI. So such folks want to delay most other AI tech advances, including ems.
  6. Assuming pre-provable control is infeasible, on-the-fly control seems better when the people controlling are many and fast relative to the controlled AI. Since ems can be much faster and more numerous than humans, this argues for hurrying ems.
  7. Great filter and anthropic selection considerations greatly raise our estimates of existential risks that could leave the universe empty. These do not much raise AI risk estimates, however.

On #1, I confidently estimate em lives to be numerous and worth living. On #2, I weakly estimate little benefit from delay or early publicity. Points #3,4 are the strongest I think, especially #4, and both argue for speedup. Since I think a single machine suddenly taking over the world is pretty unlikely, I give #5,6 less weight, especially when taking #7 into account. So on net I favor hurrying em cell-modeling tech most, em scan tech less, and weakly favor delaying em computing tech.

Added 11a: More considerations from the comments:

  1. Future people may evolve to differ from us via competition and changed circumstances. Some hope Earth will soon collectively organize to regulate to prevent such change, and so want to minimize change and competition before then. Since ems give more and faster change, they prefer to delay ems.
  2. It seems humans can live on as ems, and non-poor ems need never die. Not dying is good, suggesting we hurry ems. Conversely, if uploading really kills humans, perhaps we should delay ems.

    The question I have has to do with spillovers. If each emulated mind produces more than it consumes, it will generate spillovers. Where do those spillovers go?
    I mean the marginal emulated mind will be at zero net spillover, but the infra-marginal ones will presumably have a lot of spillover.

  • Carl Shulman

    5. could be clearer. The worry of folk like Nick Bostrom is that accelerating tech for brain emulation will get us AI that is only loosely brain-inspired, not robust emulations of specific people.

  • Evan

    Some think subsistence-wage ems an abomination, and so prefer to delay or prevent them. Conversely, others think that vast em numbers times lives worth living makes the em world a good well worth hurrying.

    Don’t forget the fact that a subsistence em scenario with heavy competition greatly increases the odds that some of the ems might evolve into non-eudaemonic agents and then outcompete the other, more human ems. That would be a disaster: instead of “numerous and worth living” ems that lead happy lives, we’d get a future ruled by emotionless drones. Even if you accept the “repugnant conclusion,” a major problem with implementing it in real life is that you create unacceptably high odds that some disaster will cause people’s lives to transition from “barely worth living” to “not worth living.”

    I would say to delay or control ems so that institutions can be put in place to ensure they don’t evolve into non-eudaemonic agents. I must admit that I am biased in that regard, since I am an average preference utilitarian, not a total happiness utilitarian, and therefore don’t find a malthusian em scenario all that desirable. But I think a malthusian em scenario would make the odds of non-eudaemonic agents, and all other sorts of disasters, unacceptably high. Note that Robin’s point three doesn’t apply to this, as non-eudaemonic agents are a form of undesirable evolution, not destruction, and there are many disasters that could make a large chunk of the population’s lives not worth living without destroying civilization.

    • I’m something of a total (non-average) Utilitarian, so I’ll try to relieve your potential bias and agree that legions of “noneudaemonic” agents seem like the natural endgame in Robin’s em scenario.

      Robin, I’m totally behind the idea that the poor do smile, but I just don’t see any reason that ems developed in a vigorous open market are much more likely to be a net positive than negative. This probably has to do with the fact that I find it unlikely that ems will much resemble human brains for more than a tiny instant at the beginning of the em era; their long-term rough equivalence to humans seems to be an assumption in much of your arguments. Maybe I’m missing something?

      • (Here, “tiny” just means relative to the length of the em era, which we can define as the time for which it still makes sense to talk about localized conscious agents. I certainly don’t presume an AI explosion. A gradual modification of em nature will suffice.)

    • I think the danger is not so much that ems evolve to be non-eudaemonic, but rather they evolve to believe that non-ems are non-eudaemonic and so non-ems are not living lives worth living and so the best thing to do is to euthanize all non-ems.

      Humans already do this, this is the essence of xenophobia and bigotry. The bigot believes that the objects of his/her bigotry are not living lives worth living because they don’t have the capacity to do so. Humans have implemented the “euthanize those living lives not worth living” meme. That is the essence of what genocide is, the meme of “kill them all and let God sort them out”.

      This is the essence of what regressive taxation is: the disproportionate taking of resources by one group from another group because the first group feels that the second group does not generate sufficient eudaemonia with those resources. This is analogous to theft, where thieves value self-generated eudaemonia but not the eudaemonia generated by their victims.

      To a large extent, social status is simply a weighting factor with which to value eudaemonia. High-status individuals have a high eudaemonia weighting factor; low-status individuals have a low one (according to that social status hierarchy). If the weighting factor is low enough, then they can be considered non-eudaemonic and should be euthanized.

      If the first ems are derived from humans that exhibit xenophobia, bigotry and social status, it is very likely that those first ems will exhibit xenophobia and bigotry and will eventually euthanize all others according to social status.

  • MichaelG

    Excuse my ignorance here:

    I assume that humans evolved in an environment like modern chimps and did not compete over food. Food is there for the taking. Of course it eventually runs out when there are enough people, but the losers aren’t the ones that can’t grab the fruit off the tree fast enough. We’re not lions.

    Instead, we compete over social status in the group, and the losers are the ones who aren’t owed enough favors. When food grows short, these losers are driven into the jungle and die. Hence we are terrified of embarrassment, unpopularity or other loss of status.

    So what will ems compete over? If they are simulations of human brains, they will want to compete over status. How will they derive status in completely simulated worlds, or in a real world with cheap manufacturing? (How will humans?)

    If they compete over reproductive ability (control energy/materials needed to make copies of themselves), we’re doomed. We can’t compete with them.

    If they compete over status, perhaps they all sit around and do art at accelerated rates and don’t want to talk to us slow stupid humans.

    On the other hand, they can edit themselves or produce new variants that don’t care about status. What’s the stable condition here?

    I find it difficult to imagine a world with a mix of ems free to control their lives and humans. Can anyone spell out a scenario?

  • Jeffrey Soreff

    Good post!

    Most of the answers, and the way the question is posed, assume that ems
    can reproduce or replicate. If they are set up so that they cannot
    replicate, a huge part of their hazard goes away. In this model, they are
    essentially a medical technology, an option to use when one’s biological
    body is failing, rather than a source of swarms of competing workers.

    If this option were followed, hazard #1 is reduced, so an earlier em
    transition would be desirable.

    Consideration #6 is an interesting one. Under a medical option, ems
    could still be fast, and could still help to monitor an AI, though they would
    not be as numerous.

    I don’t believe consideration #3. Our civilization is much richer than it
    was say a thousand years ago. There are some natural hazards, e.g.
    asteroid impacts, that we can now deal with that we couldn’t previously.
    Much more important than this, though, are hazards that we generate
    ourselves. We have thousands of nuclear weapons in place – real,
    deployed, and quite lethal – that we did not have previously. Both AIs
    and certain biological weapons are potentially self-replicating doomsday
    devices. Richer is not always safer, particularly when wealth buys weapons.

    #5 is believable. In particular, the substrate for an em
    is essentially a tabula rasa human model. Frankly, I think it is more
    likely that such a technology will be used more as a route to AI than as a
    medical technology. The scanning technology needed for ems/uploads
    would be specifically for preserving individual human minds. I’m
    skeptical that development of it will actually be funded.

  • NAME, consumption + savings = production.

    Carl, I was terse but not incorrect I think. Yes, em tech could inspire and inform AI tech, so those who want to delay all AI techs could want to delay em tech.

    Michael, humans could live off of wealth invested in the em economy. But they would fade and be marginalized as the future would be dominated by ems.

    Evan and Jess, I quite disagree with the claim that we should expect competition to robustly and quickly evolve human minds into agents in which humans see no value. Nick Bostrom only argued that such no-value scenarios were possible over the very long run, not that they were overwhelmingly likely very soon. Yes, as I replied to Chalmers, people today probably place less value on changed descendants, as they always have. But less value is far different from no value. I’ve posted several times on specific ways I expect ems to evolve, all of which retain things we value. Note that great filter and anthropic considerations do not raise our estimate of the chance of no-value scenarios.

    • Jeffrey Soreff

      Evan and Jess, I quite disagree with the claim that we should expect competition to robustly and quickly evolve human minds into agents in which humans see no value.

      If ems are created, and if either they compete reproductively or if their
      employers/owners do copy-and-modify experiments on them, I’d expect
      them to change very fast. Their equivalent of a neural
      phenotype would be directly exposed to experimentation. There are ~100
      distinct regions in the human brain. Ems could be altered by cranking the
      equivalent of conduction speeds and neural firing thresholds up or down
      in each of those independently, precisely, and repeatedly. Those would
      be easy experiments to do, and there is every reason to expect that the
      human-normal settings will not be the economically optimal ones (even
      today they are probably not, let alone in an em work environment). This
      sort of experimentation is more like tuning an operating system than
      like evolving to retain lactase in adults. The turnaround time for the
      experiments, even in the subjective time for the ems, is going to be of
      the order of the time for evaluating employee performance, not of the
      order of the time for one generation to succeed another.

      Hopefully Anonymous said it well in a response to your workaholics post:

      At some point does personality, “theatre of consciousness”, and expenditures to determine if “life is worth living” become an efficiency drag for algorithms in competition for resource control/perseverance?

      I think that predicts the end state of both ems and “non em” humans. If productivity rapidly rises with ems, I think that end state might come comically soon, like a few years or less after the appearance of ems.

      • Some change would be very fast, but other change will take a lot longer, at least in units of global doubling times. The creatures you get by the variations you suggest on emphasis in 100 brain regions would be quite recognizably human.

      • Jeffrey Soreff

        Many Thanks for the reply!

        The creatures you get by the variations you suggest on emphasis in 100 brain regions would be quite recognizably human.

        Perhaps we differ on what is encompassed by “recognizably human”.
        Would you count the other great apes within that class? They are
        certainly close cousins. As far as I know, tweaking the emphasis
        in 100 brain regions is enough to morph between them and us.

        One would have to go a little further than I suggested: In addition
        to neuron speeds and thresholds, I think we’d need to change
        neuron counts, and that slows the experiments a bit, but they would
        still be easy experiments.

        Cranking down neuron counts in regions
        would still be a fast experiment. Cranking them up might not be
        too slow. We might not need the full development time, since there
        is considerable plasticity even in adult brains. Copying neurons and
        putting them in parallel with the same synaptic weights, then
        allowing learning to make them diverge might be good enough.

      • How do you diagnose insanity in an em?

        How do you diagnose anti-social behavior in the early stages?

        How do you diagnose psychosis before it happens?

      • Jeffrey Soreff


        How do you diagnose insanity in an em?

        Insofar as they are being simulated at a low, neural level:
        The same way we do with humans: By observing their behavior.

        And that sort of filtering will have the same problems we have now.
        Sometimes even filtering by nature’s gold standard, long term
        fecundity, still gives whole subcultures wearing sacred underwear
        and believing other batshit insanities.

        My point is not that fiddling with neural parameters of ems is safe,
        but rather that it is fast enough, and likely enough to yield
        results that are more competitive (at either the individual or
        organizational levels) that, if not stopped, it will probably move the
        em population as far from the current human norm as we are from
        chimps in a rather short time.

    • Vladimir M.

      humans could live off of wealth invested in the em economy.

      You assert this as if it were obviously true, but to me it seems that most people would be priced out of habitable land (and also out of land in the general economic sense, i.e. all fixed-supply natural resources), no matter how much total wealth (by whatever measure) goes up. This remains an issue even if we make the can opener assumption of secure property rights for humans.

      • I don’t expect ems to have huge demand for raw land and natural resources. I expect most to be very concentrated in a few dense cities.

      • Carl Shulman

        I don’t expect ems to have huge demand for raw land and natural resources. I expect most to be very concentrated in a few dense cities.

        For how many weeks/doubling times?

      • Carl, at least ten doublings.

      • Ems don’t need to have huge demand; they just need to be willing to pay a higher price than humans can afford. If the marginal wage of ems for the equivalent of a human year of mental labor is 25 kWh, how will a human be able to afford to live?

        Wheat yield is ~2,243 kg/hectare. A kg of wheat has ~15,000 kJ, and a 2,000-calorie-a-day diet is 8,400 kJ per day, so a year of such a diet would take about 0.1 hectare of wheat. Much of the US receives an annual average of 4-5 kWh/m2 of sunlight per day. Solar photovoltaics are ~15% efficient. 0.1 hectare is 1,000 m2. A landowner can grow one year’s worth of food for a human, or can produce ~25,000 kWh.

        The land needed to grow a year’s worth of food can generate many times 25 kWh with solar panels. Why would the owner of that land grow food instead of generating 1,000 years’ worth of em labor?
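
        [As a quick sanity check, the comment’s figures can be plugged into a short calculation. All inputs below are the comment’s own stated estimates, and the 25 kWh em wage is the comment’s hypothetical; real yields and insolation vary widely by region and technology.]

```python
# Replicating the comment's arithmetic with its own stated figures.
WHEAT_YIELD_KG_PER_HA = 2243      # kg of wheat per hectare per year
KJ_PER_KG_WHEAT = 15_000          # energy content of wheat
DIET_KJ_PER_DAY = 8_400           # a 2000 kcal/day diet

# Land needed to feed one human for a year on wheat alone.
kg_per_year = DIET_KJ_PER_DAY * 365 / KJ_PER_KG_WHEAT     # ~204 kg
hectares_for_food = kg_per_year / WHEAT_YIELD_KG_PER_HA   # ~0.09 ha

INSOLATION_KWH_PER_M2_DAY = 4.5   # mid-range of the comment's 4-5 figure
PV_EFFICIENCY = 0.15
area_m2 = hectares_for_food * 10_000

# Annual electricity from covering that same land with panels.
kwh_per_year = area_m2 * INSOLATION_KWH_PER_M2_DAY * PV_EFFICIENCY * 365

EM_WAGE_KWH_PER_YEAR = 25         # the comment's hypothesized marginal em wage
em_years = kwh_per_year / EM_WAGE_KWH_PER_YEAR

print(f"land to feed one human: {hectares_for_food:.2f} ha")
print(f"solar output from that land: {kwh_per_year:,.0f} kWh/yr")
print(f"em labor-years purchasable: {em_years:,.0f}")
```

        [Run literally, these inputs give roughly 0.09 ha and over 200,000 kWh per year, i.e. several thousand em labor-years at 25 kWh each. The comment’s ~25,000 kWh figure presumably includes further derating (panel spacing, conversion losses) not modeled here; either way the conclusion that em labor outbids food-growing by orders of magnitude holds.]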

      • Carl Shulman

        Carl, at least ten doublings.

        So old-style humans can survive on the margins of an em society for at least 20 weeks (with your estimates of doubling time from elsewhere)? Great news, that!

      • Carl, on longer timescales, see this post.

    • Thanks very much for your reply, Robin. I’ve re-read all three posts you linked to (having read them when you originally posted them) and I am still struck by the underlying assumption that ems will be very, very human-like. Why the heck should we expect this to be the case? Are you basing this off an assumption about the difficulty in modifying an emulated brain, or on the optimum towards which you expect competition to drive em design?

      You draw analogies between (a) how historical humans relate to modern humans, and (b) how modern humans relate to ems. This, to me, is next to useless. Yes, cultures change dramatically over the centuries, and it can be startling how different people used to think. But all humans are running the same hardware and 95% the same software. The transition to ems offers vastly more opportunity for drastic changes. How can you expect these analogies to offer any guidance above pure noise?

      I quite disagree with the claim that we should expect competition to robustly and quickly evolve human minds into agents in which humans see no value.

      I certainly don’t think it’s robust or extremely probable, but it certainly seems like a very real possibility. My error bars are huge, and I don’t see how yours aren’t.

      • Eventually, of course, our descendants could evolve in a wide range of directions, but I expect a substantial era where they are quite human-like, and think it is important to think about that era.

    • Khoth

      You say that humans could live off of wealth invested in the em economy. But most humans don’t currently have wealth to invest. Are we left to die?

  • Anonymous

    Another consideration: Ems can be tortured better and longer than humans without biologically dying. It could make sense to anticipate the game theory possibly leading to this, and the potential for institutional, technological or social frameworks to mitigate it.

  • Matt Young

    “weakly favor delaying em computing tech.”
    Say it ain’t so, boss; deny this. The vote is actually coming up in the various standards organizations. Do we create the web environment that suits roaming web bots? You are saying we are not ready for them?

  • Matt Young

    OK, this is worth expanding on.
    The best simulation we have of neurons is the serial one-bit computer; at least that was the neuron model in existence ten years ago. If the neuron is a pulse-frequency computer with plenty of analog corner conditions, then an array of these one-bit processors got you close. Then came the arrays of Hopfield networks and feedback networks, sort of multi-bit specialized versions of a general array. These were the neural nets. They worked, had their place, still do, and can’t be ruled out. The IBM research on the cat brain was like that.

    The IBM work on Watson was much different; I do believe it was strictly an ontology database, huge graphs of language patterns. One approach is emulation; the other, Watson, is simulation, right?

    Where is the best payoff? Make Watson a protocol, a set of micro-instructions for finding ontology matches on the web, a set of limited formats that is self-directing and allows web bots, but with instructions simple enough that its movements can be mechanized and run at super high speeds.

  • William Newman

    Robin Hanson, could you point to a summary of your current thinking on
    why we should expect that rote emulation of brain functionality in a
    relatively low-level way will outcompete thinking machines designed
    from scratch? Today from-scratch designs (informed by study of brains,
    but not slavishly emulating them) seem to be far ahead on everything
    we know how to do, up through rather nontrivial tasks like face
    recognition and passive optical navigation.

    I agree that the current rate of progress makes detailed emulation of
    brain functionality likely within our lifetimes. I just don’t
    understand why that capability should appear before from-scratch
    designs which are at least as capable and much more economical. That
    would be a reversal of what seems to me a very strong trend for the
    from-scratch designs to perform better. Roughly speaking, today’s
    from-scratch designs seem to rival capabilities that nervous
    systems evolved up through the Jurassic. How practical is it to faithfully emulate anything more complicated than a nematode?

    • Jeffrey Soreff

      That would be a reversal of what seems to me a very strong trend for the from-scratch designs to perform better.

      I attended a presentation on the internals of Watson,
      and the plans for its future. Basically, the intended upgrade
      path is to replace human-coded knowledge with learned knowledge.
      I asked whether the architecture would permit Watson to build and
      incorporate entirely new modules, based on external information
      that it found (the example that I gave was the signal processing
      literature). Yes, the architecture permits this, so it is potentially
      recursively self-improving.

      I’d like to see at least medical uploading become possible.
      After all, I’m mortal myself. That is not the way I’d bet.
      I’d bet on “foom” sometime in the next two or three decades,
      probably with some variant of clippy as the result. Oh well,
      at least the thing will probably know Maxwell’s equations.
      In that weak sense, it might be construed as a sibling EE.
      C’est la mort.

  • jan

    another (very) relevant point:

    Humans can live on as ems, stop dying. Not dying is a good thing, so we should hurry ems.

    (or, conversely, uploading is really killing, so we should delay ems)

    • I agree that is a relevant point, and added it to the post.

    • All of Robin’s original concerns are about long term effects on massive future populations. (Existential risk, etc.) Your concern just involves the humans living (or rather, dying) during the time over which we could delay the em transition, which seems many orders of magnitude less significant.

      • Carl Shulman

        All of Robin’s original concerns are about long term effects on massive future populations

        Elsewhere in this thread Robin talks about focusing on humanlike ems in an era measured in “doublings,” i.e. a period of a handful of years or less (with Robin’s current growth rate estimates) out of trillions, concentrated in the Solar System rather than the accessible universe. From a total utilitarian point of view that’s pretty negligible.

    • How would you tell the difference?

  • Evan

    I think, while we’re on this topic, that I should go ahead and post a more detailed list of objections to Robin’s point one (“I confidently estimate em lives to be numerous and worth living”), since I don’t know when the topic will come up again.

    First, I would like to point out that even if one accepts the repugnant conclusion (and I don’t think anyone should), a malthusian em scenario fails to fit its parameters for the following reasons:

    1. I’ve already mentioned the danger of ems evolving into non-eudaemonic entities. In this scenario future ems would not be happy because they would have lost the ability to be happy in order to be more reproductively efficient. Robin argues that this is unlikely, but how unlikely does it have to be before we should be willing to risk it happening?

    2. I think there is a huge contradiction between Robin’s work on personhood theory and his claim that a malthusian em scenario would create many new creatures to enjoy life. If Robin is correct that multiple em copies are all the same person (and I think he may be) then a malthusian em scenario would probably result in the death by resource starvation of all but a few hundred people. The remaining people would have lots of copies of themselves, but that would just be lots of the same creature enjoying life, not “many new creatures.”

    3. Robin maintains that “workaholics smile” and therefore workaholic ems would be happy. But most workaholics are happy because they feel they’re accomplishing something big and important. Scraping by just to survive is a whole different story. I doubt workaholics in gulags and Haitian slave plantations were much happier than the average person.

    4. There is a difference between “liking” and “wanting” that economic theory is poor at discerning. It may be that we end up with trillions of workaholic ems who “want” to work but are unhappy because they don’t “like” it. That doesn’t sound like a life worth creating or living. I think the odds of such miserable people being selected to become the trillions of future ems are unacceptably high.

    5. If ems are very poor, even if their lives are “worth living” at the moment, there is a huge risk that even a minor disaster may reduce the quality of their lives until they are not worth living.

    Even if you accept the repugnant conclusion, a malthusian em scenario doesn’t fit it. Robin’s confidence is misplaced.

    Secondly, I think one should not accept the repugnant conclusion for the following reasons:

    1. Happiness utilitarianism, the system that generates the “repugnant conclusion,” is a bad system to follow. Humans value things other than happiness, such as knowing the truth, being moral, being wealthy, being accomplished, etc. Preference utilitarianism is a far superior moral system because it takes these things into account.

    2. The main argument people field against “average utilitarianism” when they are arguing for “total utilitarianism” is that average happiness utilitarianism implies killing people who bring the average down if they cannot be cheered up in some other way. However, this critique is also true of total happiness utilitarianism. In a malthusian scenario where the only way to create more people is to allow some to starve, total happiness utilitarianism implies killing people who bring the total down to make room for happier people. The real solution to this dilemma is to reject happiness utilitarianism in favor of preference utilitarianism. In preference utilitarianism people strongly prefer not to die regardless of how happy they are, so they mustn’t be killed.

    3. Robin already seems to accept preference utilitarianism according to some of his other posts. He is in favor of mandatory paternity tests and against lying about virginity. However, according to total happiness utilitarianism, cuckolders and faux virgins, if they get away with it, are heroines. They have increased their happiness, while not at all decreasing the happiness of their oblivious husbands, thereby raising the happiness total. If Robin were a true total happiness utilitarian he’d support banning paternity tests and making it easier to lie about virginity. Opposing those activities only makes sense under the paradigm of preference utilitarianism, where you should respect people’s preferences regardless of happiness. Why is Robin a preference utilitarian for those scenarios, but a happiness utilitarian for the malthusian em scenario?

    4. Preference utilitarianism (which Robin apparently already accepts in some cases) does not lead to the repugnant conclusion. It would conclude that, since most people prefer their descendants to be wealthy and moderately numerous as opposed to malthusian and poor, the malthusian em scenario is an abomination. These might be bad preferences to have if they violated other people’s preferences, but they don’t. The trillions of future ems don’t exist yet, so their preferences don’t either. No one’s preferences are being violated. Attempting to create a malthusian em scenario, however, would violate current people’s preferences, so they have good reason to try to stop it. And if Robin is correct that future em copies are all really just the same person, then even if the em scenario happens, the preferences of billions of distinct people would matter more than the preferences of a few hundred people who have a few trillion copies.

    5. While impoverishing or killing someone by outcompeting them economically may be economically efficient (since it isn’t an externality) I still think that it is immoral, since it violates very strong preferences of other people. Economic efficiency is a powerful tool for making moral judgements, but it isn’t the be-all and end-all of ethical judgements.

    6. There has been a frequent claim that since happiness research shows acquiring wealth only increases happiness momentarily, and doesn’t increase a person’s average happiness over their lifetime, it is okay to take actions that reduce the amount of wealth people have (i.e. “Poor folks do smile”). This idea can easily be dismissed by pointing out that kicking someone in the nuts, while decreasing their momentary happiness, doesn’t decrease their average happiness over their lifetime either. Yet that is still wrong. The fact that the happiness (and other preference satisfaction) gains caused by wealth are temporary doesn’t make them any less real.

    Thirdly, Robin strongly opposes acceptance of death and wants us to “rage against the dying of the light.” I agree with him wholeheartedly. But accepting a malthusian em scenario would be tantamount to accepting death. The most efficient and ruthless ems would quickly outcompete everyone else, and then devour us for raw materials to make more copies of themselves. To avoid everyone dying we need to stop the few hundred most efficient people on Earth from eating everyone else. The best way to do this is to work for a non-malthusian future.

    I can’t make sense of the contradictions in Robin’s view except to assume that he is introducing new and interesting views to his readers in order to provoke thought among them, without necessarily needing all these views to be consistent with each other. If that’s the case, it’s working: Robin has really made me think hard about these issues and I thank him for that.

    So yes, ems should probably be delayed. This saddens me, as I would like people to become immortal by uploading themselves. But there’s little point in trying to do that if workaholic ems will eat you a few decades later.

    • A few comments:
      I try to consistently count preferences over happiness in general.
      Even if em copies are the “same” person they can count for more person-moment preferences to satisfy, just as a person with a longer life can so count.
      I count gains to potential creatures as well as to current existing creatures.
      I do not think ems would quickly “eat” humans for their natural resources.
      I think there’s almost no chance of ems quickly falling to zero value creatures.
      I think near-subsistence-wage workaholics can be plenty happy and preference-satisfied, so that they are nowhere near a borderline life-not-worth-living.

      • Evan


        I count gains to potential creatures as well as to current existing creatures.

        I can understand that to some extent. If you don’t want people to have unsatisfied preferences today you wouldn’t want them to in the future, even if they don’t exist yet. But I don’t see any inherent moral problem with people in the present taking steps to shape the future so that the types of potential creatures capable of existing in the future are limited to types that satisfy present-day preferences to some extent. Future creatures don’t have preferences yet since they don’t exist yet so the ones that will end up not existing will not have their preferences violated. I see no moral problem with attempting to restrict the population of future creatures to ensure a higher average level of preference satisfaction.

        I do not think ems would quickly “eat” humans for their natural resources.

        Knowing that humans would be eaten slowly instead of quickly isn’t much comfort. And what about other, less efficient ems? I wouldn’t want them to be eaten either.

        I’d be less bothered by this scenario if I thought that every em clan that was created would survive and live decent, moderately wealthy lives (“clan” is the word you’ve been using to refer to a series of ems copied from the same uploaded mind, right?). When I read the novel “Kiln People” I didn’t consider that to be a horrible future (on the contrary, I thought it was awesome!), even though it contained many ems who slaved for their entire short lives at work. That was because, even though individual ems were poor workaholics, the em “clans” were wealthy and managed to find plenty of time for fun. And since the “clans” shared all their memories, each em got to share in the experience of having fun and being wealthy, even if it was another em that was doing that while they were working. If the em scenario you described sounded more like “Kiln People” I probably wouldn’t object to it nearly as strenuously.

        That’s not the impression you’ve given me, however. It seems like you think that ems will constantly be working and constantly be poor. And that some entire clans of ems will be “eaten” by other ems that are more efficient at being workaholics. That doesn’t sound like a future worth creating to me.

        Also, I want to address your statement that:

        I predict we will not coordinate to prevent a great population increase and wage fall.

        Isn’t this a situation where a simple minimum wage law would do the trick? In the modern world the minimum wage is cruel because it restricts the number of jobs available, preventing already-existing workers from getting jobs. However, it seems to me that a simple job-destroying minimum wage law during an em explosion could easily limit em population. I doubt anyone will go to all the trouble and expense of running off a new em unless they think they have a job lined up for them. In this world the minimum wage law, instead of throwing already-existing people out of work, would simply prevent new people from being created.

        Of course, it may be that such a policy, if not adopted universally, would result in some areas with no such laws becoming hugely populated with ems. But if every country in the world managed to sign and enforce some kind of treaty or something, would it work?

        Also, on a somewhat related note, I thought I’d semiseriously suggest that sometime you write about the sort of society portrayed in the classic cartoon “The Flintstones.” While that world is obviously economically and sociologically improbable, it has a feature that you might like: almost all household appliances, vehicles, and machines are sentient creatures capable of having preferences and enjoying life. Would you consider that society superior to ours, with its cold, emotionless household appliances? It seems like a fruitful avenue for moral analysis on your part, and would probably make a really fun post. Just a friendly suggestion.


        I think your points 3 and 4 against em happiness can be resolved by mind (re)design

        That sounds utterly horrifying. Totally rewriting an innocent person’s values like that is not ethical. To make an analogy, imagine if the plantation owners in the antebellum South had discovered a foolproof brainwashing technology for their slaves.

    • Anonymous

      Evan, I think your points 3 and 4 against em happiness can be resolved by mind (re)design. Wanting and liking can be designed to converge (it mostly does for us most of the time), and selected em workaholics can be psychologically different from today’s workaholics or poor workers.

  • On #1, I confidently estimate em lives to be numerous and worth living.

    How can anyone be “confident” that some really bad life is “worth living”? Most people wouldn’t even know how to begin answering the question of whether a given life is “worth living.” More than that, those who find the question inherently unanswerable would be correct. (See “Why do what you ‘ought’?—A habit theory of explicit morality.”)

    • Stephen R. Diamond, yeah you are the latest in a series of commenters saying this to Prof. Hanson. As far as I can tell, he’s doing the equivalent of dunking a basketball and putting his balls in your face. I don’t see the value in this statement “On #1, I confidently estimate em lives to be numerous and worth living” beyond his hedonistic enjoyment of the reaction it provokes in people like you and me. I read his rationalizations, but I think it’s a garden variety apish corruption of the social epistemological commons.

  • Jesse M.

    What motivation would mind uploads have to endlessly copy themselves, to the point where they take up almost all the available computing resources and have to live in subsistence conditions? If instead they avoid overcopying, most likely by passing laws to prevent this, then there would be plenty of cheap computing power to allow them each to live in lavish simulated environments.

    Also, it seems strange that you just assume capitalism in its present form would survive into an age of mind uploads. An upload seems like the type of thing that would tend to usher in a post-scarcity economy where consumer goods are easy enough to duplicate (due to so much work being automated) that something like a guaranteed minimum income would make sense, with no real need for anyone to work unless they wanted to. In a society with mind uploading this could go even further than just driving down the price of physical goods–even work that required an intelligent mind, like teaching and scientific research and engineering, wouldn’t need to cost much as long as the population of mind uploads included even a small number who were both competent and willing to volunteer their services for free as long as their lives were physically comfortable. (I think quite a lot of people in creative jobs are in it for more than just the money–how many scientists would quit if they won the lottery?) In such a world, wouldn’t a fairly socialist system where everyone is guaranteed a comfortable lifestyle by the government (with no obligation to work for it) make a lot more sense than pointlessly continuing the capitalist choice between work and poverty?

    • Economics is generally applicable to a quite wide range of social situations, and local motives and what “makes sense” do not usually directly determine what happens in societies. I predict we will not coordinate to prevent a great population increase and wage fall.

      • Jesse M.

        I’m not suggesting that no form of economic analysis would apply to a post-scarcity economy, just that it would differ in some important ways from an economic analysis which takes our current form of economy and government as a baseline. Would you disagree that economic analysis depends a lot on the institutions and government that exist in a society? For example a barter economy is different in many ways from one with a central government-backed currency and banking system, and an economy with a government monopoly on use of force is fairly different from one where a common way for the wealthy to accumulate more wealth is to kill and plunder from one’s wealthy peers. And don’t your conclusions about copies being driven to subsistence conditions depend somewhat on the idea that the economy is still structured in a way that allows individuals (or groups of copies of individuals) to accumulate larger and larger shares of the total resources of the society without limit? In a society where the government prevented too much inequality in shares of the total available computing power (just as some modern governments keep the levels of income inequality, and to a lesser extent wealth inequality, significantly smaller than they are in the US), you might have a system where some individuals chose to fill up their personal share with a huge number of copies, while others only created a smaller number of copies with their share and used the remainder for other purposes like a comfortable living environment.

        I also think that you may be thinking of copying in overly irreversible terms similar to biological reproduction–the assumption being that once a given “clade” has produced a much larger number of copies than other clades, then that clade is likely to dominate the population in the future. What if a future society of uploads tends to see copies that haven’t diverged too much as fairly disposable, so that individuals routinely create huge numbers of copies of themselves for specific tasks, then delete all but one once the task is accomplished? (A scenario something like this is presented in David Brin’s science fiction novel Kiln People, but with physical copies rather than uploads.) This would also be useful as a sort of Darwinian strategy for boosting one’s intelligence without falling prey to mental disorders, as one could create a large number of copies with variations in how their simulated brains have been modified (different possible ways of adding new neurons to the existing brain in different numbers and patterns of connectivity), then delete all but the most “successful” ones. In a society of uploads there might be a sort of memetic selection pressure for belief in something like quantum immortality, where copies don’t worry too much about the possibility they’ll be one of the ones deleted, since they expect to always subjectively experience being one of the survivors. After repeated rounds of copying-and-culling, the survivors would feel as though they had direct experiential confirmation of this theory, since they would each have repeatedly experienced being one of the few “lucky” ones who survived each round.

        Finally I think there is a sort of anthropic argument against the scenario you propose. Unless there is a great filter which makes it unbelievably improbable that a society with our level of technology should make it to the point where mind uploading is possible, wouldn’t your scenario suggest the vast, vast majority of sentient beings throughout all of past and future history would be copies living at subsistence levels in a post-singularity world, hugely outnumbering beings with experiences like the ones we have? Anthropic reasoning would suggest that more likely possibilities are A) that the simulation argument is correct, and a large proportion of the population of a post-singularity world would consist of beings living in ancestor simulations whose experience of reality is similar to our own, or B) there wouldn’t really be a large population of individual minds vastly outnumbering our own in a post-singularity world, perhaps because post-singularity intelligences will tend to merge into some kind of immortal singleton or group mind.

      • Jesse, you say “work … wouldn’t need to cost much as long as the population of mind uploads included even a small number who were both competent and willing to volunteer their services for free as long as their lives were physically comfortable.” If those few were allowed to copy themselves, that would produce subsistence wages for such tasks. Especially if such folks “see copies that haven’t diverged too much as fairly disposable.” Even in a barter economy – no further “capitalism” is needed. Governments could only limit copies within the scope of their power, and ones that did not would enjoy large competitive advantages. On anthropics, any scenario with a vast future population has similar issues.

      • Jesse M.

        “If those few were allowed to copy themselves, that would produce subsistence wages for such tasks.”

        I was specifically imagining that there would be no wages at all, that’s why I said “volunteer”–it would be the equivalent of editing wikipedia for fun. Again, in a post-scarcity economy it seems plausible that something like a comfortable guaranteed minimum income would be seen as a default right, just as health care and education are considered default rights in western welfare states. I’d think quite a lot of people would spend time volunteering any in-demand skills they had if they had no pressing need to work for a living. Do you have some economic argument to think the creation of a guaranteed minimum income would be unlikely in a post-scarcity society where all the work that’s necessary to keep society running (particularly an upload society, whose only real necessities are the manufacturing, maintaining, and powering of lots of computers) can be done in a fully automated way by non-sentient AIs?

        Especially of sick folks “see copies that haven’t diverged too much as fairly disposable.”

        Why do you call it “sick”? I’m not suggesting that copies would be involuntarily deleted, screaming, against their will, just that it would become a common attitude to have no problem with creating a bunch of copies of oneself and planning in advance that all but one will be deleted after some task is completed (with no copy knowing in advance that it will be one of the ones deleted, as opposed to being the lone survivor). If one believes in something like the theory of quantum immortality, one should have no problem with this, because one will always experience being the copy that avoids deletion. And for anyone who believes in quantum immortality, the main ethical argument against Tegmark’s quantum suicide experiment (discussed in the section of his site “The Interpretation of Quantum Mechanics: Many Worlds or Many Words?”, about 1/4 of the way down the page here) is that it would cause suffering for your friends and family in the worlds where you died (even if you never experienced these worlds), but this argument wouldn’t apply to the case of creating a bunch of copies and then deleting all but one.

        The point I’m arguing here is that regardless of whether quantum immortality is “true” on a metaphysical level, I think it’s an idea that would tend to become mainstream in a society of uploads, for two major reasons:

        1) Those that are willing to regularly run these sorts of Darwinian experiments on themselves would likely be more “successful” in many ways, particularly in the case of trying to alter their own brains to increase their intelligence, but also just in the sense of there being a lot of demand for their skills by others (if you want to use some of the bits you own to host an upload who will perform some useful task for you, would you rather temporarily rent out those bits to an upload who will delete themselves after a time, with the bits then reverting back to you, or would you rather permanently spend them to create a new copy who will need them indefinitely to keep running, and who will possibly gain dangerous political power if there is a lot of demand for copies of that individual so lots of people are donating their bits to make more of them?)

        2) Even if an upload might feel anxiety about the prospect of creating a lot of copies and deleting the majority the first few times, after multiple rounds the belief in quantum immortality would become very natural and intuitive, regardless of whether it’s metaphysically “true”, since the copies remaining after several rounds would naturally be those who had memories of repeatedly finding themselves to be a survivor in each round. As an analogy, all of us biological humans find it natural and intuitive to feel some sort of persistence of identity over time, feeling that we are the “same person” as the version of us who existed a few years ago, even if we know that all the atoms making up our current brain are different from the ones that made up “our” brains a few years ago, and even if we might find it intellectually reasonable to take the metaphysical position that persistence of identity is an illusion. Our memories are too convincing on a gut level for us to really act as though we believe it’s an illusion! Similarly, although people today often question whether their consciousness would really survive a destructive mind uploading procedure or whether “they” would die and the upload would simply be a copy with false memories, the upload itself would no doubt find its memories to be just as convincing as we find ours, and on a gut level would feel just as much like the same person regardless of what it thought about the issue intellectually. What I’m saying is that this sort of gut-level belief in something like quantum immortality would be similarly convincing for any upload that had survived multiple rounds of copying-and-deletion, so there would be a sort of selection process in favor of uploads that at least act as though the belief were true.

        So, as a thought-experiment, imagine if you were an upload who found the notion of quantum immortality every bit as believable as the idea of persistence-of-identity…would you then see anything “sick” about temporarily creating a bunch of copies of yourself and then deleting most of them?

      • “of sick” was a typo – has been corrected in the comment.

    • If future governments have a component of democracy, then wealthy ems will dominate it by copying themselves. Once specific ems dominate the government, they will use government to subsidize their greater replication. The first em to do that will have such an advantage over all others that it is game over as the dominating ems divert all resources to their own expansion.

      • Jesse M.

        It’s not clear what “wealthy” would mean in the context of a post-scarcity society–ownership of things like shares of the total raw materials, energy, and computing power available to the society could be determined by democratic institutions like governments rather than the model of endless rights of individuals to accumulate “private property”. Certainly in a context where no one really needs to work for society to keep functioning (because lower-level AIs are perfectly capable of doing things like mining/energy production/manufacturing of machines and computers on their own, without getting tired or demanding compensation), there would be a lot less reason for the bottom 99% to put up with the top 1% owning a hugely disproportionate percentage of the society’s total available raw materials/energy/computing power, so at the very least you could see something like an extremely progressive tax structure. And if computing power wasn’t concentrated in the hands of a small number of “wealthy” individuals, then the way to get the most copies of yourself made might be to volunteer services that were in high demand (and couldn’t be replicated by simpler forms of AI) so that large numbers of other uploads would volunteer some of their share of computing power to run a version of you (and a comfortable simulated environment for you to live in when not doing services for your patron). In this case the future might belong to uploads with talents like creating entertainment tailored to their clients’ interests (an uploaded Shakespeare could create plays based on the lives of each client, or the kinds of stories they wanted to hear), or being particularly good at teaching things that a lot of other uploads wanted to learn (or wanted their “children” to learn, assuming this society can also simulate brain development from an embryo to an adult in order to create “new” individuals), etc.

      • What basis is there for thinking there will ever be a “post scarcity society”?

        As long as status has some value, status will be zero sum. If status is zero sum (which it is and inherently must be) then there will be motivation for those with less status to make other things zero sum (i.e. cause artificial scarcity) so as to use value in one zero sum system to trade for status (which is and can only be zero sum). That is one of the main uses of wealth now, to acquire status. Wealth is only useful to acquire status to the extent that wealth is scarce.

        If democratic governments control resources, then whoever controls the government controls those resources. How much profit did Halliburton make from the Iraq war? On 01/10/2002 Halliburton stock was trading at 5.77. On 4/13/2006 it was 38.56. Were all of those no-bid contracts that the US government let to Halliburton contracts that Halliburton lost money on?

        Why are we not in a “post-starvation society”? There is plenty enough food to feed everyone, but shortages in food and food delivery are used by some to gain other things. North Korea is negotiating for food aid to prevent its people from starving. Why is there famine in North Korea? Because the government in North Korea values retaining power more than it values the North Korean people not starving.

        Why is the US not a “post-no-health insurance society”? Many other countries are, and they spend less per capita on health care than the US does.

  • I hesitate to weigh in – because of the ridiculous premise that ems might come first – but it seems likely that there will be winners and losers, and the faster vs slower issue might change who those winners and losers are. So this is partly a personal question, with no general answer.

  • IVV

    What does a subsistence-wage em look like, anyway? Do we program them to feel cold and hungry?

    • Anonymous

      Hopefully not. If we think ems might be motivated by threats of torture, then we should delay them until we can predict the probability better, and find ways to prevent it.

      If ems have a minimal set of personal rights, they will be more likely positively motivated by an interest in continued existence or making more copies of themselves. In this case, we should see selection effects for strong positive motivations toward existence even under high productivity pressure. That would be good news.

  • daedalus2u, I don’t think bigotry/xenophobia and euthanasia are that tightly linked. People engage in euthanasia for their relatives & pets, purporting to serve the interests of those who cannot effectively act for themselves (I am not stating whether that belief is correct). Those who carry out genocide are aware that their victims (often viewed as enemies) are capable of experiencing happiness or pleasure and have similar instincts for self-preservation. The problem is not one of ignorance; it’s disregard, and the prioritization of some end furthered by the extermination of others.

    You are using “regressive” in a non-standard way if you think regressive taxation NECESSARILY results in a net transfer from the poor to the rich. The terms “progressive” and “regressive” are framed relative to a flat tax on a percentage of income (or more rarely, consumption). But a hypothetical tax which resulted in no net transfers would be a head/per-capita/poll tax, which is a constant amount of money rather than a percentage, where each person receives the same amount they give. Relative to that, any tax positively correlated with income (even a logarithmic or piecewise schedule that stops rising at a certain level) will result in a net transfer from richer taxpayers to poorer ones. Personally, I think the ideal is MR=MC.

    Hopefully Anonymous, you are the last person I’d expect to make the objection about lives being worth living. You’ve stated that you’d prefer an immortal life consisting only of torture to a finite life, and expressed bafflement that nearly everyone else doesn’t share that preference.

    • “Hopefully Anonymous, you are the last person I’d expect to make the objection about lives being worth living. You’ve stated that you’d prefer an immortal life consisting only of torture to a finite life, and expressed bafflement that nearly everyone else doesn’t share that preference.”

      More basketball-dunking balls in the face. I think my life is worth living, so I should have no objection to a flood of other lives that have a constituent saying they’re worth living, in your logical chain. I think this is some sort of positional theater, not a real aesthetic preference of folks like Prof. Hanson. Either way, we’re all probably doomed, DOOMED ah-ah-ah-ah-ah.

      • Hopefully Anonymous, it seems there’s a couple of issues we need to distinguish. There’s the quality of the lives of others, which Robin and some commenters express concern for but you don’t care about. There is the question of whether your consciousness will persist if you are uploaded, which you have earlier expressed concern about. And finally, assuming your consciousness does persist, there is the question of what your quality of life will be like. So the logical chain of inference is really only relevant for that last question.
        Also, are you saying that you believe Hanson to be similarly egoistic and lacking in a sort of kin-altruism for partial descendants?

      • ” There’s the quality of the lives of others, which Robin and some commenters express concern for but you don’t care about. ”

        Posturing as pro creating ems because you are confident of some quality of life they’ll experience is different than expressing concern for the quality of lives of others.

        I’d contrast earthy survivalist concepts of concern for others with the “flood the market with ems” concept of concern for others.

    • You can’t consider a tax rate to be regressive or progressive without specifying what government pays for. Even then, some benefits of government are proportional to wealth (protection from thievery), some are proportional to remaining lifespan (protection from murder).

      If ems had the government subsidize electricity prices and also had zero pollution control regulations, biological entities would be adversely affected. There are plenty of people who make arguments for zero pollution regulations now. If the entities voting for and running the government didn’t need to breathe, eat food, or drink water, how much laxer could the regulations get?

      Non-biological entities would consider pollution control regulations to be an unfair and regressive tax on their electricity needs. When ems outnumber biological entities 10,000 to one, the free market would do away with regulations that favor the 0.01% if they add more than 0.01% to the costs of living to the 99.99%.

      If 99% of ems are living on subsistence wages, they are not going to tolerate a significant fraction of those subsistence wages going to subsidize a tiny population of biological entities. Especially when much of the time those biological entities are non-productive, during sleep, in utero, before education. Look at the hue and cry over welfare, public education and universal health care now, by entities that are not living on subsistence wages and where a tiny fraction of their taxes go toward those things (and are actually fellow human beings!). What should be expected with ems?

      If ems required 3.1 watts of electricity, that is ~27 kWh/year. “Subsistence” (post-tax) would be ~30 kWh/year. A human requires the land area equivalent of 25,000 kWh/year to grow food. Would 99% of ems tolerate being taxed at ~90% (250/280) to support 1% of humans?
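      That closing arithmetic can be sanity-checked with a short script. Every input is the commenter’s illustrative assumption (3.1 W per em, a 280 kWh/year pre-tax em “wage” with 30 kWh/year left after tax, 25,000 kWh/year per human, 99 ems per human), not a measured value:

```python
# Back-of-envelope check of the em-vs-human energy figures above.
# All inputs are the commenter's illustrative assumptions, not data.

HOURS_PER_YEAR = 24 * 365  # 8760

em_power_w = 3.1  # assumed continuous power draw per em
em_kwh_year = em_power_w * HOURS_PER_YEAR / 1000
print(f"per-em energy use: {em_kwh_year:.1f} kWh/year")

em_gross_kwh = 280   # assumed pre-tax em "wage" in energy terms
em_net_kwh = 30      # assumed post-tax subsistence allowance
human_kwh = 25_000   # assumed land-equivalent energy to feed one human
ems_per_human = 99   # a 99:1 em-to-human population ratio

tax_per_em = human_kwh / ems_per_human
tax_rate = (em_gross_kwh - em_net_kwh) / em_gross_kwh
print(f"tax needed per em: {tax_per_em:.1f} kWh/year")
print(f"implied tax rate: {tax_rate:.1%}")
```

      Under these assumptions each em uses about 27 kWh/year directly, the per-em levy needed to feed one human across 99 ems is about 252.5 kWh/year, and a 250-out-of-280 tax is about 89.3%, i.e. the roughly 90% figure in the comment.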