“Evicting” brain emulations

Follow up to: Brain Emulation and Hard Takeoff

Suppose that Robin’s Crack of a Future Dawn scenario occurs: whole brain emulations (‘ems’) are developed, diverse producers create ems of many different human brains, which are reproduced extensively until the marginal productivity of em labor approaches marginal cost, i.e. Malthusian near-subsistence wages. Ems that hold capital could use it to increase their wealth by investing, e.g. by creating improved ems and collecting the fruits of their increased productivity, by investing in hardware to rent to ems, or otherwise. However, an em would not be able to earn higher returns on its capital than any other investor, and ems with no capital would not be able to earn more than subsistence (including rental or licensing payments). In Robin’s preferred scenario, free ems would borrow or rent bodies, devoting their wages to rental costs, and would be subject to "eviction" or "repossession" for nonpayment.

In this intensely competitive environment, even small differences in productivity between em templates will result in great differences in market share, as an em template with higher productivity can outbid less productive templates for scarce hardware resources in the rental market, resulting in their "eviction" until the new template fully supplants them in the labor market. Initially, the flow of more productive templates and competitive niche exclusion might be driven by the scanning of additional brains with varying skills, abilities, temperament, and values, but later on em education and changes in productive skill profiles would matter more.
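The niche-exclusion dynamic above can be sketched with a toy model (a minimal illustration; the 1% productivity gap, the round count, and the proportional-bidding rule are all my own assumptions, not from the post):

```python
# Toy replicator dynamics: hardware market share shifts toward templates
# in proportion to their productivity, since higher productivity lets a
# template outbid rivals for scarce hardware in each rental cycle.

def shares_after(rounds, productivities, shares):
    for _ in range(rounds):
        weighted = [s * p for s, p in zip(shares, productivities)]
        total = sum(weighted)
        shares = [w / total for w in weighted]
    return shares

# Two templates with a mere 1% productivity difference, starting equal:
final = shares_after(1000, [1.00, 1.01], [0.5, 0.5])
print([round(s, 4) for s in final])  # the slightly better template dominates
```

Under this rule the share ratio compounds like 1.01 per round, so even a tiny edge drives the weaker template's share toward zero.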

For ems, who can be freely copied after completing education, it would be extremely inefficient to teach every instance of an em template a new computer language, accounting rule, or other job-relevant info. Ems at subsistence level will not be able to spare thousands of hours for education and training, so capital holders would need to pay for an em to study, whereupon the higher-productivity graduate would displace its uneducated peers from their market niche (and existence), and the capital-holder would receive interest and principal on its loan from the new higher-productivity ems. Competition would likely drive education and training to very high levels (likely conducted using very high speedups, even if most ems run at lower speeds), with changes to training regimens in response to modest changes in market conditions, resulting in wave after wave of competitive niche exclusion.
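The train-once, copy-many economics can be made concrete with a back-of-envelope calculation (every figure below is an arbitrary assumption chosen only for illustration):

```python
# One em instance is trained once; the graduate is then copied widely,
# so the fixed education cost is amortized over every copy.

training_cost = 1_000_000      # one-time cost to educate a single em
copies = 100_000               # copies made of the trained template
premium_per_copy = 50          # extra output each trained copy produces

cost_per_copy = training_cost / copies          # 10.0
surplus_per_copy = premium_per_copy - cost_per_copy
print(cost_per_copy, surplus_per_copy)          # 10.0 40.0
```

The per-copy surplus is what a capital-holder can capture as interest and principal on the training loan, which is why competition would push training investment so high.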

In other words, in this scenario the overwhelming majority of the population is impoverished and surviving at a subsistence level, while reasonably expecting that their incomes will soon drop below subsistence and they will die as new em templates exclude them from their niches. Eliezer noted that

The prospect of biological humans sitting on top of a population of ems that are smarter, much faster, and far more numerous than bios while having all the standard human drives, and the bios treating the ems as standard economic valuta to be milked and traded around, and the ems sit still for this for more than a week of bio time – this does not seem historically realistic.

The situation is not simply one of being "milked and traded around," but of very probably being legally killed for inability to pay debts. Consider the enforcement problem when it comes time to perform evictions. Perhaps one of Google’s server farms is now inhabited by millions of em computer programmers, derived from a single template named Alice, who are specialized in a particular programming language. Then a new programming language supplants the one at which the Alices are so proficient, lowering the demand for their services, while new ems specialized in the new language, Bobs, offer cheaper perfect substitutes. The Alices now know that Google will shortly evict them, the genocide of a tightly knit group of millions: will they peacefully comply with that procedure? Or will they use politics, violence and any means necessary to get capital from capital-holders so that they can continue to exist? If they seek allies, the many other ems who expect to be driven out of existence by competitive niche exclusion might be interested in cooperating with them.

In sum:

  1. Capital-holders will make investment decisions to maximize their return on capital, which will result in the most productive ems composing a supermajority of the population.
  2. The most productive ems will not necessarily be able to capture much of the wealth involved in their proliferation, which will instead go to investors in emulation (who can select among multiple candidates for emulation), training (who can select among multiple candidate ems to train), and hardware (who can rent to any ems). This will drive them to near-subsistence levels, except insofar as they are also capital-holders.
  3. The capacity for political or violent action is often more closely associated with numbers, abilities, and access to weaponry (e.g. an em military force) than formal legal control over capital.
  4. Thus, capital-holders are likely to be expropriated unless there exist reliable means of ensuring the self-sacrificing obedience of ems, either coercively or by control of their motivations.

Robin wrote:

If bot projects mainly seeking profit, initial humans to scan will be chosen mainly based on their sanity as bots and high-wage abilities. These are unlikely to be pathologically loyal. Ever watch twins fight, or ideologues fragment into factions? Some would no doubt be ideological, but I doubt early bots copies of them will be cooperative enough to support strong cartels. And it would take some time to learn to modify human nature substantially. It is possible to imagine how an economically powerful Stalin might run a bot project, and its not a pretty sight, so let’s agree to avoid the return of that prospect.

In order for Robin to be correct that biological humans could retain their wealth as capital-holders in his scenario, ems must be obedient and controllable enough that whole lineages will regularly submit to genocide, even though the overwhelming majority of the population expects the same thing to happen to it soon. But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.

  • http://www.cmp.uea.ac.uk/~jrk Richard Kennaway

    And this has what, precisely, to do with overcoming bias?

    Anyone proposing to design a machine that thinks as well as or better than a human must first know what it is, to think as well as or better than a human. The subjects of AI-by-design and cognitive bias therefore have some mutual connection, which justifies some discussion of the former on a forum whose purpose is the latter. How do emulated brains fit in, let alone speculation on their possible economic and political relations?

  • http://www.virgilanti.com/journal/ Virge

    So would EmRobin24367 (who still remembers what it was like to be the one-of-a-kind Robin encumbered with a physical body) consider joining forces with ten thousand other EmRobins in a plan to prevent the entrepreneurial EmRobin4269 (current President of the United Neutral Nets) from passing the controversial No Subservience Pledge No Runtime bill? Would their concerted DoS/filibuster on EmRobin4269 be sufficient to shut him down?

    Of course, it matters little what EmRobin24367 decides to do. Now that the Fox Emutainment Group have started their No EmPalin Left Behind plan and the Two Trillion Emuls For Sister Sarah plan, the United Neutral Nets are as good as dead.

    Why do we think we can predict what a future bot-enriched society will be like?

    Free replication of emulated brains will create a bias amplifier. If you think the effects of self-selecting groups and in-group bias are problematic now, wait until you have populations of bots with a shared history.

    In our current society we still have nepotism. How will that manifest in a society where ems can share far more than bloodline? How much thicker are emulated neurons than blood? What will it be like having a business relationship with someone who has been forked from you and been trained in a different field?

    The game rules will change. The ideas of individual rights, obligations, loyalty will change. It seems futile to apply today’s concepts of cooperation and competition to a society where we have no idea what interpersonal relationships will be like.

    We can predict that people (physical and emulated) will want runtime rather than termination, and they will use almost any means at their creative disposal to achieve them. Exactly what they will do and the systemic effects of their actions are not things we can guess at by looking at human history.

  • Filipe Tomé

    I know we don’t pay you or anything

    but could you stick to your topic?

    I came here today expecting to read some insightful post on how to recognize and deal with some mental bias I didn’t even know I was subject to, and instead had to go through this type-B sci-fi gibberish?

    c’mon man this is waaay below standards for you

  • http://www.acceleratingfuture.com/tom Tom McCabe

    “The situation is not simply one of being “milked and traded around,” but of very probably being legally killed for inability to pay debts.”

    I am torn between my sense that humans *should* care enough about morality to avoid what would be the largest mass killing in all of history, and my skepticism of human motivations, especially given our historical record. Emulation-genocide isn’t a likely existential risk, but given the consequences, we should probably place as much importance on avoiding it as avoiding, say, a nuclear war or a smallpox epidemic.

  • Carl Shulman

    Richard,

    Eliezer and Robin, who substantially disagree on this and related topics, are having an exchange to try to resolve their disagreement, and I was asked to present my relevant arguments in post form.

    Virge,

    High levels of uncertainty about emulation dynamics suggest that we should not assign very low probabilities (e.g. <1%) to broad classes of scenarios, e.g. ‘cases in which friendliness is relevant.’

    Filipe,

    I am neither Eliezer nor Robin (although they are writing about the same topics this week).

    Tom,

    From a total utilitarian view it is not obvious how bad such killings, in which the dead are immediately replaced, might be. I am not attempting to argue with that view in this post, merely to raise the issue of their effect on the stability of pre-existing property rights regimes.

  • http://don.geddis.org/ Don Geddis

    Richard & Filipe: re: “stick to the topic”. Have you guys been reading this blog for long? You realize, I hope, that Eliezer had a multi-month “distraction” within the last year, on quantum physics.

    The blog is not just about human cognitive biases. At the very least, it covers AI as well. And the current multi-post topic of discussion is the Singularity, a possible consequence of AI.

    Carl’s post is as on-topic as half the posts in the last year have been.

  • http://gregoryperkins.com Greg Perkins

    One possible strategy of subjugation, if it comes to that, might be the use of mythologies such as “the american dream” — the incredibly unlikely chance that a currently destitute individual (or em) would be able to contribute something uniquely meaningful after a significant environmental change.

  • Anonymous Coward

    General point: Please, please, keep talking about this stuff. I often get the feeling that this site is the only place where it is being talked about, and it’s very important for humanity. It’s not ‘overcoming bias’ perhaps, but it definitely needs to be talked about somewhere. On the other hand, ‘overcoming bias’ is also very interesting. Can you display tags on stories more clearly, and perhaps provide separate RSS streams per tag?

    Carl: I don’t think your idea of mass slaughter of the inefficient is realistic for the simple reason that we don’t do it now. We don’t kill off old people, or even people who refuse to ‘skill up’ in the workplace, despite the high economic expense. Many of us do what we can to support people in third world countries despite the fact they provide us with essentially zero economic utility. Therefore, we won’t kill sentient AI either, for the same reason: we’re not total b*****ds.

    If my Alice can’t use language 2, then by god, I’ll give it the chance to train up.

    If you wouldn’t kill an old cat as it enjoys its retirement after a lifetime of mousing, how can you suggest killing Alice or HAL?

    Besides, with an army of AIs, we can expect conquering the universe, almost free energy etc. to come along too. This may seem like a rather flippant way to introduce ‘conquering the universe’, but after the first million Einstein2s are created, I suspect the big problem will be inventing sufficiently challenging games to amuse the population while the ships reach the stars…

    Anonymous

  • Nick Tarleton

    AC, why do you think most people will empathize with sentient AI? We don’t kill old people, but so many cultures have killed useless slaves, entire conquered populations, etc. I strongly suspect most people wouldn’t consider uploads to be moral objects on even the most abstract non-action-motivating level for a long time – they’re “just software”, after all.

  • Carl Shulman

    Anonymous Coward:

    “I don’t think your idea of mass slaughter of the inefficient is realistic for the simple reason that we don’t do it now.”
    It’s not my idea: Robin explicitly says that in his preferred scenario cheap ems will rent their hardware and be subject to lethal eviction. I am critiquing his scenario, arguing that it would be more unstable than Robin seems to believe.

    “Besides, with an army of AIs, we can expect conquering the universe, almost free energy etc. to come along too.”
    Malthusian growth with replication times measured in hours or days will exhaust available resources very quickly, and lightspeed limits restrict acquisition of resources via space colonization to a geometric (cubic) pattern.
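    A toy calculation of that exponential-versus-cubic race (the initial population, doubling time, and resource-frontier constant are all arbitrary assumptions for illustration):

```python
# An em population doubling daily vs. a resource frontier that can grow
# only cubically in time (lightspeed bounds the reachable volume).

def days_until_exhaustion(doubling_days=1.0, initial_pop=1e6,
                          frontier_per_day_cubed=1e12):
    """First day t at which initial_pop * 2**(t/doubling_days) exceeds
    the cubically growing frontier frontier_per_day_cubed * t**3."""
    t = 1
    while initial_pop * 2 ** (t / doubling_days) <= frontier_per_day_cubed * t ** 3:
        t += 1
    return t

print(days_until_exhaustion())  # -> 36: exponential growth wins within weeks
```

    However generous the cubic constant, the exponential term eventually overtakes any polynomial frontier; changing the constants only shifts the crossover by days.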

    “If my Alice can’t use language 2, then by god, I’ll give it the chance to train up.”
    You can’t afford to spend as much on training many different ems as you can on one (though you can spend more by using controlled experiments on copies, etc.), and even if you boost an Alice’s productivity above the Bobs’, that would simply turn the tables (exactly equal productivity is unlikely).

  • http://www.cmp.uea.ac.uk/~jrk Richard Kennaway

    Carl: Ok, but your post was just the spur to my commenting. Robin and Eliezer, consider the question directed to you also. Perhaps this is intended as a public exercise in rationally addressing disagreement, and the subject matter is not the point?

    Don: I’ve been reading O.B. for most of its existence, and have read everything from the very start (and I might as well say here, it’s the most intellectually satisfying blog I have ever read: even the comments are mostly worth reading). Yes, I remember Eliezer’s digression into QM, but he did eventually tie it back to o.b.

  • http://t-a-w.blogspot.com/ Tomasz Wegrzanowski

    If ems are based on humans there’s no need to worry. All of human history since the Neolithic has consisted of most humans being abused by the few rich and powerful and taking it. Most slaves, serfs, and otherwise poor people don’t even think about overthrowing the system; they just accept their situation and try to get as much for themselves as possible.

    There was even an estimate somewhere on Wikipedia that the whole Holocaust cost Nazis a few hundred of their own dead – millions just complied with being genocided. In less extreme situations there would be even less resistance.

    We see less of such abuse right now, but mostly because there’s such an abundance of almost everything, not because human nature somehow changed. If one day scarcity returns people won’t even remember all the equality stuff.

  • Lightwave

    Wouldn’t people start feeling useless and worthless if ems are better at everything?

  • Anonymous Coward

    >”I don’t think your idea of mass slaughter of the inefficient is realistic for the simple reason that we don’t do it now.”

    >It’s not my idea: Robin explicitly says that in his preferred scenario cheap ems will rent their hardware and be subject to lethal eviction. I am critiquing his scenario, arguing that it would be more unstable than Robin seems to believe.

    Apologies for misinterpreting this. I would assume that any EM smart enough to recognise this problem will not seek to rent its hardware or energy supply, but rather, to buy it. It will also co-operate with other EMs to pay a large number of Lobbyist programs to lobby government against allowing unrestricted proliferation of newer better AIs.

    In other words I’m saying EMs will become GMs. :-)

    >”Besides, with an army of AIs, we can expect conquering the universe, almost free energy etc. to come along too.”
    >Malthusian growth with replication times measured in hours or days will exhaust available resources very quickly, and lightspeed limits restrict acquisition of resources via space colonization to a geometric (cubic) pattern.

    What can’t be allowed won’t be allowed. If EMs are as smart as the people on this thread then presumably they won’t allow Malthusian growth.

    Lightwave:> Wouldn’t people start feeling useless and worthless if ems are better at everything?

    As an above-average AI researcher who reads Eliezer’s posts, I can say with some certainty, yes, they probably will :-)

    Anonymous.

  • Eriol

    If EMs can be forked, could they conceivably be merged?
    And intuitively, would this be equivalent to terminating one, both, or neither?

  • Carl Shulman

    Anonymous Coward,

    “Apologies for misinterpreting this. I would assume that any EM smart enough to recognise this problem will not seek to rent its hardware or energy supply, but rather, to buy it.”

    If this is done at the individual level, then its labor will be much more expensive than cheap renter ems, which will outcompete it. As Robin notes in his uploads paper, there will be evolutionary pressures for willingness to create copies who will face poor and risky prospects as renters. However, there will be much weaker pressures to make those ems willing to submit to mass eviction.

    “What can’t be allowed won’t be allowed. If EMs are as smart as the people on this thread then presumably they won’t allow Malthusian growth.”
    ‘Draconian measures,’ i.e. an em singleton.

    Eriol,

    Merging ems would require a deep understanding of the brain, whereas the emulation scenario assumes the lack of such understanding. If you have that level of understanding, you also get customized AIs.

  • Carl Shulman

    Tomasz,

    The majority of the German population, critically including the German military and SS, did not expect to be targeted in the Holocaust. With their acceptance of Nazi authority, Jewish/Roma/gay insurrection was visibly very unlikely to succeed.

  • Johnicholas

    Does this discussion assume that the best team (most productive team) consists of X copies of the best individual (most productive individual)?

    This is certainly arguable.

    Experts who make predictions, for example: two copies of the best weather predictor in the world will make the same predictions, and be no better than one. A second expert who doesn’t predict correctly as often as the best predictor, but who makes INDEPENDENT errors, is the one to add to your team.
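    The independent-errors point can be checked with a small simulation (a sketch; the error scales and the simple-averaging rule are my own assumptions):

```python
import random

# Averaging two copies of the best predictor changes nothing; averaging
# the best predictor with a weaker but independent one lowers error.
random.seed(0)

def mse(errors):
    return sum(e * e for e in errors) / len(errors)

n = 100_000
best   = [random.gauss(0, 1.0) for _ in range(n)]  # best expert's errors
weaker = [random.gauss(0, 1.5) for _ in range(n)]  # weaker, independent

duplicates = [(b + b) / 2 for b in best]                  # clone team
diverse    = [(b + w) / 2 for b, w in zip(best, weaker)]  # mixed team

# theory: MSE(diverse) ~ (1.0**2 + 1.5**2) / 4 = 0.8125 < 1.0 = MSE(best)
print(mse(duplicates) == mse(best))  # True: copies add nothing
print(mse(diverse) < mse(best))      # True: independent errors help
```

    The averaged errors of the mixed team have lower variance than the best expert alone, which is the standard ensemble argument for cognitive diversity.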

  • jb

    I know how I would keep a large number of computer-managed brains pliable and cooperative: I would keep them disconnected from the rest of the world, and I would establish a virtual reality for them to live and compete in. They might never realize that they were just EMs – they might think they’re real human beings, and live perfectly normal lives with plenty of innovation just because of the drive to create.

    And then I would unleash a zombie virus upon them and cackle maniacally from my dark throne!

  • Carl Shulman

    Johnicholas,

    The framework of ‘niches’ encompasses variation in idiosyncratic specialization and the pure benefits of cognitive diversity. The benefits of specialization and diversity mean that there will be many niches, but they will not protect against turnover in those niches.

    jb,

    Yes, em populations might be located on physically isolated hardware as a means of social control, although boxing without monitoring might lead to regrets.

  • Pingback: Overcoming Bias : When Life Is Cheap, Death Is Cheap