Billion Dollar Bots

Robin presented a scenario in which whole brain emulations, or what he calls "bots," come into being.  Here is another:

Bots are created from hardware and software.  The higher the quality of one input, the less you need of the other.  Hardware, especially with cloud computing, can be quickly reallocated from one task to another.  So the first bot might run on hardware worth billions of dollars.

The first bot's creators would receive tremendous prestige and a guaranteed place in the history books.  So once it becomes possible to create a bot, many firms and rich individuals will be willing to create one even if doing so causes them a large financial loss.

Imagine that some group has $300 million to spend on hardware and will spend it as soon as $300 million becomes enough to create a bot.  The best way to spend this money would not be to buy a $300 million computer but to rent $300 million worth of off-peak computing power.  If the group needed only 1,000 hours of computing power (which it need not buy all at once) to prove that it had created a bot, then by renting the group could command roughly $3 billion worth of hardware for those 1,000 hours.
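The rent-vs-buy arithmetic above can be sketched in a few lines.  This is a back-of-the-envelope illustration only: the three-year amortization period and the rental markup are invented assumptions chosen to reproduce the roughly 10x leverage the paragraph asserts; only the $300 million budget, the 1,000 hours, and the ~$3 billion result come from the text.

```python
# Back-of-the-envelope sketch of the rent-vs-buy argument.
# Amortization period and markup are illustrative assumptions.

budget = 300e6        # dollars available (from the post)
hours_needed = 1_000  # hours needed to demonstrate a bot (from the post)

# Assume the hardware is amortized over 3 years and off-peak time is
# otherwise idle, so an hour of rental on $1 of hardware costs roughly
# markup / (hours in 3 years).
amortization_hours = 3 * 365 * 24   # about 26,280 hours
markup = 2.6                        # assumed vendor margin on off-peak rental

def hardware_value_rentable(budget, hours, amort_hours, markup):
    """Dollar value of hardware whose off-peak time the budget can rent."""
    hourly_rate_per_dollar = markup / amort_hours  # rent per hour per $1 of hardware
    return budget / (hours * hourly_rate_per_dollar)

value = hardware_value_rentable(budget, hours_needed,
                                amortization_hours, markup)
print(f"${value / 1e9:.1f}B of hardware for {hours_needed} hours")
```

Under these assumed numbers the $300 million budget rents roughly $3 billion of hardware for the 1,000 hours; the point is the order of magnitude, not the exact markup.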

It’s likely that the first bot would run very slowly.  Perhaps it would take the bot 10 real seconds to think as much as a human does in one second.

Under my scenario the first bot would be wildly expensive.  But because of Moore’s law, once the first bot was created everyone would expect the cost of bots to eventually fall low enough for them to radically remake society.

Consequently, years before bots actually come to dominate the economy, many people will come to expect that bots will do so within their lifetimes.  Bot expectations alone will radically change the world.

I suspect that after it becomes obvious that we could eventually create cheap bots, world governments will devote trillions to bot Manhattan projects.  The expected benefits of winning the bot race will be so high that it will be in the self-interest of individual governments not to worry too much about bot friendliness.

The U.S. and Chinese militaries might fall into a bot prisoners’ dilemma: both would prefer an outcome in which everyone slowed down bot development to ensure friendliness, yet each nation is individually better off (regardless of what the other military does) taking huge chances on friendliness so as to increase its probability of winning the bot race.
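The dilemma can be made concrete with a toy payoff matrix.  The numbers below are invented purely for illustration; only their ordering matters, and the "slow"/"fast" strategy names are my labels for careful versus risky development, not terms from the post.

```python
# Toy payoff matrix for the bot prisoners' dilemma sketched above.
# Payoff values are illustrative assumptions; only their ordering matters.
# Keys: (US strategy, China strategy) -> (US payoff, China payoff).
payoffs = {
    ("slow", "slow"): (3, 3),  # both careful: best joint outcome
    ("fast", "slow"): (4, 0),  # racer likely wins, gambling on friendliness
    ("slow", "fast"): (0, 4),
    ("fast", "fast"): (1, 1),  # both race: worst joint risk
}

def best_response(options, opponent_choice, player):
    """Strategy maximizing this player's payoff, holding the opponent fixed."""
    idx = 0 if player == "US" else 1
    def payoff(mine):
        key = (mine, opponent_choice) if player == "US" else (opponent_choice, mine)
        return payoffs[key][idx]
    return max(options, key=payoff)

# Whatever China does, the US does better racing (4 > 3 and 1 > 0),
# and symmetrically for China -- so (fast, fast) is the equilibrium
# even though (slow, slow) is jointly preferred.
assert best_response(["slow", "fast"], "slow", "US") == "fast"
assert best_response(["slow", "fast"], "fast", "US") == "fast"
```

"Fast" strictly dominates for each player under these assumed payoffs, which is exactly the structure that makes a mutually preferred slowdown hard to sustain without enforcement.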

My hope is that the U.S. will have such a tremendous advantage over China that the Chinese don’t try to win the race and the U.S. military thinks it can afford to go slow.  But given China’s relatively high growth rate I doubt humanity will luck into this safe scenario.

  • Like Eliezer and Carl, you assume people will assume they are in a total war and act accordingly. There need not be a “race” to “win.” I shall have to post on this soon I guess.

  • “But given China’s relatively high growth rate I doubt humanity will luck into this safe scenario.”

    We’re in luck, then. A large chunk of China’s growth rate is creative accounting (I’m not talking about lower accounting standards) and not real. The only reason the Chinese yuan has stayed so cheap for so long is that the Chinese government has been buying up US Treasuries – if that loss is netted out, the Chinese growth rate would look much less spectacular.

  • Tim Tyler

    This seems to be “whole brain emulation week” at Overcoming Bias. I think I’ll have a siesta – until the madness calms down.

  • “The expected benefits of winning the bot race will be so high that it would be in the self-interest of individual governments to not worry too much about bot friendliness.”

    Generally, this is a fallacy (unless you are referring to the pathological shortsightedness or wrong goals of institutions – how it is rather than how it should be). The problem of Friendliness is that if you don’t have it, you eventually lose everything. There is no instrumental advantage that justifies total failure in the long run. If you know that it’ll turn out OK, you already have a FAI. An uploaded FAI research group that is known to have a chance of producing a FAI is itself a FAI design. Humanity having a chance of solving the problem is a FAI design. (Although these are not useful abstractions – you can’t make anything from them.)

    On the other hand, it’s not obvious how the development of literal uploads is relevant to the problem of Friendliness. The burden of solving this problem just passes from meat people to uploads, and likewise for the timescale. Progress accelerates, but I don’t see how it predictably influences where the path eventually leads.

  • I have heard the following, but would like to know if it is true: Synapse by synapse, the neural system of the C. Elegans worm has been mapped out, and yet there is no Worm-Bot. Is that right?

    It would seem to me to be only a moderately-complicated project to build a very detailed virtual Sim-Worm universe with a decent model of the worm’s body and squishy environment, then neuron models could be tested to see if they produce realistic behavior.

    There seems to be an assumption among most of the upload/bot proponents that some sort of fairly simple neuron model will be adequate for reproducing brain behavior, but details like the extreme sensitivity of brains to tiny concentrations of psychoactive chemicals make me wonder whether it’s true — and C. Elegans would seem to be a decent test case. I can imagine that a simple neuron model might be enough to capture whatever “essential” computing is done by the brain, but if it does not have perfect fidelity then I think we have to understand the abstract details of Mind that arise from neural activity in order to adjust the architecture to compensate for functional differences between neural models and real neurons. This assumes that such concise abstractions exist, and puts their elucidation right in the middle of the roadmap.

    Surely people must be testing their neural models in this way. What are the results so far?

  • Robin – in your response post please consider asking “What would John von Neumann do?” He advocated a first strike attack on the Soviet Union.

    Terry – Even given creative accounting China’s growth rate is still way above ours.

    Vladimir – You are assuming that the probability of the bots being friendly is either zero or one. I’m assuming that the U.S. military might have a choice between something like (fast development but 20% chance not friendly) and (slow but 1% chance not friendly). If slow means China wins, it might be very rational to go with fast, especially if China is using the same logic and so picking fast itself.

    Derekz – I don’t know but would be very interested in finding out.

  • Derekz, you’ve just reinvented the Nematode Upload Project (which I believe is more proposed than implemented, at this point).

  • Carl Shulman
  • Matt

    Why not have the first bots run as a combination of wetware/hardware/software? I’m sure there would be ethical objections, but it would be a good way to bootstrap more quickly to the next hardware/software phase. It would be easier to keep it “hushed” in a laboratory if it wasn’t productized.

  • Carl Shulman


    I don’t follow. What benefit follows from the (what) wetware?

  • frelkins

    Please see the WBE roadmap pp.70-73 for an in-depth discussion of current simulations. For issues with c. elegans, see pp. 44-45, and the body simulation section p.74.

  • Tim Tyler

    Any progress since this, in 2002?

    “What is the current status of the Nematode Upload Project? Is it still active?”

    “The project is effectively suspended indefinitely (a euphemism for dead).”

  • Tom Breton

    The problem of Friendliness is that if you don’t have it, you eventually lose everything. There is no instrumental advantage that justifies total failure in the long run.

    That’s true, Vladimir. If the history of software and institutions is any guide, their response is likely to be to plan to control it, persuade themselves that the plan is foolproof, and proceed anyways.

    I think I have found the Whole Brain Emulation Roadmap, but page 44 is X-ray microscopy and page 45 electron microscopy.

    C. Elegans gets a few mentions: on page 22 it seems to be on the to-do list.

  • frelkins


    The cite is correct – read the full sections on through p. 46, focusing especially on SSTEM. Read with care: pieced together, the entire scenario for how to plausibly do c. elegans is scattered throughout the WBE roadmap.