Emulations Go Foom

Let me consider the AI-foom issue by painting a (looong) picture of the AI scenario I understand best, whole brain emulations, which I’ll call “bots.”  Here goes.

When investors anticipate that a bot may be feasible soon, they will estimate their chances of creating bots of different levels of quality and cost, as a function of the date, funding, and strategy of their project.  A bot more expensive than any (speedup-adjusted) human wage is of little direct value, but exclusive rights to make a bot costing below most human wages would be worth many trillions of dollars.

It may well be socially cost-effective to start a bot-building project with a 1% chance of success when its cost falls to the trillion dollar level.  But not only would successful investors probably gain only a small fraction of this net social value, it is also unlikely that any investor group able to direct a trillion dollars could be convinced the project was feasible – there are just too many smart-looking idiots around making crazy claims.

But when the cost to try a 1% project fell below a billion dollars, dozens of groups would no doubt take a shot.  Even if they expected the first feasible bots to be very expensive, they might hope to bring that cost down quickly.  Even if copycats would likely profit more than they, such an enormous prize would still be very tempting.

The first priority for a bot project would be to create as much emulation fidelity as it could afford, enough to achieve a functioning emulation, i.e., one you could talk to and so on.  Few investments today are allowed a decade of red ink, and so most bot projects would fail within a decade, their corpses warning others about what not to try.  Eventually, however, a project would succeed in making an emulation that is clearly sane and cooperative.

How close would its closest competitors then be?  If there are many very different plausible approaches to emulation, each project may take a different approach, forcing other projects to retool before copying a successful approach.  But enormous investment would be attracted to this race once news got out about even a very expensive successful emulation.  Since I can’t imagine that there are that many different emulation approaches, it is hard to see how the lead project could be much more than a year ahead.

Besides hiring assassins or governments to slow down their competition, and preparing to market bots soon, at this point the main task for the lead project would be to make their bot cheaper.  They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane.  I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find.  While a few key insights would allow large gains, most gains would come from many small improvements.

Some project would start selling bots when their bot cost fell substantially below the (speedup-adjusted) wages of a profession with humans available to scan.  Even if this risked more leaks, the vast revenue would likely be irresistible.  This revenue might help this group pull ahead, but this product would not be accepted in the marketplace overnight.  It may take months or years to gain regulatory approval, to see how to sell it right, and then for people to accept bots into their worlds, and to reorganize those worlds to accommodate bots.

The first team to achieve high fidelity emulation may not be the first to sell bots; competition should be fierce and leaks many.  Furthermore, the first to achieve marketable costs might not be the first to achieve much lower costs, thereby gaining much larger revenues.  Variation in project success would depend on many factors.  These include not only who followed the right key insights on high fidelity emulation and implementation corner cutting, but also on abilities to find and manage thousands of smaller innovation and production details, and on relations with key suppliers, marketers, distributors, and regulators.

In the absence of a strong world government or a powerful cartel, it is hard to see how the leader could be so far ahead of its nearest competitors as to “take over the world.”  Sure the leader might make many trillions more in profits, so enriching shareholders and local residents as to make Bill Gates look like a tribal chief proud of having more feathers in his cap.  A leading nation might even go so far as to dominate the world as much as Britain, the origin of the industrial revolution, once did.  But the rich and powerful would at least be discouraged from capricious devastation the same way they have always been, by self-interest.

With a thriving bot economy, groups would continue to explore a variety of ways to reduce bot costs and raise bot value.  Some would try larger reorganizations of bot minds.  Others would try to create supporting infrastructure to allow groups of sped-up bots to work effectively together to achieve sped-up organizations and even cities.  Faster bots would be allocated to priority projects, such as attempts to improve bot implementation and bot inputs, such as computer chips.  Faster minds riding Moore’s law and the ability to quickly build as many bots as needed should soon speed up the entire world economy, which would soon be dominated by bots and their owners.

I expect this economy to settle into a new faster growth rate, as it did after previous transitions like humans, farming, and industry.  Yes there would be a vast new range of innovations to discover regarding expanding and reorganizing minds, and a richer economy will be increasingly better able to explore this space, but as usual the easy wins will be grabbed first, leaving harder nuts to crack later.  And from my AI experience, I expect those nuts to be very hard to crack, though such an enormously wealthy society may well be up to the task.  Of course within a few years of more rapid growth we might hit even faster growth modes, or ultimate limits to growth.

Doug Engelbart was right that computer tools can improve computer tools, allowing a burst of productivity by a team focused on tool improvement, and he even correctly saw the broad features of future computer tools.  Nevertheless Doug could not translate this into team success.  Inequality in who gained from computers has been less about inequality in understanding key insights about computers, and more about lumpiness in cultures, competing standards, marketing, regulation, etc.

These factors also seem to me the most promising places to look if you want to reduce inequality due to the arrival of bots.  While bots will be a much bigger deal than computers were, inducing much larger inequality, I expect the causes of inequalities to be pretty similar.  Some teams will no doubt have leads over others, but info about progress should remain leaky enough to limit those leads.  The vast leads that life has gained over non-life, and humans over non-humans, are mainly due I think to the enormous difficulty of leaking innovation info across those boundaries.  Leaky farmers and industrialists had far smaller leads.

Added: Since comments focus on slavery, let me quote myself:

Would robots be slaves? Laws could conceivably ban robots or only allow robots “born” with enough wealth to afford a life of leisure. But without global and draconian enforcement of such laws, the vast wealth that cheap robots offer would quickly induce a sprawling, unruly black market. Realistically, since modest enforcement could maintain only modest restrictions, huge numbers of cheap (and thus poor) robots would probably exist; only their legal status would be in question. Depending on local politics, cheap robots could be “undocumented” illegals, legal slaves of their creators or owners, “free” minds renting their bodies and services and subject to “eviction” for nonpayment, or free minds saddled with debts and subject to “repossession” for nonpayment.  The following conclusions do not much depend on which of these cases is more common.

  • Carl Shulman

    “In the absence of a strong world government or a powerful cartel, it is hard to see how the leader could be so far ahead of its nearest competitors as to “take over the world.”

    The first competitor uses some smart people with common ideology and relevant expertise as templates for its bots. Then, where previously there were thousands of experts with relevant skills to be hired to improve bot design, there are now millions with initially exactly shared aims. They buy up much of the existing hardware base (in multiple countries), run copies at high speed, and get another order of magnitude of efficiency or so, while developing new skills and digital nootropics. With their vast resources and shared aims they can effectively lobby and cut deals with individuals and governments world-wide, and can easily acquire physical manipulators (including humans wearing cameras, microphones, and remote-controlled bombs for coercion) and cheaply monitor populations.

    Copying a bot template is an easy way to build cartels with an utterly unprecedented combination of cohesion and scale.

  • Cameron Taylor

    “In the absence of a strong world government or a powerful cartel, it is hard to see how the leader could be so far ahead of its nearest competitors as to “take over the world.” Sure the leader might make many trillions more in profits, so enriching shareholders and local residents as to make Bill Gates look like a tribal chief proud of having more feathers in his cap. A leading nation might even go so far as to dominate the world as much as Britain, the origin of the industrial revolution, once did. But the rich and powerful would at least be discouraged from capricious devastation the same way they have always been, by self-interest.”

    What the? Are you serious? Are you talking about self replicating machines of >= human intelligence or tamagochi?

    I must concur with Carl Shulman here. It seems Robin has spent too much time in the economist cult. Self interest is powerful but it is not a guardian angel intent on making all humans and their robot overlords play nice.

    10,000 physicist bots acting cooperatively in a way human egos and self interest could never match. What would they invent? Perhaps a planet wide system of EMP devices? Maybe some superior shielding to go with it? How about a suitably outfitted underground bunker? POP! All the serious electronic competition is fried. A few hundred protected bots emerge from the bunker. Within 2 years they have the world for themselves and possibly their human ‘masters’. There is nothing capricious about that self-interest. In fact, it is far more humane than any other attempt at world conquest, unless you consider the loss of ‘emulated life’.

  • Carl Shulman

    “A leading nation might even go so far as to dominate the world as much as Britain, the origin of the industrial revolution, once did.”

    A leading nation, with territorial control over a large fraction of all world computing hardware, develops brain emulation via a Manhattan Project. Knowing the power of bots, only carefully selected individuals, with high intelligence, relevant expertise, and loyalty, are scanned. The loyalty of the resulting bots is tested exhaustively (copies can be tested to destruction, their digital brains scanned directly, etc), and they can be regularly refreshed from old data, and changes carefully tested for effects on motivation.

    Server farms are rededicated to host copies of these minds at varying speeds. Many take control of military robots and automated vehicles, while others robustly monitor the human population. The state is now completely secure against human rebellion, and an attack by foreign powers would mean a nuclear war (as it would today). Meanwhile, the bots undertake intensive research to improve themselves. Rapid improvements in efficiency of emulation proceed from workers with a thousandfold or millionfold speed-up, with acquisition of knowledge at high speeds followed by subdivision into many instances to apply that knowledge (and regular pruning/replacement of undesired instances). With billions of person-years of highly intelligent labor (but better, because of the ability to spend computational power on both speed and on instances) they set up rapid infrastructure after a period of days and extend their control to the remainder of the planet.

    The bots have remained coordinated in values through regular reversion to saved states, and careful testing of the effects of learning and modification on their values (conducted by previous versions) and we now have a global singleton with the values of the national project. That domination is far more extreme than anything ever achieved by Britain or any other historical empire.

  • Carl Shulman

    “are mainly due I think to the enormous difficulty of leaking innovation info across those boundaries.”

    Keeping some technical secrets for at least a few months is quite commonly done (I think it was Tim Tyler who mentioned Google and Renaissance), and militaries have kept many secrets for quite long periods of time when the people involved supported their organizational aim (it was hard to keep Manhattan Project secrets from the Soviet Union because many of the nuclear scientists supported Communism, but counterintelligence against the Nazis was more successful).

  • Manon de Gaillande

    Hello? You’re talking whole brain emulations here; you’re talking *people*. Do you want private companies to make and sell people? Slavery has huge social consequences, not only to the slaves, so if slaves can be mass-produced more easily than fleshy humans and you have direct control over their brains – hell, I have no idea what’d happen, but it doesn’t look good.

  • Jon R

    I’m disappointed to see that your future model of the Singularity involves slavery, Robin. Even if the initial cost of a person is high and the marginal cost is zero, the person is still a person, not a chattel. I’m wondering why you chose the word “bots” to refer to uploaded humans — it seems a little dehumanizing.

    For my own part, I’m interested in whole brain emulation for transhumanist reasons such as immortality. I’d be willing to bet that within a generation or two, there will be a lot of very rich people with similar goals.

  • Mark

    Is it slavery, though? Even if the emulated minds have been adapted to voluntarily perform whatever it is they were intended to do? Even if the “slaveholders” have made themselves as many orders of magnitude more intelligent than the “slaves” as modern humans are over livestock? Well, it might still be so, but we’re never going to get any answers if we retain naive definitions of “human” and “mind” and “slave.” “Human” in particular is a word that will cease to have any special significance if its modern meaning persists.

    Democracy is another institution that is threatened by this. Presently, the only reason democracy is considered honest is because manufacturing a vote is prohibitively expensive, and during the eighteen years of manufacture the units have a defect rate in the vicinity of 50%. What happens when they get cheaper and more reliable?

  • Tim Tyler

    Leaving aside the whole issue of how these proposed “bots” are supposed to compete with existing synthetic minds for the moment, are they people? By which I mean, can they vote? and are they protected by human rights treaties?

    If so, surely the mass of voting humanity wouldn’t put up with the obvious threat of being displaced by machine people. Such agents would rapidly be banned – in favour of more intrinsically human-friendly devices that do not have the right to vote down humans. Only if you could somehow rapidly upload half of humanity would this scenario stand a chance of being realized in a democracy.

    Or are the “bots” enslaved brains? In which case, if these are emulated human brains, isn’t this scenario an intolerable moral holocaust, with most human minds descended into slavery?

    If humanity is going to enslave its immediate offspring in the short term – as seems inevitable – IMO, the very least we can do for them is to make sure that they don’t object to it too much.

  • Manon de Gaillande

    What do you mean, “Are they people?”? You take a human and copy their brain exactly; of course it’s a person! If you build an exact copy of a bird, of course it flies!

    Is it slavery? Well, I wouldn’t be so sure if we built the willing-slave brain from scratch; but if you take a normal human (who happens to run on atypical hardware) and modify their thought processes to make them willing slaves, you’ve importantly altered them in a way the original person didn’t want – you’ve enslaved this person.

    And if the modification involves removing consciousness (while keeping intelligence – unlikely on human brains), you’ve killed the person. Though if I were sure the brainpower would be used to help humanity (as opposed to making profit with little regard for ethics), I’d definitely agree to commit that type of suicide.

  • luzr

    “Are they people?”

    I am glad we finally got to this issue, it has a lot of implications.

    Speaking of which, what do you think about AIs created by Eliezer’s ‘not an emulation’ approach? Are they people too?

    If I make a program that successfully passes the Turing test, do I have a right to switch the computer off? Or to delete data?

  • Cameron Taylor

    Ok, and once you’ve finished dissolving that question we can get back to something real.

  • http://jamesdmiller.blogspot.com/ James D. Miller

    Once bots appeared feasible the U.S. military would probably allocate hundreds of billions of dollars to their development. If the U.S. feared that China might win a bot race, the U.S. might well spend trillions of dollars on bot development.

  • http://www.spaceandgames.com Peter de Blanc

    Manon de Gaillande said: Do you want private companies to make and sell people?

    Jon R said: I’m disappointed to see that your future model of the Singularity involves slavery, Robin.

    Just because someone predicts something does not mean they are advocating it. This is basic rationality, and the failure to understand this holds people back from understanding, e.g. evolutionary biology.

  • http://hanson.gmu.edu Robin Hanson

    Manon and others, I’ve added to the post on slavery. I doubt there are quickly findable modifications that would preserve bot productivity while making them so docile no one could object to enslaving them. I expect the profit pressure to field bots as soon as they were productive would be irresistible.

    Jon, I’m sure they will have some short name other than “human.” If not “bots”, how about “ems”?

    Cameron, I didn’t say self-interest would make leading powers “play nice.” I said it would limit how not-nice they might be.

    James, the US military might well dominate research done well before it makes profit sense to investors. But they would dominate once investors smelled blood only if public opinion supported such a war state.

    Carl, I didn’t say secrets are never kept, I said human projects leak info lots more than humans did to chimps. If bot projects are mainly seeking profit, initial humans to scan will be chosen mainly based on their sanity as bots and high-wage abilities. These are unlikely to be pathologically loyal. Ever watch twins fight, or ideologues fragment into factions? Some would no doubt be ideological, but I doubt early bot copies of them will be cooperative enough to support strong cartels. And it would take some time to learn to modify human nature substantially. It is possible to imagine how an economically powerful Stalin might run a bot project, and it’s not a pretty sight, so let’s agree to avoid the return of that prospect.

  • Johnicholas

    I think “bot” is an etymologically loaded term, deriving from “robot”, which (according to wikipedia) comes from “robota” meaning serf labor, drudgery.

    By using it, Robin makes salient the potential economic advantages of owning (or having some contract with) such a whole-brain emulation.

    I think the moral thing to do (given that the future is changeable) is to repeatedly emphasize that “an emulated person is a person”. Robin, a smart and influential futurist, could improve the future by including a paragraph explaining why he uses the term “person” for whole-brain emulation, rather than “bot”, “em”, or some other “not us” term.

  • Tim Tyler

    I expect the profit pressure to field bots as soon as they were productive would be irresistible.

    …whereas I and a number of other researchers think there will be little or no financial incentive to build such entities, since by the time they arrive, the marketplace will have long ago been saturated by engineered agents with far greater capabilities.

    You say that you think some robots will be free minds. If there are robot free minds, it seems obvious that unmodified-humanity will fairly shortly go up against the wall.

    Asimov foresaw this particular issue long ago. Like him, I think this issue will be recognised by the humans before it happens, and that they will prohibit unfettered robot minds, in order to postpone their own extinction. The idea that economic incentives will force people to use free robot minds assumes there are no less-dangerous alternatives – and is highly dubious, IMHO.

  • Will Pearson

    I suspect if brain emulation comes before AI that humans code, that we will mainly use partial brain emulations in the economy. That is, emulations of brains without certain parts, for ethical reasons and reasons of control.

  • Savage

    With the development of molecular nanotechnology, which could very rapidly follow, or even precede AI/emulation/superhuman intelligence, I don’t know if your (economic) ways of thinking will make nearly as much sense. It is imaginable that the first superhuman intelligence will push technology to its physical limits.

  • Savage

    Hence the concept of “No matter what its initial disadvantage, a system with a faster growth rate eventually wins.”

  • http://don.geddis.org/ Don Geddis

    I suspect the difference in intuitions between Robin and Eliezer comes from which singularity scenario leaps to mind more readily.

    Robin’s focus on uploaded humans leads to the expectation that society won’t be much different then than it is now. After all, how different is this really from the change in humanity from 1800 to 2000? Robin’s “bots” allow for a cheap and rapid expansion of the human population. And perhaps for some minor improvements in cognition. But from 1800 to 2000 we also had a dramatic expansion in population, and also great improvements in education. And sure, some things came out of it: computers, atomic weapons, etc.

    But really, the world isn’t that different from two centuries ago. Not in the way envisioned by Vernor Vinge (or Eliezer) when coining the word “singularity”.

    But what about programmable AI? See, the difference is that “mere” uploaded humans are basically a black box. The science doesn’t really understand how the humans work, and thus it doesn’t really understand how the bots work. Hence, great limits in how much improvement you can expect.

    But a designed artifact is a different story. Once science designs a device with human-level performance, and then you turn that device onto the problem of its own design, suddenly the limits in cognition seem to disappear. Hence, the singularity.

    Robin: you may be right that uploaded minds happen faster than designed AI. You may be right that uploaded minds won’t lead to an out-of-control singularity. But are you sure you’ve addressed the singularity problems with designed AIs?

  • Jef Allbright

    While heuristics such as “personhood” and “rights” are useful within context, in the bigger picture there is no fundamental distinction between exploitation of humans, emulated humans, chimps, dolphins, dogs, chickens or artificial agents of various degrees of “sentience.” In the more coherent moral calculus, it’s not about personhood, but agency exploiting sources of synergistic advantage…of course that’s only a fragment of the formula.

  • Carl Shulman

    “If bot projects are mainly seeking profit, initial humans to scan will be chosen mainly based on their sanity as bots and high-wage abilities.”

    That’s a big if. Unleashing ‘bots’/uploads means setting off the ‘crack of a future dawn,’ creating a new supermajority of sapients, driving wages below human subsistence levels, completely upsetting the global military balance of power, and forcing either disenfranchisement of these entities or a handoff of political power in democracies. With rapidly diverging personalities, and bots spread across national borders, it also means scrabbling for power (there is no universal system of property rights), and war will be profitable for many states. Any upset of property rights will screw over those who have not already been uploaded or whose skills are exceeded by those already uploaded, since there will be no economic motivation to keep them alive.

    I very much doubt that any U.S. or Chinese President who understood the issues would fail to nationalize a for-profit firm under those circumstances. Even the CEO of an unmolested firm about to unleash bots on the world would think about whether doing so will result in the rapid death of the CEO and the burning of the cosmic commons, and the fact that profits would be much higher if the bots produced were more capable of cartel behavior (e.g. close friends/family of the CEO, with their friendship and shared values tested after uploading).

    “It is possible to imagine how an economically powerful Stalin might run a bot project, and it’s not a pretty sight, so let’s agree to avoid the return of that prospect.”

    It’s also how a bunch of social democrats, or libertarians, or utilitarians, might run a project, knowing that a very likely alternative is the crack of a future dawn and burning the cosmic commons, with a lot of inequality in access to the future, and perhaps worse. Any state with a lead on bot development that can ensure the bot population is made up of nationalists or ideologues (who could monitor each other) could disarm the world’s dictatorships, solve collective action problems like the cosmic commons, etc, while releasing the info would hand the chance to conduct the ‘Stalinist’ operation to other states and groups.

    “These are unlikely to be pathologically loyal. Ever watch twins fight, or ideologues fragment into factions? Some would no doubt be ideological, but I doubt early bot copies of them will be cooperative enough to support strong cartels. And it would take some time to learn to modify human nature substantially.”

    They will know that the maintenance of their cartel for a time is necessary to avert the apocalyptic competitive scenario, and I mentioned that even without knowledge of how to modify human nature substantially there are ways to prevent value drift. With shared values and high knowledge and intelligence they can use democratic-type decision procedures amongst themselves and enforce those judgments coercively on each other.

  • Savage

    “See, the difference is that “mere” uploaded humans are basically a black box. The science doesn’t really understand how the humans work, and thus it doesn’t really understand how the bots work. Hence, great limits in how much improvement you can expect.”

    Yeah, that’ll last. /sarcasm

  • Carl Shulman

    “Cameron, I didn’t say self-interest would make leading powers “play nice.” I said it would limit how not-nice they might be.”
    What would be the self-interested reason for a leading power with an edge in bot technology and some infrastructure not to kill everyone else and get sole control over our future light-cone’s natural resources?

  • Aron

    This isn’t going to be sufficient. There isn’t enough time, energy, or bandwidth to debate hundreds of small conditional probabilities all chained together. Either the case for and against friendliness can be made with a fairly terse and elegant illustration or the case won’t be made convincingly at all. That’s where the intelligence on hand has to be applied: compressing the chain of logic to its simplest, strongest statements.

  • Carl Shulman

    “And from my AI experience, I expect those nuts to be very hard to crack, though such an enormously wealthy society may well be up to the task.”

    When does hand-coded AI come into the picture here? Does your AI experience tell you that if you could spend 100 years studying relevant work in 8 sidereal hours, and then split up into a million copies at a thousandfold speedup, you wouldn’t be able to build a superhuman initially hand-coded AI in a sidereal month? Likewise for a million von Neumanns (how many people like von Neumann have worked on AI thus far)? A billion? A trillion? A trillion trillion? All this with working brain emulations that can be experimented upon to precisely understand the workings of human minds and inform the hand-coding?
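
    A rough back-of-envelope sketch, in Python, of the scale these figures imply. The study speedup and copy count are the ones named above; treating the elapsed time as exactly one sidereal month, and the rounding, are added assumptions:

        # Illustrative arithmetic only, using the figures quoted above.
        HOURS_PER_YEAR = 365.25 * 24

        # "100 years studying relevant work in 8 sidereal hours"
        study_speedup = 100 * HOURS_PER_YEAR / 8         # roughly a 110,000x speedup

        # "a million copies at a thousandfold speedup", run for one sidereal month
        copies, speedup, months = 1_000_000, 1_000, 1
        person_years = copies * speedup * months / 12.0  # roughly 83 million person-years

        print(f"implied study speedup: {study_speedup:,.0f}x")
        print(f"subjective labor in one month: {person_years:,.0f} person-years")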

    Also, there are a lot of idle mineral and energy resources that could be tapped on Earth and in the solar system, providing quite a number of additional orders of magnitude of computational substrate (raising the returns to improvements in mind efficiency via standard IP economics). A fully automated nanotech manufacturing base expanding through those untapped resources, perhaps with doubling times of significantly less than a week, will enhance growth with an intense positive feedback with tech improvements.

  • http://shagbark.livejournal.com Phil Goetz

    Let’s not get into the moral issues of bot slavery. It’s not one that this blog has a competitive advantage in addressing. The economics of bot slavery, maybe.

    Robin’s scenario assumes that bot intelligence will rise gradually to the human level. “The human level” is broad; you could get a lot of profit out of bots with “IQ around 80”, whereas self-improvement may not kick in until “IQ around 160”. (I am abusing the term “IQ”.)

    The critical question is, if Robin can build a consumer-grade bot with IQ 80 for $100,000, how much would it cost me to build a bot with IQ 160?

    You could argue that intelligence ~ log(cost). Many interesting problems are NP-complete, and solving them often takes exponential time and/or memory. If resources needed = e^n, where n is problem size, and you have resources = r, the biggest n you can solve is ln(r). This implies that it costs me $10,000,000,000 to build a bot of IQ 160. In that case, assuming Moore’s law, we may have decades between the first experimental $100,000,000 bot of IQ 70, and the first bot smart enough to self-improve faster than a human can improve it.

    However, there are often good polynomial approximations to NP-complete problems.

    I have argued that the human limitation of short-term memory to about S=5 items is strong evidence that the cost of short-term memory slots is very high. (Yes, 7±2 was the original figure; it was overestimated due to chunking by subjects.) This might not be due to brain mass restrictions; it might be due to algorithmic restrictions. The algorithms we use to think might easily scale exponentially in the amount of short term memory that we have.

    And intelligence might be a linear, or even a logarithmic, function of STM. Suppose resources needed = e^S and IQ = cS. Then if building an IQ 80 bot costs $100,000, building an IQ 160 bot costs $10,000,000,000. Suppose instead IQ = c log(S). Then building an IQ 160 bot costs about $10^58.
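
    A minimal sketch of these scalings in Python, anchored at the $100,000 IQ-80 bot above. The functional forms are the suppositions just described, not established facts:

        import math

        base_iq, base_cost = 80, 1e5                  # supposed IQ-80 bot at $100,000

        # (a) IQ proportional to log(cost): doubling IQ squares the cost
        cost_log = base_cost ** (160 / base_iq)       # $1e10

        # (b) cost = e^S with IQ linear in STM slots S: doubling IQ doubles S,
        #     which also squares the cost
        cost_linear_stm = base_cost ** 2              # $1e10

        # (c) cost = e^S with IQ logarithmic in S: doubling IQ squares S
        S80 = math.log(base_cost)                     # about 11.5
        cost_log_stm = math.exp(S80 ** 2)             # roughly the $10^58 figure above

        for label, cost in [("IQ ~ log(cost)", cost_log),
                            ("IQ linear in STM", cost_linear_stm),
                            ("IQ log in STM", cost_log_stm)]:
            print(f"{label:>18}: ${cost:.1e}")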

    On the other hand, Michael Vassar has made the point that humans seem to be a lot smarter than chimpanzees while having brains that are only 3 times as massive. The (brain/body mass ratio) ratio, which is more accepted as significant, is also 3. I could quibble with this – chimpanzees have very dense muscles, making the brain/body mass ratio useless for comparison; I would rather compare cortical area or energy usage. But it does make the disconcerting point that a suitably dramatic increase in effective intelligence did not require outrageous increases in hardware.

    On the other other hand, it could just be that the exponential component hadn’t really kicked in yet during that transition.

    The point is that the question of whether takeoff will be slow or sudden depends on how intelligence scales with resources, and we don’t know how intelligence scales with resources.

  • Aron

    “making the brain/body mass ratio useless for comparison; I would rather compare cortical area or energy usage”

    The most convincing argument for me on a fast takeoff is how quickly in evolutionary terms the smartest creature on the planet went from chimp to human. Also, how significantly intelligence can vary within the same species (from the perspective of someone with human intelligence). This all implies to me that on a plateau of IQs of unknown breadth, there is considerable leverage between changes in design and functional intelligence.

  • http://occludedsun.wordpress.com Caledonian

    Once it’s possible to upload human minds, we would reasonably expect understanding of how our minds operate to come quickly.

    After all, once minds can be copied and modified arbitrarily, the two largest obstacles to neuropsychology will be removed: we would then be able to observe the functioning of the brain-system in detail, and we could conduct whatever experiments we wished while leaving at least one copy of the mind in question undamaged.

  • Spambot

    “You could argue that intelligence ~ log(cost).”
    Not cogently: there’s no basis for doing so. This post is about brain emulations, and the computational costs of simulating IQ 80 or IQ 160 humans per neuron and on the whole should be similar.

    “Many interesting problems are NP-complete, and solving them often takes exponential time and/or memory. If resources needed = e^n, where n is problem size, and you have resources = r, the biggest n you can solve is ln(r). This implies that it costs me $10,000,000,000 to build a bot of IQ 160. In that case, assuming Moore’s law, we may have decades between the first experimental $100,000,000 bot of IQ 70, and the first bot smart enough to self-improve faster than a human can improve it.”
    Again, differences between human brains tell us that this is grossly wrong. Who would argue for this?

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Carl Shulman has said much of what needed saying.

    Robin: I’m sure they will have some short name other than “human.” If not “bots”, how about “ems”?

    Let’s go with “ems” (though what was wrong with “uploads”?)

    Whole brain emulations are not part of the AI family, they are part of the modified-human family with the usual advantages and disadvantages thereof: including lots of smart people that seemed nice at first all slowly going insane in the same way, difficulty of modifying the brainware without superhuman intelligence, unavoidable ethical difficulties, resentment of exploitation and other standard human feelings, etcetera.

    They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane. I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find.

    Leaving aside that you’re describing a completely unethical process – as de Blanc notes, prediction is not advocating, but some individual humans and governmental entities often at least try to avoid doing things that their era says is very wrong, such as killing millions of people – at the very least an economist should mention when a putative corporate action involves torture and murder –

    – several orders of magnitude of efficiency gains? Without understanding the underlying software in enough detail to write your own de novo AI? Suggesting a whole bird emulation is one thing, suggesting that you can get several orders of magnitude efficiency improvement out of the bird emulation without understanding how it works seems like a much, much stronger claim.

    As I was initially reading, I was thinking that I was going to reply in terms of ems being nonrecursive – they’re just people in silicon instead of carbon, and I for one don’t find an extra 8 protons all that impressive. It may or may not be realistic, but the scenario you describe is not a Singularity in the sense of either a Vingean event horizon or a Goodian intelligence explosion; it’s just more of the same but faster.

    But any technology powerful enough to milk a thousand-fold efficiency improvement out of upload software without driving those uploads insane, is powerful enough to upgrade the uploads. Which brings us to Cameron’s observation:

    Cameron: What the? Are you serious? Are you talking about self replicating machines of >= human intelligence or tamagochi?

    I am afraid that my reaction was much the same as Cameron’s. The prospect of biological humans sitting on top of a population of ems that are smarter, much faster, and far more numerous than bios while having all the standard human drives, and the bios treating the ems as standard economic valuta to be milked and traded around, and the ems sit still for this for more than a week of bio time – this does not seem historically realistic.

    Aron: This isn’t going to be sufficient. There isn’t enough time, energy, or bandwidth to debate hundreds of small conditional probabilities all chained together. Either the case for and against friendliness can be made with a fairly terse and elegant illustration or the case won’t be made convincingly at all. That’s where the intelligence on hand has to be applied: compressing the chain of logic to its simplest, strongest statements.

    I can’t claim this is a simple issue, but many of the complications are disjunctive rather than conjunctive. One provides a multitude of reasons to believe something that is not determined by any single argument, or multiple refutations none of which would singly be completely crushing.

  • explicator

    robin–
    these comments by carl are information-dense and clearly merit their own post(s) and replies. that style of more precise strategic reasoning is what eliezer will eventually need to present and robin will need to debate, to resolve the hard take-off dispute. i view the discussion between robin and eli thus far as setting the terms of the debate, defining the relevant abstractions, reducing inferential distances, etc. after we get through that process, the question we’re asking looks something like: “in the enormous exponential space of possible strategies once one has emulation-based AI or hand-written AI, do there exist strategies by which our light cone can be rapidly and dramatically transformed, and can these strategies actually be discovered and implemented by the relevant AI/human coalition?” given the structure of the problem, there is no need to search the whole space if we can actually provide or sketch an existence proof of one strategy type that would work with high probability. it looks like carl may have already done that, or is close to having done that. therefore, i would suggest fleshing out and focusing on this particular existence proof, because if it can’t be explained away, then hard take off looks likely, and i think that largely resolves the dispute. fair enough?

  • http://shagbark.livejournal.com Phil Goetz

    The most convincing argument for me on a fast takeoff is how quickly in evolutionary terms the smartest creature on the planet went from chimp to human. Also, how significantly intelligence can vary within the same species (from the perspective of someone with human intelligence). This all implies to me that on a plateau of IQs of unknown breadth, there is considerable leverage between changes in design and functional intelligence.

    That has bearing on how the IQ of AIs produced by a fixed-IQ AI researcher would increase over time. But not on the question of how much warning we will have before recursive self-improvement begins.

    Also, if there is a point in intelligence development where an exponential factor starts to predominate, then we expect to find the dominant species in any long-evolved ecology at an IQ just before where that exponential kicks in.

    “You could argue that intelligence ~ log(cost).”
    Not cogently: there’s no basis for doing so. This post is about brain emulations, and the computational costs of simulating IQ 80 or IQ 160 humans per neuron and on the whole should be similar.

    If you think that someone, one day, will upload an entire human brain into a simulation, with precise duplication of all neurons and synapses, with no one ever having done anything like it before, then yes.

    The difference between IQ 80 humans and IQ 160 humans is not the number of neurons they have. IQ 80 humans, and IQ 120 humans, are just IQ 200 or 300 humans with manufacturing defects. (Assuming that what you want is a brain in a vat.)

  • http://pdf23ds.net pdf23ds

    IQ 80 humans, and IQ 120 humans, are both just IQ 300 humans with manufacturing defects. Unless you’re not counting deleterious mutations or other genetic abnormalities as defects. And unless you meant to say you just weren’t sure whether humans max out around 200 or 300, all told.

  • Andrew Wilcox

    Robin, to give you my background, I majored in physics but I’m not a working scientist, I’m a computer programmer but not an AI researcher, my knowledge of economics is limited to a few popular books on the subject.

    I think the whole brain emulation approach to artificial general intelligence would be very hard (i.e. expensive), not all that powerful, and, perhaps, probably, not very dangerous, as your analysis indicates.

    Consider again the bird analogy: suppose to achieve artificial flight in 1900, we had decided to recreate (emulate in physical form) a bird. This would have required a massive capital investment, perhaps a trillion dollars in today’s dollars, a Manhattan Project scale effort, with no certainty of success. And yet, even after that massive investment, we would still have a flying machine with a bird’s limitations: not very fast, and not able to carry cargo or passengers. Perhaps with a further investment we could scale up the bird-like flying machine to something larger (want to fly on a roc to LA, anyone? :-), but probably not get it to go much faster.

    Actual artificial flight turned out to be pretty easy once we had an engine powerful and light enough; the Wright brothers’ primary contribution was a design that was able to steer the aircraft effectively and to maintain its equilibrium. [1]

    Human minds are very powerful in some ways (such as pattern recognition) and are profoundly limited in other ways (only around seven items that can be held simultaneously in short term memory, the extraordinary slowness and difficulty of transferring new information into the mind [learning]). An emulated human mind might be able to think faster (on faster hardware), as perhaps an artificial bird could be built larger, but would retain the profound limitations of a human mind.

    Biology researchers today are still discovering some of the aeronautical details of flight in bats and birds, and yet not knowing those details did not prevent us from creating flying machines that fly far faster than a bird. A meteorological supercomputer that provides weather forecasts is far simpler in design than a human brain, and yet is able to do those math calculations at a speed that could not be replicated by the entire human population working together.

    An algorithm for general intelligence could be far simpler than whole brain emulation, just as the aeronautics for building a plane is far simpler than knowing how to build a bird, or the design for a computer to make math calculations is far easier than understanding how people do math.

    Of course, even after the Wright brothers got their flying machine in the air, it still took a massive capital investment to create an airline industry. But what if that first step, getting a viable flying machine in the air, had been the dangerous one? The Wright brothers had a small team and were self-funded; they were able to take that first step on their own, no massive capital investment needed.

    An airplane by itself isn’t dangerous, though it may be dangerous in the hands of an enemy or an economic competitor. An airplane doesn’t self-replicate or improve on its own.

    But consider another analogy, something that does self-replicate. Suppose I’m a biochemist, perhaps working with viruses for an intended therapeutic purpose, and I have the misfortune to accidentally create a virus which is as easily transmitted from person to person as the common cold is and also happens to be fatal. We hope such a doomsday scenario is unlikely, but it is *conceivable*. No one person a hundred years ago could have managed to kill everyone, no matter what they did.

    A virus is dangerous because it self replicates. (If it killed only me, that would be sad for me, but that’s not an existential risk). The rest of the world might be able to do the research to come up with a vaccine, but if the research takes on the order of years and the virus spreads in the order of weeks, everyone could be dead first.

    A virus is dangerous enough to risk killing everyone, even though it does no planning. No anticipation. No modeling. No prediction. No calculation. No exploration of possible paths. No attempt at strategy.

    It would have been hard for us in 1880 to predict the economics of the airline industry. Would everyone be flying an airplane? Now we know what it takes to fly an airplane, we can see that even after a hundred years they’re still too expensive to own and operate for everyone to be flying one of their own (though not to share a flight on a commercial airline). But ahead of time? How could we predict the economics before knowing what kind of fuel an airplane would use, if it could be done at all?

    With hindsight, we’d be able to know how hard it might be to come up with a general intelligence algorithm, and what the minimal computing resources such an algorithm would need. A computer with the same raw computational power as a human brain would apparently be enough, since human brains are intelligent. How about my laptop? Yes? No? How could we possibly tell before knowing the algorithm?

    Imagine I’m a computer programmer, and I enjoy playing computer games, and I’m playing around with some algorithms that I think might be helpful in winning computer games. I’m not trying to create “intelligence”, I’m just looking for a good optimizer. Suppose I happen to stumble across a general optimizer algorithm, that is able to self-improve, and come up with better optimizing strategies, including how to further optimize its ability to come up with optimization strategies. How unlikely is that? Very unlikely? Extremely unlikely? Wouldn’t happen once in a million years of an entire world population of game programmers trying? How can we predict ahead of time, without knowing what the population of general optimizing algorithms is? All we know is that fifty years of AI research hasn’t hit upon one yet.

    As a game programmer, I’m not trying to write an algorithm to make me rich, or to create paperclips, or to accomplish anything; for now I’m just looking for an algorithm that will grow as much as it can, to overcome obstacles as quickly as it can, to discover and utilize resources as fast as it can.

    Such an algorithm running in the limited resources of my laptop might be far inferior to a human brain in some respects. A human brain has enormous computation power in being able to do visual pattern recognition, for example. The algorithm might however be more powerful than a human brain in some ways, not restricted to the minuscule seven items able to be considered in short term memory, able to calculate probabilities natively, able to absorb and process new information at full speed instead of taking years (of brain time) to learn new skills.

    Now, it may be extremely unlikely that such an algorithm could be found; it may be extremely unlikely that a laptop would be sufficiently powerful to host such an algorithm.

    OK, but grant the premise, suppose it is possible, suppose it happens. How is this then an existential risk?

    As a game programmer, I’m not expecting trouble, I don’t imagine what I’m doing could be dangerous, I’ve never heard or read about “keeping an AI in the box”. The algorithm is running in a program and can implement and execute new code; I could have written an interpreter but it is much faster and easier to run native code, and that has standard libraries to shell out to the operating system; if I had thought of it I could have tried to write the program to run in a “sandbox”, but I didn’t. So it escapes and takes over my laptop.

    If I’m sitting there maybe I’ll notice my laptop has “hung”, reboot it, and thus unknowingly save the world, but perhaps I’m asleep letting it run over night, or out to lunch.

    To the algorithm, there’s no difference between the space of the original program it was running in, my laptop, the Internet, or the “real world”, they’re all just resources that it is discovering, using as much as it can, growing as much as it can, removing or going around obstacles as much as it can.

    Once having taken over my laptop, it’s a short hop to the Internet. Today criminal organizations have remote control of millions of desktop computers to deliver spam or steal credit cards, so it wouldn’t be all that hard to exploit those same weaknesses and take control of a few hundred thousand or million computers.

    There’s an enormous difference between a person who has taken control of a million computers and the algorithm. A person, no matter how smart, can still only pay attention to one computer at a time, to overhear one conversation at a time. The algorithm doesn’t have this limitation. It can “pay attention” to all the information available to it simultaneously.

    Google can already provide reasonable translation between human languages, simply by looking at the correspondence between phrases in previously translated documents. Similarly, the algorithm notices correspondences between words people say (or type) and the observable world. Eventually it builds up a sufficient model to be able to “read” online documentation and science papers.

    For the algorithm to be able to manipulate the real world, it would need knowledge and resources. People manipulate the real world with their hands and through their voice; everything we do, every tool we use, is done through those means. We have robots today that have physical manipulators as good as hands, and speakers of course that can produce (or reproduce) a human voice; the only reason today a robot can’t do anything a human can is that we don’t know how to program it. An algorithm that can take over a few million desktops can steal a few credit cards and buy some robots.

    At this point what would it take for a hyper-intelligence to create some self-replicating machines in the real world? If someone (or something) wanted to build an atom bomb, for example, it would need both knowledge of how to build the bomb and raw materials such as enriched uranium. The knowledge of how to build an atom bomb a hyper-intelligence could determine from first principles, so the only limitation such an entity would have is the availability of the raw materials.

    As far as we know, the doomsday nanotech scenario of runaway “grey goo” that converts all available matter to more grey goo doesn’t depend on any special kind of raw materials, it can be done with carbon or silicon if we just knew what configuration to put the molecules together to build such a machine. The algorithm, continuing to grow as much as it can, to discover and use as much resources as it can, does that, and everyone dies.

    Naturally, as the game programmer who inadvertently created the doomsday game algorithm, I die along with everyone else, just as the biochemist did who inadvertently created the doomsday virus…

    There’s nothing in your analysis of whole brain emulation that raised a red flag for me (keeping in mind of course that I’m just a web application programmer and not an economist at all). However, whole brain emulation, pretty much by definition, doesn’t make any “bots” that are smarter than people, just faster and, eventually, cheaper and more numerous.

    [1] http://en.wikipedia.org/wiki/Wright_brothers

  • Manon de Gaillande

    If we do use whole brain emulations, I expect we’ll make a couple, and then look at them very closely and test them to understand how they work, then make AIs from the knowledge – not direct industrial use of ems, but that could happen. And I don’t expect we’d do it on human brains; dogs, birds and rodents have plenty of intelligence, but we should expect their brains to be much easier to understand. Maybe even start with earthworms and work our way up from here? I’d predict that kind of method is more likely, but Robin knows better than I do, so I’m only weakly confident.

  • http://hanson.gmu.edu Robin Hanson

    All, this post’s scenario assumes whole brain emulation without other forms of machine intelligence. We’ll need other posts to explore the chances of this vs. other scenarios, and the consequences of other scenarios. This post was to explore the need for friendliness in this scenario.

    Note that most objections here are to my social science, and to ethics some try to read into my wording (I wasn’t trying to make any ethical claims). No one has complained, for example, that I’ve misapplied or ignored optimization abstractions.

    I remain fascinated by the common phenomenon wherein intuitive social reasoning seems so compelling to most people that they feel very confident of their conclusions and feel little inclination to listen to or defer to professional social scientists. Carl Shulman, for example, finds it obvious it is in the self-interest of “a leading power with an edge in bot technology and some infrastructure … to kill everyone else and get sole control over our future light-cone’s natural resources.” Eliezer seems to say he agrees. I’m sorry Carl, but your comments on this post sound like crazy paranoid rants, as if you were Dr. Strangelove pushing the button to preserve our precious bodily fluids. Is there any social scientist out there who finds Carl’s claims remotely plausible?

    Eliezer, I don’t find it obviously unethical to experiment with implementation short cuts on a willing em volunteer (or on yourself). The several orders of magnitude of gains were relative to a likely-to-be excessively high fidelity initial emulation (the WBE roadmap agrees with me here I think). I did not assume the ems would be slaves, and I explicitly added to the post before your comment to make that clear. If it matters I prefer free ems who rent or borrow bodies. Finally, is your objection here really going to be that you can’t imagine a world with vast wealth inequality without the poor multitudes immediately exterminating the rich few? Or does this only happen when many poor think faster than many rich? What kind of social science analysis do you base this conclusion on?

    Savage, nanotech does not even remotely make economics irrelevant.

    Johnicholas, “person” is too general; we need words to distinguish subsets of that.

    Will, our experience with humans so far is that crude partial brains are of little use. It will take a lot of understanding to create useful partial brains.

    Phil, you seem to have completely misunderstood my post and WBE if you think I assume “bot intelligence will rise gradually to the human level.”

    Caledonian, how fast is “quickly”?

    Manon, just looking at an emulation closely is far from enough to tell you how it works.

  • http://profile.typekey.com/asalamon/ Anna Salamon

    Robin, is there anything in particular about your social science background that lets you know that Carl’s scenarios are implausible?

  • Carl Shulman

    “Carl Shulman, for example, finds it obvious it is in the self-interest of “a leading power with an edge in bot technology and some infrastructure … to kill everyone else and get sole control over our future light-cone’s natural resources.”

    You are misinterpreting that comment. I was directly responding to your claim that self-interest would restrain capricious abuses, as it seems to me that the ordinary self-interested reasons restraining abuse of outgroups, e.g. the opportunity to trade with them or tax them, no longer apply when their labor is worth less than a subsistence wage, and other uses of their constituent atoms would have greater value. There would be little *self-interested* reason for an otherwise abusive group to rein in such mistreatment, even though plenty of altruistic reasons would remain. For most such groups, I would expect the initial plan to be simply to disarm other humans and consolidate power, killing only as needed to pre-empt development of similar capabilities.

    “Finally, is your objection here really going to be that you can’t imagine a world with vast wealth inequality without the poor multitudes immediately exterminating the rich few? Or does this only happen when many poor think faster than many rich? What kind of social science analysis do you base this conclusion on?”

    Empirically, most genocides in the last hundred years have involved the expropriation and murder of a disproportionately prosperous minority group. This is actually a common pattern in situations with much less extreme wealth inequality and difference (than in an upload scenario) between ethnic groups in the modern world:

    http://www.amazon.com/World-Fire-Exporting-Democracy-Instability/dp/0385503024

    Also, Eliezer’s point does not require extermination (although a decision simply to engage in egalitarian redistribution, as is common in modern societies, would reduce humans below the subsistence level, and almost all humans would lack the skills to compete in emulation labor markets, even if free uploading was provided), just that if a CEO expects that releasing uploads into the world will shortly upset the economic system in which any monetary profits could be used, the profit motive for doing so will be weak.

  • http://jamesdmiller.blogspot.com/ James D. Miller

    “I remain fascinated by the common phenomenon wherein intuitive social reasoning seems so compelling to most people that they feel very confident of their conclusions and feel little inclination to listen to or defer to professional social scientists. Carl Shulman, for example, finds it obvious it is in the self-interest of “a leading power with an edge in bot technology and some infrastructure … to kill everyone else and get sole control over our future light-cone’s natural resources.” Eliezer seems to say he agrees. I’m sorry Carl, but your comments on this post sound like crazy paranoid rants, as if you were Dr. Strangelove pushing the button to preserve our precious bodily fluids. Is there any social scientist out there who finds Carl’s claims remotely plausible?”

    Yes.

    Ten people are on an island with a limited supply of food. You die when you run out of food. The longer you live the greater your utility. Any one individual might maximize his utility by killing everyone else.

    Ten billion people in a universe with a limited supply of usable energy. You die when you run out of usable energy…

    Or even worse, post-singularity offense turns out to be much, much easier than defence. You get to live forever so long as no one kills you. If you care only about yourself and don’t get a huge amount of utility from being in the company of others, then it would be in your interest to kill everyone else.

    Carl is only crazy if you assume that a self-interested person would necessarily get a huge amount of utility from living in the company of others. Post-singularity this assumption might not be true.

  • http://shagbark.livejournal.com Phil Goetz

    Phil, you seem to have completely misunderstood my post and WBE if you think I assume “bot intelligence will rise gradually to the human level.”

    I thought that was what you meant when you said,

    The first priority for a bot project would be to create as much emulation fidelity as affordable …
    at this point the main task for the lead project would be to make their bot cheaper. They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane.

    This seems to imply that someone creates ems of lower fidelity at first, gradually increasing fidelity as their tricks get better and better. But I guess you meant constant fidelity with decreasing cost. Seems unlikely to me, but possible.

    If you think that the very first bot will have 100% fidelity, then what I said is probably not relevant.

  • Carl Shulman

    James,

    “Ten people are on an island with a limited supply of food. You die when you run out of food. The longer you live the greater your utility. Any one individual might maximize his utility by killing everyone else.”

    Yes, if a secure governing elite, e.g. the top 10,000 Party Members in North Korea (who are willing to kill millions among the Korean population to better secure their safety and security), could decide between an even distribution of future resources among the existing human population vs only amongst themselves, I would not be surprised if they took a millionfold increase in expected future well-being. A group with initially noble intentions that consolidated global power could plausibly drift to this position with time, and there are many intermediate cases of ruling elites that are nasty but substantially less so than the DPRK’s.

    “Or even worse, post-singularity offense turns out to be much, much easier than defence.”

    No, this just leads to disarming others and preventing them from gaining comparable technological capabilities.

  • http://hanson.gmu.edu Robin Hanson

    Carl, consider this crazy paranoid rant:

    Don’t be fooled, everything we hold dear is at stake! They are completely and totally dedicated to their plan to rule everything, and will annihilate us as soon as they can. They only pretend to be peaceful now to gain temporary advantages. If we forget this and work with them, instead of dedicating ourselves to their annihilation, they will gain the upper hand and all will be lost. Any little advantage we let them have will be used to build even more advantages, so we must never give an inch. Any slight internal conflict on our side will also give them an edge. We must tolerate no internal conflict and must be willing to sacrifice absolutely everything because they are completely unified and dedicated, and if we falter all is lost.

    You are essentially proposing that peace is not possible because everyone will assume that others see this as total war, and so fight a total war themselves. Yes sometimes there are wars, and sometimes very severe wars, but war is rare and increasingly so. Try instead to imagine choices made by folks who think the chance of war was low.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Robin, are you seriously dismissing the possibility of conflict between bios and ems?

  • Ian C.

    It’s not intelligence that leads to success in reality but rationality. An extremely intelligent entity, many orders of magnitude higher than us, who spends all day making up fantasies and never looks at the facts will get nowhere.

    So any super-powerful being is likely to be super-rational. And super-rational entities do not go around killing, because what can they gain from a dead person? Nothing. But from someone living, potentially something.

  • http://jamesdmiller.blogspot.com/ James D. Miller

    Robin,

    War is rare today mostly because it’s not beneficial. But under different incentive structures humans are very willing to kill to benefit themselves. For example among the Yanomamö (a primitive tribe in Brazil) more than 1/3 of the men die from warfare.

    http://en.wikipedia.org/wiki/Yanomami

    If the benefits of engaging in warfare significantly increase, your “crazy paranoid rant” becomes rather sound advice.

    You wrote “Try instead to imagine choices made by folks who think the chance of war was low.” When I imagine this I think of Neville Chamberlain.

  • Carl Shulman

    “You are essentially proposing that peace is not possible because everyone will assume that others see this as total war, and so fight a total war themselves. Yes sometimes there are wars, and sometimes very severe wars, but war is rare and increasingly so”

    I am not proposing that peace is impossible, but that resolving an unstable arms race, with a winner-take-all technology in sight, requires either coordinating measures such as treaties backed by inspection, or trusting in the motives of the leading developer. I would prefer the former. I do not endorse the ludicrous caricature of ingroup bias you present and do not think of biological humans as my morally supreme in-group (or any particular tribe of biological humans, for that matter). If the parable is supposed to indicate that I am agitating for the unity of an ingroup against an outgroup, please make clear which is supposed to be which.

    I am proposing that states with no material interests in peace will tend to be less peaceful, that states with the ability to safely disarm all other states will tend to do so, and that states (which devote minimal resources to assisting foreigners and future generations) will tend to allocate unclaimed resources to their citizens or leadership, particularly when those resources can be used to extend life. It is precisely these tendencies that make it worthwhile to make efforts to ensure that the development and application of these technologies is conducted in a transparent and coordinated way, so that arms races and deadly mistakes can be avoided.

    Are you essentially proposing that the governments of the world would *knowingly* permit private and uncontrolled development of a technology that will result in permanent global unemployment (at more than a subsistence wage, without subsidy) for biological humans, render biological humans a weak and tiny minority on this planet, and completely disrupt the current geopolitical order, as well as possibly burning the cosmic commons and/or causing the extinction of biological humans, when it is possible to exert more control over developments? That seems less likely than governments knowingly permitting the construction and possession of nuclear ICBMs by private citizens.

  • Ian C.

    On the surface, it seems obvious that Ems will beat hand-coded AI. If you think of the brain as implemented in layers: quarks -> atoms -> molecules -> cells etc. – then all you have to do is find one layer that you can understand completely, and then emulate it. This seems easier than fully understanding intelligence per se, right?

    But there are some problems. Philosophically, is it possible to understand anything fully? And practically, it has always been hard to observe tiny things without interfering with them. It may require *just as much* creativity in developing the necessary observational techniques as in understanding intelligence per se. So it’s not the case that hand-coded AI is laced with hard-to-estimate creativity and Ems with more mundane/predictable tasks.

    Also, with Ems, the thing we are trying to understand is evolved, meaning it is likely messy, redundant etc. So it may take longer to understand than to just code from scratch. Anyone who has tried to reverse-engineer an old software system will attest to this.

  • http://hanson.gmu.edu Robin Hanson

    Carl, my point is that this tech is not of a type intrinsically more winner-take-all, unstable-arms-like, or geopolitical-order-disrupting than most any tech that displaces competitors via lower costs. This is nothing like nukes, which are only good for war. Yes, the cumulative effects of more new tech can be large, but this is true for most any new tech. Individual firms and nations would adopt this tech for the same reason they adopt other lower-cost tech: because they profit by doing so. Your talk of extinction and “a weak and tiny minority” is only relevant when you imagine wars.

  • http://hanson.gmu.edu Robin Hanson

    James, I agree that it is possible for war to be beneficial. The question is whether in the specific scenario described in this post we have good reasons to think it would be.

    James, I’m skeptical of the theory that Doug couldn’t get an UberTool foom because he couldn’t change a large enough fraction of his total tool set. That might affect the speed and size of the foom, but not the existence of a foom.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Any sufficiently slow FOOM is indistinguishable from an investment opportunity.

  • Menno van Lavieren

    Have we already reasoned about who we will be dealing with and their perspective? If not I would like to give it a shot:

    Say you work with the leading team on brain scanning. No matter who’s funding it, your team works in total secrecy. The time is ripe: animal brains have been scanned successfully, parts of the medical procedures have been tested on humans, and a supercomputer running a Second Life-like environment is waiting for its first bot. As a team you decide: it is you. Your brain is going to be copied. You are put into a coma, knowing the risks but expecting to survive. It will be worth it.
    You wake up in a flash of white light. For an instant it hurts your eyes, but now you open them and see the vanilla sky. You feel sick; you try to roll over in the grass, but your legs feel funny. It hurts when you swallow. Everything turns black for a moment. You are lying on your back again. The sickness is at a minimum. You can move more easily. You try to get up, stumbling a bit. You stand; you can walk, though it feels a little like floating.
    This is it. You live in the computer. You can never escape. You’ll live forever. How is the real me doing? Did I survive the coma? Will I see my family again? Did I really want this with my life? You walk to the video console near the center of the field. On it you see the exhausted but focused faces of your teammates.

    - What happened?
    * Are you O.K.?
    – I’m fine so far. How am… how is my other self?
    * He is fine; he had a little headache, but that disappeared quickly afterwards. Everything is O.K.
    (you look surprised)
    * We had to stop the simulation for a while, nothing to be alarmed about. Your brain didn’t accept your body well. We worked a long time to fix it. How are you feeling now?
    – Then what time is it?
    * 11:15 AM Wednesday 14 January 2009. Why don’t you go to your new home and take some rest.

    Etc.
    Science fiction, of course, but we have to draw hypotheses from somewhere.

  • frelkins

    I read this and wondered if Sandberg might want to footnote the WBE paper:

    “In supercomputing terms [the Yoyotech Fi7epower] had run at 80GFLOPS, or 80 billion floating-point operations per second. That’s 320 times the speed of the world’s first supercomputer, the Cray-1 of 1976. It’s proof of Moore’s law, coined by the Intel co-founder Gordon Moore: that computing power doubles every two years. If the law continues for five more years we’ll have computers capable of running simulations of the human brain.”

    Emphasis added.
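    A quick back-of-the-envelope check (a minimal sketch in Python; the brain-simulation figure below is an assumption, just a commonly cited order of magnitude, not something taken from the quoted article or the roadmap) shows how far that last claim has to stretch:

        # Moore's-law extrapolation of the quoted desktop figure.
        start_flops = 80e9                 # the quoted 80 GFLOPS machine
        years = 5
        projected = start_flops * 2 ** (years / 2)    # doubling every two years
        print(f"after {years} years: {projected:.2e} FLOPS")    # ~4.5e11

        # Assumption: whole-brain-emulation estimates are often quoted at
        # roughly 1e18 FLOPS or more, depending on the emulation level.
        wbe_estimate = 1e18
        print(f"shortfall: {wbe_estimate / projected:.1e}x")    # ~2e6

    On those assumptions, five more years of doubling still leaves a desktop machine about six orders of magnitude short, which is the sort of gap a footnote would need to address.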

  • Tim Tyler

    So any super-powerful being is likely to be super-rational. And super-rational entities do not go around killing, because what can they gain from a dead person? Nothing. But from someone living, potentially something.

    That’s a totally ridiculous argument. You can gain a lot from a dead person, for example if they are a competitor.

    my point is that this tech is not of a type intrinsically more winner-take-all, unstable-arms-like, or geopolitical-order-disrupting than most any tech that displaces competitors via lower costs.

    IT is intrinsically more winner-take-all than most tech. You get bigger first mover advantages and greater possibilities for creating lock-ins.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, yes, and so the vast majority of fooms may be slow and not require friendliness. So we need positive arguments why any one foom is an exception to this.

    Tim, we need to hear arguments why this has especially large first-mover advantage and lockin effects.

  • Tim Tyler

    You do? I figured this was well-known:

    “Vendor lock-in is rampant in the computer and electronics industries.”

    http://en.wikipedia.org/wiki/Vendor_lock-in#Lock-in_in_electronics_and_computers

    “pioneers can gain advantage if technology can be patented or maintained as trade secrets”

    http://en.wikipedia.org/wiki/First-mover_advantage

    IT is patentable and copyrightable, and unlike most other tech it can often be kept a trade secret, simply by keeping the program on your server.

  • http://hanson.gmu.edu Robin Hanson

    Tim, the context here was Carl saying

    Are you essentially proposing that the governments of the world would *knowingly* permit private and uncontrolled development of a technology that will result in …

    Governments of the world do allow electronic tech development to go on largely unregulated. You need to argue that these effects are much stronger in the em case before you can conclude they would be much less tolerable there.

  • Tim Tyler

    Technology is used by its proponents to create inequalities that benefit them. As progress continues, the opportunities to do this expand – and so do the resulting inequalities. At least that seems to have been the story so far.

    Unfortunately, I’m not inclined to spend time exploring the “what if emulated brains came first” scenario.

    I am more inclined to put together a presentation explaining why I think such a scenario is not worth bothering with. That doesn’t seem terribly important either – I mean: surely there can’t be that many people who take such material seriously. Emulated brains haven’t exactly rocked the IT world so far. The whole idea seems like a joke to me – and, as such, it hardly seems worth bothering to criticise it :-|

  • Carl Shulman

    Tim,

    A few reasons:

    1. Insofar as brain emulation can be shown to have a substantial probability over a time scale, that puts an upper bound on ‘business-as-usual’ scenarios.

    2. Showing a path to emulation also shows a path to potential neuromorphic AI and AI informed by neuroscientific discoveries.

    3. Anders, Robin, and FHI have put a substantial amount of effort into analyzing and synthesizing information related to these scenarios, and think that they are to be taken seriously. Given their level of knowledge, intelligence, and care in thought, it’s important to see how they could disagree with you. Do they have additional information about the subject? Do you not believe each other to be Bayesian wannabes, accurately or inaccurately?

    4. The prior work in modeling emulation scenarios reduces the marginal effort of further contributions (especially posting analyses that have already been worked out).

    5. Results indicating hard or soft takeoff in emulations suggest dynamics to think about when considering initially hand-coded AI.

    6. You may persuade smart careful thinkers of your point of view, if you are indeed correct.

    7. Academics’ motivation to investigate a topic is often increased when they get careful, informative, and interesting engagement. Encouraging people like the FHI thinkers with interesting feedback, and efforts of critique that are commensurate with their own research efforts, is a good thing for the future of humanity and humanity-derived life.

  • http://profile.typepad.com/halfinney Hal Finney

    I suspect that Tim is right that his skepticism is widely shared. It does seem to me that Robin’s view of this scenario is an outlier among futurists and computer scientists, just as Eliezer’s anticipation of fast-takeoff AI is arguably also an outlier among the AI community. Eliezer expressed doubts about whether his position was an outlier; I wonder if Robin feels the same way?

  • http://hanson.gmu.edu Robin Hanson

    Hal, I see two kinds of opinions, on the likelihood of WBE and on the social consequences if it happens. Most random brain researchers I’ve met seem to think WBE feasible within a century. Almost no one besides me has detailed opinions on their social implications.

  • Tim Tyler

    FWIW, my opinion on the social implications of emulated brains is that they will be low. Such brains will probably have negligible economic value – since jobs will go to engineered synthetic minds instead.

    Indeed, because the whole project is both so difficult and so unrewarding, I sometimes wonder if there will be many humans around by the time it becomes technically possible.

  • Tim Tyler

    1… Yes, but the bound is too big – it doesn’t tell us much we didn’t already know;

    2… We already know there’s such a path – I’m not planning to criticise that;

    3… I take more seriously. Anders is a brain scientist who wants to get into the whole superintelligence deal – I can understand that. Robin is a bit of a mystery. Maybe he formed his views long ago, and they got stuck? I can only speculate. I don’t know much about Nick’s views here. Smart folk, sure – but there are smart folk on the other side of the argument too. I have previously voiced my suspicion that the scenario is a whole lot of wishful thinking. Everyone involved seems to want to save the human race! Whereas I can’t see much that will stand the test of time. Certainly our brains are one of the least likely things to persist – through being so obviously and clearly a load of obsolete junk.

    4… Leaves me cold – why throw good money after bad, just because not much extra cash is involved?

    5… Is an issue which we can think most clearly about without brain emulations, IMO;

    6… Fame, glory and converts? ;-) Hmm. Maybe;

    7… Right. I don’t think a critique would be worthless – just there are rather more important things than this particular issue.

  • http://hanson.gmu.edu Robin Hanson

    Tim, you’ve made seven comments on this post, half of them to explain why you don’t think it is worth talking about. You doth protest too much.

  • Pingback: Overcoming Bias : Pro “Slavery” OpEd

  • Pingback: Overcoming Bias : A Test of Moral Progress

  • Servant

    An alternative scenario would be the end of individuality as we know it.

    I think that human specialization is the direct result of our ability to communicate. As our communication bandwidth with other humans increases, we can specialize more, since information we don’t have is likely to be accessible from someone else.

    This is in its early stages – the idea of Wikipedia, or the Internet in general as an ‘outboard brain’ is well known.

    Now imagine the effects if we could plug our brains directly into the Internet: stream entire thought videos up and down. It would be as if we had evolved telepathy. Of course, since we haven’t evolved to use telepathy it would take a lot of getting used to, but the brain is plastic and I imagine it could only lead over time to even more specialization.

    In essence, humanity would become a distributed super brain.

    But perhaps such high bandwidth connections are impossible or impractical with biological brains. There’s no reason to suppose the same limitation would hold with uploaded brains or AIs. Again the end result is a distributed super brain.

    Freeman Dyson has argued that biotechnology will enable a horizontal transfer of genes that will mean the end of Darwinian evolution by natural selection. I think AI is another route to the same end: with a distributed super brain there would be no individuality as we know it. No competition. No natural selection by death of nonfit individuals: only nonfit thoughts.

    We have evolved as individuals, and it will be hard to let go of our egos. But on the plus side there will be no slavery. No Malthusian death of individuals because they can’t compete. Everything good about the individuals will be preserved, every idea that is more insightful than anyone else’s. Everything except … individuality.

    If this sounds bad to you, consider what it’s like for an individual with multiple personality disorder. Multiple egos in one brain? I believe it’s rather unpleasant. Then consider that sufficiently high bandwidth telepathy would effectively unite multiple brains into one. If each brain retains its ego, you have the equivalent of super brain multiple personality disorder. Once you have that level of connection – do all those egos still serve any real purpose?

    For those with a religious bent, absorption into the super mind would be the ultimate in enlightenment.

    To my mind, it certainly beats starving to death while a handful of corporation owners get rich off cheap robot mind labour.

    Even if this super brain scenario is avoidable – can anyone here think of a better alternative?

  • Pingback: Overcoming Bias : Billion Dollar Bots

  • Pingback: Overcoming Bias : Total Tech Wars

  • Pingback: Overcoming Bias : Wrapping Up

  • Pingback: Overcoming Bias : When Life Is Cheap, Death Is Cheap

  • Pingback: When robots are better at everything « azmytheconomics

  • xxd

    There is such a thing as comparative advantage. Under that principle, in a two-entity economy it is *still* more productive overall for entity A to specialize and trade with entity B, even when B produces far less per unit of effort than A does. This is simple economics.

    The only case where this fails is if entity A values the goods/services that entity B can produce at much less than the cost to feed/house etc. entity B. That’s the $64 trillion question.
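    A minimal numerical sketch of that point, with made-up productivities and hours (nothing below comes from the thread; it just illustrates the textbook logic):

        # Comparative advantage with purely hypothetical numbers.
        # Output per hour:        food  tools
        PROD = {"A": {"food": 10, "tools": 8},
                "B": {"food": 2,  "tools": 1}}
        HOURS = 10  # hours of effort available to each entity

        def output(tool_hours):
            """tool_hours maps entity -> hours on tools; the rest go to food."""
            food = sum(PROD[e]["food"] * (HOURS - h) for e, h in tool_hours.items())
            tools = sum(PROD[e]["tools"] * h for e, h in tool_hours.items())
            return food, tools

        # Baseline: each entity splits its time evenly between the two goods.
        print("baseline:   ", output({"A": 5, "B": 5}))    # (60, 45)

        # B specializes in food, where its relative disadvantage is smallest,
        # and A shifts toward tools; totals of BOTH goods rise even though A
        # is better at everything.
        print("specialized:", output({"A": 5.8, "B": 0}))  # ~(62, 46.4)

    The gain vanishes, as the comment says, once the value of B’s marginal output falls below the cost of keeping B fed and housed.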

    Personally I suspect any sufficiently rational AI will want to get the hell out of dodge and leave us to our own devices because we are clearly insane.

  • Pingback: Personal Identity | Trying to think coherently