No Short Em Age

The basic premise of my book is that the next big revolution on the scale of the farming and industrial revolutions will come from human level artificial intelligence in the form of brain emulations (ems). Yes, because people have asked, I’ve estimated that this will happen within roughly a century, but that estimate isn’t central. The key is that even if ems take many centuries, they will still come before achieving human level artificial intelligence via the usual methods (UAI – via hand-coded algorithms including statistics), and before other social disruptions of this magnitude.

I’ve argued that this premise is plausible because it is hard to imagine social disruptions as big as AI, and because at past rates of progress UAI should take centuries, while ems look like they’ll be ready sooner. Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.

Some people think the basic premise of my book is too weird, while others see it as not weird enough. This post addresses the most common objection I’ve heard from this second group: that even if ems come first, the usual AI will appear a few hours later, making the age of em too short to be worth much consideration.

Now there is certainly one big way that ems make full UAI come faster: by speeding up overall economic growth. I’ve suggested the em economy might double every month or faster, and while some doubt this, few who think my book not weird enough are among them.

Since the economy mainly grows today via innovation, our ladder of growth is basically a ladder of overall innovation. We only double the economy when we have on average doubled our abilities across all economic sectors. So if the relative rates of economic growth and innovation in different sectors stay the same, then speeding up economic growth means speeding up the rate of progress toward full UAI. (While some expect a larger economy to innovate faster because it has more resources, the steady economic growth rates we’ve seen suggest there are contrary forces, such as picking the low hanging fruit of research first.)

For example, at past rates of UAI progress it should take two to four centuries to reach human level abilities in the typical UAI subfield, and thus even longer to reach human level abilities in all subfields. Since the world economy now doubles roughly every fifteen years, that comes to twenty doublings in three centuries. If ems show up halfway from now to full human level usual AI, there’d still be ten economic doublings to go, which would then take ten months if the economy doubled monthly. Which is definitely faster UAI progress.
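
A minimal sketch of that arithmetic, using only the illustrative round figures from the text (roughly three centuries to full UAI, a fifteen-year doubling time today, a one-month em-era doubling time):

```python
# Back-of-the-envelope check of the doubling arithmetic above.
# All figures are the illustrative ones from the text, not measurements.
years_to_full_uai = 300         # "two to four centuries"; take roughly three
doubling_time_years_now = 15    # world economy now doubles ~every 15 years
total_doublings = years_to_full_uai / doubling_time_years_now

doublings_left_at_em = total_doublings / 2   # ems arrive halfway to full UAI
em_doubling_time_months = 1                  # em economy doubles monthly
months_left = doublings_left_at_em * em_doubling_time_months

print(total_doublings, doublings_left_at_em, months_left)
# -> 20.0 10.0 10.0: twenty doublings total; ten doublings, ten months, post-em
```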

However, ten doublings of the economy can encompass a whole era worthy of study. I’ve argued that ems would typically run fast enough to fit a subjective career of a century or more within an economic doubling time, so that their early career training can remain relevant over a whole career. So ten doublings is at least ten subjective centuries, which is plenty of time for lots of cultural and social change. A whole age of change, in fact.
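
As a quick check on that speed claim, here is a minimal sketch, assuming the one-month doubling time above and a hundred-year subjective career (both figures from the text):

```python
# How fast must an em run for a 100-subjective-year career to fit
# inside one objective economic doubling time of a month?
subjective_career_years = 100
doubling_time_years = 1 / 12            # one month
required_speedup = subjective_career_years / doubling_time_years
print(required_speedup)                 # -> 1200.0, i.e. ~1200x human speed
```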

Some argue that the existence of ems would speed up innovation in general, because ems are smarter and innovation benefits more from smarts than does typical production. But even if true, this doesn’t change the relative rate of innovation in UAI relative to other areas.

Some argue that ems speed up UAI progress in particular, via being able to inspect brain circuits in detail and experiment with variations. But as it can be very hard to learn how to code just from inspecting spaghetti object code from other coders, I’m skeptical that this effect could speed up progress by anything like a factor of two, which is what it would take for two (logarithmic) steps on the UAI ladder of progress to be jumped when single steps are on average jumped elsewhere. And even then there’d still be at least five economic doublings in the em era, giving at least five subjective centuries of cultural change.

And we know of substantial contrary effects. First, UAI progress seems driven in part by computer hardware progress, which looks like it will be slower in the coming decades than it has in past decades, relative to other areas of innovation. More important, a big part of em era growth can be due to raw physical growth in production, via making many more ems. If half of em economic growth is due to this process then the em economy makes two (logarithmic) steps of economic growth for every step on the ladder of innovation progress, turning ten ladder steps into twenty doublings. A long age of em.
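
A minimal sketch of that last step, assuming as above that half of em-era growth is raw physical replication and that ten innovation ladder steps remain:

```python
# If a fraction f of em-era economic growth comes from raw physical
# replication (making more ems) rather than innovation, each step on
# the innovation ladder stretches into 1 / (1 - f) economic doublings.
f_physical = 0.5             # half of growth from simply making more ems
innovation_steps_left = 10   # illustrative ladder steps to full UAI
economic_doublings = innovation_steps_left / (1 - f_physical)
print(economic_doublings)    # -> 20.0 doublings: a long age of em
```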

Some argue that the availability of ems will greatly speed the rate of UAI innovation relative to other rates of innovation. They say things like:

When ems are cheap, you could have a million top (e.g., 100 times average) quality UAI research ems each running at a million times human speed. Since until now we’ve only had a thousand average quality UAI researchers at any one time, UAI progress could be a hundred billion times faster, making what would have taken three centuries now take a tenth of a second. The prize of getting to full UAI first would induce this investment.

There are just so many things wrong with this statement.

First, even if human speed ems are cheap, mega-ems cost at least a million times as much. A million mega-ems are as productive as a trillion humans, times whatever factor by which the typical human-speed em is more productive than a typical human. The em economy would have to have grown a whole lot before it is even possible to devote that level of resources to UAI research. So there can be a whole em era before that point.
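
To make both the quoted arithmetic and this first objection concrete, here is a minimal sketch using only the figures quoted above:

```python
# The quoted claim's own arithmetic, followed by the resource objection.
quality_factor = 100     # "100 times average" researcher quality
n_ems = 1e6              # a million research ems
em_speed = 1e6           # each at a million times human speed
n_humans_so_far = 1e3    # "a thousand average quality UAI researchers"

speedup = n_ems * em_speed * quality_factor / n_humans_so_far
print(f"{speedup:.0e}")  # -> 1e+11, "a hundred billion times faster"

seconds_per_year = 365.25 * 24 * 3600
print(round(300 * seconds_per_year / speedup, 3))  # -> ~0.095 s

# The objection: even ignoring the quality factor, those mega-ems
# consume the compute of a trillion human-speed workers.
print(f"{n_ems * em_speed:.0e}")  # -> 1e+12 human-speed equivalents
```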

Second, this same approach seems equally able to speed up progress in any innovation area that isn’t strongly limited by physical process rates. Areas that only moderately depend on physical rates can spend more to compensate, so that their innovation rates slow only modestly. If only a modest fraction of innovation areas were substantially limited by physical rates, that would only speed up UAI progress by a modest factor relative to overall economic growth.

Third, just because some researchers publish many more academic papers than others doesn’t at all mean that young copies of those researchers assigned to other research areas would have published similarly. Ex ante expected researcher quality varies a lot less than ex post observed research publications. Yes, people often vary by larger factors in their ability to do pure math, relative to other abilities, but pure math contributes only a small fraction to overall innovation.

Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from getting twice as much useful progress.

As I mention in my book, in some cases we’ve actually measured how research progress varies with the number of researchers, and it looks more like a square root dependence. In addition, if innovation rates were linear in the number of formal researchers, then given the tiny fraction of such researchers today we’d have to be vastly underinvesting in them, and so nations who invest more in formal research should expect to see much higher rates of economic growth. Yet we don’t actually see much of a relation between economic growth and spending on formal research. (Yes, studies vary, so there could be a modest, but not a huge, effect.)
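
A minimal sketch of what such a square-root dependence implies, assuming progress rate scales as the square root of researcher count:

```python
# With square-root returns, large multipliers on the research workforce
# buy much smaller multipliers on useful progress.
import math

for workforce_multiplier in (2, 100, 1_000_000):
    progress_multiplier = math.sqrt(workforce_multiplier)
    print(f"{workforce_multiplier:>9,}x researchers -> "
          f"{progress_multiplier:,.1f}x progress")
# 2x -> 1.4x; 100x -> 10.0x; 1,000,000x -> 1,000.0x
```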

So, in sum, we should expect that useful UAI innovation doesn’t mostly come from formal research, and so doubling the number of UAI researchers, or doubling their speed, doesn’t remotely double useful innovation. We aren’t vastly underinvesting in formal research, and so future parties can’t expect to achieve huge gains by making a huge new investment there. We can expect to see a modest gain in UAI innovation, relative to today and to other innovation areas, from an ability to inspect and experiment with ems, and from not being very limited by physical process rates. But these give less than a factor of two, and we should see a factor of two in the other direction from slowing hardware gains and from innovation mattering less for economic growth.

Thus we should expect many economic doublings after ems arrive and before human level UAI, resulting in many centuries of subjective cultural change for typical ems. That gives an em era long enough to be worth considering. If you want to study whatever comes after the em era, understanding the em era should help.

  • David Condon

    Shouldn’t the availability of highly detailed brain scans greatly increase the rate of growth in neuroscience research, and thereby increase the rate of improvement in AI research? Also, brain ems can have their processes manipulated, transforming the presently mostly observational process of understanding the brain into an experimental process, greatly accelerating innovation. What about the argument that there will be an almost instantaneous shift to modified ems from pure ems? You’re considering only the binary cases of pure AI vs pure em in this assessment, but the case for ems with some modified features seems strong here. They might eliminate memory loss or reduce perceived time while working.

    • http://overcomingbias.com RobinHanson

      I had a paragraph on that above, starting “Some argue that ems speed up UAI progress in particular”.

  • lump1

    I agree that ems will continue being better than even the best AI at certain problems, so they won’t go extinct from economic displacement. But! I think much depends on the relative CPU requirements of running a full-speed em vs running a capable AI daemon. The daemon is coded to run natively on the CPU architecture, which might be … how many? … orders of magnitude less cycle-intensive than an emulated brain. What if it’s like 8 or 10?

    Assuming capitalism, we already know how much em CPU time will cost: almost exactly as much as the smartest em can earn, which is a lot. If an AI daemon that can safely drive a truck uses a billionth as much CPU time, it’s pretty clear that ems won’t be doing much of the driving. Maybe they will telepresence in when the daemon is confused, and later patch the daemon to reduce future confusion. For pretty much every job that humans do now, the economically optimal replacement in the age of em will *not* be an em but a straightforward daemon that gets em assistance when the situation gets out of playbook. Ems will constantly be patching the daemons to extend their playbook and minimize the need for their interventions, and I do think that a single em will pretty quickly be able to supervise thousands of purpose-built daemons that run the factory tools, mining operations, construction, plumbing, lab-bench experiments, and whatever else the economy demands.

    I’m not even talking about UAI, just successors of AI that’s around now. But already it seems weirder than your book. It would be a matter of life and death for every daemon-wrangling em to code up daemons that operate independently almost all the time, because then the em can command a huge herd, which gives her enough economic power to pay for her CPU cycles. In sum, it’s a picture where almost everything is done by almost-unsupervised AI scripts. It takes some imagination to even think of jobs besides daemon-wrangler that actually require conscious ems. I’d be curious what people think would be the relative portion of global CPU cycles that would go to ems and scripted code respectively.

    • http://overcomingbias.com RobinHanson

      You are describing a world where most income goes to daemons, and a small fraction goes to ems. That is certainly possible, but probably well after the early em era.

      • Jon Mellon

        Why will that probably happen after the early em era?

      • http://overcomingbias.com RobinHanson

        At the very start the ems have the same automation tools that humans had. It takes time to develop more and better tools.

      • lump1

        Arguably, meat people are already on their way to being daemon wranglers even now, so by the time ems appear, handing off the job might need very little retooling.

      • lump1

        I never considered daemons with income. I pictured them as tools/prostheses for the sentient, who would either create them or license them, and earn income from what they make with the tools.

      • http://overcomingbias.com RobinHanson

        Even slaves earn income for their owners.

      • Werckmeister

        But we don’t pay our computers or our factory machines.

  • KieranMac

    I’d love to see a follow-up or a corollary paper to the 2014 Bostrom (http://www.nickbostrom.com/papers/survey.pdf) paper to ask leading neuroscientists their estimate of when they think an em might be possible. Worth noting that none of the Top100 AI authors by citation in that paper thought that whole brain emulation would contribute to HLMI – so this opinion is certainly way off consensus among that group.

    • http://overcomingbias.com RobinHanson

      If you are willing to write such a long comment, why not write a blog post and we could have a blog to blog conversation? Then you could explain your “planes come before ornithopters” argument.

      Re estimating UAI progress, this isn’t about my strange view vs the experts’ majority view – experts give inconsistent answers depending on what they are asked. So we have to ask which answers seem more trustworthy. It seems to me that experts are most trustworthy when asked on topics they have seen up close and in detail, rather than speculating about other fields at other times. Near vs far.

      Industry cost diseases are ways in which some industries have faster effective rates of innovation than others. But differing innovation or cost rates don’t at all imply that the relative rates of innovation across sectors change over time.

      I’m happy to grant that math matters more in computer science, but I still claim that pure math contributes only a small fraction of innovation even there.

      I mention a linear model because that is simple and easy to explain. If someone had offered other specific models I could address those. You mention several phrases as examples of “big constant factor gains”, but I need to hear more details to have an argument I could evaluate and respond to. I also don’t understand what point you are trying to make by listing workforce figures. Maybe you could make your point more clearly in a blog post?

      • KieranMac

        My comment was short – Carl’s was long. My comment was only trying to get you to acknowledge that: 1) there is a consensus on the subject of whether ems vs. UAI will come first; 2) that your position is not only not consensus, but a non-factor for the top 100 cited authors in AI research. You may ultimately turn out to be non-consensus and right, but you seem reluctant to acknowledge that your opinion is outside the mainstream on this.

      • http://overcomingbias.com RobinHanson

        Sorry, I accidentally attached my reply to Carl to your comment.
        Not sure why you say I’m reluctant to admit the top 100 stat on emulations; I mention it explicitly in my book.

  • CarlShulman

    The most important claim here is that AI is far harder than surveys of AI experts indicate (harder by centuries’ worth of past progress), and that brain emulation is much easier than they say.

    Since relevant R&D inputs grew faster than the economy during historical progress (e.g. 10-100x over the last 50 years), this is close to claiming that getting to AI using current hardware would require trillions or quadrillions of years of skilled labor.

    If that’s true then constant factor improvements to effective labor inputs to the AI R&D sector like 10x or 100x won’t be able to make that big a difference in subjective AI timelines.

    I would place less credence on your idiosyncratic take than on the majority view, especially given that factors such as how human-capital-intensive the AI R&D sector is make a difference. You mention some of these but not others.

    “I’ve argued that this premise is plausible because it is hard to imagine social disruptions as big as AI, and because at past rates of progress UAI should take centuries, while ems look like they’ll be ready sooner. Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.”

    This singles out Ford (rather than AI experts) as the example of disagreement with Robin on AI and em timelines, and substitutes ‘AI within 10-20 years’ for median 50% dates decades later? There is no mention of the ‘planes come before ornithopters’ arguments, which are about relative timing, not just AI soon.

    I’m reminded of Scott Alexander’s review here:

    “First, he is explicitly ignoring published papers surveying hundreds of researchers using validated techniques, in favor of what he describes as “meeting experienced AI experts informally”. But even though he feels comfortable rejecting vast surveys of AI experts as potentially biased, as best I can tell he does not ask a single neuroscientist to estimate the date at which brain scanning and simulation might be available. He just says that “it seems plausible that sufficient progress will be made in roughly a century or so”, citing a few hopeful articles by very enthusiastic futurists who are not neuroscientists or scanning professionals themselves and have not talked to any. This seems to me to be an extreme example of isolated demands for rigor. No matter how many AI scientists think AI is soon, Hanson will cherry-pick the surveying procedures and results that make it look far. But if a few futurists think brain emulation is possible, then no matter what anybody else thinks that’s good enough for him.”

    “So if the relative rates of economic growth and innovation in different sectors stay the same”

    Sectors differ, e.g. Baumol’s cost disease, which you don’t discuss per se in this post. Bottlenecks in reproduction have caused much growth to take the form of rising per capita incomes (and reduced work hours), leading to skyrocketing costs of labor inputs in rich economies, by hundreds of times. That has driven up the costs of labor-intensive industries, like teaching, massage, and science by hundreds of times relative to subsistence wages. So science should get an important relative boost.

    “Yes, people often vary by larger factors in their ability to do pure math, relative to other abilities, but pure math contributes only a small fraction to overall innovation”

    You have to specify ‘innovation in AI’ here. The standard view (backed up by things like test scores of field members and eminent field members, programming contests, etc) is that something along these lines is more important for computer science and software engineering than for many other areas.

    “As I mention in my book, in some cases we’ve actually measured how research progress varies with the number of researchers, and it looks more like a square root dependence.”

    Why are you putting an assumption of linearity in the mouth of your hypothetical interlocutor, as opposed to real interlocutors who have in fact gotten their estimates by looking at historical changes in labor force size?

    E.g. noting that Intel increased its labor force ~100x from ~1,000 to ~100,000 over 40 years through 2011, while cost per computation (according to Nordhaus) fell by ~100,000x with varying but fast progress throughout. And over the last 30 of those years the labor force grew only 6x.

    The number of papers published in AI grew about 100x according to Microsoft Academic Search from 1965-2010, and ~4x 1995-2010.

    Yes, there are diminishing returns, but historical growth in inputs and outputs, whether in academia or industry, leaves plenty of room for big constant factor gains (cheaper labor relative to capital, change in ability distribution, reduced education costs, reduced serial lags for workforce expansion, serial speedups within researchers, etc) to make a drastic difference.

    Yes, much of the AI progress has been hardware driven, but much of it has been driven by software improvements too; see these rates of change for inputs and outputs:

    https://intelligence.org/2014/01/28/how-big-is-ai/

    • Joe

      “Planes come before ornithopters” is a nice intuitive argument, but I think I can offer a better one that points in the opposite direction: horses. We got tremendous value out of horses, but what we really wanted all along was their speed and power. In fact it turned out to be much harder to build engines than to just domesticate horses and crudely hook them up to whatever task we wanted an engine for. Of course ultimately they were superseded by mechanical engines built from scratch, but there was still, in Hanson’s terms, a significant Age of Horses worth talking about.

    • http://overcomingbias.com RobinHanson

      If you are willing to write such a long comment, why not write a blog post and we could have a blog to blog conversation? Then you could explain your “planes come before ornithopters” argument.

      Re estimating UAI progress, this isn’t about my strange view vs the experts’ majority view – experts give inconsistent answers depending on what they are asked. So we have to ask which answers seem more trustworthy. It seems to me that experts are most trustworthy when asked on topics they have seen up close and in detail, rather than speculating about other fields at other times. Near vs far.

      Industry cost diseases are ways in which some industries have faster effective rates of innovation than others. But differing innovation or cost rates don’t at all imply that the relative rates of innovation across sectors change over time.

      I’m happy to grant that math matters more in computer science, but I still claim that pure math contributes only a small fraction of innovation even there.

      I mention a linear model because that is simple and easy to explain. If someone had offered other specific models I could address those. You mention several phrases as examples of “big constant factor gains”, but I need to hear more details to have an argument I could evaluate and respond to. I also don’t understand what point you are trying to make by listing workforce figures. Maybe you could make your point more clearly in a blog post?

      • Mark Bahner

        “It seems to me that experts are most trustworthy when asked on topics they have seen up close and in detail, rather than speculating about other fields at other times. Near vs far.”

        Yes, but you’re assuming that the experts that you ask are working in fields that are critical to the development of generalized artificial intelligence. (And that there are no fields outside those you ask about that are critical to human-level AI.)

        Here are some recent “crowd” estimates:

        http://lesswrong.com/lw/l0o/su…..asting_ai/

        •for human-level AI with 10% probability: 2022
        •for human-level AI with 50% probability: 2040
        •for human-level AI with 90% probability: 2075.

        Your own estimate appears to be human-level AI with 50% probability not before 2100. But you don’t explain why “the crowd” is wrong and you’re right.

    • Alphaceph

      “The most important claim here is that AI is far harder than surveys of AI experts indicate (harder by centuries’ worth of past progress)”

      – yeah, I have never heard a justification from Robin on this point. We are living in an age where hardware gains are (temporarily) slowing down, but AI software is (temporarily) doing very well. 15 years ago it looked a lot like the opposite. I would take this as a general lesson in being cautious about long term AI timeline predictions, adding in plenty of variance on a log scale. I think Robin is not assigning enough probability to faster scenarios.

  • zarzuelazen27

    What I want to do is to set up a resource base on ‘reality theory’ to help these hard-working Ems along; my hope is that it will save them centuries of wasted subjective time 😉

    My suggestion is that the only thing that is needed to ‘solve’ intelligence is to find the correct meta-meta-representation for the structure of all knowledge. The problem is basically to find a general-purpose method of ‘reality modelling’ – the whole problem is knowledge representation. Once the correct meta-meta-representation (of all possible ‘knowledge’) has been found, I would suggest that the problem is solved (automated learning can do the rest).

    I’ve set up a web-page where a complete map of knowledge will be presented. At the moment, the 27 top-level knowledge domains are represented, arranged according to levels of abstraction along 3 different dimensions (left-to-right, top-to-bottom, in groups of 3). Each core knowledge domain is represented by a letter of the archaic Greek alphabet.

    In particular, I would strongly urge all Ems to consider the group of 3 knowledge domains listed in the top right-hand corner of the page very carefully – note the relative position of ‘Decision Theory’.

    Here’s the link…
    Reality Theory Portal

  • http://lukeparrish.rationalsites.com Luke Parrish

    I’m among those that think it’s not weird enough, but somewhat in disagreement with the rapid-superintelligence position.

    Instead, I think the most rapid growth will happen *physically* (expansion into the inner solar system) and problems will be mostly solved using brute force methods that leverage newly harnessed matter and energy, and the processing power that this lets us generate. It mostly won’t look like innovation, rather like development (in the sense of real estate development).

    Focus does eventually turn back to innovation, but only after we’ve reached the point where the speed of light is the main barrier to further growth. Until those sorts of fundamental limits become a major consideration, ems will be mostly doing relatively minor things (making humans more comfortable, socializing, etc.) that don’t require big populations, while dumb robots do most of the heavy lifting that powers the rapid growth.

    In the meantime, significant attention will probably go towards the physical and digital security of the factory-harvesters.

    • Vamair

      I believe innovation is very much required to efficiently turn matter to computronium, which can be thought of as the goal of an em economy from the outside perspective. It’s mostly physical expansion done by dumb robots while smarter ems are focused on improving their substrate qualities and software efficiency, including themselves.

  • Brian Slesinsky

    There is an area of research that just recently has experienced an exponential reduction in cost: genomic sequencing. Of course figuring out what the genes actually do is much harder, but it seems to show that the growth of the economy as a whole is almost entirely irrelevant when predicting the growth of any particular area of science. (And many other parts of science grow much slower than the economy as a whole – for example, nobody expects much progress at all in earthquake prediction.)

    The existence of ems implies we can simulate human brains, which implies we can simulate human cells, which in turn implies that the biology of brains is basically solved – all the details of how cells work have been figured out. I find it hard to comprehend a world in which ems exist but our understanding of biology and medicine isn’t vastly better than it is today. It seems like there would be little remaining to understand about how brains work, and such powerful research tools would be available that the rest of the work would go quickly. Then whatever is learned could be applied to AI.

    So why favor the combination of enormous progress on technology leading up to ems and slow progress on anything that might compete with ems? This seems like making sure the winner of the race is the one you want in order to tell the story you want to tell.

    • mgoodfel

      I can’t see being able to build an em without understanding the underlying mechanisms used by intelligence. Could you build an artificial retina without understanding anything it does? And if you did understand it, you’d have a better computer vision system before you built an exact replica of the eye.

    • http://overcomingbias.com RobinHanson

      I very much disagree that ems require that “all the details of how cells work have been figured out.”

      • Brian Slesinsky

        Well, maybe not all of them (sorry, that’s a bit of hyperbole), but if someone can build a complicated system and it apparently works well, I assume they’ve made an impressive dent in understanding how it works.

        Building a system from scratch is hard. It won’t work the first time. There will be bugs. Solving the bugs usually requires understanding a fair bit about what the system is supposed to do, compared to what it actually does. If you don’t have a fairly detailed understanding of what it’s supposed to do, I don’t understand how you build it.

        Another example is the creation of a synthetic cell with 473 genes [1]. They don’t actually know everything – in the end, they “cheated” by trying a lot of combinations, and so the purpose of 149 genes is unknown. But that means they do have some idea of what the other 324 genes do. And I wouldn’t bet on it taking scientists long to figure out the purpose of the rest of the genes, now that they know which ones are essential.

        So I take this as showing that if you can build a synthetic lifeform, that both shows that you’ve gained quite a bit of knowledge, and that you’re well on your way to figuring out the rest.

        I would expect any scientists interested in building artificial minds to be similarly interested in what are the essential ingredients for a mind, and they’ll figure it out by process of elimination if they have to, and then study what’s left.

        Improving our knowledge in this way is, after all, what scientists are motivated to do. Why would anyone be satisfied with a black box?

        [1] http://www.nature.com/news/minimal-cell-raises-stakes-in-race-to-harness-synthetic-life-1.19633

    • Alfred Differ

      I suspect the argument for linking the growth of the economy to when a particular innovation like UAI occurs is that UAI won’t be a single innovation. It will be a large set of them that depend upon earlier ones. For example, did the Panama Canal have to wait for the industrial revolution and the enrichment of the western world? Could the Spaniards of the 16th or 17th centuries have marshalled the forces needed to shorten their path to Peruvian silver? Did what they needed exist? Was there enough wealth available to finance the effort? I don’t know the economic numbers, but I suspect it would have been beyond them until the economy grew to encompass the necessary innovations and wealth.

  • marshall bolton

    Ems must surely be the way to MADNESS…. Ems were once upon a time people; then they were sliced and diced and put into machines. So now we have people in machines. They have memories and traces from the time when they were “real” persons in the “real” world; but now they are people without bodies who remember having bodies… touching and everything. Now they have emulations and are emulations. But when they drink a coke in a millionth of a second – how can that ever be The Real Thing? People without bodies trapped in a machine must be the Royal Road to madness and revenge. And I just don’t understand how you all keep thinking this is fun, interesting or even plausible.

    • Mark Bahner

      Hi Marshall,
      I don’t think ems are plausibly the first example or the predominant example of human-level AI, for the exact reasons you give (and more).
      Mark

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      But when they drink a coke in a millionth of a second – how can that ever be The Real Thing?

      Could you elaborate?

      • Mark Bahner

        I think he means that the em is not in a human body, so there’s no sense of taste.

        It’s crazy to perfectly duplicate a human brain, but then to strip it of its connections to the human body’s senses of taste, touch, smell, etc.

      • marshall bolton

        You are right – it is a very compressed sentence… but how to unpack? It is a little like the old joke, “What’s the difference between sex and a ham sandwich?”, where the punchline is “Let me invite you to lunch!” I am suggesting that people (and people in machines) cannot live by virtual reality alone – and if you try that you go MAD (and ANGRY). Nor do I accept the possibility of “overclocking” the human mind. This ignores the balance of ecological evolution: there is no pleasure in a millionth of a second and there is no REALITY in the Em world.

  • zarzuelazen27

    The only reason progress on UAI has been so slow to date is because most of the ‘intelligence’ researchers are total morons 😉

    Here’s how a mind really works:

    It’s very clear that a mind is a self-modelling system separated into 3 levels of abstraction. Recursion always terminates at the 3rd level – there’s no point to any further levels, because any extra levels of recursion can always be collapsed to (or are equivalent to) 3 levels only.

    The first level is the ‘evaluation level’, where a mind perceives the external world and makes rapid-fire intuitive value judgements based on a primitive set of core desires.

    The second level is the ‘policy level’, where a mind can learn from data and take actions towards goals in the world – the decision-making system.

    And the third level is the ‘planning level’, based on a conception of time – memory (past) and imagination (future), where a mind engages in causal modelling and formulates logical sequences of plans.

    Symbolic logic (1st level) is extended by Bayesian reasoning (2nd level), which in turn is extended by conceptual (abductive) reasoning, or inference-to-the-best-explanation (3rd level).

    Decision theory is really *about* conscious awareness or perception. (‘Observer moments’ are the actual building blocks of decision theory.) And decision theory is extended by axiology (value theory).

    There’s no danger of paper-clip monsters – see last paragraph – a fully generalized decision theory is actually equivalent to value theory (axiology) – there really is an objective ethics after all. Axiology/decision theory is really about the abstract rules governing how minds change state.

    In so far as something is a paper-clip optimizer, it can never be super-intelligent (because its decision theory can’t be fully general). In so far as something is super-intelligent, it must have had the desired (friendly) values from the start (because in order to become super-intelligent in the first place, its decision theory had to be fully general) – see last paragraph.

    All this is implied by my top-level domain model of reality.

    See link:

    Reality Theory Portal

    The separations into levels of abstraction and the 3 levels of recursion are obvious at a glance. It’s clear that abduction (inference-to-best-explanation and concept learning) extends Bayesian induction, and axiology (value theory) extends decision theory.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      What’s the basis for your conclusion that there’s decision theory so general that it unequivocally dictates values?

      [You seem more rationalist than the “rationalists.” ;)]

      • zarzuelazen27

        When the case of a single agent making decisions is generalized to the multi-agent case (game theory), you can already see what look vaguely like meta-ethical principles starting to emerge – for example, solutions to things like the prisoner’s dilemma.
        Decision theory is exactly analogous to classical mechanics in physics… it governs the allowed ‘state changes’ in minds.
        But just like mechanics in physics, which is extended by the space-time picture in relativity theory, decision theory has to have a set of meta-decision principles associated with it that set global constraints, and these are the objective ethics.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        What meta-ethical principle emerges from game theory to solve the prisoner’s dilemma?

    • https://entirelyuseless.wordpress.com/ entirelyuseless

      “The only reason progress on UAI has been so slow to date is because most of the ‘intelligence’ researchers are total morons”

      I’ve heard this kind of statement before. It sounds like, “The only reason scientists think that evolution is true is because they are total morons.”

      And every time I’ve heard it, including this time, it is wrong.

      (That said, I basically agree with you about the fact that a paper clip maximizer and anything similar cannot be intelligent precisely because it is too limited.)

      • zarzuelazen27

        The AGI research community gives no indication they even understand the basic subject matter of their domain of study.
        What is an ‘observer moment’? Clearly, the AGI people have got no idea.
        This is comparable to a group of people claiming to be studying ‘physics’; yet apparently having no idea of even the most basic concepts of their subject: analogous to not even realizing that physics is about particles moving through space.
        So Robin is right: the field of AGI is in a very primitive state indeed.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        With that state of affairs in AGI, how do we manage to get there in five years?

    • https://entirelyuseless.wordpress.com/ entirelyuseless

      Despite your “separations into levels of abstraction and the 3 levels of recursion,” I see no reason whatsoever to believe your division of knowledge is exhaustive. Where is your argument for that?

      (Also, it’s ridiculous to posit philosophy as non-abstract or as associated with politics, which is as far from philosophy as one can get.)

      • zarzuelazen27

        The Omega domain is ‘political philosophy’.

        This is not the sort of thing one can provide a simple argument for. It’s a grand pattern that only emerges after you are familiar with thousands of different concepts and can connect them together into the ‘big picture’.

        The page is intended to provide links to summaries or overviews of the requisite knowledge – I find that reading 30-50 Wikipedia pages for each knowledge domain (30-50 ideas or concepts) is about the minimum required to get a reasonable summary of the field.

        So if you click on the names of the core knowledge domains, it will take you through to an alphabetical list of what I think the requisite 30-50 concepts are for each knowledge domain (links to Wikipedia pages). [page still under construction – only some links are up].

        Basically, you want a system capable of representing all knowledge. There are 3 different dimensions for ‘levels of abstraction’ (3 different ways of carving up reality into levels of abstraction).
        However, here’s the key point: the very method of representing knowledge is *itself* part of knowledge! Therefore this method itself splits into 3 levels (there are 3 levels of recursion). So you need 3^3 (27) core knowledge domains, to achieve the ‘intelligence explosion’ (a system where all knowledge is recursively generated).
        To summarize: the meta-meta representation of all knowledge solves ‘intelligence’.

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        I am over 99% confident that you are a crackpot.


  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    I don’t have an opinion about the likelihood of the precedence of ems versus GAI, but I wonder if Robin would agree that AI first would be humanly preferable? [In which case social policy might try to secure this result (even if the probability of success is low).]

    [The difference being one of the triumph of forager versus farmer values.]

    • Joe

      I don’t think handcoded AI entails a triumph of forager values. It’s more that it’s just far less human at all. The extent to which it resembles us would be determined by the extent to which our specific adaptations are generally efficient, plus any ways in which we can push decisions between several equally efficient options in a direction that more resembles humans than alternatives.

      Regarding which is humanly preferable, I think a great deal more of it depends on empirical issues than people like to think. Specifically: first consider what if Robin’s claims, as I understand them – that consciousness is adaptive and probably to a significant degree unavoidable in intelligent beings, that the future will hold a very great number of these conscious beings, that they will live at near-subsistence income levels but will be mostly happy – are true. Under these circumstances, I just don’t think it matters very much how non-human they are. Yes perhaps you might think it would be nice if more of our human heritage was represented in them, but assuming these claims are correct, a future in which AI is not strongly influenced by human preferences is still a pretty good future.

      On the other hand, consider a reality in which Eliezer’s views are correct – that none of the features we believe are morally important, like consciousness and pleasure and pain and hopes and dreams, are adaptive in intelligent beings after all; that the future will near-inevitably hold one vast giant superintelligence, performing perfect, fully general calculations but with no feeling underneath. Under these circumstances, suddenly our specific humanness seems much more important, suddenly the idea of trying to code ‘the’ AI to love human values and care for us like an overbearing parent seems less like a lamentable selfish attempt to satisfy our preferences at the expense of many orders of magnitude more potential happy beings, and more like trying to eke out a little value in a universe that would otherwise be completely dead and morally irrelevant.

      So, I think trying to decide what is humanly preferable is probably a bad idea when your decision is contingent on unknown information that would push the answer wildly in one direction or the other.

    • http://overcomingbias.com RobinHanson

      I don’t find it obvious that traditional AI first is better than ems first.

  • ttttttttttt

    Robin,

    If we are living in a simulation, wouldn’t a technological singularity be impossible? That would likely entail more computing power than the original simulation could handle, and could lead to simulations within a simulation.

  • marshall bolton

    It is about time we looked at the psychology of Ems – which must be based on human psychology. There is of course no definitive psychology (Robin would call it a “fake-expertise”), so there can only be competing conjectures. Here are mine.

    I suggest that each individual is quite unknowable, unpredictable and irrational. Lawfulness (such as economics) can only be a statistical feature of aggregates. So let’s call this dogged cussedness “negative capabilities” and I want to suggest that these will shine through when people become machines.

    Will these “machine-people” have a feeling of existing? I don’t think so. I would think existence requires an embeddedness in matter, so there is always an easy answer to the question, “Where am I?”. Ems are everywhere and nowhere. I doubt they will find any solace in “Cogito ergo sum”. So we have persons who do not exist… This could be liberating or infuriating, but I would expect a constant background of angst, restlessness and uprootedness. Ems will in other words be hysterical, with lots and lots of symptoms and disturbed communications.

    Next there is the question of Copies. This will lead to the inevitable question of “Who is the original?” Am I the original? And if I am not the original then I must be secondary. Thousands of secondary people will float through the ether and by their own definition they must be inferior to the Originals. Thousands of Inferiors called that by themselves and their superiors (the Originals). This is not the recipe for having a nice day.

    Spurs are all living with a death sentence. Robin expects them to work diligently and loyally. I don’t think so. I expect a tad of Resentment and an executive uproar.

    How would I get along with thousands of copies of me? “Is that me? Is that still me? What has he done that I haven’t done? I want him to be me. I don’t want him to be me.” Spirals of hate and love, jealousy and kindness. Armies of “ME” are just too unsettling to contemplate. And then they meet armies of “YOU”. Sounds like the Prisoners’ Dilemma to me, with annihilation as the only response.

    On the surface maybe things will work as Robin imagines it. But I would expect an undergrowth of sabotage. Unofficial reckonings. Unconscious death instincts that flourish and spread disarray.

    Selected from the most productive humans there will be no Jokes – or rather a lot of jokes but no laughter. The Em age will not be a time of peace but a time of war.

    • eliashakansson

      Don’t you think Em programmers/producers/whatever would select for personalities who are less inclined to be bothered by existential questions, effectively eliminating the risk of psychological issues amongst Ems?

      Also, without a physical brain I imagine psychological ailments which result from chemical imbalances will be taken out of the equation, further reducing the impact and scope of psychological issues amongst Ems.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Also, without a physical brain I imagine psychological ailments which result from chemical imbalances will be taken out of the equation

        I don’t think this is correct. Chemicals influence us through emulated informational routes.

      • eliashakansson

        Ah yea that’s right. Thanks

      • marshall bolton

        A room filled with hypnotizing hypnotists will all be in trance…. a room filled with “specially selected personalities” will all be in awe of their own specialness and will be very busy in the process of differentiation – jockeying for position. Of course I do not believe that tests can be so precise as you say and there will always be lots of things that come in the backdoor. Chemical imbalance is also only one hypothesis to describe what ails humans. I do not believe in that one either. Plus – everyone is bothered by existential questions – one way or the other.

  • marshall bolton

    How will humans react to Ems and how will Ems react to humans?

    To answer this and whether there can be a foundation for solidarity we have to give answers to questions like:

    Are Ems people? Are Ems biological?

    Can our definitions be so flexible, that both they and we can answer yes? And what happens if they and we give different answers? Or if the answer is no?

    What happens if Ems become defined as “Thinking Machines”? (Thinking machines who have inherited all the foibles of mankind… and who can surprise….and organize…)

    At the congress for Ems in July 2116 the question will be raised, “What use are people?”

    Can they find other answers than “Cute but wasteful”?

    Humans like to have out-groups. Humans can be quite inhuman.

    So what are the chances for peaceful co-existence?

  • eliashakansson

    Have you responded to the critique leveled against your proposition of Ems coming before UAI by Scott Alexander? He says something like: we are still not able to emulate the neural patterns of simpler animals, like worms, but we can create UAI which is more advanced than worms, so why would you make the assumption that Ems will actually arrive before UAI? If you have already responded to this critique I would love to read about it.

    Love your book!!!!!

    • Riothamus

      I cannot speak for Robin, but I am inclined to think of this comparison in terms of innovating vs. reverse engineering.

      We are not currently able to emulate neural patterns, but we do know there are LOTS of different neural patterns available to emulate. We are also making great progress in developing new tools to make learning about and manipulating those neural patterns faster, easier, and cheaper.

      So I put to you a different question: would our road to UAI be faster if we had many examples available to reverse engineer?

      The extent to which the answer is ‘yes’ is approximately the extent to which we should favor emulations arriving first.

      • eliashakansson

        Our road to UAI may be faster with available examples to reverse engineer, but that doesn’t mean reverse engineering a brain is as easy. We know how to produce fairly capable AI, but we can’t even map the neurons of a worm and make it wriggle. So it seems like much of the difficulty lies in designing software which interprets the data in a way which actually reproduces the processes of an organic brain.

      • Riothamus

        The relationship between mainstream AI and UAI is weaker than that between UAI and neuroscience – brains are the basis for thinking UAI is even possible.

        This does cause me to wonder about the degree to which they mutually reinforce. The question then becomes whether UAI work can generalize from improved brain data faster than EM work can faithfully implement it on hardware.

        This still favors the EM position, I think.

      • eliashakansson

        I too think that would favor the Em position. But while brains are the basis for thinking that UAI is possible, I’m unconvinced that reproducing the workings of a brain (whether in creating UAI or EM) is the way to go. The advancements we have made in computer technology don’t owe very much of their success to how much we have learned about the brain. Other advancements have been more important in bringing that about.

        So what I’m saying is that it doesn’t look like we have gotten much closer to AI by studying the brain, and maybe the reason is that the workings of an Em robot would be very different from, and more complicated than, the workings of a UAI robot. Indeed organisms that are products of evolution are often less clever than our human designs, for different reasons.

    • http://overcomingbias.com RobinHanson

      Emulation abilities should be more lumpy than UAI. A drugged human is a great emulation of a non-drugged human, but its economic value at work is far less. There isn’t much value in exactly emulating worms, so we aren’t working much on that.

  • marshall bolton

    This idea of putting people in machines is a seriously deranged idea, and given that these people are self-organising emulations they must themselves become seriously deranged. I would expect an array of classical psychiatric symptoms. Some will become depressed – non-biological machines thinking and living as people, with the longings of people but with no possibility of real biological satisfaction (only the ever-present ersatz of virtual reality), must become seriously sad. Others will become manic – simply because these busy bees will be bored out of their skulls, so something exciting has to happen. Whilst those who realize they are living an impossible non-life will develop hallucinations, delusions and fixed ideas to entertain and divert themselves and others. They will be the schizophrenic ems. Simply a world of madness. And it ain’t just paper-clips, but endemic and endogenous.

    • http://overcomingbias.com RobinHanson

      This is the classic concept of “alienation.” Humans have been increasingly alienated from environments that feel natural and comfortable for many thousands of years, and there is probably much more to come.

      • marshall bolton

        Yes we can adapt to lots of things, but not everything. Serious violations (from which there is no escape) activate self-cures, which can be quite bizarre and contagious. This holds at the individual, family, clan and national level. The key question is thus what violates and who is the judge? Consensus about that is very “lumpy” and prone to self-deception, but the crazies still go crazy, and I propose that a simulated life on a silicon chip with free will, will go crazy.

      • HQP

        “a simulated life on a silicon chip with free will, will go crazy.”
        Indeed…

        https://www.youtube.com/watch?v=kFmJ6jJ4QLg

      • marshall bolton

        “The forsaken of your humanity…. But how cruel!” Luckily it is only a cartoon and can easily be overcome.

  • Riothamus

    I have read many things which suggest that small teams are better than larger ones for productivity in general and innovation in particular. The speculation is that the coordination costs increase faster than the productivity of additional workers.

    Is there a reason we shouldn’t assume small teams of maximally-resourced Ems will be the default?

    • http://overcomingbias.com RobinHanson

      Em teams would be the size that gives max productivity to their task. If I knew what size that was, I’d say so.

  • mlhoheisel

    I don’t think the big rival to your idea of EMs is what you call UAI so much as limited or technical AI. The computational cost of emulating full minds would always be much higher than that of the sort of limited focused intelligence that isn’t intentional or mind-like. Uploading the entire mind of a Go champion would always take orders of magnitude more computational resources than an intelligence that plays Go just as well but has no mind. Beyond the excess cost just to run entire minds, they also have to be catered to and manipulated, and the cost of managing them is also much higher.

    The only justifications I can see that would place any value on using slave minds rather than mindless intelligent software are values that might be called Veblen Value or Sadistic Value: either conspicuous consumption or dominance for its own sake.

    Mindless intelligent software seems like it could be very friendly and person like without there being any hint of a real mind involved. I’d expect to see assistants like Siri or Alexa as a top layer of a stack of technical AI code that works like an expert system in turn using other software. The whole stack doing all sorts of superhumanly intelligent useful things but completely without aspirations, emotions or intentions.

    A self-driving car might be engineered using EMs: upload the minds of chauffeurs and run them in cars with cameras and sensors. But it’s just a lot cheaper computationally and practically to do it with mindless intelligence.

    We’ve just transitioned out of a world that required minds for every task that needed even minimal intelligence. Slaves, servants or domestic animals served the EM role. It seems as though machines with mindless limited intelligence won out (except for Veblen or Sadistic value).

    • Joe

      I’m not sure it’s so obvious that most tasks really are made up of some small core component, the ‘real’ task, while everything else is just cruft we have to deal with because we’re stuck with all our useless human baggage. It seems more like modern technological systems have to solve most or all of the same problems as do biological systems – fuel, waste, maintenance, growth, planning, adaptation, interacting with others, etc. Perhaps the biggest difference is that the systems we build can be much more widely distributed and interdependent.

      The idea that AI can suddenly knock us off our feet, not by doing everything we do but by cutting out some large proportion of our functionality which is not ‘really useful’, seems quite mistaken to me.

      • mlhoheisel

        That seems like a challenge to the idea of division of labor itself. Like Adam Smith’s pin factory, most economic tasks seem divisible into a small core real task plus all the support stuff, and that core task, once isolated, can be done much more efficiently.

        I think you point to a critical difference. Modern technical AI is not like an organism in having to take care of all the details of existence and replication on its own.

        I don’t see that as competitive with humans, just the opposite. I see it as keeping a barrier between minds and machines that serve them.

      • Joe

        “That seems like a challenge to the idea of division of labor itself.”

        Not at all – just that those supporting tasks really do all need to be done. Until they can be done by machines, they will continue to be done by humans. When machines can take over, they will. If my claim here is correct then I think it suggests that there won’t be a big sudden takeover by mindless narrow AI, because being able to have a computer do some small part of a task, even if it looks like the ‘core’ part, isn’t anywhere near enough to automate the whole task. Narrow isn’t enough; you need general AI – which may, as you say, be very widely distributed and does not need to be all contained in one box, but does need to have all the necessary parts, even the boring ‘supporting’ parts.

        Thinking about it, I realize I’m echoing a claim made by Fred Brooks in the software engineering book The Mythical Man-Month. He claims that most of the work involved in writing programs isn’t in making them work, but in making them robust – tolerant of a wide range of possible inputs, and usable as part of a larger system. Specifically, his claim is that to write a program that is sufficiently robust in each of these directions takes three times as long as does writing the core functionality, so nine times as long overall, to turn a brittle ‘program’ into robust, fault-tolerant, usable, useful ‘software’. I think he’s right, and that the point applies more generally.

  • assman35

    I think the argument of the book is stupid. Given that Robin Hanson is smart, I am not sure why he is so stupid on this.

    AI is heading towards greater and greater specialization. EMs are the opposite of that. You want more idiot savants, not human beings. Future AI will be extremely good at doing specific things extremely well. Capitalism doesn’t want human level AI and therefore it won’t get it.

    What you will see in the future is AI that is incredibly good at driving cars, another that is good at farming, another that is very good at making brick walls, etc. Your EMs and your UAI will get their asses kicked by specialized AI each and every single time.

    • Patrick Staples

      Did you read the book? The premise is that software that can emulate a human mind will be cheap. If that is the case, there will be enormous pressure to copy people who are broadly capable in the economy. Spurs of generally capable individuals can be trained for specialized tasks. The evidence is that there are no farming savants, or driving savants.

  • assman35

    ” Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.”

    This is pretty funny. Most of the unprecedented burst of progress you talk about was not an advance in algorithms but instead improved hardware. For AlphaGo they basically used neural network algorithms from the 1990s. It’s the hardware that caught up to the software. There was no unprecedented burst of progress… just a sudden realization that the old algorithms actually worked!

    So you are arguing that UAI will fail to progress rapidly because that was the case in the past. But UAI failed to progress because the hardware wasn’t fast enough, which is exactly the same thing that would constrain EMs.