Allen & Greaves On Ems

Paul Allen and Mark Greaves say the “singularity” is over a century away:

This prior need to understand the basic science of cognition is where the “singularity is near” arguments fail to persuade us. …. A fine-grained understanding of the neural structure of the brain … has not shown itself to be the kind of area in which we can make exponentially accelerating progress. … By the end of the century, we believe, we will still be wondering if the singularity is near.

But what about the whole brain emulation argument that we can simulate a brain without understanding it? They say:

For example, if we wanted to build software to simulate a bird’s ability to fly in various conditions, simply having a complete diagram of bird anatomy isn’t sufficient. To fully simulate the flight of an actual bird, we also need to know how everything functions together. In neuroscience, there is a parallel situation. Hundreds of attempts have been made (using many different organisms) to chain together simulations of different neurons along with their chemical environment. The uniform result of these attempts is that in order to create an adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of knowledge about the functional role that these neurons play, how their connection patterns evolve, how they are structured into groups to turn raw stimuli into information, and how neural information processing ultimately affects an organism’s behavior. Without this information, it has proven impossible to construct effective computer-based simulation models.

This seems confused. No doubt a detailed enough emulation of bird body motions would in fact fly. It is true that a century ago our ability to create detailed bird body simulations was far less than our ability to infer abstract principles of flight. So we abstracted, and built planes, not bird emulations. But this hardly implies that brains must be understood abstractly before they can be emulated.

Yes, you need to understand a system well in order to know what details you can safely leave out and still achieve the same overall functions. But if you can afford to leave in all the details, you don’t have to understand what is safe to leave out. We apply this principle every time we play a song or movie. Since we know that a song or movie recording contains enough detail to reproduce a full sound or visual experience, we don’t have to understand a song or movie in order to replay it for someone and achieve most of the relevant artistic experience.
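
To make the analogy concrete, here is a toy sketch in Python; the file names are hypothetical, and the point is only that a bit-exact copy reproduces the listening experience with no model of melody, rhythm, or meaning:

```python
# Replay without understanding: copy a recording sample-for-sample while
# treating the audio as opaque bytes. "song.wav" is a hypothetical input file.
import wave

with wave.open("song.wav", "rb") as src:
    params = src.getparams()                 # sample rate, channels, etc.
    raw = src.readframes(src.getnframes())   # opaque bytes, never interpreted

with wave.open("replay.wav", "wb") as dst:   # the copy plays back identically
    dst.setparams(params)
    dst.writeframes(raw)
```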

Projecting trends like Moore’s law suggests that our ability to simulate low level brain processes should increase by fantastic factors within a century. These factors seem plenty sufficient to model entire brains at low levels of detail. So if we have not understood brains well enough by then to know what details we can safely leave out, we should be able to reproduce their behavior via brute-force simulation of lots of raw detail.
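
For a rough sense of scale, here is a back-of-envelope sketch; the doubling times are conventional assumptions about hardware cost-performance trends, not predictions from any particular source:

```python
# Assumed doubling times only; the output is arithmetic, not a forecast.
import math

horizon_years = 100
for doubling_time in (1.5, 2.0):
    doublings = horizon_years / doubling_time
    print(f"doubling every {doubling_time} yr: ~{doublings:.0f} doublings, "
          f"~10^{doublings * math.log10(2):.0f}x over {horizon_years} yr")
```

On those assumptions, a century of steady doublings buys roughly fifteen to twenty orders of magnitude of raw capacity.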

Added 10p: As I explained in January:

We should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor.
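
A toy illustration of that decoupling, with made-up numbers: in a digital signal processor, the recovered bits (the signal dimension) are insensitive to modest noise in the underlying voltages (the physical dimension).

```python
# Illustrative only; the noise level and threshold are arbitrary choices.
import random

random.seed(0)
bits = [random.randint(0, 1) for _ in range(10_000)]
voltages = [b + random.gauss(0, 0.1) for b in bits]      # noisy physical carrier
recovered = [1 if v > 0.5 else 0 for v in voltages]      # threshold back to bits
print("bit errors:", sum(b != r for b, r in zip(bits, recovered)))  # ~0
```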

  • Ely

    The connectomics project between Harvard and MIT is a particular place where a useful approximate link between specific technologies and ability to emulate brains may be calculable in the short term.

    FWIW, I am studying in this course this semester and I am working on some research that uses connectomics to provide plausible complexity bounds on some brain operations, for the purpose of arguing against Ian Parberry’s analysis and conclusion that human cognitive resources are prohibitively difficult to emulate without better abstract knowledge of cognitive science.

    There is other recent evidence that suggests an algorithmic approach to brain activity will at the very least give us short term access to replicating certain specific human cognitive functionality. I think this is a case where Leo Breiman’s distinction between “the two cultures” of data analysis is pretty apt.

  • Billy Brown

    A seductive argument. The trouble is, there are no known methods of getting a high-fidelity simulation of biochemical events without going all the way down to a quantum mechanics-level analysis of every subatomic particle in the brain. Current molecular simulation tools are too approximate (and narrowly tuned) to give adequate results, but the QM-level equivalent is computationally intractable even for planet-sized masses of nanotechnological computers.

    So we’re going to have to understand enough to make a more selective simulation instead, which means confronting the fact that the brain is actually quite a bit harder to reduce to an abstract model than something like a bird’s flight. If it takes 50 years to figure out which of the myriad events in the brain are computationally significant, then it’s going to be 50 years before work on an adequate model can seriously begin.

    Of course, there are several other routes by which a singularity could be reached, so this doesn’t necessarily put off the date. But it does suggest that the human uploading path is not as easy as many have tended to assume.

    • http://hanson.gmu.edu Robin Hanson

      I don’t know how you can know that current tools are inadequate for this purpose.

      • Billy Brown

        Try using one for real work, or alternatively open one up and look under the hood.

        To be more concrete, a brute-force brain sim would have to accurately account for phenomena such as chemical bonding, diffusion of small particle clouds in confined spaces, electrical effects involving small numbers of electrons, and so on. Unfortunately the macroscopic approximations used in conventional engineering aren’t valid for such small structures, so you have to fall back on simulating individual particles to get accurate results. But the critical ‘particles’ for most of these interactions are individual electrons and the atoms they’re bonding with, which means we’re back to implausibly high computational demands. It’s worth noting that the general form of the protein folding problem is only a tiny subset of what you’re trying to simulate here.

        Besides, you’re reversing the burden of proof. If you’re going to stake out the position that brute-force simulation of whole brains is going to be feasible within a few decades, you need to show some actual estimates of simulation difficulty to back this up. Just pointing to Moore’s law isn’t sufficient in this case, because the task is at least 9 (and possibly 15 or more) orders of magnitude harder than what’s currently feasible.
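
        For what it’s worth, translating that claimed gap into hardware-trend years, assuming purely for illustration a steady doubling every 1.5 to 2 years:

        ```python
        # The gap sizes come from the comment above; the doubling times are assumptions.
        import math

        for gap_oom in (9, 15):
            doublings = gap_oom * math.log2(10)   # ~3.3 doublings per factor of 10
            for doubling_time in (1.5, 2.0):
                print(f"{gap_oom} orders of magnitude: ~{doublings:.0f} doublings, "
                      f"~{doublings * doubling_time:.0f} yr at {doubling_time} yr/doubling")
        ```

        On those assumptions, 9 orders of magnitude is roughly 45 to 60 years of doublings, and 15 orders is 75 to 100 years.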

  • Albert Ling

    Are we assuming that once whole brain emulations are made, and billions and billions of human ems are created, then the “complexity brake” can be overcome by sheer number of human-level agents working on understanding it, and a theory for the algorithm underlying the brain can be made? Because then we can tweak the algorithm itself to create intelligence that grows exponentially fast (which is at least my definition of a singularity).

  • http://markbahner.typepad.com Mark Bahner

    So we abstracted, and built planes, not bird emulations.

    Actually, we (well, not me personally… but guys like da Vinci) first tried flapping-wing aircraft. But the fixed-wing propeller type was a lot easier to mate to an internal combustion engine. And the internal combustion engine was necessary to get the power-to-weight ratio high enough for heavier-than-air flight.

    I’m amazed that people can’t simply look at the things that robots like Asimo, or computers like Watson, can do and realize that human-grade intelligence can’t be 100 years into the future.

  • mjgeddes

    It would certainly have taken another century if I hadn’t been around, but with me in this Everett branch, I believe that cuts the time to Singularity from 50-100 years to within 21 years or so. 😉

    The article makes a good point here:

    “Truly significant conceptual breakthroughs don’t arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to reëvaluate portions of what they thought they had settled.”

    New paradigms such as finding out that probability is just a special case of something more primitive (‘similarity’) and Bayes is just a special case of categorization? 😉

    You see, super-clicker intuition can actually anticipate the general form of future fundamental breakthroughs, even without much knowledge of technical details.

    • Konkvistador

      Can someone help me here? I haven’t been too diligent a reader of OB, and I have occasionally run into this poster’s comments. They always seem tantalizingly close to making perfect sense, yet I always come away with a not-too-certain feeling that there are blindingly obvious flaws and that I’m trying to read things into them which just aren’t there.

      Are the inferential chains too long? If so, is there an introduction to some of the more specific terminology mjgeddes uses?

      Is my IQ too low for it to be worth trying to understand? Just say it; I’m not sensitive.

      Is mjgeddes just a slightly silly person who makes some good points but overestimates their importance? Is mjgeddes a crank?

      I hate to draw attention to anyone like this, but in the absence of LW-ish karma, it’s hard for me to know what I can get out of reading through his contributions.

      @mjgeddes: Please don’t take offence! I’m just trying to get a quick and dirty reading on what’s the appropriate level of investment without reading all your stuff ever.

      • mjgeddes

        If I was smart enough to make perfect sense, the Singularity wouldn’t be in the future, it would be happening right now. If the accuracy of ideas about ‘future’ breakthroughs was perfect, it would no longer be referring to the ‘future’, since it could be implemented right now and thus would have moved into the present.

  • rapscallion

    Seems to me that they’re saying you need to know a lot of details before you can do a useful simulation, and you’re saying that computational capacity will surely give us the ability to program to a sufficiently detailed level, but I don’t see how computational capacity equals programming knowledge. As far as I can tell you’re assuming that we’ll have proto-transporter tech which will let us copy/paste sufficient details from actual brains into computers without initially understanding what we’re transferring. But such tech is far more speculative than the expanded computational capacity that Moore’s law suggests we’ll have.

  • http://www.isteve.blogspot Steve Sailer

    I don’t see much evidence that our descendants will want to have us around in virtual form to kvetch to them about why they never virtually come to visit us. When they are making up their budgets, not paying for maintenance on our brain emulations sounds like the first area they will decide they can cut.

  • http://hanson.gmu.edu Robin Hanson

    Albert, no, I’m not assuming we’ll have ems because we’ll have lots of ems to design them.

    Mark, yes we did try emulation first, but with very poor emulation tools.

    rapscallion, brain scanning tech will probably be the first of the three needed em techs to be ready.

    Steve, I talk about ems created because they are productive and competitive, not out of reverence to ancestors.

    • Albert Ling

      Robin, I’m just saying that even after ems are created there is still the other step of fully understanding how they work, in order to tweak them to a trans-human level of intelligence and not just many copies of the smartest current humans. IMO there has to be a qualitative change in the process of intelligence, not just a quantitative one. If we put a million people with an IQ of 50 in a room, they cannot come up with an insight on quantum theory, but one genius theorist can…

  • Michael Wengler

    Problems I see with brain emulation that I don’t think I’ve seen mentioned before:

    1) Structurally, a dead brain and a live brain are pretty similar. In a Newtonian world, your snapshot to start the emulation needs to include the position and velocity of every particle. We know at the quantum level these are not both measurable simultaneously, so brain understanding will have to proceed at least to the point that we know how to take a “snapshot” of the brain that includes both the static and the dynamic information. This will necessarily involve some sort of statistical inference that will necessarily be hit or miss: either the statistical inference we are using is “good enough” or it isn’t. It may well be that philosophically there is no such thing as a “perfect simulation,” that this is just a concept we can say without doing, like “square circle” or “p-zombie.” I’d bet dollars to donuts it is a lot easier to emulate a dead brain or a broken brain than a brain that has roughly the same dynamics as a living brain. And debugging, finding the subtler deviations between em and living brain, will take a while and would in the current environment trigger all sorts of ethical concerns.

    2) We know brains work with a two-way interaction between the brain and remote items like eyes, ears, skin, muscles. A “complete” emulation where nothing is taken for granted would have to include a complete emulation of the systems with which the brain interacted, a full emulation of the body and a full emulation of a not-completely-trivial environment in which the body/brain/eyes/ears are operating. If that merely doubles the complexity of the problem then I suppose that doesn’t really matter for this level of argument.

    3) You talk frequently of quark level simulation, which certainly gets beyond dealing with the statistical mechanics as embodied in thermodynamics. But at the deepest level, physical theory is probabilistic. The way one currently simulates probabilistic processes is with random number generators. But most theories which are probabilistic are probabilistic by convenience; there are actually a myriad of details which operate but are declared “unimportant for current purposes” and replaced with a pseudo-random number generator, which ultimately is simply assumed to have “close enough” to the same results the details would have given. BUT… we don’t KNOW what causes the probabilistic results in quantum mechanics. We don’t know whether, if we replace all these quadrillions of quantum interactions between the quarks with pseudo-random numbers, we will still have a functioning brain, and not the equivalent of placing pseudo-random drivers behind the wheels of every car in the country in order to do a traffic simulation. Indeed, Penrose, who whatever else you might say is not an idiot, thinks the quantum interactions may be involved in consciousness. Perhaps the processes will be emulatable, but they will probably have to be understood to some extent before that emulation can succeed.

    We COULD find evidence that p-zombies are impossible when we finally build a brain emulation. If we replace the “soul” with pseudo-random number generators, we may have a brain that functions like someone comatose or in some other non-optimum state, but never rises above that. Technically that would be a FANTASTIC achievement, but it would not likely be something you would find value in repeating trillions of times.

  • Ari T

    I’m still curious. What is the average expert opinion on the time scale of em simulations happening?

    • http://www.uweb.ucsb.edu/~criedel/ Jess Riedel

      Not enough academics take this topic seriously for there to be a sensible average or consensus. There are a handful of experts with varying types of expertise, who all have different opinions and biases.

  • Michael Wengler

    I think Kurzweil’s vision of the singularity is likely to happen way sooner than anything like an em or pure machine consciousness.

    Kurzweil talks more of integrating mechanical and biological. Kurzweil’s AI is not a machine that humans interact with. Kurzweil’s AI is an enhanced human, a human who accesses Google through interfaces built into his brain, so that it “feels” like accessing memory, but has more vivid audio and visuals, and can be more easily shared with other similarly enhanced humans. Kurzweil’s AI is a human with enhanced sensors, a human that can hear radio channels, can see radio, infrared, and ultraviolet radiation, can sense what is going on in great detail in his body, control a lot of it consciously when he wants to, and do a much better job of repairing it or enhancing its performance.

    By enhancing a human, Kurzweil’s AI elegantly avoids the hardest problems in AI. Whatever those hardest problems are. Because by integrating with the human, you enhance what you know how to enhance and use the existing human brain to do the parts you don’t yet know how to enhance.

    So can we have a singularity without ever creating consciousness in a machine? I think we can. When human capability is in some sense more than doubled by machine enhancements, then it is the new integrated humans who dominate future developments and take them in directions that unenhanced humans would not fully comprehend. Things get different fast. Perhaps part of that is that consciousness does show up in machines; perhaps that comes soon, perhaps that takes 1000 years.

    Further, whether conscious or not, when the bulk of thought-like stuff is done by machines, things are different. We already know from brain studies that the brain does so much more unconsciously than it does consciously, and that consciousness is a little thing on top of a fantastic structure, like Captain Kirk in the Enterprise.

    Of course, looked at this way, we passed a different kind of singularity when machines got stronger than muscles. The average human on earth (or is it in the U.S.? My bad for not remembering) uses every day the average energy output of 25 humans. We passed a mechanical singularity years ago. If you think the mechanical singularity was not that big a deal, perhaps the cognitive singularity will be similarly both incredible and mundane.

    • Albert Ling

      Could it be possible to figure out a way to biologically boost intelligence? Maybe by genetic modification, maybe by some chemical process applied to fetuses, etc.? Having every new human born have the mental capacity of an Einstein or a Feynman would likely be a paradigm shift of the same order of magnitude [as brain emulations].

      Today the “99% movement” complains that the 1% gets too much of the pie. The fact is, I think the 1% produces most of the pie… In this respect at least, Ayn Rand was right. Charles Murray also writes that there is a huge underclass (the majority of the population) that does not have the mental capacity to do the high-level jobs of the future. It’s not a lack of education; it’s mental power that’s lacking. So future tech must figure out a way to make everyone operate as well as or better than the current 1%!

  • http://www.isteve.blogspot Steve Sailer

    Who is going to pay to defrost Robin’s brain? I really don’t get the economics of the various immortality schemes.

    • http://lukeparrish.rationalsites.com/ Luke Parrish

      Funding is set aside in an interest-bearing account as part of making cryo arrangements.

  • http://markbahner.typepad.com Mark Bahner

    “Mark, yes we did try emulation first, but with very poor emulation tools.”

    The take-home of our attempts to duplicate bird flight shouldn’t be that we had “very poor emulation tools”…it should be that birds are lousy fliers. We now have jet engines that eat birds, not to mention flying way faster than they can.

    The attempt to achieve artificial intelligence should not be based on trying to emulate human brains. It should be based on simply continuing current trends in increasing capabilities of hardware and software.

    I haven’t seen the latest iPhone, but it sounds very impressive. And Asimo is impressive. And Watson is impressive. Just cram them all together, and it will be one danged impressive robot. Voice recognition. Good mobility. And outstanding knowledge of puns and trivia.

  • Dave

    I read. I slap my head. I chuckle. If evolution is true, the mind is cobbled together from systems originating from worm-like creatures reacting to chemical and electromagnetic forces. The worms found ways to eat, reproduce, and compete well enough to survive. Logic was discovered because it made more sense to swim toward food rather than away from it. Unless you are theistic, you will make no progress in understanding this, much less in improving upon it.
    Enter, the Singularity:

  • Joe Teicher

    So basically your hypothesis is that the best use we will come up with for enormously powerful future computers is to brute-force simulate human brains down to the atomic level, connect those brain simulations to very advanced robots and have those robots flip burgers.

    For any given amount of computing power, it seems like having specialized algorithms that are designed to be fast on the underlying hardware will destroy a setup that involves having the computer blindly simulate a big/complicated physical process and then have that physical process do the computing.

    Computers with much less power than human brains can already outperform us on a very large range of tasks. My PS3 can “paint” 30 incredibly detailed pictures/second. I can’t even twitch my finger 30 times/second let alone do anything useful. My trading system can react in microseconds to market data. If I try really really hard it still takes me like 200 milliseconds to react to anything.

    I just can’t fathom what cognitive tasks will supposedly be so difficult to program that we will want to resort to whole-brain emulation to produce them in computers. I feel pretty confident that there will be no such tasks and that whole brain emulation will never be economically valuable. Frankly, the whole idea of software that has goals, ambitions and desires like people do seems horrible. Instead of having tools that just do what they do, you’ll have employees that you have to motivate. No thanks!

    • http://daedalus2u.blogspot.com/ daedalus2u

      The same as what most intellectual power goes to now: coming up with rationalizations as to why a desired action is justified, and coming up with flattering things to say about high-status individuals that are not immediately recognized as transparent lies.

    • http://timtyler.org/ Tim Tyler

      We may emulate brains for fun or as a hobby – in much the same way that hobbyists emulate birds today. However, economic value does indeed seem pretty far-fetched.

  • zmil

    This comment thread might be dead, but I’ve been meaning to ask this question for a while, just waiting for a relevant post on OB to come up.

    I (vaguely) understand your argument that the signal processing aspect of brain function should be relatively abstractable from the fine biochemical/physiological details, especially given the robustness of brain function to seemingly massive chemical and physical insults.

    What I do not understand is how you propose to model memory formation. As I understand it, long term memory formation is dependent on changes in neural connections, which I assume must depend largely on chemical signaling. We’ve been trying to model cell-cell interactions for a while now, and mostly failed miserably. Between feedback loops, great dependence on starting conditions, and just the sheer number of different molecules and structures that are involved, I don’t have too much hope that this modeling will improve.

    Now, I can imagine a scenario where formation and strengthening/weakening of neural connections is a fairly straightforward probabilistic function of the strength of the electrical signals going in various directions, but I don’t know of any evidence that this is the case.
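
    Something like the following cartoon is the kind of rule I mean; it is purely illustrative, with made-up constants, and is not a claim about how real synapses work:

    ```python
    # A cartoon plasticity rule: connection strength as a simple probabilistic
    # function of recent pre- and post-synaptic electrical activity.
    import random

    def update_weight(w, pre_rate, post_rate,
                      learn=0.01, decay=0.001, prune_prob=0.0001):
        w += learn * pre_rate * post_rate   # correlated activity strengthens
        w -= decay * w                      # otherwise, slow weakening
        if random.random() < prune_prob:    # occasional random loss of the connection
            w = 0.0
        return max(0.0, min(w, 1.0))        # keep within crude bounds
    ```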

    My suspicion is that, even if the signal processing and short term memory functions of brain are relatively straightforward and mainly dependent on the electrical inputs and outputs to each neuron, formation of long term memories is more of a biochemical problem, and thus not nearly as amenable to modeling with our current methods.

    One possibility that this brought to mind is of a situation where we can model brains for a few seconds, but they have nothing but short term memory, or at best the modeling of long term memory is faulty and breaks down over time. This might still be useful; you could run the short term model, then put that output into another run, and so on. But it would be hard to think of EMs as persons in this situation, rather than tools.

    • http://hanson.gmu.edu Robin Hanson

      zmil, the process of writing and reading memory is a key part of the signal processing system, and so must also have been designed to be robust to irrelevant influences.

      • zmil

        As I understand it, there is strong reason to believe short term and long term memory formation are fundamentally different processes. This is also suggested by the existence of anterograde amnesia.

        I’m a biologist, not a computer scientist, but this also seems similar to computers like the original Mac, which only had ROM and RAM, no hard drive. Clearly, some sort of memory is required for processing to occur, but this memory need not be permanent storage. Again, this is also suggested by people with anterograde amnesia, who are clearly capable of thinking, but are unable to form long term memories.

        Now, perhaps long term memory is just as robust to irrelevant influences as processing and short term memory, but I think the effects of ethanol and concussion suggest otherwise.

        On the other hand, that messes with my suggestion that EMs would not be considered people, as amnesiacs are clearly still people. With sufficient processing and storage capacity, they could even have a form of long term memory, by taking snapshots of their own processes -though they might not be able to integrate that into their own memory…
