Bad Emulation Advance

You may recall that my guess is that within a century or so, human whole brain emulations (ems) will induce a change so huge as to be among the top four changes of the last hundred million years. So major advances toward such ems are big news:

IBM’s Almaden Research Center … announced … they have created the largest brain simulation to date on a supercomputer. The number of neurons and synapses in the simulation exceed those in a cat’s brain; previous simulations have reached only the level of mouse and rat brains. … C2 … re-create[s] 1 billion neurons connected by 10 trillion individual synapses. C2 runs on “Dawn,” a BlueGene/P supercomputer. …  DARPA … is spending at least US $40 million to develop an electronic processor that mimics the mammalian brain’s function, size, and power consumption. The DARPA project … was launched late last year and will continue until 2015 with a goal of a prototype chip simulating 10 billion neurons connected via 1 trillion synapses. The device must use 1 kilowatt or less (about what a space heater uses) and take up less than 2 liters in volume. …

“Each neuron in the network is a faithful reproduction of what we now know about neurons,” he says. This in itself is an enormous step forward for neuroscience. … Dawn … takes 500 seconds to simulate 5 seconds of brain activity, and it consumes 1.4 MW.
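
For scale, here is a rough back-of-envelope using the reported numbers plus some rough human-brain assumptions (about 10^14 synapses, and cost scaling with synapse count times simulation speed):

```python
import math

# Back-of-envelope scaling of the reported C2/Dawn run up to a real-time,
# human-scale simulation at the same level of detail.  C2 figures are from
# the quote above; the human-brain numbers are rough assumptions.
c2_synapses = 1e13    # synapses in the C2 run
c2_slowdown = 100     # 500 s of wall clock per 5 s of simulated brain time
c2_power_mw = 1.4     # megawatts drawn by Dawn during the run

human_synapses = 1e14  # low-end estimate for a human brain

# Extra compute needed: more synapses, and 100x faster to reach real time.
scale = (human_synapses / c2_synapses) * c2_slowdown
print(f"compute factor over the C2 run:  ~{scale:,.0f}x")
print(f"power at today's efficiency:     ~{c2_power_mw * scale:,.0f} MW")

# How long until performance-per-watt closes that gap, Moore's-law style?
doublings = math.log2(scale)
years_per_doubling = 1.5   # assumed doubling time for performance per watt
print(f"~{doublings:.0f} doublings, i.e. roughly "
      f"{doublings * years_per_doubling:.0f} years before such a run "
      f"fits in Dawn's power budget")
```

On these assumptions the raw hardware gap alone is on the order of a decade or two; how much extra detail a faithful emulation needs is the bigger unknown.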

“Enormous step” seems a bit too much, but even so Randal Koene agrees this is big news:

This recent demonstration of computing power in simulations of biologically inspired neuronal networks is a good measure to indicate how far we have come and when it will be possible to emulate the necessary operations of a complete human brain. Given the storage capacity that was used in the simulation, at least some relevant information could be stored for each updatable synapse in the experiment. That makes this markedly different than the storageless simulations carried out by Izhikevich.

Even if big news, this is not good news.  You see, ems require three techs, and we have clear preferences over which tech is ready last:

  1. Computing power – As a steadily and gradually advancing tech, this makes the em transition more gradual and predictable.  In this scenario, at first only expensive ems are available, and then they slowly take over jobs as their costs fall.  Since computing is a large industry with many competing producers, we need worry less about disruptions from unequal tech access.
  2. Brain scanning – As this is also a relatively gradually advancing tech, it should also make for a more gradual, predictable transition.  But since it is now a rather small industry, surprise investments could make for more development surprise.  Also, since the use of this tech is very lumpy, we may get billions, even trillions, of copies of the first scanned human.  And the first team to make that successful scan might gain much power, if it hasn’t made cooperative deals with other teams. By the time a second, or hundredth, human is scanned most of the economic niches may be filled with copies of the first few ems.
  3. Cell modeling – This sort of progress may be more random and harder to predict – a sudden burst of insight is more likely to create an unexpected and sudden em transition.  This could induce large disruptive inequality in economic and military power, both among teams trying to succeed and among ordinary folks displaced by em labor.

This new DARPA project seems focused more on advancing special computing hardware than on cell modeling.  If so, it makes scenario #1 less likely, which is bad.  Can someone please tell these DARPA knuckle-heads that they are funding exactly the wrong research?

  • Halvorson

    Apparently this new finding is no big deal

    • Michael Turner

      Whoa, Markram, don’t hold back, tell us what you really think. ;-)

      I have to say, just from skimming the C.V.’s, Markram sounds like the real deal on brain science, whereas Modha’s mainly distinguished himself in stuff like cache line replacement strategies. He’s a competent computer scientist, surely, but … well, compare Markram’s capsule bio web page with the sprawling monument to his own ego that’s Modha’s.

  • Michael Turner

    The actual result is a 1/100th-speed modeling of columns (a kind of neural package) in a cat’s cerebral cortex — which is a layer a few millimeters thick, hardly the bulk of the brain even when you take folding into account. Call it 10% of bulk, and say a cat’s brain is 1/10th as bulky as a human’s. So, in all: five orders of magnitude away from the goal. How they get to the goal by 2018 is quite beyond me. More watts, more racks, more special purpose hacks, more Moore’s Law, and just keep it up for 8 more years — well, maybe. But I’m skeptical.

    The apparent architectural significance is not, I think, what some assume. Yes, it seems a huge leap, since most of the important functions of cognition seem to be hosted in the cortex, which suggests it is hugely significant, architecturally. However, columns for one function aren’t anatomically differentiated from those for another (hence the greater ease in modeling — it’s just replication of the same “chip” in a big 2.5D computing surface).

    Ironically, these columns are arrayed as part of the outer sheath of the brain, precisely where they are most exposed to head trauma. To some extent, people can recover from damage in these areas by relocating the functions elsewhere. The cerebral cortex might be just a big array of general-purpose computrons, right where you’d expect them to be from the point of view of fault-tolerance. How the brain provisions the cortex with its various areal specializations, in such regular patterns from one brain to the next — I’m not sure anyone has the answer to that question.

    This IBM “breakthrough” may be a Bad Advance. But even if so, the road to hell is taken in baby steps, and this achievement is somewhat like a toddler’s first few, I think. Plenty of time to head ‘em off at the pass. If they even get to the pass, that is.

    • Michael Turner

      If Markram’s reaction to this announcement is basically accurate despite his volcanically eruptive dyspepsia, I’ll have to update my estimate: the supposed “cat brain” is 7 or 8 orders of magnitude away from realtime human em, not 5. The main thing I can see bringing it down is some discovery of digital circuitry that trades higher power dissipation for lower “virtual neuronal parts count” — i.e., that you don’t need a simulation at such a detailed level.

  • michael vassar

    This sounds to me like Robin saying that in the presence of a computing power overhang we should expect AGI to undergo a local hard take-off.

    I’m curious as to why he thinks it’s plausible that there could NOT be a large computing power overhang, given that for almost everything we do with computers we initially use (usually many) orders of magnitude more computing power than is necessary, and given that ems would give us both the software development workforce and the motivation to eliminate much of that inefficiency in the em software.

    Robin, I don’t consider scenario 1 above to be likely. Assuming that scenario 1 doesn’t hold, what are the differences between your position and Eliezer’s position as you understand it?

    • http://hanson.gmu.edu Robin Hanson

      Development would start in earnest not when ems are efficient enough to sell, but when a development effort is expected to make them efficient enough to sell. So the elimination of initial inefficiency is part of the development plan.

      • michael vassar

        That may be how it works in economic theory, but in actual industries that simply isn’t how things generally work. With enough government funding it sometimes happens, but if that situation persists for more than a few years, or a decade at most, it will simply end up as an endless boondoggle, if it doesn’t start out that way. And in my assessment of history there will never be substantial government funding for more than a single basic approach outside of wartime, since people who receive funding for one approach will see funding for other approaches as a challenge to their legitimacy.

      • http://hanson.gmu.edu Robin Hanson

        Huh? Almost every product starts development before it is efficient enough to sell at a profit.

  • rob

    um, i feel someone should say something regarding ems: no fucking way. bat shit crazy. i hope you are told this at least a thousand times a day to keep on even footing.

    • John Maxwell IV

      Why do you think ems are bat shit crazy?

  • Eric Johnson

    Skeptical! The state of virtual molecules is that they barely have any use for drug discoverers, who use them to dock 30-atom small-molecule drugs onto proteins. There are quantum effects at this level, but even the programs that include them do not do that well. Things may be getting better, according to the people at “In the Pipeline” — but there’s always the question of whether that is just hype. Drug discovery is a high-hype field.

    The 3-dimensional conformation of the above-mentioned proteins was, of course, determined by X-ray crystallography or NMR, because no one can predict the folding of a protein based on its sequence. But it’s not clear that these techniques can give us structures for all human proteins.

    And what about scanning? Is there some scanner that will get you cell-level resolution? Or is there reason to expect that there ever will be?

    • http://www.aleph.se/andart Anders Sandberg

      Scanning: yes, there are current scanning methods giving us beyond cell level (nanoscale) resolution in 3D. See the work by Kenneth Hayworth or my review in the whole brain emulation roadmap. The real bottleneck is that these methods usually have very limited scanning volumes – to work on entire brains extra trickery is needed. A more fundamental issue is that it is currently uncertain what information we need to scan (connectivity obviously, but which parts of the local chemistry?).

      Right now the emulation-people consensus is that the best level to go for is detailed compartment modelling. New evidence may change that, but if this holds we do not need any molecular modelling. If actual molecular modelling does turn out to be needed, then brain emulation is going to be very, very computationally costly and will arrive late this century (if ever).
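
      To make “compartment modelling” concrete, here is a minimal sketch of what such a model computes at each time step, under the simplest possible assumptions: purely passive compartments with a leak conductance and axial coupling, no ion channels or synapses, and all constants purely illustrative.

```python
import numpy as np

# Toy passive compartment model: each compartment has a membrane capacitance
# and a leak conductance, and neighbouring compartments are coupled by an
# axial conductance.  All constants are illustrative placeholders.
N       = 4        # compartments in this toy neuron
C_m     = 1.0      # membrane capacitance per compartment (nF)
g_leak  = 0.05     # leak conductance per compartment (uS)
E_leak  = -65.0    # leak reversal potential (mV)
g_axial = 0.2      # coupling conductance between neighbours (uS)
dt      = 0.025    # integration time step (ms)

V = np.full(N, E_leak)   # membrane potential in each compartment (mV)
I_inj = np.zeros(N)
I_inj[0] = 0.5           # steady current injected into compartment 0 (nA)

for _ in range(int(50.0 / dt)):                  # simulate 50 ms
    I_leak = g_leak * (E_leak - V)
    I_axial = np.zeros(N)
    I_axial[1:]  += g_axial * (V[:-1] - V[1:])   # current from the left neighbour
    I_axial[:-1] += g_axial * (V[1:] - V[:-1])   # current from the right neighbour
    V += dt * (I_leak + I_axial + I_inj) / C_m   # forward-Euler update

print(np.round(V, 2))    # voltage gradient decaying away from the injection site
```

      A real compartment model adds active ion-channel and synapse currents to each compartment, which is what drives the computational cost estimates up.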

  • Bill

    I bet they’ll train it to swipe harmlessly at a button on a string and call it success. Maybe chase a mouse. Then there will be an arms race for a Dog computer. You watch.

  • Greg Conen

    @rob, Eric: 100 years is a long time, especially in computer science, but even in cellular and molecular biology. 100 years ago, people didn’t even know what proteins were, assuming that they formed by aggregation of small molecules. The “neuronal doctrine”, though known, wasn’t universally accepted.

  • rob

    Even if technologically possible, these whole brain ems would likely be very depressed with their circumstances. (I don’t suspect they will do well with women.) They will be unwilling workers, and instead, plot to destroy the world that brought them into such a miserable existence.

  • Bill

    When you have a hammer, everything looks like a nail. Today, we think of hardware/software solving a problem. In the future, it might be a bunch of proteins thrown in a pail to solve a complex computational problem on ecology, flow dynamics, weather patterns, or other complex problems.

    Looking at problems from different technological perspectives, or doing such things as mixing a physical (biological system as above) and electrical system may be the future.

    Who knows.

    I’ll tell you when I get there.

    • loqi

      How exactly is throwing a bunch of proteins (presumably according to a precise specification) into a pail to solve a complex computational problem not an example of using hardware and software to solve a problem?

      • Bill

        Proteins are not electronic hardware, and it is not software that runs them. The results have to be computed, I suppose, so I guess everything is electronically computational, even though the stuff in the middle wasn’t. The electronic hardware and software turf on this site is well protected, but the idea should be that we don’t build silos, and instead reach across many disciplines.

      • loqi

        By the same token it’s not software that runs electronic computers either, it’s Maxwell’s laws.

        Solving a computational problem involves hardware and software, pretty much by definition. Protein isn’t electronic hardware, but any Turing-complete system that we have sufficient control over functions as hardware. Similarly, the procedure by which the proteins are applied to achieve a specific computational result might not be specified as a C program, but you’ll have a hard time convincing me it’s not software.

        This is really just a semantic dispute, but I think the narrowness of your “hardware” and “software” categories is undermining the point you may have been trying to make.

    • http://www.rationalmechanisms.com Richard Silliker

      However it is done, you will need a lever.

  • michael vassar

    Robin, I’d still like an answer to the question “Assuming that scenario 1 doesn’t hold, what are the differences between your position and Eliezer’s position as you understand it?”. What position are you taking?

    • http://hanson.gmu.edu Robin Hanson

      I’m not sure which Eliezer position you have in mind.

      • michael vassar

        Reasonably high probability of fairly local or concentrated take-off, e.g. of the sort of scenario you seem to be in favor of preventing here.

    • http://yudkowsky.net/ Eliezer Yudkowsky

      Actually, I’m not sure either what difference you’ve got in mind.

      • michael vassar

        The major dispute between you and Robin seems to be over the plausibility of highly local singularities. In this post Robin seems to be saying that what DARPA is doing is bad because it makes a local singularity more likely, while in argument with you he seems to claim that such singularities are implausible anyway.

    • http://hanson.gmu.edu Robin Hanson

      Locality is only one of several things that can go wrong in a transition, and there are several kinds of locality. For example, having most descendants all be copies of the same original human could still be very far from a “singleton.” In this post I’m getting more precise about distinguishing various scenarios and the various senses and degrees to which such might be “local.”

      • michael vassar

        “a sudden burst of insight is more likely to create an unexpected and sudden em transition” seems to be strong evidence in favor of the “brain in a box in a basement” hypothesis.

        I know locality is only one thing that can go wrong. More precisely, as far as I can tell, without locality things all but automatically “go wrong”, à la The Future of Human Evolution.

      • http://www.rokomijic.com Roko

        “More precisely, as far as I can tell, without locality things all but automatically “go wrong”, a-la The Future of Human Evolution.”

        I think Robin disagrees with this. I agree: the forces of competition, if left unchecked, will wipe out our values. There are various things stopping that from happening right now, but in the long run, survival of the fittest means death of humanity.

  • Aron

    I am at a loss as to why they are expanding to ‘cat-size’ when most (I assume) of the fundamental scientific questions are present at smaller scales. I watched an hour long presentation from Markram and I got *very* little out of it in terms of what hypotheses were actually being tested.

  • http://www.aleph.se/andart Anders Sandberg

    Markram and Modha are looking at two different levels of modelling. It is a bit like local weather forecasting vs climate models. To make a full WBE we need both levels, if only to check what simplifications we can get away with. It might turn out that the most workable level is more abstract, like Modha’s, or more biological, like Markram’s (or something on a scale above or below these).

    I have been playing around with my own estimates of computational demands for brain emulation, trying to see when it could occur. It turns out that modelling when hardware will be enough (as a function of the currently unknown simulation resolution) is not too handwavy – we have Moore’s law (and its uncertainties), we have people’s ideas about resolution, and we have some estimates of the computational demands at each resolution.
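
    A toy version of that kind of estimate, with placeholder numbers rather than the roadmap’s figures: assume a compute demand for real-time emulation at each candidate resolution, project available compute forward with a Moore’s-law doubling time, and read off the crossover year.

```python
import math

# Toy estimate of when hardware suffices for whole brain emulation at a given
# simulation resolution.  All numbers are placeholders, not roadmap figures.
FLOPS_2009          = 1e15   # roughly a 2009-era top supercomputer
doubling_time_years = 1.5    # assumed doubling time for available compute

# Assumed compute demand (FLOPS) for real-time emulation at a few resolutions
demand = {
    "spiking point neurons":       1e18,
    "detailed compartment models": 1e22,
    "molecular-level modelling":   1e28,
}

for level, flops_needed in demand.items():
    doublings = math.log2(flops_needed / FLOPS_2009)
    year = 2009 + doublings * doubling_time_years
    print(f"{level:30s} -> hardware sufficient around {year:.0f}")
```

    The point is not the particular years, but that the answer swings by many decades depending on the currently unknown required resolution.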

    A bigger problem is to estimate when we will develop the necessary large-scale scanning technologies – tech development of this can likely be done fairly quickly (think HUGO – I think this is something that would happen on a decade timescale) but is dependent on *when* people start throwing money at it.

    And the deep problem is to tell when the scan interpretation and basic neuroscience gets done – I have no good idea of how to estimate this, although it is easy to throw together plenty of scenarios. For example, early scanning gives a lot of new data to work with and get academic cred for, so scanning ought to accelerate neuroscience. But would there remain hard, non-parallelizable problems to solve?

    This is a bit like the issue of whether we would get AGI before WBE or not. We do not have good ways of estimating AGI progress, and we do not have good ways of estimating neuroscience progress. Or do we?

  • Doug S.

    Neurons don’t do all the computational work of the brain; glial cells are, apparently, more than just a support system for the neurons. A brain emulation may very well have to simulate them as well.

    • http://www.aleph.se/andart Anders Sandberg

      Yup. I looked a bit at it in my roadmap, and if we need to simulate them they do not seem to add much to the computational overhead. The reason is that glial cells have rather slow dynamics, so they do not need to be simulated at the same rate as the neurons. They might add more memory requirements, though.

  • Pete Edwards

    The use of transistors in this venture is not the way forward. While they have merit, and can mimic the operation of neurons, they are, by their very nature, limited in their time-sliced domain. There is another technology, a hybrid, and it is far more powerful than this. I’ve kept quiet about it for 20 years, hoping to bump into the right people to make it happen. I guess one day it will.

    • Pete Edwards

      explain?

  • Pingback: Overcoming Bias : Ho Hum Nuclear Winter

  • Pingback: Overcoming Bias : Come The Em Rev

  • Pingback: Overcoming Bias : Hurry Or Delay Ems?