IBM gets $4.9 million from a Defense Advanced Research Projects Agency grant: IBM, Partners Aim To Build Brain-Like Computer Systems

What I would love is a 3-hour discussion seminar on this...

If you can make it to London, Anders is doing a 2-hour WBE talk on Saturday the 22nd. There's also the option of lunch before and pub after.

If not, they're planning to webcast it. But the event is free, and Anders is amazing to listen to and debate with, so I recommend coming.

For those who don't read the report, let me just note what it doesn't say (so as to forestall some possible misunderstandings).

The report does not claim that WBE will happen before other AI approaches succeed; this question is not addressed. The report does not claim that we ought to try to develop WBE; the issue of desirability is not addressed. The report also does not address safety issues, nor does it discuss the potential socio-economic consequences of WBE. We omitted these issues not because we deem them unimportant, but because we thought it best to start by trying to create some basic technical understanding of the prospect and the challenges it involves. We hope to address these wider questions in future work, or to stimulate others to do so.

People forget that whole brain emulation's goal is not to build an AI but to learn more about the brain. Blue Brain and similar projects are modeling as accurately as possible, but we still don't know the full story regarding many basic properties of the brain, such as plasticity, development, and glial cells; even the neurons themselves are not completely understood. That is the point of computational neuroscience: to garner a systems-level understanding of brain function that is not entirely possible through experiment alone, though experimental verification is crucial throughout.

That being said, WBE, along with all other advances in our understanding of brain and cognitive science, will slowly give us more and more of the picture, and engineers will naturally use that knowledge to build technology based on it. The end result of WBE will not be a functioning human-like intelligence in a box, but theory and algorithms describing how the brain processes information. These in turn can, and probably will, be used to build intelligent machines by piecing together the necessary parts as we see fit. An analogy: we can model an ant colony digitally, and the point is not to have a digital ant colony running around on screen (which may have entertainment value but little else), but to get better internet routers based on the principles discovered. Already, biologically based algorithms for machine vision are surpassing all other attempts, and that trend will continue.
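For readers who haven't seen the router example: the usual reference is ant-colony-inspired routing (in the spirit of AntNet). Below is a minimal sketch of the underlying pheromone mechanism; the link names and constants are purely illustrative, not anything from the comment or the roadmap.

```python
import random

# One node's "pheromone" routing table: higher level = more attractive link.
pheromone = {"link_a": 1.0, "link_b": 1.0}

def choose_link():
    """Pick a next hop with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for link, level in pheromone.items():
        r -= level
        if r <= 0:
            return link

def reinforce(link, trip_time, evaporation=0.1):
    """Fast round trips deposit more pheromone; all trails slowly evaporate."""
    for k in pheromone:
        pheromone[k] *= 1 - evaporation
    pheromone[link] += 1.0 / trip_time

# Feedback from a consistently faster link_a concentrates traffic on it.
for _ in range(200):
    link = choose_link()
    reinforce(link, trip_time=1.0 if link == "link_a" else 3.0)
print(pheromone)  # link_a's level ends up well above link_b's
```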

I think a combined approach of current AI/engineering methods along with the new tools and ideas that computational neuroscience provide will be the best route towards intelligent machines of value to society.

@Will

I used to dance tango with a neuroscientist a lot. I once asked him this question myself, and he had an interesting theory based on discoveries about dreaming. Of course, when we dream, the hippocampus sets up a slow wave, to which the amygdala contributes by somehow encouraging or activating so-called PGO waves from other parts of the brain, setting important visual centers in motion and thus coherently bringing images to the dream.

All this stuff is happening while we sleep, and somehow the main scanning brain wave goes out to "sweep up" all these events so we can experience them at the level of our coherent selves. (Because even though I'm asleep, I wake up and realize that dreams have happened to "me.")

Said dancing neuroscientist therefore argued that brainwaves are perhaps important in coordinating our brain parts: gathering up the disjointed, crazy, million-things-happening-at-once perceptual input and letting us make an understandable, meaningful "experience" out of it. An important thing to note is that of the four kinds of brainwaves, while one predominates, they are all always there, a little bit.

So while we are in the brainwave state we call "beta," chatting away, there is also always a trace amount of the other brainwaves, even the delta we associate with sleep. It may be that even when you are actively awake, small parts of your brain are actively asleep. I now use this as a convenient excuse all the time.

One thing I am interested in that wasn't addressed in the document is electrical activity with a longer range than ephaptic effects. Brain waves may serve a long-range regulatory function. I wonder why there are brain waves at all. Why and how is the electrical activity coordinated; is it a byproduct or a function? If the coordination is a function of the electrical activity itself, you get into the equivalent of intractable many-body dynamics.

It may well be possible to simulate this more simply, but you would have to understand the function.
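For what it's worth, the standard toy model for asking whether a global rhythm is a mere byproduct or a coordinating function is the Kuramoto model of coupled oscillators. Here is a minimal sketch; the parameters are illustrative, and this is not a claim about actual cortical dynamics.

```python
import math
import random

# Kuramoto model: N oscillators with random natural frequencies, each pulled
# toward the population's mean phase. Above a critical coupling K the group
# locks into a global rhythm; below it, no collective "wave" appears.
N, K, dt, steps = 100, 2.0, 0.01, 5000
freqs = [random.gauss(1.0, 0.2) for _ in range(N)]
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

for _ in range(steps):
    mean_sin = sum(math.sin(p) for p in phases) / N
    mean_cos = sum(math.cos(p) for p in phases) / N
    psi = math.atan2(mean_sin, mean_cos)  # phase of the collective rhythm
    r = math.hypot(mean_sin, mean_cos)    # coherence: 0 = noise, 1 = lockstep
    # Mean-field update: dtheta/dt = omega + K * r * sin(psi - theta)
    phases = [p + dt * (w + K * r * math.sin(psi - p))
              for p, w in zip(phases, freqs)]

print(f"coherence r = {r:.2f}")  # near 1 here; try K = 0.1 to see it collapse
```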

Well, that's certainly true - though "hundreds of millions of dollars" is peanuts next to the fortunes of the multi-billionaires involved on the corporate side - like James Harris Simons, Sergey Brin and Larry Page.

That's not what Modha said. He says governments are funding private companies right now, in a very big way.

Governments are the remaining one of the three scenarios that I bother listing in The Awakening Marketplace. The US government agencies that may have an interest in superintelligence include DARPA and the NSA.

DARPA's efforts so far do not seem to be on a scale proportional to the scale of the opportunity the field represents.

The NSA has bigger computers and smarter employees, and so might also have a shot. They might keep their superintelligence chained up in the basement, though.

In both cases, I suspect the problem is vision and interest. Governments don't exactly have a stellar record for innovation in computer science. ARPANET was cool - but since then the torch has mostly been carried by companies. Maybe governments will muscle in on the field later on, when they can better see what is likely to happen.

the two most likely funding sources for superintelligence development are search oracles and stockmarket traders.

While I'm not in a position to know, Dr. Dharmendra Modha, director of cognitive computing at IBM, personally told me at the Singularity Summit that the only entities with both adequate resources and appropriate interest in super-human AI are governments. He said they are pouring "hundreds of millions" of dollars into it that he personally knows about, excluding participation that may be beyond his knowledge. He seemed very adamant about his opinion.

To make a compelling comparison between approaches, you need to put some effort into considering the hardware-requirement ratio, estimating how much human effort is involved in both cases (and at what cost), and considering who is (or will be) willing to pay these costs, among other things.

I have limited resources for blog post comments, and am not really out to produce a "compelling comparison" - since I think the idea is too silly for me to spend much time on.

However, to briefly address the funding issue, IMO, the two most likely funding sources for superintelligence development are search oracles and stockmarket traders. In both cases, funding sources are substantial. Brain emulations are mostly a solution in search of a problem.

I am reading this now. What I would love is a 3-hour discussion seminar on this, with Robin as the tutor and a nice group of about 12 discussants, like we used to have at SJC.

After reading this roadmap, I am more skeptical about the viability of WBE.

A general meta-criticism of the WBE approach is that, in even trying to analyze how hard the problem is, one instinctively starts cutting the brain's functionality up into categories and putting neat boundaries around what does and doesn't happen. Evolved systems - in my limited experience - don't respect this. For example, in "Table 4: Likelihood estimates of modelling complications", it is assumed that effects such as volume transmission are either required or not required for WBE. I suspect that the truth of the matter is that they are all required to a greater or lesser degree, and that leaving out a particular effect will have a complex set of ramifications for the performance of the resultant simulation. This kind of "messiness" is likely to pervade a science of WBE.

The hardest problem, I suspect, will be getting to a level of accuracy where the simulated human is capable of meaningful outputs - for example relatively coherent speech - so that one can tell whether one's tweaks are improving the quality of the emulation. I suspect that there will be a large region of simulation parameter space where the simulated brain just outputs nonsense. It will be hard to distinguish one form of nonsense from another, so it will be hard to tell when one is tweaking the relevant parameters in the simulation.

I could crystallize this by saying that WBE research exhibits a highly nonsmooth fitness landscape: whereas in AI research a failed attempt at a human-level AI is usefully intermediate between total nonsense and sentience, almost all failed WBEs will be equally useless. We can see this in, for example, Goertzel's use of the Novamente engine to power virtual pets in Second Life.
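To make the landscape claim concrete, here is a toy illustration (hypothetical, not from the roadmap): a naive hill climber gets useful feedback from every failure on a smooth landscape, but none on a needle-in-a-haystack landscape where almost all candidates score identically.

```python
import random

def hill_climb(fitness, steps=10_000, step_size=0.05):
    """Naive 1-D hill climber: accept any tweak that doesn't lower fitness."""
    x = random.uniform(-10, 10)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

# Smooth landscape: every failed attempt still tells you which way to move.
smooth = lambda x: -abs(x)

# Needle landscape: flat "nonsense" everywhere except a tiny region near 0,
# mimicking emulation tweaks whose outputs are equally useless until nearly right.
needle = lambda x: 1.0 if abs(x) < 0.01 else 0.0

print("smooth:", hill_climb(smooth))  # reliably ends up near 0
print("needle:", hill_climb(needle))  # random-walks blindly; rarely finds 0
```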

On the other hand, it may be that lower-level functionality [image recognition from the optic nerve, the autonomic nervous system, ...] will be more robust to the kind of modelling errors that WBE will make, and we might end up with human WBEs in, say, 2050 that can do simple image recognition and have plausible human emotional responses but are utterly useless at abstract thought. This would smooth the fitness landscape for WBE research and make the task much easier.

However, only time will tell...

Tim Tyler: "I mean, isn't it just a ridiculous plan, that stands practically no chance of reaching the target first?"

I don't think so. The hardware requirements may ultimately be unimportant. If it requires 1000 times more hardware to emulate a brain than to engineer one from scratch, and the amount of hardware per dollar doubles every year, it will only take 10 more years to acquire suitable hardware for emulation at the same fixed dollar cost. We already waste tons of hardware capacity by using high-level languages like Python rather than writing all our code in assembly, because it saves human resources.
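Checking that arithmetic explicitly (the 1000x ratio and one-year doubling time are the comment's assumptions, not established figures):

```python
import math

# Assumptions from the comment above, purely illustrative: emulation needs
# `ratio` times more hardware than an engineered-from-scratch mind, and
# hardware per dollar doubles every `doubling_years` years.
ratio = 1000
doubling_years = 1.0

# Years until a fixed dollar budget buys `ratio` times more hardware.
catch_up = doubling_years * math.log2(ratio)
print(f"{catch_up:.1f} years")  # ~10.0, since 2**10 = 1024 ≈ 1000
```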

To make a compelling comparison between approaches, you need to put some effort into considering the hardware-requirement ratio, estimating how much human effort is involved in both cases (and at what cost), and considering who is (or will be) willing to pay these costs, among other things.

The difference between a bird and a brain is that simulating a brain well is functionally equivalent to actually building it.

In the analogy, you don't simulate the targets, you construct them. You build a flying machine, and you build a thinking machine. It's true that constructing the thinking machine is easier in some ways - due to a lack of moving parts - but that makes the pure-engineering approach to making a mind easier just as much as it makes whole brain emulation easier - and doesn't disturb the point of the analogy.

Detailed knowledge of a physical system usually makes simulating that system straightforward given enough computing power. We can model the aerodynamics of a bird; we can model the dynamics of a neural network. The difference between a bird and a brain is that simulating a brain well is functionally equivalent to actually building it.

My own take on safety is that we should develop uploads or AIs as soon as possible. The more ubiquitous computing power is, the more dangerous software intelligence will be when it's created. If a human-level AI requires a multimillion-dollar supercomputer to run at its time of inception, that puts a severe limit on its ability to grow and spread. By all means, I say, continue with both the uploading and pure AI research.
