
I don't understand why brain scanning is mentioned. Once you have a model of how the human brain works in that much detail, you don't need to copy an existing human. You can just run your model and start with a baby AI. Brain scanning adds a ton of complications and I don't see how it would solve any problems.


The use of transistors in this venture is not the way forward. While they have merit, and can mimic the operation of neurons, they are, by their very nature, limited by their time-sliced domain. There is another technology, a hybrid, and it is far more powerful than this. I've kept quiet about it for 20 years, hoping to bump into the right people to make it happen. I guess one day it will.


Yup. I looked a bit at it in my roadmap, and if we need to simulate them, they do not seem to add much to the computational overhead. The reason is that glial cells have rather slow dynamics, so they do not need to be simulated at the same rate as the neurons. They might add more memory requirements, though.
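
To make that point concrete, here is a rough back-of-envelope sketch of why slowly updating glia add little compute but roughly double the state in memory. Every count, rate, and per-update cost below is an illustrative assumption, not a figure from the roadmap.

```python
# Illustrative back-of-envelope comparison (all figures are assumptions):
# glia updated far less often than neurons add little compute, but they
# roughly double the amount of state that must be kept in memory.

NEURONS = 8.6e10          # rough human neuron count
GLIA = 8.5e10             # glia are roughly comparable in number
STATE_PER_CELL = 1e3      # assumed state variables stored per simulated cell
FLOPS_PER_UPDATE = 1e2    # assumed cost of one cell-state update

NEURON_RATE_HZ = 1e4      # assumed neuron update rate (0.1 ms timesteps)
GLIA_RATE_HZ = 1e1        # assumed glia update rate (slow calcium/metabolic dynamics)

neuron_flops = NEURONS * NEURON_RATE_HZ * FLOPS_PER_UPDATE
glia_flops = GLIA * GLIA_RATE_HZ * FLOPS_PER_UPDATE
print(f"extra compute from glia: {glia_flops / neuron_flops:.2%}")

neuron_state = NEURONS * STATE_PER_CELL
glia_state = GLIA * STATE_PER_CELL
print(f"extra memory from glia: {glia_state / neuron_state:.0%}")
```

With these assumed numbers the glia add well under one percent to the compute but nearly double the memory, which is the shape of the trade-off described above.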


"More precisely, as far as I can tell, without locality things all but automatically “go wrong”, a-la The Future of Human Evolution."

I think Robin disagrees with this. I agree: the forces of competition, if left unchecked, will wipe out our values. There are various things stopping that from happening right now, but in the long run, survival of the fittest means death of humanity.


By the same token, it's not software that runs electronic computers either; it's Maxwell's equations.

Solving a computational problem involves hardware and software, pretty much by definition. Protein isn't electronic hardware, but any Turing-complete system that we have sufficient control over functions as hardware. Similarly, the procedure by which the proteins are applied to achieve a specific computational result might not be specified as a C program, but you'll have a hard time convincing me it's not software.

This is really just a semantic dispute, but I think the narrowness of your "hardware" and "software" categories is undermining the point you may have been trying to make.


Neurons don't do all the computational work of the brain; glial cells are, apparently, more than just a support system for the neurons. A brain emulation may very well have to simulate them as well.


Markram and Modha are looking at two different levels of modelling. It is a bit like local weather forecasting vs. climate models. To make a full WBE we need both levels, if only to check what simplifications we can get away with. It might turn out that the most workable level is more abstract, like Modha's, or more biological, like Markram's (or something on a scale above or below these).

I have been playing around with my own estimates of the computational demands for brain emulation, trying to see when it could occur. It turns out that modelling when hardware will be enough (as a function of the currently unknown simulation resolution) is not too handwavy: we have Moore's law (and its uncertainties), we have people's ideas about the needed resolution, and we have some estimates of the computational demands at each resolution.
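
A minimal sketch of that kind of estimate, assuming a fixed Moore's-law doubling time; the available-compute figure and the per-resolution demands below are purely illustrative placeholders, and their wide spread is exactly the uncertainty being discussed.

```python
import math

# Illustrative assumptions only, not the roadmap's actual figures.
BASE_YEAR = 2009
AVAILABLE_FLOPS = 1e15        # assumed top-of-the-line compute around the base year
DOUBLING_TIME_YEARS = 1.5     # assumed Moore's-law doubling time

# Assumed realtime-emulation demands (FLOPS) at different simulation resolutions.
DEMAND_BY_RESOLUTION = {
    "spiking neural network": 1e18,
    "detailed compartment model": 1e22,
    "molecular level": 1e28,
}

for level, demand in DEMAND_BY_RESOLUTION.items():
    doublings = math.log2(demand / AVAILABLE_FLOPS)
    year = BASE_YEAR + doublings * DOUBLING_TIME_YEARS
    print(f"{level}: hardware sufficient around {year:.0f}")
```

The point of the exercise is that the answer shifts by decades depending on which resolution turns out to be necessary, while the Moore's-law side of the calculation is comparatively well behaved.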

A bigger problem is to estimate when we will develop the necessary large-scale scanning technologies. The technology development itself could likely be done fairly quickly (think HUGO; I think this is something that would happen on a decade timescale), but it depends on *when* people start throwing money at it.

And the deep problem is to tell when the scan interpretation and the basic neuroscience get done. I have no good idea of how to estimate this, although it is easy to throw together plenty of scenarios. For example, early scanning gives a lot of new data to work with and get academic cred for, so scanning ought to accelerate neuroscience. But would there remain hard, non-parallelizable problems to solve?

This is a bit like the issue of whether we would get AGI before WBE or not. We do not have good ways of estimating AGI progress, and we do not have good ways of estimating neuroscience progress. Or do we?


Scanning: yes, there are current scanning methods giving us beyond cell-level (nanoscale) resolution in 3D. See the work by Kenneth Hayworth or my review in the whole brain emulation roadmap. The real bottleneck is that these methods usually have very limited scanning volumes; to work on entire brains, extra trickery is needed. A more fundamental issue is that it is currently uncertain what information we need to scan (connectivity obviously, but what parts of the local chemistry?).

Right now the consensus among emulation people is that the best level to go for is detailed compartment modelling. New evidence may change that, but if this holds, we do not need any molecular modelling. If actual molecular modelling turns out to be needed, then brain emulation is going to be very, very computationally costly and arrive late this century (if ever).
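
For readers unfamiliar with the term, here is a toy illustration of what "compartment modelling" means computationally: each neuron's morphology is split into coupled compartments whose membrane voltages are integrated every small timestep. This sketch is passive only (no ion channels) and all parameters are assumed for illustration.

```python
import numpy as np

# Toy passive multi-compartment cable; illustrative parameters only.
N = 100                    # compartments in one model neuron
DT = 25e-6                 # 25 microsecond timestep (s)
C_M = 1e-10                # membrane capacitance per compartment (F)
G_LEAK = 1e-8              # leak conductance per compartment (S)
G_AXIAL = 5e-8             # coupling conductance between neighbours (S)
E_LEAK = -70e-3            # resting potential (V)
I_INJ = 1e-10              # constant current injected into compartment 0 (A)

v = np.full(N, E_LEAK)
for _ in range(4000):      # 100 ms of simulated time, explicit Euler steps
    i_axial = np.zeros(N)
    i_axial[1:] += G_AXIAL * (v[:-1] - v[1:])   # current from the left neighbour
    i_axial[:-1] += G_AXIAL * (v[1:] - v[:-1])  # current from the right neighbour
    i_inj = np.zeros(N)
    i_inj[0] = I_INJ
    v += (G_LEAK * (E_LEAK - v) + i_axial + i_inj) * DT / C_M

print(f"injected end: {v[0]*1e3:.1f} mV, far end: {v[-1]*1e3:.1f} mV")
```

Scaling something of this shape, with active ion channels and realistic morphologies, up to on the order of a hundred billion neurons is what drives the compartment-level compute estimates mentioned earlier.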


Why do you think ems are batshit crazy?


"a sudden burst of insight is more likely to create an unexpected and sudden em transition" seems to be strong evidence in favor of the "brain in a box in a basement" hypothesis.

I know locality is only one thing that can go wrong. More precisely, as far as I can tell, without locality things all but automatically "go wrong", a-la The Future of Human Evolution.


I am at a loss as to why they are expanding to 'cat-size' when most (I assume) of the fundamental scientific questions are present at smaller scales. I watched an hour-long presentation from Markram and got *very* little out of it in terms of what hypotheses were actually being tested.


Locality is only one of several things that can go wrong in a transition, and there are several kinds of locality. For example, having most descendants all be copies of the same original human could still be very far from a "singleton." In this post I'm getting more precise about distinguishing various scenarios and the various senses and degrees to which such might be "local."


Huh? Almost every product starts development before it is efficient enough to sell at a profit.


The major dispute between you and Robin seems to be over the plausibility of highly local singularities. In this post Robin seems to be saying that what DARPA is doing is bad because it makes a local singularity more likely, while in argument with you he seems to claim that such singularities are implausible anyway.


If Markram's reaction to this announcement is basically accurate despite his volcanically eruptive dyspepsia, I'll have to update my estimate: the supposed "cat brain" is 7 or 8 orders of magnitude away from a realtime human em, not 5. The main thing I can see bringing that gap down is some discovery about digital circuitry that trades higher power dissipation for a lower "virtual neuronal parts count" -- i.e., that you don't need a simulation at such a detailed level.
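
For what it's worth, the arithmetic behind such an update might look like the sketch below. Every figure in it (the simulation's scale, its assumed slowdown relative to realtime, and the assumed extra cost of biologically detailed neuron models) is an illustrative assumption, not a number from IBM or Markram.

```python
import math

# Illustrative assumptions only.
SIM_NEURONS = 1.6e9        # rough scale attributed to the "cat brain" run
SIM_SLOWDOWN = 1e2         # assumed slowdown of that run relative to realtime
HUMAN_NEURONS = 8.6e10     # rough human neuron count
DETAIL_FACTOR = 1e4        # assumed extra cost of biologically detailed neuron models

gap = (HUMAN_NEURONS / SIM_NEURONS) * SIM_SLOWDOWN * DETAIL_FACTOR
print(f"roughly {math.log10(gap):.0f} orders of magnitude short of a realtime human em")
```

The detail factor is the lever that moves the estimate between about 5 and about 8 orders of magnitude, which is why a discovery that less neuronal detail suffices would shrink the gap.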
