There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson … fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems. … Digital computers … were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.
You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. … Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. … We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make long-range forecasting inaccurate. … Each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. (more; Eli Dourado agrees; Alex Waller disagrees.)
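The snowballing Lee describes is the hallmark of chaotic dynamics, and a few lines of code make it vivid. Here is a minimal sketch using the logistic map, a textbook toy chaotic system standing in for a full weather model (the function names and parameter values are illustrative, not from any real forecasting code):

```python
def logistic(x):
    """One step of the logistic map x -> 4x(1-x), a standard chaotic system."""
    return 4.0 * x * (1.0 - x)

def max_divergence(x0, perturbation=1e-12, steps=100):
    """Run two trajectories whose starting points differ by a tiny
    perturbation, and return the largest separation seen.  In a chaotic
    system the separation roughly doubles each step until it is as large
    as the state itself."""
    a, b = x0, x0 + perturbation
    worst = 0.0
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        worst = max(worst, abs(a - b))
    return worst
```

A micro-level error of one part in a trillion stays negligible for the first handful of steps, yet within a hundred steps the two trajectories bear no resemblance to each other. This is exactly why micro-level imperfections wreck long-range forecasts in systems that are not designed to suppress such coupling.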
Human brains were not designed by humans, but they were designed. Evolution has imposed huge selection pressures on brains over millions of years, to perform very particular functions. Yes, humans use more math than natural selection does to assist them. But we should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor. The weather is not a designed signal processor, so it does not achieve such decoupling. Let me explain.
A signal processor is designed to maintain some intended relation between particular inputs and outputs. All known signal processors are physical systems with vastly more degrees of freedom than are contained in the relevant inputs they seek to receive, the outputs they seek to send, or the sorts of dependencies between inputs and outputs they seek to maintain. So in order to manage its intended input-output relation, a signal processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. Really, just ask most any signal-processing hardware engineer.
Now sometimes random inputs can be useful in certain signal processing strategies, and this can be implemented by coupling certain parts of the system to most any random degrees of freedom. So signal processors don’t always want to minimize extra couplings. But this is a rare exception to the general need to decouple.
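Dithering in audio quantization is a concrete instance of this exception: deliberately injected noise improves a signal processor. A minimal sketch, assuming an idealized uniform quantizer (the function names and step size are illustrative, not drawn from any particular device):

```python
import random

def quantize(x, step=0.25):
    """An ideal quantizer: round x to the nearest multiple of step."""
    return step * round(x / step)

def quantize_dithered(x, step=0.25, rng=random):
    """Add uniform noise one quantization step wide before quantizing.
    The noise couples the quantizer to a random degree of freedom, trading
    signal-correlated distortion for benign broadband noise: averaged over
    many samples, the dithered output is an unbiased estimate of x."""
    return quantize(x + rng.uniform(-step / 2, step / 2), step)
```

Without dither, an input of 0.1 always quantizes to 0.0; with dither, individual outputs are noisier but their average converges on 0.1. The randomness is harnessed deliberately, which is quite different from the uncontrolled coupling a designer works to suppress.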
The bottom line is that to emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system.
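The leaky integrate-and-fire neuron is a standard example of such a reduction: the cell's enormous biophysical state is collapsed to a single voltage variable plus a firing rule, yet the input-to-spike-rate mapping is preserved. A minimal sketch (the parameter values are illustrative, not fitted to any real cell type):

```python
def lif_spike_count(input_current, steps=1000, dt=0.1,
                    tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire: a signal-processing abstraction of a
    neuron.  The only state is one voltage -- a single signal dimension
    standing in for the cell's vast number of physical degrees of
    freedom.  Returns the number of spikes fired."""
    v, spikes = v_rest, 0
    for _ in range(steps):
        # Voltage leaks toward rest and is driven by the input current.
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_rest  # reset after firing
    return spikes
```

Weak inputs produce no spikes, and stronger inputs produce faster firing, reproducing the key input-output mapping with one state variable instead of a molecule-by-molecule simulation. Whether such simple models capture enough of each real cell type is of course the open empirical question.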
This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded. Artificial ears and eyes do not require a detailed weather-forecasting-like simulation of the vast complex physical systems that are our ears and eyes. Yes, such artificial organs do not exactly reproduce the input-output relations of their biological counterparts. I expect someone with one artificial ear and one real ear could tell the difference. But the reproduction is close enough to allow the artificial versions to perform most of the same practical functions.
We are confident that the number of relevant signal dimensions in a human brain is vastly smaller than its physical degrees of freedom. But we do not know just how many such dimensions there are. The more dimensions, the harder it will be to emulate them. But the fact that human brains continue to function with nearly the same effectiveness when they are whacked on the side of the head, or when flooded with various odd chemicals, shows they have been designed to decouple their signals from most other physical brain dimensions.
The brain still functions reasonably well even when flooded with chemicals specifically designed to interfere with neurotransmitters, the key chemicals by which neurons send signals to each other! Yes, people on “drugs” don’t function exactly the same, but at moderate drug levels people can still perform most of the functions required for most jobs.
Remember, my main claim is that whole brain emulation will let machines substitute for humans throughout the vast majority of the world economy. The equivalent of human brains on mild drugs should be plenty sufficient for this purpose – we don’t need exact replicas.
Added 7p: Tim Lee responds:
Hanson seems to be making a different claim here than he made in his EconTalk interview. There his claim seemed to be that we didn’t need to understand how the brain works in any detail because we could simply scan a brain’s neurons and “port” them to a silicon substrate. Here, in contrast, he’s suggesting that we determine the brain’s “key internal signal dimensions and their internal mappings” and then build a digital system that replicates these higher-level functions. Which is to say we do need to understand how the brain works in some detail before we can duplicate it computationally. …
Biologists know a ton about proteins. … Yet despite all our knowledge, … general protein folding is believed to be computationally intractable. … My point is that even detailed micro-level knowledge of a system doesn’t necessarily give us the capacity to efficiently predict its macro-level behavior. … By the same token, even if we had a pristine brain scan and a detailed understanding of the micro-level properties of neurons, there’s no good reason to think that simulating the behavior of 100 billion neurons will ever be computationally tractable.
My claim is that, in order to create economically sufficient substitutes for human workers, we don’t need to understand how the brain works beyond having decent models of each cell type as a signal processor. Like the weather, protein folding is not designed to process signals, and so does not have the decoupling feature I describe above. Brain cells are designed to process signals in the brain, and so should have a much simplified description in signal-processing terms. We already have pretty good signal-processing models of some cell types; we just need to do the same for all the other cell types.