I think the simulation/emulation distinction is important.

I have a hypothesis that the only way natural language can be understood is via emulation of the cognitive structures that map sounds and gestures onto mental concepts. If you are unable to emulate that mapping, you are unable to understand the mental concepts that are being communicated. For most native speakers of the same language, the mapping is very similar, so emulating the corresponding structures in another native speaker is quite easy.

I think the emulation/simulation distinction is important so that you can keep the emulation separate from your own thoughts. If you can't, then you start to take on other people's mental concepts as your own. This does happen; it's what groupthink is.

Turing's paper eventually had to enter the present discussion. Just to clarify, my background is surface and groundwater modeling.

It seems that Tim Lee has in mind Turing's paper, now 60 years old: http://www.loebner.net/Priz...

Crusty Dem is 100% right. The brain is a God damned complex system (pun intended). The way we ended up with our intelligence is through billions of neurons in our heads working together. How did it happen? Evolution? God? I just don't know. But is it the only way to intelligence and/or intelligent behavior?

I guess the problem here is semantics. You're losing time and effort trying to draw the line between simulation and emulation. It matters as semantics, for sure, but is it important in the "real world"? If the simulation is good enough to look (behave) like an emulation, would you still care about the difference?

If someday a signal-processing computer simulation with learning aptitudes is loaded with Prof. Hanson's memories and keeps answering questions the way he does... it would be hard to say that the computer is not "emulating" his brain. At least part of it.

P.S. Looks like a duck, walks like a duck, swims like a duck... might be a duck?

For the brain to be a Turing machine, or some subset thereof, is necessary but not even close to sufficient for feasible emulation on a classical digital computer. Besides the abstraction-layer problem posed above, there is a crucial efficiency problem. A Turing machine can, after all, crack a 10,000-bit public key. It's just that this Turing machine would have to consist of all the atoms in the universe, communicating at instantaneous speeds, and even then operate for quadrillions of universe lifetimes to crack the key.
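
To put rough numbers on that (my own back-of-the-envelope arithmetic, using standard order-of-magnitude estimates rather than figures from the comment):

```python
# Back-of-the-envelope check of the "all the atoms in the universe" point.
# All figures are rough, conventional order-of-magnitude estimates.
import math

keyspace_bits = 10_000
atoms_in_universe = 10**80      # common estimate for the observable universe
planck_ops_per_sec = 10**43     # ~1 operation per Planck time, absurdly generous
universe_age_sec = 4 * 10**17   # ~13.8 billion years in seconds

ops = atoms_in_universe * planck_ops_per_sec * universe_age_sec
print(f"ops available:  ~10^{math.log10(ops):.0f}")                 # ~10^141
print(f"keys to search: ~10^{keyspace_bits * math.log10(2):.0f}")   # ~10^3010
```

Even crediting every atom with one operation per Planck time for the age of the universe, you cover roughly 10^141 keys out of about 10^3010; it's the exponent gap, not hardware, that makes brute force hopeless.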

It turns out that general algorithms are extremely inefficient at solving problems. The general learning task, Solomonoff induction, is uncomputable. The most general yet computable learning task, time-delimited Solomonoff induction (Hutter's AIXI, or "universal artificial intelligence"), requires time exponential in the complexity of the environment being analyzed, and even then is not guaranteed to find an answer. That's more time than it takes to crack a public key! We only achieve any efficient solutions at all by "cheating": by knowing important things about our data or environment ahead of time and choosing far more efficient special-purpose algorithms to suit that data or environment.
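
To make the exponential blowup concrete, here is a toy sketch of my own (the "universal machine" below is a deliberately trivial stand-in, not anything from Hutter): enumerate all bit-programs in order of length and test each against the observed data. At length n there are 2^n candidates.

```python
# Toy Levin-style search: enumerate bit-programs in order of length and
# test each against the observed data. The "machine" here is deliberately
# trivial (a program is a bit pattern that gets repeated); the point is
# the 2^n candidates at length n, not the machine itself.
from itertools import product

def run(program: str, out_len: int) -> str:
    # Hypothetical toy machine: repeat the program string out to out_len bits.
    return (program * out_len)[:out_len]

def search(data: str, max_len: int):
    tested = 0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):   # 2^n programs at length n
            tested += 1
            program = "".join(bits)
            if run(program, len(data)) == data:
                return program, tested
    return None, tested

prog, tested = search("0110110110110110", max_len=12)
print(prog, tested)   # finds "011" after testing 10 candidates
```

This toy data has a short program, so the search ends quickly; for data whose shortest program is n bits, the same loop grinds through on the order of 2^n candidates.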

The only efficient computational world is thus a world full of hyperspecialized software. Our brains are undoubtedly collections of a large number of very specialized techniques, some genetically coded to expect our ancestral environment and many more learned from our current environment. Interestingly enough, the gains (usually exponential or super-exponential) to be had from computational specialization correspond to the common observations of economists (Smith, Reed, Hayek, et al.) about the huge gains to be had from ever more extreme divisions of labor, as each agent employs specialized algorithms most suited to its unique environment and role in the economy.
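
A minimal instance of that "cheating" (my example, not the commenter's): any general comparison sort must pay O(n log n), but knowing ahead of time that the data are small integers lets a special-purpose counting sort finish in linear time.

```python
# "Cheating" with prior knowledge: if we know ahead of time that the data
# are integers in a small fixed range, counting sort runs in O(n + k),
# beating the O(n log n) bound every general comparison sort must pay.
import random

def counting_sort(xs, k):
    # Assumes every x satisfies 0 <= x < k -- that IS the prior knowledge.
    counts = [0] * k
    for x in xs:
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)
    return out

data = [random.randrange(256) for _ in range(10_000)]
assert counting_sort(data, 256) == sorted(data)
```

The speedup comes entirely from knowing something about the data in advance, which is exactly the trade described above.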

sark is right; modularity is very functional, so evolution selects it for functional reasons.

Well, if everything were dependent on everything else, then it would be hard for natural selection to precisely excise relevant defects. It would also be hard for it to build new traits without messing something up. So I suppose species have evolved modularity as an evolvability adaptation. (Note that this requires postulating selection at levels higher than that of the individual organism.)

Evolution and Design may be different in some relevant sense, but not this one. A good optimization process will try to keep the functional parts of its product somewhat independent.

I would be enormously in your debt if you would please, please elaborate your criticism for us, or point us to an elaboration elsewhere. "Soon" was within a century or so, btw. I'd let you have your own guest post here, for example.

Doesn't this come down to Turing computability? If the brain is a Turing machine, or implements a Turing machine, it is emulatable on any other Turing machine. If it's not (for instance, if, to pluck a plausible but unlikely example, quantum interactions between neurotransmitters in the synapses turn out to play a significant role), then it cannot be emulated on a conventional computer.
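
To illustrate the first branch (a minimal sketch of my own, emphatically not a brain model): "emulatable on any other Turing machine" just means a general-purpose computer can step through any machine's transition table, as a few lines of Python show.

```python
# Minimal Turing-machine emulator: any machine, given as a transition
# table delta[(state, symbol)] = (new_state, write, move), runs on any
# general-purpose computer. The example machine is a toy that flips
# every bit on the tape and halts at the first blank.
def run_tm(delta, tape, state="start", blank="_", steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_tm(flip, "01011"))   # -> 10100_
```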

On the "designed" issue, I rather like Friedman's "as if designed" :P

As a cellular neurophysiologist who has spent the last 15 years studying small sets of ion channels on individual neurons, I think I'm qualified to say that the idea that the whole brain will soon be modeled is completely and totally absurd. We don't even know how all the basic subparts work: the firing properties of individual subtypes of neurons in vivo, the detailed connections, the different types of synapses, the wiring, all the types of plasticity, etc. Additionally, nearly all the current data have been obtained from neurons in culture or slice preparations, quiescent systems bearing little similarity to conditions in vivo. Plus, the organization and connectivity of the human brain are hugely different from those of most model organisms studied (sub-primate). Any attempt to model the brain without more detailed information will be a complete waste of time.

Nick is using "abstraction" in a different sense than Robin is using the term. Yes, organs are an "abstraction" the same way that pages in a book are "abstractions". They are not abstractions that facilitate emulation.

That's interesting and somewhat unexpected for the reasons nick describes, though obviously true. Any ideas why it's so?

Evolved systems do have modularity, and hence abstraction layers. For example, animal bodies are divided into clearly distinct organs. This is not just a figment of the human imagination considering them: they really do break up into modular and distinct organs.

It is relatively easy for simulations of the social system, and of most systems modeled by numerical methods, to be accurate enough as signal processors.

Nope, I think Robin and I are of one mind, though I wish he would not use the word "design," which is loaded. Both are processes for finding a solution in the solution space. Yes, they work differently, but is one "better" than the other, more efficient, more innovative? I think the jury will be out on that question much longer than most suspect.

"There’s a fundamental difference between systems that were actually designed"

There really is no reason to read further after this... err, claim. But as usual I did. You are claiming some stupid things about computing machines, things contradicted by very rigorous mathematics: they are not "fundamentally different"; they are physical systems in a very natural world, AND ones that weren't found by "traditional" evolution.

I assumed he was just trying to establish a lower bound. If there are even better ways to accomplish the goal, so much the better.

Why ape human functioning? There are indefinitely many potentially useful non-human ways to get things done.

I have the same question. If we understand the functional basis for representational abilities, why would we be limited to deploying them in a human manner, or even want to? While the discussion seems to concern "Ems," perhaps it's cryonics (where emulation is indeed critical) that's lurking in the background, since emulating human representational capacities seems much harder than simulating them.

The key relevant distinction between evolved and designed is abstraction layers. Human engineers need abstraction layers to understand their designs and communicate them to each other. Thus we have clearly defined programming languages and protocol layers. They are designed, not just to work, but to be understood by at least some of our fellow humans. Evolution has no such needs, so there is no good reason to expect understandable abstraction layers in the human brain. Signal processing may substantially reduce the degrees of freedom in the brain, but the remaining degrees of freedom are still likely to be astronomically higher than those of any human-understandable abstraction layer. No clean abstraction layers, no emulation.
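
For a concrete sense of what a designed abstraction layer buys (my illustration, not Robin's or the commenter's): any component satisfying a declared interface can be swapped in, and callers depend only on the interface. Evolution offers no such substitutability guarantee.

```python
# What a designed abstraction layer buys you: anything that satisfies
# the interface can be swapped in without the caller knowing or caring.
# The claim above is that evolved systems offer no such guarantee.
from typing import Protocol

class Codec(Protocol):
    def encode(self, text: str) -> bytes: ...
    def decode(self, data: bytes) -> str: ...

class Utf8Codec:
    def encode(self, text: str) -> bytes: return text.encode("utf-8")
    def decode(self, data: bytes) -> str: return data.decode("utf-8")

class Latin1Codec:
    def encode(self, text: str) -> bytes: return text.encode("latin-1")
    def decode(self, data: bytes) -> str: return data.decode("latin-1")

def round_trip(codec: Codec, text: str) -> bool:
    # The caller depends only on the interface, not the implementation.
    return codec.decode(codec.encode(text)) == text

print(round_trip(Utf8Codec(), "hello"), round_trip(Latin1Codec(), "hello"))
```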

And BTW, the fact that artificial ears and the like work has more to do with the plasticity of human neural systems than with the accuracy with which the artificial ear's output emulates that of a real ear.
