Signal Mappers Decouple

Andrew Sullivan notes that Tim Lee argues that ems (whole brain emulations) just won’t work:

There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson … fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems. … Digital computers … were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. … Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. … We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make long-range forecasting inaccurate. … Each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. (more; Eli Dourado agrees; Alex Waller disagrees.)

Human brains were not designed by humans, but they were designed. Evolution has imposed huge selection pressures on brains over millions of years, to perform very particular functions. Yes, humans use more math than natural selection does to assist them. But we should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor. The weather is not a designed signal processor, so it does not achieve such decoupling. Let me explain.

A signal processor is designed to maintain some intended relation between particular inputs and outputs. All known signal processors are physical systems with vastly more degrees of freedom than are contained in the relevant inputs they seek to receive, the outputs they seek to send, or the sorts of dependencies between inputs and outputs they seek to maintain. So in order to manage its intended input-output relation, a signal processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. Really, just ask most any signal-processing hardware engineer.

Now sometimes random inputs can be useful in certain signal processing strategies, and this can be implemented by coupling certain parts of the system to most any random degrees of freedom. So signal processors don’t always want to minimize extra couplings. But this is a rare exception to the general need to decouple.

The bottom line is that to emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system.
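
To make this concrete, here is a minimal toy sketch (the dimensions, weights, and tanh mapping are all invented for illustration): a device with thousands of physical degrees of freedom whose input-output relation runs through only three of them, so an emulator that reproduces just those three dimensions and their mapping reproduces the device’s function.

    import numpy as np

    rng = np.random.default_rng(0)

    class PhysicalDevice:
        """Toy signal processor: 10,000 physical degrees of freedom,
        but the input-output relation runs through only 3 of them."""
        def __init__(self):
            self.extra = rng.normal(size=10_000)  # heat, material state, etc.
            self.w = np.array([0.5, -1.2, 0.8])   # the 3 signal-bearing dimensions

        def step(self, x):
            self.extra += rng.normal(scale=0.01, size=self.extra.size)  # churns away
            return np.tanh(self.w * x).sum()      # by design, decoupled from .extra

    class Emulator:
        """Reproduces only the identified signal dimensions and their mapping."""
        def __init__(self, w):
            self.w = w
        def step(self, x):
            return np.tanh(self.w * x).sum()

    device = PhysicalDevice()
    em = Emulator(np.array([0.5, -1.2, 0.8]))
    for x in [0.1, -0.4, 2.0]:
        assert abs(device.step(x) - em.step(x)) < 1e-12  # same function, far fewer dims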

This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded. Artificial ears and eyes do not require a detailed weather-forecasting-like simulation of the vast complex physical systems that are our ears and eyes. Yes, such artificial organs do not exactly reproduce the input-output relations of their biological counterparts. I expect someone with one artificial ear and one real ear could tell the difference. But the reproduction is close enough to allow the artificial versions to perform most of the same practical functions.

We are confident that the number of relevant signal dimensions in a human brain is vastly smaller than its physical degrees of freedom. But we do not know just how many such dimensions there are. The more dimensions, the harder it will be to emulate them. But the fact that human brains continue to function with nearly the same effectiveness when they are whacked on the side of the head, or when flooded with various odd chemicals, shows they have been designed to decouple from most other physical brain dimensions.

The brain still functions reasonably well even when flooded with chemicals specifically designed to interfere with neurotransmitters, the key chemicals by which neurons send signals to each other! Yes, people on “drugs” don’t function exactly the same, but with moderate drug levels people can still perform most of the functions required for most jobs.

Remember, my main claim is that whole brain emulation will let machines substitute for humans throughout the vast majority of the world economy. The equivalent of human brains on mild drugs should be plenty sufficient for this purpose – we don’t need exact replicas.

Added 7p: Tim Lee responds:

Hanson seems to be making a different claim here than he made in his EconTalk interview. There his claim seemed to be that we didn’t need to understand how the brain works in any detail because we could simply scan a brain’s neurons and “port” them to a silicon substrate. Here, in contrast, he’s suggesting that we determine the brain’s “key internal signal dimensions and their internal mappings” and then build a digital system that replicates these higher-level functions. Which is to say we do need to understand how the brain works in some detail before we can duplicate it computationally. …

Biologists know a ton about proteins. … Yet despite all our knowledge, … general protein folding is believed to be computationally intractable. … My point is that even detailed micro-level knowledge of a system doesn’t necessarily give us the capacity to efficiently predict its macro-level behavior. … By the same token, even if we had a pristine brain scan and a detailed understanding of the micro-level properties of neurons, there’s no good reason to think that simulating the behavior of 100 billion neurons will ever be computationally tractable.

My claim is that, in order to create economically-sufficient substitutes for human workers, we don’t need to understand how the brain works beyond having decent models of each cell type as a signal processor. Like the weather, protein folding is not designed to process signals and so does not have the decoupling feature I describe above. Brain cells are designed to process signals in the brain, and so should have a much simplified description in signal processing terms. We already have pretty good signal-processing models of some cell types; we just need to do the same for all the other cell types.
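
For concreteness, one standard signal-processing model of a single cell is the leaky integrate-and-fire neuron. Here is a minimal sketch (the parameter values are textbook-style illustrations, not fitted to any real cell type):

    import numpy as np

    def lif_neuron(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                   v_thresh=-50.0, v_reset=-70.0):
        """Leaky integrate-and-fire: the neuron as a signal processor that
        maps an input current trace to output spike times. dt in ms,
        voltages in mV; all parameters are illustrative."""
        v, spikes = v_rest, []
        for t, i_in in enumerate(input_current):
            v += dt / tau * (v_rest - v) + dt * i_in  # leak toward rest, plus drive
            if v >= v_thresh:                         # threshold crossed: emit a spike
                spikes.append(t * dt)
                v = v_reset
        return spikes

    # constant drive -> regular firing; the model keeps only the
    # input->spike mapping and ignores the cell's molecular detail
    print(lif_neuron(np.full(1000, 2.0)))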

  • scott

    Between this post and Moore’s Law, emulation is eminently feasible.

    It seems to me that once you get someone to accept that simulation of the human brain is possible in principle, you could direct them to this post to convince them that it’s probable.

    (This is an excellent post.)

  • Proper Dave

    That article set off a lot of alarms; some paragraphs were kind of pseudo-scientific technobabble, and the “design” fetish set me off. Actually I view everything as some evolutionary process, but that’s just me :). Nevertheless, if this guy is a card-carrying scientific materialist, his grasping at chaos theory is kind of amusing; if he’s not, there is no point in arguing when you don’t share the same reality :)

    You CAN have a simulation that predicts every raindrop; it is just that after a while it will differ in the micro-detail. It will still look like Earth’s weather, and in aggregate it should be pretty accurate: desertification, average rainfall, places vulnerable to tropical storms, etc.

    Also the brain is kind of an anti-random, pattern-identifying machine (we actually see shapes in perfectly random stuff like clouds). So I think there will come a time when the models are not good enough and actually start to diverge into madness and malfunction, but that after incremental improvement they will enter a stable equilibrium. I don’t see how that is going to require perfect anti-Heisenbergian sampling and modelling to achieve; the brain is fairly “robust” and noise resistant.

    • jsalvatier

      See this post (http://lesswrong.com/lw/l6/no_evolutions_for_corporations_or_nanodevices/) for why most things are not “evolved”. It’s probably more accurate to say “heavily optimized according to some criteria” than “designed”.

      • Proper Dave

        I define evolution as that; there are obviously some “criteria” and pressures… Also evolution can be “stopped”, or at least slowed, for example in some ecological niche, but then it is static and nothing new; if you want something new you need some evolutionary process…
        Regarding nano-devices, you can of course use error correction to “beat” Shannon’s law (you cannot eliminate the noise, only boost the signal sky high), but there is still that infinitesimal chance that it will mutate into grey goo :)

    • Michael Rosefield

      I don’t think that our inability to accurately measure systems with (if I recall the terminology correctly) ‘sensitive dependence upon initial conditions’ really matters.

      For me, the central message of the chaos theory butterfly scenario is that the flapping wings are just as likely to stop a hurricane as lead to one — the effects of all actions, including inaction, are entirely homogeneous and inseparable. In the same way, a brain simulation would still be brain-like, as differences in the fine grain of the simulation will lead to coarse variations that stay within normal parameters. It’s equivalent to a little thermal noise in your neurons – sure, it might have an effect, but you’re not going to notice and nor is anyone else.
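
      A quick toy illustration (all numbers invented): add a little noise to the drive of a thresholding unit, and its coarse output statistic, the firing rate, barely moves.

          import numpy as np

          rng = np.random.default_rng(1)

          def firing_rate(drive, noise_scale, n=100_000, thresh=1.0):
              # fraction of timesteps a noisy thresholding unit fires
              samples = drive + rng.normal(scale=noise_scale, size=n)
              return (samples > thresh).mean()

          print(firing_rate(1.2, noise_scale=0.001))  # near-noiseless drive
          print(firing_rate(1.2, noise_scale=0.05))   # "a little thermal noise": ~same rate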

  • Ben

    I have a friend who is currently studying meteorology. She says that we can model and emulate weather patterns more accurately than we currently do. The practical difficulty is this: emulating the weather is slower than real time. So we sacrifice accuracy for speed. Thus it’s hard to get an accurate forecast beyond a certain number of days (though the number of accurate days continues to rise).

    As computational power increases, this becomes less of a problem.

    Even if we have to emulate a brain on the physical level, modeling each protein and atom in each neuron, it’s still technically possible. It’ll just take a lot longer than we currently estimate.
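
    The speed-for-accuracy tradeoff shows up in even the simplest time-stepped model; a sketch with forward Euler on a toy decay equation (step sizes invented):

        import math

        def euler_decay(k=1.0, t_end=5.0, dt=0.5):
            # integrate dx/dt = -k*x with x(0) = 1; a bigger dt means fewer
            # steps (faster) but more error versus the exact exp(-k*t)
            x, t, steps = 1.0, 0.0, 0
            while t < t_end:
                x += dt * (-k * x)
                t += dt
                steps += 1
            return x, steps

        exact = math.exp(-5.0)
        for dt in (0.5, 0.05, 0.005):  # accuracy is bought with more steps
            x, steps = euler_decay(dt=dt)
            print(f"dt={dt}: {steps} steps, error={abs(x - exact):.1e}")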

  • http://liveatthewitchtrials.blogspot.com/ davidc

    What about really stupid ems? If a cockroach level of reasoning knew all the facts in Wikipedia, whose jobs could it replace? http://blog.steinberg.org/?p=11

    The recent Jeopardy bot suggests we might soon know the answer.

  • http://www.timothyblee.com Tim Lee

    @Proper Dave, you have the “design fetish” point precisely backwards. The intelligent design crowd holds that all ordered systems must be the product of design, which is obviously false. Robin Hanson is making the same error in the opposite direction: he’s claiming that even systems that weren’t designed by humans can be treated as “designed” by evolution. I think this claim is equally fallacious, for roughly the same reason.

    There’s a fundamental difference between systems that were actually designed, and systems that are the product of decentralized processes like natural selection. The former can be reverse engineered to discover the principles on which they were designed. The latter often cannot, because by definition they weren’t designed according to any particular engineering principles. This is a point libertarians instinctively understand with respect to markets (which are the product of human action but not of human design), but the same point applies to natural systems like evolution and the human brain.

    • http://hanson.gmu.edu Robin Hanson

      Do you dispute my claim that signal processors, even evolved ones, must satisfy the “principle” of decoupling from extra dimensions?

      • http://www.timothyblee.com Tim Lee

        I just don’t think this is a very useful way of thinking about the problem. Obviously, the brain as a system has fewer degrees of freedom than the sum of the degrees of freedom of its components. But that doesn’t tell us very much–it’s possible (I think probable) that even after this dimensionality reduction, our brains will still be too complicated to be amenable to computer simulation. And even if a computationally tractable model for the brain exists, we won’t necessarily be able to discover it.

        More later…

    • Proper Dave

      Nope, I think Robin and I are of one mind; I wish he would not use the word “design”, which is loaded, though. Both are processes to find a solution in the solution space; yes, they do work differently, but is one “better” than the other, more efficient, more innovative? I think the jury will be out much longer than most are suspecting on that question.

      “There’s a fundamental difference between systems that were actually designed”

      There really is no reason to read further after this … err, claim. But as usual I did… You are claiming some stupid things about computing machines that are proven by very rigorous mathematics; they are not “fundamentally different”, they are physical systems in a very natural world, AND ones that weren’t found by “traditional” evolution.

  • Lord

    One can consider an em a static reproduction, but then there is the problem of dynamic change: how much of its duration and environment must be paralleled to consider it a reliable em, and whether that is even possible independent of its sensory inputs, so that the task becomes reproducing the entire organism and environment. One wonders whether they would all drift into apathy, or psychosis, or something we no longer recognize as human, if we ever got there.

  • Matt Young

    The fundamental bit of the brain, the nerve impulse, is well defined. The fundamental way neurons in small groups ‘compute’ is well defined. Gross features of nerve tracts down to the quarter inch are known. There is likely a central principle, say “action with the least effort”, that defines the task switching. Once neuroscience gets the basic subroutine format, we get pretty close to emulation.

    One thing to remember about the brain: it does a whole lot based on external cues, and it fills in the rest with its own emulation. Human interaction works because we agree on the internal emulation, and via cues learned in childhood we pick up the resulting internal emulations of the brain. Much of how the brain works is exposed in childhood upbringing.

    The problem is going to occur when we expect multiple artificial brains to interact. The long cuts we use in our hardware will cause the artificial brains to find better mutual emulations with each other. For example, an artificial brain will check if its partner has Internet; if so, all the built-in human design goes away, and the artificial brains start communicating like Android tablets, using TCP/IP!

    We get interference when we mix the two types, humans and machines; the machines will diverge, seeking their own kind for efficiency.

  • mjgeddes

    You’ve got it exactly right.

    I’ve now completely cracked consciousness/mind at the in-principle level.

    As you say, the function of the mind is signal processing. Specifically, signal processing for the purposes of elegant (minimum complexity) representations of goals. This is done by categorization (which uses the metric of ‘similarity’). So as I mentioned:
    Complexity + Similarity = Reflection,
    a new type of information theory tracking internal signal processing related to our goals. Decision theory is merely a special case (Utility + Probability = Decisions), the case where the ‘signals’ handled are only the sub-set of signals representing external functions (use cases).

    It’s clear that indeed the signal processing is largely decoupled from the lower level functions of the brain. So to duplicate the mind, you don’t even need to reproduce the higher level input/output functions of the brain, you only need the content of what the signals represent (so even the ‘functionalists’ are too conservative).

    Finally, I’m sure the above implies that there are ultimate terminal values, universal in nature. This will be dictated by the math of the new information theory I referred to. But clearly, it’s closely related to aesthetic values, because aesthetics is all about minimizing the complexity of internal representations.

    In summary, the mysteries of the mind are close to being cracked at the in-principle level. Sure, there’s a huge amount of technical work smart folks will have to do to get proper understanding, but that stuff is just details.

  • Matthew Fuller

    Well I am glad he said ‘more later…’ otherwise this would have been a waste of time for me.

  • http://www.timothyblee.com Tim Lee

    It seems like your update says something different than your original post. I took the original post to be saying that the entire brain (or significant portions thereof) could be treated as a signal processor and would therefore not be too complicated. In your update you seem to be shifting this to the level of individual neurons.

    I’m not sure the argument makes sense either way. With the ear, for example, we already know exactly what the inputs and outputs are supposed to be because the ear performs a function (converting sound to electrochemical signals) that we already understand pretty well. So building an artificial ear is just a matter of producing a device that produces the right output signals given the input signals.

    I don’t see how you can perform a similar analysis on an individual neuron, because we don’t really know which characteristics of an individual neuron are essential to its functioning as part of a brain. It’s really hard to debug a system you don’t understand, and ethical considerations sharply limit our ability to observe live brains in operation. Successful full-brain emulation may depend on subtle details of the interaction of groups of neurons that would be difficult to observe by studying individual neurons. So even if a computationally tractable model of the brain exists, it may be too difficult to find it.

    • http://hanson.gmu.edu Robin Hanson

      A set of signal processors hooked together in a certain pattern, with inputs of some hooked to outputs of others, IS a signal processor. Yes, we don’t yet know for all cell types which cell parts are their central signaling degrees of freedom. That is part of why we aren’t there yet. Seems you are just making the generic anti-reductionist argument – that for complex systems one can never be sure that reductionist local interactions account for the full behavior.
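
      In code, the composition point is nearly a one-liner (toy processors, purely illustrative):

          from typing import Callable

          Processor = Callable[[float], float]  # a signal processor: input -> output

          def compose(*stages: Processor) -> Processor:
              # wiring outputs of some processors to inputs of others
              # yields just another input-to-output mapping
              def wired(x: float) -> float:
                  for stage in stages:
                      x = stage(x)
                  return x
              return wired

          amplify: Processor = lambda x: 3.0 * x
          clip: Processor = lambda x: max(-1.0, min(1.0, x))
          network = compose(amplify, clip)
          print(network(0.2))  # 0.6: the wired network IS itself a signal processor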

      • http://www.timothyblee.com Tim Lee

        I’m saying precisely that “for complex systems one can never be sure that reductionist local interactions account for the full behavior.” Our intuition to the contrary comes from the fact that human-designed systems are often built in a “layered” fashion that allows us to simulate the lowest layers of a technology “stack” and get the higher-level behaviors automatically. Natural systems are not designed by human beings, are not “layered” in the same way, and so there’s no reason to think we can build a micro model that, when aggregated, will give us accurate macro-level behavior.

      • http://hanson.gmu.edu Robin Hanson

        “Never be sure” is a much weaker claim than “no reason to think.” That seems far too strong a claim. The fact that we have in fact understood a great many natural systems suggests a decent chance we can understand any one particular as-yet-not-understood system.

  • Philo

    You write: “[I]n order to create economically-sufficient substitutes for human workers, we don’t need to understand how the brain works beyond having decent models of each cell type as a signal processor.” We don’t even need that. We just need to specify what we want done, and get to work designing a machine to do it. It doesn’t have to do it in a way closely similar to how a human being would do it. And many of the things we want done *can’t be done by a human being*; that doesn’t mean we can’t design a machine that would do them. Other things can be done by a human being, but we can design a machine that will do them *more efficiently*, in some non-human fashion.

    Why ape human functioning? There are indefinitely many potentially useful non-human ways to get things done.

    • http:/juridicalcoherence.blogspot.com Stephen R. Diamond

      Why ape human functioning? There are indefinitely many potentially useful non-human ways to get things done.

      I have the same question. If we understand the functional basis for representational abilities, why would we be limited to, or even want to, deploy them in a human manner? While the discussion seems to concern “Ems,” perhaps it’s cryonics—where emulation is indeed critical—that’s lurking in the background, since emulating human representational capacities seems much harder than simulating them.

      • Constant

        I assumed he was just trying to establish a lower bound. If there are even better ways to accomplish the goal, so much the better.

  • Matthew Fuller

    Wow Tim, couldn’t the researchers start with a toy model, then work up to something more complicated like *just* the visual system, and then go on to “higher” order cognitive processes, like logic or concept formation? And they could do this with increasing fidelity given more computation and time.

    Isn’t the real issue about timing rather than feasibility?

    • http://www.timothyblee.com Tim Lee

      couldn’t the researchers start with a toy model, then work up to something more complicated

      No, that’s the whole point. The brain is not built in a modular fashion. There’s no reason to think we could isolate the neurons that account for “logic or concept formation” and expect those to work independent of the rest of the brain. And even if we could, it’s not clear how we’d test them and verify that they’re working correctly, since we don’t know how to “read” the firing pattern of a group of neurons and translate them into equivalent thought. The only way to test a brain simulation would be to build a full-scale model and hook it up to a simulated body and hope it exhibits human-like intelligence. If it doesn’t, it’s hard to see what you’d do to debug the system.

      This isn’t to say we couldn’t build AI systems to do many of the things human brains do. But we’re not going to accomplish that by emulating human brains. We’re much more likely to accomplish it by re-implementing those capabilities using the very different techniques of human engineering.

  • Lex Spoon

    The claim Lee starts from is bogus:

    “You can’t emulate a natural system because natural systems don’t have designers”

    The examples given are of emulators that aren’t working very well with current technology. However, large hosts of emulators that work very well are completely ignored. The solar system didn’t have a human designer, but the emulations of solar orbits are extraordinarily precise.

    • http://www.timothyblee.com Tim Lee

      Actually, this example proves my point. Our models of the solar system aren’t precise. That’s why we periodically have to measure the position of the Sun so we know when to add leap seconds to our calendar. Without continuous measurement, our models of the solar system would gradually drift out of sync with the actual solar system.

      Our models of the solar system seem relatively precise because the solar system is a relatively simple and slow-moving system. But if you look at larger time scales, we can’t predict how the solar system will evolve. The brain is much more complex and operates on much shorter time scales, so we should expect a simulated brain to drift out of sync with the real brain much faster.

    • http://www.timothyblee.com Tim Lee

      Incidentally, the distinction I’m drawing between emulation and simulation isn’t something I made up. The idea that digital systems can be emulated precisely and analog systems cannot is a fundamental tenet of computer science. There’s a whole sub-discipline–numerical methods–that studies how to handle the errors that inevitably crop up when we try to approximate the behavior of natural systems. It’s perfectly reasonable to argue that in particular simulations these errors are small enough that we can ignore them. But that doesn’t mean that our simulation has become emulation–it’s still just an accurate-enough simulation.
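
      A standard toy example of that error accumulation (forward Euler on a harmonic oscillator): every step is almost right, yet the simulated energy, which should stay constant, drifts without bound.

          # dx/dt = v, dv/dt = -x; the exact orbit keeps energy = 0.5 forever
          x, v, dt = 1.0, 0.0, 0.01
          for step in range(1, 100_001):
              x, v = x + dt * v, v - dt * x  # forward Euler: tiny per-step error
              if step % 25_000 == 0:
                  print(f"t={step * dt:6.0f}  energy={0.5 * (x * x + v * v):.3f}")
          # per-step error is ~dt^2, but it compounds into unbounded drift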

      • http://hanson.gmu.edu Robin Hanson

        It is relatively easy for simulations of the solar system, and of most systems modeled by numerical methods, to be accurate-enough as signal processors.

  • Douglas Knight

    This post is exactly the right response to Tim Lee, but I think it would be useful to use the word “robust,” especially in the title.

  • roystgnr

    So if we start with a signal processing problem, then use evolution to solve it, we’ll end up with a system that’s easy to explain and emulate?

    http://www.cs.nyu.edu/courses/fall08/G22.2965-001/geneticalgex

    Here “emulate the circuit” and even “use another copy of the same kind of FPGA” were inadequate models for evolved systems of 100 logic cells.

    Perhaps there are other considerations (robustness against damage does sound like a good one) that will make systems of 100 billion neurons easy to model, but I wouldn’t bet money on it happening soon.

  • BobR

    Tim, I think you may be confusing “emulatability” with “predictability.” Very simple systems can be unpredictable, because simple algorithms can give rise to extremely complex behavior. It may be possible to build a perfect brain emulator which is no more predictable than a real brain.
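
    The standard toy example is the logistic map: a rule any computer emulates exactly, bit for bit, whose trajectories are nonetheless unpredictable in practice.

        def logistic(x, r=4.0):
            # one step of the logistic map: a tiny, perfectly emulatable rule
            return r * x * (1.0 - x)

        a, b = 0.400000, 0.400001  # nearly identical initial conditions
        for _ in range(60):
            a, b = logistic(a), logistic(b)
        print(abs(a - b))  # order 1: the two runs have fully decorrelated
        # perfect emulation of the rule, yet no long-range prediction of any
        # real system whose initial state we can only measure approximately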

  • http://unenumerated.blogspot.com nick

    The key relevant distinction between evolved and designed is abstraction layers. Human engineers need abstraction layers to understand their designs and communicate them to each other. Thus we have clearly defined programming languages and protocol layers. They are designed, not just to work, but to be understood by at least some of our fellow humans. Evolution has no such needs, so there is no good reason to expect understandable abstraction layers in the human brain. Signal processing may substantially reduce the degrees of freedom in the brain, but the remaining degrees of freedom are still likely to be astronomically higher than those of any human-understandable abstraction layer. No clean abstraction layers, no emulation.

    And BTW, the ability of artificial ears and the like to work has more to do with the plasticity of human neural systems than with how accurately the artificial ear’s outputs emulate those of a real ear.

    • http://hanson.gmu.edu Robin Hanson

      Evolved systems do have modularity, and hence abstraction layers. For example, animal bodies are divided into clearly distinct organs. This is not some figment of the human imagination – they really do break up into modular and distinct organs.

      • Constant

        That’s interesting and somewhat unexpected for the reasons nick describes, though obviously true. Any ideas why it’s so?

      • http://daedalus2u.blogspot.com/ daedalus2u

        Nick is using “abstraction” in a different sense than Robin is using the term. Yes, organs are an “abstraction” the same way that pages in a book are “abstractions”. They are not abstractions that facilitate emulation.

      • http://twitter.com/afoolswisdom sark

        Well, if everything were dependent on everything else then it would be hard for natural selection to precisely excise relevant defects. It would also be hard for it to build new traits without messing something up. So I suppose species have evolved modularity as an evolvability adaptation. (Note this requires postulating selection at levels higher than that of an individual organism)

        Evolution and Design may be different in some relevant sense, but not this one. A good optimization process will try to keep the functional parts of its product somewhat independent.

      • http://hanson.gmu.edu Robin Hanson

        sark is right; modularity is very functional, so evolution selects it for functional reasons.

  • Crusty Dem

    As a cellular neurophysiologist who spent the last 15 years studying small sets of ion channels on individual neurons, I think I’m qualified to say that the idea that the whole brain will soon be modeled is completely and totally absurd. We don’t even know how all the basic subparts work, what the firing properties are of individual subtypes of neurons in vivo, the detailed connections, the different types of synapses, the wiring, all the types of plasticity, etc. Additionally, nearly all the current data has been obtained from neurons in culture or slice preparations, a quiescent system bearing little similarity to in vivo. Plus, the organization and connectivity of human brain is hugely different from most model organisms studied (sub-primate). Any attempts to model the brain without more detailed information will be a complete waste of time.

    • http://hanson.gmu.edu Robin Hanson

      I would be enormously in your debt if you would please please elaborate your criticism for us, or point us to an elaboration elsewhere. “Soon” was within a century or so, btw. I’d let you have your own guest post here, for example.

  • http://freesoc.wordpress.com sconzey

    Doesn’t this come down to Turing computability? If the brain is a Turing machine, or implements a Turing machine, it is emulatable on any other Turing machine. If it’s not — for instance, if (to pluck a plausible but unlikely example) quantum interactions between neurotransmitters in the synapses turn out to play a significant role — then it cannot be implemented on a conventional computer.

    On the “designed” issue, I rather like Friedman’s “as if designed” :P

    • http://unenumerated.blogspot.com nick

      For the brain to be a Turing machine, or some subset thereof, is necessary but not even close to sufficient for feasible emulation on a classical digital computer. Besides the abstraction-layer problem posed above, there is a crucial efficiency problem. A Turing machine can, after all, crack a 10,000-bit public key. It’s just that this Turing machine would have to consist of all the atoms in the universe, communicating at instantaneous speeds, and even then operating for quadrillions of universe lifetimes, to crack the key.

      It turns out that general algorithms are extremely inefficient at solving problems. The general learning task, Solomonoff induction, is uncomputable. The most general yet computable learning task, time-delimited Solomonoff induction (Hutter’s AIXI, or “universal artificial intelligence”), requires time exponential in the complexity of the environment being analyzed, and yet is not guaranteed to find an answer. That’s more time than it takes to crack a public key! We only achieve any efficient solutions at all by “cheating”: by knowing important things about our data or environment ahead of time and choosing far more efficient special-purpose algorithms to suit that data or environment.

      The only efficient computational world is thus a world full of hyperspecialized software. Our brains are undoubtedly collections of a large number of very specialized techniques, some genetically coded to expect our ancestral environment and many more learned from our current environment. Interestingly enough, the gains (usually exponential or super-exponential) to be had from computational specialization correspond to the common observations of economists (Smith, Reed, Hayek, et al.) about the huge gains to be had from ever more extreme divisions of labor, as each agent employs specialized algorithms most suited to its unique environment and role in the economy.

  • Axa

    Turing’s paper eventually had to enter the present discussion. Just for clarity, my background is surface & groundwater modeling.

    It seems that Tim Lee has in mind Turing’s 60-year-old paper. http://www.loebner.net/Prizef/TuringArticle.html

    Crusty Dem is 100% right. The brain is a God-damned complex system (pun intended). The way we ended up having our intelligence is through several million neurons in our heads working together. How did it happen? Evolution, God? I just don’t know. But is it the only way to intelligence and/or intelligent behavior?

    I guess the problem here is semantics. You’re losing time and effort trying to draw the line between simulation and emulation. The semantics matter, for sure, but do they matter in the “real world”? If the simulation is good enough to look (behave) like an emulation, would you still care about the difference?

    If someday a signal-processor computer simulation with learning aptitudes is loaded with Prof. Hanson’s memories and keeps answering questions the way he does… it would be hard to say that the computer is not “emulating” his brain. At least a part of it.

    Ps. Looks like a duck, walks like a duck, swims like a duck…..might be a duck?

  • http://daedalus2u.blogspot.com/ daedalus2u

    I think the simulation/emulation distinction is important.

    I have a hypothesis that the only way that natural language can be understood is via emulation of the cognitive structures that map sounds and gestures onto mental concepts. If you are unable to emulate that mapping, you are unable to understand the mental concepts that are being communicated. For most native speakers of the same language, the mapping is very similar so emulating the corresponding structures in another native speaker is quite easy.

    I think the emulation/simulation distinction is important so that one can keep the emulation separate from one’s own thoughts. If you can’t, then you start to take on other people’s mental concepts as your own. This does happen; this is what groupthink is.
