Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)
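To make the modeling target concrete, here is a minimal sketch of a cell model of this input/state/output kind – a leaky integrate-and-fire neuron, one of the simplest candidates. The class name, constants, and timescale below are illustrative assumptions, not a claim about what resolution real emulations would need:

```python
# A minimal sketch (illustrative constants): a leaky integrate-and-fire
# neuron, one of the simplest models of what a cell does for a brain --
# take input signals, change internal state, send output signals.

class LIFNeuron:
    def __init__(self, tau=20.0, threshold=1.0, reset=0.0):
        self.tau = tau              # membrane time constant (ms)
        self.threshold = threshold  # potential at which the cell fires
        self.reset = reset          # potential just after firing
        self.v = 0.0                # internal state: membrane potential

    def step(self, input_current, dt=1.0):
        """Advance one timestep; return True if an output spike is sent."""
        self.v += dt * (-self.v / self.tau + input_current)
        if self.v >= self.threshold:
            self.v = self.reset
            return True
        return False

neuron = LIFNeuron()
print(sum(neuron.step(0.08) for _ in range(100)), "spikes in 100 steps")
```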

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different.” Just as they’ve said before. For a long time the median published forecast has put human-level AI thirty years away, and the median AI researcher surveyed has said forty years. (Even though such researchers estimate 5-10x slower progress in their own subfield over the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly predict brain emulations soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that brains are in fact simpler than brain cells – that we will more quickly find models and algorithms that substitute for brains than ones that substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most brain organs what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.

  • https://entirelyuseless.wordpress.com/ entirelyuseless

    Let me explain why I think you’re mistaken about this. I agree that modelling brain cells is easier than modelling the brain. But I don’t think it follows that creating ems is easier than modelling the brain.

    Suppose I give you my desktop PC and tell you to create a copy that does the same stuff. If you use some kind of physical scanner to determine how it is constructed, to a fairly fine level of detail, you may be able to create something that works theoretically like my PC. But in practice it will not, unless you make a copy of the hard drive. And I don’t think physical scanning is going to help you copy the contents of the hard drive, until you also know what method of encoding the hard drive uses.

    In practice, if you use a “physical copy” method to copy my PC, the new PC won’t boot.

    I think all of that will apply pretty much exactly as it stands to ems. If you figure out how brain cells work, then model their connections to create an em of me, that em will be brain dead, unless you first know how my memories and experience and knowledge are encoded.

    If people do try to make ems as you suppose, the first one will be made by destroying the original, and will result in a brain dead copy. That will be the end of that process as a social possibility.

    That will be the case until people figure out how knowledge is encoded, so that they can copy the knowledge. But if you know that, it really is likely that you already know how to create an AI, since you know how to make something that knows something.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      In practice, if you use a “physical copy” method to copy my PC, the new PC won’t boot.

      Why’s that?

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        Because you won’t see the data when you look at the drive with a microscope.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Is that a theoretical deduction or an empirical observation?

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        Both.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Where has it been empirically observed?

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        First, I was speaking of a microscope in its common sense, which is an optical microscope. Robin has been clear that he thinks this level of accuracy is sufficient to model a brain.

        You can search Google for examples of this with a hard drive. (Yes, I have done it myself, and no, I did not see any data.)

        Second, even if you do use an instrument which is capable of detecting the data, you would assume it is random variation unless you already knew about the encoding. If you are copying a computer to that level of detail, you would be creating an actual physical copy. The same thing is true of brains. Sure, if you create an atomic-level copy of a brain, it should work the same. It will also be an actual physical brain, not an em. Robin is talking about emulating high-level physical features of a brain.

        There is no reason to expect that to work, for exactly the reason I’ve been saying.

      • Peter David Jones

        > Second, even if you do use an instrument which is capable of detecting the data, you would assume it is random variation unless you already knew about the encoding

        There are ways of making a formal judgement about how random something is.

    • http://overcomingbias.com RobinHanson

      To make an emulation we’d need to read the state at whatever spatial and chemical resolution is required to see the key state. Reading at a lower resolution doesn’t get that key info.

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        I agree. But I am saying that unless you know how the data is encoded, you don’t know what level you need to read.

      • mehmeh

        I think the best way to approach not knowing what level we need to read at is to look at what constrains us in physics/instrumentation/present experimental research results, with some napkin numbers:

        speed of light in grey matter:

        ~4.70650973710659 * 10^-13 = (4.23 * 10^4) * (4 * pi * 10^-7) * (8.854187817620 * 10^-12)

        ~1,457,640.81124346 m/s = 1/sqrt(4.70650973710659 * 10^-13)

        published spatial resolution with EEG beamforming techniques: ~1 mm

        max sampling rate on Biosemi hardware: ~60 kHz, so ~30 kHz usable

        calculating the distance (mm, rough spatial resolution) that light (or information) would travel in this environment (restated as a script below):

        0.0485880270414487 mm = 1,457,640.81124346 / 1000 / 30000

        Seeing as ~0.05 mm is still within the cavity of the brain (no information has escaped at these hardware sampling rates), we would probably be oversampling for nearly everything except that which is ~0.05 mm from the electrodes – theoretically speaking, since nothing (to my knowledge) has been published with such resolution in EEG.

        However, in my lab we sampled at 2048 Hz (which most in the field considered too high anyway), which puts us at:

        ~0.711738677364971 mm = 1,457,640.81124346 / 1000 / 2048 (this is ~0.160 GB/minute at 24-bit resolution)

        That can still be considered oversampling as far as information content escaping the brain, depending on the ROI, but it is pretty close to what I think would be acceptable for exploring now within the bounds of published work.

        The minimum neuron size I’ve seen thrown around is about 0.002 mm, so even theoretically speaking, given what the hardware can do now, we are still about 25x off from measuring changes across neurons with EEG.

        But considering that the timescale of reaction times to motor tasks is greater than 150 ms, I think there is considerable leeway in classifying neural states to sufficient accuracy (as the field has done thus far with non-spatial techniques). Not to mention that even measuring at 2048 Hz you see sinusoidal fluctuations from bin to bin, so you are going to have to be averaging somewhere anyway, and the level of a single neuron is likely not to be very helpful in the context of what every other neuron is doing at that time in the brain (or in the body).
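
        The napkin arithmetic above, restated as a short script; it simply reproduces the figures and unit conversions as written, taking the grey-matter permittivity and the sampling rates as given:

        ```python
        import math

        # Restating the napkin numbers above, taking the claimed grey-matter
        # relative permittivity (~4.23e4) and sampling rates as given.
        eps_r = 4.23e4                 # relative permittivity of grey matter
        mu0 = 4 * math.pi * 1e-7       # vacuum permeability (H/m)
        eps0 = 8.854187817620e-12      # vacuum permittivity (F/m)

        mu_eps = eps_r * mu0 * eps0    # ~4.7065e-13
        v = 1 / math.sqrt(mu_eps)      # ~1,457,640.8 m/s propagation speed

        # Distance-per-sample figures as computed in the comment
        # (its m/s -> "mm" conversion reproduced as written).
        for rate_hz in (30000, 2048):
            print(rate_hz, "Hz ->", v / 1000 / rate_hz)
        ```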

    • Peter David Jones

      > And I don’t think physical scanning is going to help you copy the contents of the hard drive, until you also know what method of encoding the hard drive uses.

      The encoding isn’t something separate from the decoding (i.e. it’s not like a book written in an unknown language).

      If you do a sufficiently fine-grained scan of a brain or a PC, you will capture both at once.

      Of course there is a real catch about “sufficiently fine grained”. The less you know about brains, the more you would have to brute-force the issue, leading to quantitative problems.

  • Daniel Carrier

    > That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling.

    The thing is, we have options that evolution didn’t. We invented the wheel, which is far simpler than the legs evolution invented, but evolution isn’t really capable of creating a wheel. Good wheels need axles, and that means you can’t connect them to the blood supply and all that stuff.

    In the case of brains, we can train neural networks in a way that the human brain cannot. We can look at how much difference a marginal change in the connection strengths would make, and adjust for that (see the sketch below). Humans can’t do calculus on our neurons. If neurons were truly analog, we could change each weight a little and see how the result changes, but they’re not; they just have different probabilities of sending signals. And we can’t measure tiny differences in probability without waiting a really long time.

    Likewise, neurons may be complex not because that complexity is important to the design of human brains, but because it’s the simplest way to actually build the things, as opposed to the simplest to model. And then our brains evolved around that, so replacing them with a simpler model may not work.
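
    A toy sketch of the “marginal change” point above: with inspectable weights we can estimate how a loss responds to each weight and adjust, the kind of direct sensitivity probe a biological brain can’t run on its own synapses. Every name here is invented for the example:

    ```python
    # Toy example (all names invented): estimate how the loss responds to
    # a weight by finite differences, then adjust for it.

    def loss(w):
        return (w - 0.7) ** 2   # stand-in for a network's loss in one weight

    w, lr, eps = 0.0, 0.1, 1e-6
    for _ in range(50):
        grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)  # marginal change
        w -= lr * grad                                      # adjust for it

    print(round(w, 3))  # converges toward 0.7
    ```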

    • http://overcomingbias.com RobinHanson

      Having more options applies BOTH at the level of the brain as a whole and at the level of individual brain cells. The issue is our RELATIVE ability to simplify what nature did at these two levels.

      • Daniel Carrier

        Human brains are built around how neurons actually are. If we simplify what nature did, then we won’t be able to get uploads to run on the result. Simplifying at either level gives an advantage to ordinary AI (OAI?).

  • Don Reba

    The flight-AI analogy gets used a lot, and I think it is worth reiterating here: the wing IS simpler than the feather. Learning to replicate feathers and placing them in the exact same arrangement as in a detailed scan of a bird’s wing is not the best way to make a flying machine. In the same way, evolution is not a good reason to think that arranging brain cells is a workable way of creating intelligence.

    • http://overcomingbias.com RobinHanson

      Not everything is simple like flight. Some things, like bacteria, ecosystems, firms, legacy systems, and cities, really are complex.

    • Joe

      I’m not sure what a ‘whole wing emulation’ would even look like. The only reason this is hypothetically feasible with brains is because they are processors, they perform their function by taking signals in and sending signals out. So as long as you have some way of giving a brain input signals and receiving its output signals, it doesn’t matter where it is implemented.

      On the other hand, wings work by physically propelling an object through the air. If you did produce a computer simulation of a wing, what good would that do? Maybe you could simulate how a wing would behave in virtual reality, but that just isn’t the problem that wings are supposed to solve.

      Looking at the problem from another angle, it’s not clear that our first flight technology wasn’t the homing pigeon, which does in fact use preexisting feather and wing technologies, designed by evolution, without us needing to first understand how flight works on a high level.

  • Mark Bahner

    “And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc.”

    It’s not that they would “drop them.” It’s that they would never even *acquire* them.

    I don’t know how any person can look at the things that IBM’s Watson can do, and not say that it is getting very close to human intelligence. But Watson has virtually no features of humans. It doesn’t have a family or friendships, and isn’t a member of a team or a firm. The things that it does that are features of the human mind…e.g., it can understand and respond to human speech, and it can “read”…have just been added because they make Watson more valuable.

    And to take an even more concrete example…I guarantee you that computers will replace human drivers well before this century is out. But those computers won’t have families, friends, teams, firms, etc.

    In the immortal words of Kyle Reese: “It doesn’t feel pity, or remorse, or fear.”

    • Peter David Jones

      I’m always amazed by the ability of some people to make confident predictions about the nature and behaviour of AIs of unknown architecture that haven’t been built yet.

      If we explicitly programme in ethics to our AIs for safety reasons, they will have ethics.

      If we implicitly train them into ethics through socialisation, the way we train ethics into human children, they will have both ethics and, as a necessary prerequisite, social skills.

      Etc.

  • Lord

    Why create an AI? That may be pertinent to which is done first. Mostly we just want to automate human tasks, and most of these won’t need general intelligence at all: automatons, household chores, self-driving cars, translators, and the like. They will just accrete functionality over time. We may consider and treat them as intelligent even if they are not, or even if we don’t know how to tell. If we want to download ourselves, then only some sort of em would work, because it is ourselves we want to copy.

    • Mark Bahner

      “Why create an AI? That may be pertinent to which is done first. Mostly we just want to automate human tasks and most of these won’t need general intelligence at all, automatons, household chores, self driving cars, translators, and the like.”

      Yes, I think that’s absolutely correct. To me, it’s already obvious that our desire is first and foremost to accomplish tasks that are costly in terms of time or money. For example, driving is to many people (e.g. me) a waste of precious time…and the costs and hassles of owning a car (depreciation, maintenance, insurance) are annoying. Computer-driven cars solve that. They also solve wasting valuable space in houses for garages, wasting valuable space for parking in businesses, range anxiety for electric cars, etc. etc.

      Similarly, computer-driven vehicles mean I don’t need to do shopping for groceries, home improvement, clothes, etc. That cuts both time *and* expense, because brick-and-mortar sales locations employ many more people per unit of sales than e-commerce. For example, Amazon sells about four times as many goods per employee as Walmart.

      Or look at home maintenance. One computer-driven lawnmower could easily do the work of a whole neighborhood of lawnmowers at present, because most peoples’ lawnmowers sit idle for all but an hour or two a week.

      It is far, far, *far* easier to make computer-driven cars, trucks, and lawnmowers than it is to make a human brain emulation that happens to be a good driver or a skilled mower of lawns. A similar situation exists for house painters, house builders, road builders, etc. That’s why computers that take human jobs will come long before brain emulations that take human jobs.

  • turchin

    I think there is a third way which could combine the two approaches, ems and theoretical AI: creating a model of the human mind which is similar to the actual human brain on a functional level, but does not model cells or a concrete person.

    It would model the several hundred known brain regions, and also everything we know about how information is processed in the brain.

    But inside these functional blocks the solutions could be different. For example, our neural nets produce results similar to the early visual area V1 of the human brain – they recognise edges, movement, corners – the same things V1 does.

    And that is enough for our model. We don’t need to scan an actual V1 or learn anything about its cells.

    It is the same level of abstraction as in planes – they have wings, like birds, but at a lower level the details are completely different: there are no feathers, etc. This also has implications for AI safety.

    Such a simple model of the human brain would look from inside like a human mind, and its behavior would be understandable to us. We could slowly fine-tune it by comparing it with actual human brains.

  • http://praxtime.com/ Nathan Taylor (praxtime)

    FWIW I’m team simple. My simple is intelligence = prediction. More precisely: prediction from real time sensory input about how your actions might alter the world.

    Now… where I very strongly disagree with team FOOM is on how hard prediction really is. Prediction about billiard balls is easy, chess games moderate, go games harder, living dumb things (say plants) hard, living dumb groups of things super hard, and living smart things (say people) crazy hard. Orders and orders of magnitude more than chess games. And the reason is obvious enough. Human society is super complicated. So I’m ok with saying the underlying frame for AI is simple. Philosophers just like taking an obvious thing and making it hard. Qualia. Bah. So the bird-airplane analogy works for me. It’s just that the complexity required to predict the most complex things (groups of intelligent people) is waaaay beyond what it takes to do simple prediction. And humans are highly evolved to be expert at super complex group dynamic predictions. Things like status, signalling, in-group out-group dynamics. So, like human language or even picking up the laundry and folding it, it’s crazy computationally difficult. So AGI will come slowly. But before ems of course. 🙂

    • Joe

      I think prediction of billiard balls is only easy when you already know how to model them. Same with chess, same with humans, same with everything.

      Since it’s utterly unfeasible to form predictions by simulating reality at its most fundamental level, an intelligent predictor has to make many many simplifying assumptions, ignoring the factors that for the current prediction task don’t matter, and simplifying those that do to produce a robust useful model.

      This is hard, which is why progress consists of slow painful accumulation of models, including knowledge on when and where each model can be usefully applied. New models are generated by applying or modifying old models – there isn’t a ‘general model-generating algorithm’ that does this for you – and so progress will continue to be made only by building on what we have, not by suddenly discovering some simple fully general algorithm that re-learns everything from first principles and discards all our accumulated knowledge.

      That, at least, seems to me to be the obvious response to your argument.

      • http://praxtime.com/ Nathan Taylor (praxtime)

        Not sure, but seems like we’re mostly on the same page. The fundamental point I was making was using mind = prediction as a model, which you seem ok with, at least for this argument. Then you say prediction models are hard, which I think I was also saying too (“crazy hard”, “orders and orders of magnitude”). Where we disagree is just how hard hard is. That’s an empirical question.

        Maybe I’ll provide a bit more context on why I find it plausible that model building could continue to go fairly fast (still many decades and decades to AGI, but steady progress). That’s because the best model creator in the brain is the neocortex. Now, if the brain built ad hoc models for each sense or problem in the world, then the neocortex would be specialized. For example, if you were born blind, the visual portion of the neocortex couldn’t do much. Plus the neocortex would be biologically different in different regions. That’s what the many-models-to-understand-the-world brain would look like. But of course that’s not true at all. The neocortex appears to be very flexible, nearly identical throughout the brain, and able to model any sensory input stream. So if born blind, that tissue can still do stuff. And if brain damage occurs, nearby tissue can flexibly take things over. Maybe not quite as well, but still. So if mind = prediction, and the mind’s best prediction tissue is very flexible and general purpose, then if we can just model that part, we have a general-purpose learning algorithm. To be clear, I think the current machine learning trends are still far from that. But this doesn’t mean it can’t be figured out. And then tuned, much as airplanes can fly faster than birds. Now, maybe you won’t agree with this last paragraph. I’m just providing some context on why it’s at least plausible, on biological grounds, that general-purpose prediction-modeling algorithms might exist.

      • Joe

        Seems like you’re making the same claim Mark Bahner more explicitly states above: that you believe neurons, or perhaps only the kind of neurons found in the neocortex, will spontaneously produce useful functionality when a bunch of them are clumped together; that there is no meaningful superstructure existing across many or all neurons in aggregate. Your evidence for this is the uniformity and flexibility of the neurons in the neocortex.

        But memory addresses in a computer are also uniform and flexible. Yet computers don’t spontaneously perform useful activity; they need actual software to be written for them. Their uniformity allows them to store and run many different pieces of software, but does not for a moment mean they behave the same under any configuration.

        Are you sure the same isn’t true for the brain – that there is no ‘software’ analog existing in the brain at a higher conceptual level than the level of individual neurons?

  • Robert Koslover

    So… how much computational power, or memory, or both, is actually carried by an individual brain cell? And once you can model that cell as a black-box with some appropriate number of inputs, outputs, and states, do you need to do anything else to represent that cell accurately?

    • http://overcomingbias.com RobinHanson

      We just don’t know.

  • acarraro

    This is so strange. I read your basic assumptions and I reach the exact opposite conclusion. I fundamentally agree that modelling a brain cell is much easier than modelling a whole brain. But that makes me believe artificial AI is more likely, not less.

    If you could design a working artificial neuron, you would have solved one of the issues with brain emulations. Ems require artificial neurons (either software or hardware ones). But they also require an accurate enough map of an existing human brain to create an emulation.

    But could you not skip the mapping altogether? Could you not raise an artificial child brain the same way you raise a human child? Obviously not all networks of artificial neurons would show intelligence, but there is some suggestive evidence that some would. And you’d have a massive advantage: you could simulate huge numbers of artificial childhoods at great speed. This is similar to the training phase for recent AI attempts. You could use evolutionary strategies to determine which configurations show promise (sketched below). I find it difficult to see an evolution in brain scans: how do you know that you are getting closer to capturing the information you need? But artificial children could give some clue that you are getting closer to success, and that’s very important in any development.

    You could argue that the possibility space of neuron configurations is so large, that the feedback loop I describe is impossible. But the latest progress in AI is suggestive. And human brains are self healing to some extent, which suggests there is some robustness in the design. It’s true that it took a few million years for natural selection to hit human intelligence, but we already know a lot more than natural selection did, and we can run natural selection a few orders of magnitude faster.

    Would you consider such an intelligence a pure AI or a brain emulation? Maybe you’d consider it a brain emulation and we then agree, but I find it difficult to believe…
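
    A minimal sketch of the evolutionary search loop described above; the fitness function is a hypothetical stand-in for “raise this candidate child brain and test its behaviour”:

    ```python
    import random

    # Minimal evolutionary-strategy sketch; fitness() is a hypothetical
    # stand-in for "raise this artificial child brain and test it".

    def fitness(genome):
        return -sum((g - 0.5) ** 2 for g in genome)

    def mutate(genome, sigma=0.05):
        return [g + random.gauss(0, sigma) for g in genome]

    population = [[random.random() for _ in range(10)] for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                    # keep what shows promise
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]  # vary and retry

    print(round(fitness(max(population, key=fitness)), 4))
    ```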

    • http://overcomingbias.com RobinHanson

      Of the 3 techs needed for ems, brain scanning will probably be ready first. I’m very skeptical that a random mass of neurons does much interesting when raised as a child.

      • acarraro

        I find it difficult to believe that you could develop a perfect scanning tech without verification (e.g. the other techs). You will only know that the scan is good enough when you can run it…

        So let’s say you have an imperfect scanning technology. Would you not agree that such technology would allow you to limit the search space significantly? Do you really believe that there is only a single working configuration of your neurons?

        If you have a “grainy” picture of a human brain, could you not perturb it and generate enough variations of it to get a working human brain at some point? What are the chances that you’d get the original brain if you did that? Would someone trying to develop artificial AI really care whether the emulation is accurate?

        So I think your assumption that it would be a random mass of neurons is incorrect. Scanning would inform the search space and reduce it greatly.

        I also wonder if we really will start from a human brain emulation. Many animals have useful enough intelligence levels, with fewer ethical concerns about destructive scanning. Most pet brains are able to recognise their owners. That sounds like quite a useful technology. Once you have a working dog or monkey brain, could you evolve that? That would hardly be random, again. Still a few million years of evolution required, but again, between higher speed and knowledge of human anatomy you should be able to trim the search space significantly…

        Did they not manage to teach a random mass of neurons to play loads of computer games?

      • Mark Bahner

        “I also wonder if we really will start from a human brain emulation. Many animals have useful enough intelligence levels, with less ethical concerns about destructive scanning. Most pet brains are able to recognise their owners.”

        One huge problem with any animal brain emulation (including humans) is that the brains are designed to interact with bodies.

        In many cases of artificial intelligence, a body is not needed. For example, a computer-controlled car doesn’t need a “Johnny Cab” à la Total Recall:

        https://www.youtube.com/watch?v=xGi6j2VrL0o

        The computer needs vision, but it doesn’t need to control arms to move a steering wheel, or legs to press the accelerator or brakes.

        To develop an emulation of a brain that isn’t matched to the body the brain was designed for is a waste of effort.

      • acarraro

        Surely brain emulations will exist in a simulated environment which will provide enough external stimuli. Quadriplegics have issues adapting to diminished sensory inputs; I doubt simulated brains would be happy living in an environment that provides only visual inputs and nothing to interact with. Given that we have reasonably realistic virtual environments in games already (and they evolve pretty quickly), I doubt ems would lack at least that level of sophistication… The complexity of simulating a macroscopic external environment is fairly small compared with simulating the brain itself, so it seems unlikely that they would have to exist in such a state.

        Artificial AI would have fewer problems with this, since it didn’t start in such an environment…

      • Mark Bahner

        “Quadriplegics have issues adapting to the diminished sensory inputs. I doubt simulated brains would be happy living in an environment that provides only visual inputs and no interacting environment.”

        Yes, that’s exactly the trouble with a brain emulation. Without a body, it may be extremely unhappy. And with an emulation – something that behaves the same as a human – there would be no good argument for denying it rights.

        As I noted previously, IBM’s Watson has to be considered as approaching human intelligence. (And that’s with a measly 80 teraflops of processing power…well short of a human’s 500-10,000 teraflops.) But no one I know of has suggested that Watson should have rights, in part because he/it hasn’t asked for them.

        Human brain emulations are a legal/social can of worms. There’s no reason to open that can of worms, since artificial intelligence will be able to do virtually every job a human can do within a few decades. Drive a car? Check. Paint a house? Check. Build a house? Check. Collect garbage? Check. It would be foolish to have an em building houses when we can have robots doing the same or a better job for far less money. It baffles me that Robin apparently doesn’t see this.

  • http://www.sanger.dk Pepper

    We managed to figure out how to use computers for arithmetic without understanding everything the brain does. I’d guess a calculator uses algorithms significantly simpler than the ones the brain uses. It’s possible that general intelligence is analogous to arithmetic.

    • Peter David Jones

      Then the brain is over-engineered?

      • http://www.sanger.dk Pepper

        Which is simpler, a leg or a wheel? We were able to build internal combustion engines long before we built viable robotic legs.

      • Peter David Jones

        Which is easier to do depends on where you are starting from.

      • Joe

        The general trend of technology seems to me to be increasing decentralization, interdependence, and specialization, allowing gains via scale economies.

        Consider just how many tasks a human leg is designed to perform, and under what limitations. Then look at wheels, and the environments they are useful in. It seems to me that rather than wheels lacking a whole bunch of unnecessary features of legs, they still have those features, only outsourced rather than performed in-house. So wheels don’t include functionality for repairing themselves – instead they are designed to be easily removed and replaced, with the creation of new wheels performed in factories. They don’t need to handle as many diverse environments as legs do – they run on long flat surfaces purposefully created for them to run on. They don’t improve by recombination of internally contained blueprints – instead, design is handled by external organizations that can make improvements, both small and incremental (as genes do) and larger, riskier, long-shot changes.

        Without any of this supporting infrastructure, wheels just don’t work, and aren’t even remotely competitive with self-growing self-maintaining broadly useful legs.

    • Peter David Jones

      > It’s possible that general intelligence is analogous to arithmetic.

      There are general problem-solving algorithms such as AIXI that look simple when written mathematically, but they tend to be uncomputable, and therefore far too complex in the relevant sense. In mathematical notation it is all too easy to say “and then repeat for an infinite number of cases”.
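
      For reference, the AIXI action rule, roughly as Hutter writes it; the final sum over all programs q, weighted by 2 to the minus program length, is the Solomonoff part that makes it uncomputable:

      $$
      a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
      $$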

  • Peter David Jones

    “So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates.”

    There are general problem-solving algorithms such as AIXI that look simple when written mathematically, but they tend to be uncomputable, and therefore far too complex in the relevant sense. In mathematical notation it is all too easy to say “and then repeat for an infinite number of cases”.

  • AFB

    I’m new to this, so pardon if this has been said before, but hasn’t the trend been replacing humans with task-specific automated processes? And it makes sense – that’s where immediate results are achieved, and you are not wasting resources emulating all the extra stuff that’s only ever needed in a general-purpose agent?

    And doesn’t that, in turn, mean that we will skip right past the human-like ems who switch on their love for music or whatever during their spare cycles, and go straight for the non-sentient, optimized-for-productivity end state?

  • static

    Having spent a few semesters in a lab focused on simulating biological neural networks, I can say that at that time we did consider the network of neurons simpler to model than a detailed model of individual neurons. E.g., a network of neurons based on something like Hebbian theory (1949) for neuronal activation and plasticity is much simpler than trying to actually model the wide variety of actual neurons, their complex electro-chemical processes, the effects of neurotransmitter levels, etc.
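
    A minimal sketch of a Hebbian-style update of the kind meant here (“cells that fire together wire together”), with made-up rates and sizes:

    ```python
    import random

    # Minimal Hebbian-style plasticity sketch (made-up rates and sizes):
    # a weight strengthens when pre- and post-synaptic activity coincide,
    # with a decay term so weights don't grow without bound.

    N, eta, decay = 8, 0.1, 0.01
    weights = [[0.0] * N for _ in range(N)]

    def hebbian_step(activity):
        for i in range(N):
            for j in range(N):
                weights[i][j] += (eta * activity[i] * activity[j]
                                  - decay * weights[i][j])

    for _ in range(100):
        pattern = [random.choice([0.0, 1.0]) for _ in range(N)]
        hebbian_step(pattern)

    print(round(weights[0][1], 3))
    ```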