Robot Econ Primer

A recent burst of econo-blog posts on the subject of a future robot-based economy mostly treats the subject as if those few bloggers were the only people ever to consider it. But in fact, people have been considering the subject for centuries. I myself have written dozens of posts just here on this blog.

So let me offer a quick robot econ primer, i.e. important points widely known among folks who have long discussed the subject, but often not quickly rediscovered by dilettantes new to the subject:

  • AI takes software, not just hardware. It is tempting to project when artificial intelligence (AI) will arrive by projecting when a million dollars of computer hardware will have computing power comparable to a human brain. But AI needs both hardware and software. It might be that when the software is available, AI will be possible with today’s computer hardware.
  • AI software progress has been slow. My small informal survey of AI experts finds that they typically estimate that in the last 20 years their specific subfield of AI has gone ~5-10% of the way toward human level abilities, with no noticeable acceleration. At that rate it will take centuries to get human level AI.
  • Emulations might bring AI software sooner. Human brains already have human level software. It should be possible to copy that software into computer hardware, and it seems likely that this will be possible within a century.
  • Emulations would be sudden and human-like. Since an almost-emulation probably isn’t of much use, emulations can make for a sudden transition to a robot economy. Being copies of humans, early emulations would be more understandable and predictable than more generic robots, and many humans would empathize deeply with them.
  • Growth rates would be much faster. Our economic growth rates are limited by the rate at which we can grow labor. Whether based on emulations or other AI, a robot economy could grow its substitute for labor much faster, allowing it to grow much faster (as in an AK growth model). A robot economy isn’t just like our economy, but with robots substituted for humans. Things would soon change very fast.
  • There probably won’t be a grand war, or grand deal. The past transitions from foraging to farming and farming to industry were similarly unprecedented, sudden, and disruptive. But there wasn’t a grand war between foragers and farmers, or between farmers and industry, though in particular wars the sides were somewhat correlated. There also wasn’t anything like a grand deal to allow farming or industry by paying off folks doing things the old ways. The change to a robot economy seems too big, broad, and fast to make grand overall wars or deals likely, though there may be local wars or deals.
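The growth-rate point can be put in toy numbers. In our economy, output growth is roughly population growth plus productivity growth; in an AK-style model, where the labor substitute is itself an accumulable good, output grows at the savings rate times capital productivity, minus depreciation. A minimal sketch, with every parameter value invented purely for illustration:

```python
# Toy comparison: labor-limited growth vs. AK-style growth where the
# labor substitute (robots) can itself be accumulated like capital.
# All parameter values are illustrative assumptions, not estimates.

def labor_limited_economy(years, pop_growth=0.01, productivity_growth=0.015):
    """Output grows at roughly population growth plus productivity growth."""
    y = 1.0
    for _ in range(years):
        y *= (1 + pop_growth) * (1 + productivity_growth)
    return y

def ak_economy(years, A=0.5, savings_rate=0.4, depreciation=0.05):
    """AK model: Y = A*K, so output grows at s*A - delta per year."""
    k = 1.0
    for _ in range(years):
        k *= 1 + savings_rate * A - depreciation
    return k

print(labor_limited_economy(30))  # -> ~2.1: output roughly doubles in 30 years
print(ak_economy(30))             # -> ~66: far faster once labor is producible
```

With these made-up numbers the labor-limited economy roughly doubles in thirty years while the AK economy grows more than sixtyfold; only the qualitative gap matters, not the particular figures.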

There’s lots more I could add, but this should be enough for now.

  • Aron Vallinder

    Surely your final point is contentious.

    • http://twitter.com/AlexeiSadeski Alexei Sadeski

      If it wasn’t contentious, it wouldn’t need to be said, because it would already be accepted and obvious.

      • Aron Vallinder

        Sorry, I should have been clearer. My point was that it’s contentious even “among folks who have long discussed the subject.”

      • http://twitter.com/AlexeiSadeski Alexei Sadeski

        Of course.

  • IMASBA

    “AI takes software, not just hardware.”

    True, but we haven’t got the faintest idea of how consciousness works; it may require breakthroughs in quantum computing.

    “It might be that when the software is available, AI will be possible with today’s computer hardware.”

    Doubtful, very doubtful.

    “Human brains already have human level software. It should be possible to copy that software into computer hardware”

    I do not agree with this. You can’t just read out the software of the mind; there is no hard-disk equivalent storing all the algorithms and the connections between them. The algorithms more or less depend on the hardware configuration, like an old-fashioned circuit board that doesn’t run software but simply carries out its function because all the physical electronics and wires are set up in the right way. You’d probably need so much information about the structure and workings of the brain that you might as well build an AI; it takes the same level of technology.

    “There probably won’t be a big war or deal.”

    This really depends on the specifics of the new system. It could be almost utopian or dystopian. What is clear, though, is that robots might in theory pull off something like the Star Trek economy, or a planned economy, or some system that hasn’t even been invented yet, because they can communicate flawlessly and are not hindered by things like testosterone spikes. Really, while some economic principles are universal, a lot of them aren’t; if you kept half as open a mind to possible economic systems of the future as you do to possible technology of the future, you’d see that truth can be stranger than fiction.

    “Being copies of humans, early emulations are more understandable and predictable than robots more generically, and many humans would empathize deeply with them.”

    This part worries me. Are you implying AIs will coexist with flesh-and-blood humans for an extended period of time? If AIs aren’t created to replace flesh-and-blood humans entirely, then what will they be created for? To be abused as slaves?

    • Stephen Diamond

      True, but we haven’t got the faintest idea of how consciousness works

      That’s because “consciousness” (raw experience or “qualia”) doesn’t exist. (See “The supposedly hard problem of consciousness and the nonexistence of sense data: Is your dog a conscious being?” http://tinyurl.com/c3zq8ht )

      (See also “The raw-experience dogma: Dissolving the ‘qualia’ problem” http://tinyurl.com/8gh9vbt )

      In near-mode terms, no serious scientists in AI are working on “consciousness.”

      • IMASBA

        In near-mode terms, no serious scientists in AI are working on “consciousness.”

        No AI scientists are working on sentience, period. It’s hard enough getting a machine to learn navigation or anything but very specific tasks.

    • Doug

      ” The algorithms more or less depend on the hardware configuration, ”

      Algorithms do not depend on hardware configuration. Any Turing-complete machine can run any computable algorithm.

      • IMASBA

        “Algorithms do not depend on hardware configuration. Any Turing-complete machine can run any computable algorithm.”

        They do in the brain (it literally rewires itself when it learns something new), just like on an old circuit board. I have no idea why you thought I was saying that algorithms on ANY computing device imaginable must depend on hardware configuration (they don’t, for the most part, on modern PCs, for example).

      • Daniel Carrier

        You can, in principle, keep track of which axons attach to which nerves in software, and simulate it that way. You can even simulate the individual atoms if you have to. It’s a lot easier to do processing like the brain if you have hardware designed for it, but you could do it with any universal Turing machine.

        The laws of physics are not super-Turing. By extension, any physical system is not super-Turing, and can therefore be simulated on a universal Turing machine.
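        The “keep track of which axons attach to which nerves in software” idea can be sketched in a few lines: the wiring is just a weighted table, and the update rule is stepped like any other program. This is a cartoon leaky integrate-and-fire loop with invented numbers, not a claim about real neuroscience:

```python
import random

# Cartoon leaky integrate-and-fire network with invented numbers.
# The "wiring" (which axon attaches to which nerve) is just a table.
random.seed(0)
n = 100
weights = [[random.uniform(-0.1, 0.2) for _ in range(n)] for _ in range(n)]

potential = [0.0] * n
threshold, leak = 1.0, 0.9
fired = {0, 1, 2}  # a few externally stimulated neurons to start

for _ in range(50):
    # Each neuron leaks a little and receives input from neurons that fired.
    new_potential = [potential[i] * leak + sum(weights[j][i] for j in fired)
                     for i in range(n)]
    fired = {i for i, p in enumerate(new_potential) if p > threshold}
    # Firing resets a neuron's potential.
    potential = [0.0 if i in fired else p for i, p in enumerate(new_potential)]

print(len(fired))  # some number of active neurons; the point is it runs at all
```

        Nothing here is efficient; the point is only that once the wiring is data, a perfectly ordinary computer can step it.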

      • Daniel Carrier

        “Any Turing-complete machine can run any computable algorithm.”

        It can run it eventually, but it could be very slow. For example, a quantum computer is Turing-equivalent, and a classical computer could run the same program, but it would take much, much longer to factor a semiprime.

        The architecture of the human brain allows it to do things that cannot be easily done on the architecture of a normal computer. If they ever do full brain emulation, I’m betting that they will first design a computer with a different architecture.

    • http://humanpetition.blogspot.it/ Alexander Gabriel

      “You’d probably need so much information about the structure and workings of the brain that you might as well build an AI, it takes the same level of technology.”

      I would agree. Or rather, I suspect it would take an even greater level of technology. Biological systems are just messy. Evolution is blind. Technological advance is very contingent also. Why these would converge to the same places is beyond me. It reminds me of what Krugman observes about speech recognition.

      http://krugman.blogs.nytimes.com/2012/12/26/is-growth-over/

      “…. if you’d keep half as open a mind to possible economic systems of the future as you do to possible technology of the future you’d see that truth can be stranger than fiction.”

      When bonobos are incorporated into the human economic system, there may be more grounds for inquiry. Until then I’ll probably assume that vastly superior intelligences will not accept us into theirs.

    • mugasofer

      “You can’t just read out the software of the mind, there is no hard disk equivalent storing all the algorithms and connections between them. The algorithms more or less depend on the hardware configuration, like an old fashion circuit board that doesn’t run software but simply carries out its function because all the physical electronics and wires are set up in the right way.”

      That’s the idea – learn enough neuroscience to predict the behaviour of arbitrary neural structures, then scan a biological brain and simulate it. Less efficient than abstracting the algorithms, but probably easier.

  • Dave

    “Our economic growth rates are limited by the rate at which we can grow labor.” This would be true if we were running close to full employment. Given that we aren’t, and rarely use our whole labour pool, there is obviously something else limiting our economic growth.

    • IMASBA

      Central banks try to keep unemployment above 2% or so. In addition, there is always some mismatch between the skills of the unemployed and the skills that are in demand (meaning some people can’t find jobs that pay enough to live off). Then there is the issue of trust, which is lacking during times of crisis, so people hold off investments out of fear (global CO2 emissions actually dipped in 2009). Finally, there is the fact that natural resources are limited and apparently people’s willingness to acquire more services (often in exchange for possessing fewer goods) is limited: automation seems to outpace the creation of new jobs (possibly because people can’t be re-trained fast enough).

    • Daniel Carrier

      We are always close to full employment. According to Wolfram Alpha, the US has never had more than 11% unemployment since 1948 (as far back as the graph goes). http://www.wolframalpha.com/input/?i=US+unemployment+rate

      It may suck to be out of a job 11% of the time, but it’s not like we can grow very much by getting an extra 11% of the population to work. That’s only about four years’ worth of growth.
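      The “about four years” figure checks out arithmetically: a one-time 11% boost to output, at an assumed trend growth rate of roughly 2.7% per year (the growth rate is my assumption, not a figure from the comment), equals only about four years of ordinary growth:

```python
import math

one_time_boost = 0.11   # putting the extra ~11% of the workforce to work
trend_growth = 0.027    # assumed long-run US growth rate (~2.7%/year)

# Years of trend growth equivalent to a one-time 11% level increase:
years = math.log(1 + one_time_boost) / math.log(1 + trend_growth)
print(round(years, 1))  # -> 3.9
```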

  • free_agent

    There wasn’t a war between farmers and industry in the US (other than politically), but there was a war between farmers (the European colonists) and foragers (the Indians). And there is some suggestion from European genetics that the farming people overran and conquered the foragers, since the Y-chromosome diversity in the population is substantially younger than the mitochondrial diversity in the population.

    • http://overcomingbias.com RobinHanson

      Most American Indians were not primarily foragers; they were farmers. And populations displacing each other over thousands of years does not imply war.

      • lemmycaution

        Are humans going to be better off though? The indigenous populations of the new world were not made better off by European colonists.

        Assuming that the productivity of smarter-than-human AI will be driven down to a value close to resources consumed by the smarter-than-human AI, what are the dumber-than-AI humans going to do when they will require much more resources than the smarter and presumably much more productive AI?

        There was no war against horses but they declined from about 20 million in 1900 to about 2 million in 1950.

      • IMASBA

        “what are the dumber-than-AI humans going to do when they will require much more resources than the smarter and presumably much more productive AI?”

        Let them (and the slower portion of AIs) eat cake! After all, that’s the way god intended it.

      • Weaver

        It depends on whether dumber-than-AI humans can expropriate the AI surplus indefinitely. If they can, they live like kings (or Kim Jong Un). If they can’t, they merge or live like pets.

        As you say, we still keep animals even when they’re economically useless.

      • http://twitter.com/AlexeiSadeski Alexei Sadeski

        “Are humans going to be better off though?”

        Maybe, maybe not.

      • Weaver

        Mmm. I think I wasn’t specific enough. I didn’t mean to include the Meso-Amerindians.

    • Weaver

      It’s easy to overrun the opposition when you can generate much higher population densities :-D

      Robin is right though – Amerindians were mostly farmers, with some foraging that increased considerably in importance after they lost the east. But they weren’t good farmers and had low population densities.

  • Delwin

    “there wasn’t a grand war between foragers and farmers, or between farmers and industry”

    That is exactly what the US Civil War was: an agrarian-based society vs. an industrial one.

    • Weaver

      There’s a lot of truth in that, with the divergent economic paths taken by north and south fueling a lot of the political conflict, one way or another.

      But a lot of other countries managed to industrialise without a civil war, or with the civil war unconnected to the industrialisation.

      • Stephen Diamond

        The path to industrialization was laid by a war for capitalism against feudalism: the French Revolution. The way to capitalism over most of Europe was paved by the armies of Napoleon.

      • IMASBA

        Nope, capitalism is older than the French Revolution; it was already dominant in Italy, the Low Countries, Great Britain, Scandinavia, most overseas colonies, and to a lesser extent everywhere in Europe (except Russia). Also, the distinction between feudalism (which basically operated as a free market for the most part) and capitalism is pretty vague. Just about the only differences are that under feudalism there were initially no banks (though there were lenders, and banks eventually did arise) and the local leaders are 100% guaranteed to come from a selection of privileged families, whereas this is “only” very likely under capitalism.

    • http://www.facebook.com/people/Daniel-Warren-DuPre/1654883285 Daniel Warren DuPre

      Then please explain all those Yankee farmboys in the blue uniforms marching through Georgia.

    • http://twitter.com/AlexeiSadeski Alexei Sadeski

      No, both north and south were industrial, in Hanson’s parlance.

      Hanson is using a very specific terminology here, and it differs from the general population’s definitions.

      • Stephen Diamond

        No, both north and south were industrial, in Hanson’s parlance.

        Then too bad for Hanson’s parlance. (Having one’s own parlance is poor “classic prose,” in any event.)

        The question is whether the transition to industrialism required civil war. It did. It required civil war (or war between societies): in Britain (Cromwell), in France (Robespierre), in Europe (Napoleon), and in the U.S. the Civil War. The only truly industrializing country in the world today is China, which required a Communist revolution to get there (and it is still a long way from being industrialized).

        Whether this war comes at the times demarcated by Hanson’s “parlance” is beside the point. In some societies, civil war paved the way for any industrialization; in others it was required for industrialization to proceed. In most important cases, although perhaps not in all (per IMASBA), societies required civil war or external invasion before they could continue the industrialization process. That’s the conclusion that’s relevant to Hanson’s thesis, notwithstanding his parlance.

  • Sigivald

    It might be that when the software is available, AI will be possible with today’s computer hardware.

    One assumes this means, at its limit, “something along the lines of a modern supercomputer”, not “common commodity desktop computers”.

    While neither is strictly logically impossible, the latter seems staggeringly unlikely, so far.

  • http://www.facebook.com/people/Brian-Cady/623803976 Brian Cady

    Robots use resources to eliminate labor, as industry has done. This fit well into a world empty of labor and full of resources. But today’s resource world is emptying, and the labor world is full of eager workers. Will the future replace resources with labor, leaving less pollution and more jobs?

  • wandering mind

    So, if robots take over the production of a commodity completely from end to end (say cars) and, in addition, replicate themselves without human intervention or labor, when does the value of the product they produce drop to zero?

    • IMASBA

      Never; a product will always require a non-zero input of raw materials, time, and energy.

      • mugasofer

        With enough autonomous robots, those could theoretically also be non-scarce.

    • Daniel Carrier

      When the supply exceeds the demand at a price of zero. This is true of everything. There is more air than we use, even without charging for it, so the value of air is zero, and I can’t sell you air.

      Cars will always have a cost, since we can use those robots for something else, so they would probably stop making cars when the value reaches the cost, and the value will never reach zero. You don’t want to turn perfectly good materials into a worthless car after all.

      • IMASBA

        “There is more air than we use, even without charging for it, so the value of air is zero, and I can’t sell you air.”

        I don’t think it works that way. I’m pretty sure air is not owned by anyone because it moves around very fast on its own; in addition, it’s very hard to defend your claim to a part of the air. There being a lot of it is not the issue: airspace is owned, as are the lithosphere and much of the oceans. Land was owned long before the human population was large enough to occupy even a small fraction of the Earth’s land surface. In a way, environmental laws that limit the emission of gaseous pollutants already represent a form of air ownership, with the ownership residing with the UN.

      • Daniel Carrier

        “I’m pretty sure air is not owned by anyone because it moves around very fast on its own”

        So does water. We can still bottle it and sell it.

        “in addition it’s very hard to defend your claim to a part of the air.”

        You can defend your claim to bottled air just as easily as bottled water.

        “airspace is owned”

        Airspace is scarce. There are times when two different people want the same piece of airspace.

        “In a way environmental laws that limit the emission of gaseous pollutants already represent a form of air ownership”

        That’s specifically pure air. It’s like how you could never sell ocean water in the middle of the ocean, but you can easily sell pure water.

      • IMASBA

        “So does water. We can still bottle it and sell it.

        You can defend your claim to bottled air just as easily as bottled water.”

        Bottled air is actually owned and sold, at gas stations for example. What doesn’t happen is you having to pay a tax for breathing air, just as you don’t have to pay a tax for drinking water (you only pay for water from a specific source, often because it has been purified). Now, I can easily control your (clean) water supply; I can’t easily control the amount of air that makes its way towards you, because air is all around us (including above our heads for ~50 km) and permeates every nook and cranny. In contrast, even if you live near a river it will take you considerable effort to bring the water home and purify it, which is why you agree to pay a small price for tap water.

        “Airspace is scarce. There are times when two different people want the same piece of airspace.”

        It’s not really scarce; we couldn’t fill the sky with airplanes if we wanted to. But ultimately scarcity is all about perception (how much is enough to satisfy you). There are enough greedy people out there who would see air as scarce, because it is necessary and finite and therefore can be speculated with. It is not entirely impossible that future cash-strapped governments will devise schemes where people and corporations can buy ownership of air and get to charge people a certain amount just for breathing. However, popular resistance would be enormous (and rightfully so; air ownership is pure rent-seeking, just like private ownership of deep underground layers in the US).

      • Stephen Diamond

        When the supply exceeds the demand at a price of zero.

        You’re overinterpreting a tautology. If people can obtain a substance at no cost, then supply exceeds demand in the relevant sense. But that doesn’t imply the substantive claim that the relationship between the naturally occurring quantity of a substance and human ability to consume it determines price.

        Of course, allowing the charging of a price for a substance when it could be treated as limitless in practice (imagine a science fiction villain who bottled all the air and charged for access) is probably societally unjustifiable. But that’s something else.

      • Daniel Carrier

        I don’t understand.

        “But that doesn’t imply the substantive claim that the relationship between the naturally occurring quantity of a substance and human ability to consume it determines price.”

        Isn’t that how the free market works? The price is where the supply curve and demand curve intersect.

    • http://www.facebook.com/people/Theresa-Klein/1408551264 Theresa Klein

      When the robots go out of patent.

  • http://www.gwern.net/ gwern

    > There also wasn’t anything like a grand deal to allow farming or industry by paying off folks doing things the old ways.

    Whatever happened to ‘humans will be pensioned off’?

  • http://humanpetition.blogspot.it/ Alexander Gabriel

    I think AI that is smarter than humanity would probably oppress or kill us. If smarter than us, it is likely to be self-improving, and that drive for improvement is likely to conflict with allowing us to waste resources with our inefficient little existences.

    I don’t know if averting AI is possible, but we should try.

    I do not think that AI being developed through emulating human brains is very likely. That is like developing flight through emulating birds.

    Assuming “ems” does create a framework to think about a post-singularity world. Maybe that is Mr. Hanson’s purpose.

    I also do not put stock in expert surveys on this subject.

    Also, technological advance up until this point has been an evolutionary process. So the idea of a rapid “intelligence explosion” in say a period of months or less seems highly implausible to me.

    Implementing AI on today’s hardware might perhaps be possible in some form. But I would expect developing the software advances too would require time.

    • IMASBA

      “I don’t know if averting AI is possible, but we should try.”

      It’s impossible to avert it completely, but we can greatly reduce the incentive to build many AIs while simultaneously taking away one reason the AIs could have for killing us, and on top of that we’d also be doing the right thing. What is this magical three-birds-with-one-stone solution? Very simple. Let’s for once suppress our seemingly human desire to exploit, abuse, and enslave “the other”: recognize the “human” rights of AIs. And when you build an AI, you have to pay for its upkeep and “education” for 18 years.

      • http://humanpetition.blogspot.it/ Alexander Gabriel

        “Let’s for once suppress our seemingly human desire to exploit, abuse and enslave”

        I wouldn’t really guess that AIs would oppress us out of malice. But given the choice between say building a ranch on some land and keeping chimpanzee habitat intact, humans might easily choose the former, although not out of some desire to harm chimpanzees.

        I suppose this demonstrates the general idea:

        http://hplusmagazine.com/2012/08/21/the-singhilarity-institute-my-falling-out-with-the-transhumanists/

      • IMASBA

        “But given the choice between say building a ranch on some land and keeping chimpanzee habitat intact, humans might easily choose the former, although not out of some desire to harm chimpanzees.”

        Yeah, but that’s exactly the point I’m trying to make: we could do the right thing and let the chimpanzee habitat be, just like we can choose to let the first small (and relatively powerless) group of AIs be free, and they might emulate that example. In any case, I do think giving AIs one less reason to kill us all is a good thing (if they can feel a desire to “improve”, they can also feel a desire for revenge), especially if the same measure also makes it less profitable to build a lot of AIs.

        I know it’s sci-fi, but in Battlestar Galactica the Cylons wipe out humanity solely because they view humanity as a threat, since humanity originally enslaved them and only granted them the right to move to a homeworld of their own after a costly war. Had the humans done the right thing from the start, humans and Cylons would have gone their separate ways (the Cylons could have “improved” themselves all they wanted on other planets) and humanity would not have been wiped out. All of this of course relies on AIs having empathy for those who are different; that is something we could teach them by example, and if we don’t, they’ll always see us as a threat.

      • http://humanpetition.blogspot.it/ Alexander Gabriel

        Here’s a question: who’s to say that’s actually the right thing? Are you implying that preserving chimpanzee habitat (or slug habitat, or whatever) trumps human prosperity? This is an issue Nicholas Agar brings up. Under many currently fashionable theories of morality, AIs *might be entirely justified* in killing/oppressing humans.

      • IMASBA

        No humans need to be killed to preserve chimpanzee habitat; humans would just have to forego a tiny fraction of their luxuries (or learn to use protection). That it’s wrong to oppress a minority to give a majority more luxuries (as opposed to spreading the pain/work) was established when slavery was outlawed. (This applies to AIs and chimpanzees; maybe not to slugs, but I never mentioned those.)

    • mugasofer

      “I do not think that AI being developed through emulating human brains is very likely. That is like developing flight through emulating birds.”

      An “em” isn’t an AI programmed from our understanding of neuroscience (as opposed to formal logic); it’s a simulation of an existing brain that we scanned.

      • http://humanpetition.blogspot.it/ Alexander Gabriel

        I take your point but still find it intuitively unlikely that a full simulation could easily be “dumb” in the sense of not requiring an abstract understanding of mental modules’ functions.

    • http://humanpetition.blogspot.it/ Alexander Gabriel

      I wanted to add today that I may not have considered enough Mr. Hanson’s interesting concept of a new growth mode, which seems plausible on its face. But even if that happens, I would strongly expect some sort of transitional period where the economic doubling time is intermediate. The idea of AI suddenly “waking up” without warning still seems groundless.

  • itaibn

    Alright, my take:

    “AI takes software, not just hardware.”

    Agreed. This has already been empirically confirmed.

    “AI software progress has been slow.”

    Not clear. Advances in algorithm design, data structures, hardware, and human/computer interfaces have made computers much more powerful today than they were 20 years ago (and I believe this would remain true even keeping hardware fixed). AI is generally considered to be the field of computer science devoted to tasks which humans are already skilled at, which doesn’t include most of these advances. This subfield may be advancing slowly, but it’s not clear this is a proper measure of computer intelligence. Counterargument: any attempt to replace a human niche with software is conventional AI, and if conventional AI goes slowly then humans retain their niches. So overall, I’d say it’s not clear.

    “Emulations might bring AI software sooner.”

    This is saying that the relevant biology is easier than the relevant computer science. I think you’re underestimating the complexity of the biology. Although biology is advancing rather well lately, there are a lot of basic things we don’t yet understand. Also, once we understand the brain, it is unlikely that the best way to emulate its intelligence is through a direct simulation. With regard to other technologies, although they are sometimes inspired by a biological system, they are rarely a detailed imitation of one.

    “Emulations would be sudden and human-like.”

    The first emulations will probably take a lot of computing power. While it’s possible that they will quickly get sped up, it’s not certain. If they don’t, the transition won’t be sudden. Also, if ems are used for economic purposes, it is likely that people will remove unnecessary parts, optimize parameters, connect them with other ems or other software, or otherwise take advantage of the fact that an em is a software entity. This will quickly make them not human-like. (By the way, an almost-emulation is in fact useful. For example, a visual cortex is good at describing traits of images. These almost-emulations are also easier to develop.)

    “Growth rates would be much faster.”

    I’m unopinionated on this issue.

    “There probably won’t be a grand war, or grand deal.”

    Agreed.

  • Daniel Carrier

    “Emulations would be sudden and human-like.”

    I’m not sure about this. Perhaps the first “successful” emulation would be just accurate enough to spend a few months in a coma before adjusting to the changes in how neurons work and ending up as close to the original as you’d expect of someone with severe brain damage.

    • http://www.facebook.com/people/Theresa-Klein/1408551264 Theresa Klein

      To get a human-like mind, you would have to emulate years of learning and development. People don’t just “wake up” at birth as fully conscious adults. The brain takes years of actually doing stuff (playing with toys, controlling a body, taking in information, and interacting with the world) to reach maturity. The idea that an emulation is just going to turn on and start talking to us is absurd. How would it learn to speak a language anyway?

      • IMASBA

        I think the idea is to copy a mature mind that already knows things like language.

    • http://overcomingbias.com RobinHanson

      A few months delay could still make it quite sudden on industry era time scales.

      • Daniel Carrier

        I meant more that the first few ems would be like stroke victims. They wouldn’t be all that useful.

        Thinking about it more, there are two basic things that can happen:

        Scanning first: We can make ems, but it takes a while for computational power to catch up to the point where it’s feasible to use. This version is somewhat slow, as ems become more economical.

        Computing first: By the time we learn to make ems, we can already run them quickly. This version is more sudden, but as I already mentioned, the earlier ones would likely act like stroke victims.

  • TheBrett

    One thing to keep in mind is that Comparative Advantage doesn’t stop working even if the AI-guided robotics have an absolute advantage in every particular task involved in “work”. If the opportunity cost of using humans* in a task is lower than using the full-robot, then we’ll get humans working.**

    * I asterisked “humans” because there’s not really a partition between “humans” and “robots”. It will be “human-guided robotics, with the humans being assisted by machines to correct for various issues and possibly taking drugs/stimulants/etc” versus “fully automated machinery”.

    ** That work might not be too good, though. If the remaining work gets subdivided into a bunch of tasks that pay too low to maintain socially acceptable standards of living, then you’re in trouble. I’m not sure if that will happen, since we’ve also got a declining work-force size in most countries, but it’s possible.
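    The comparative-advantage point above can be made concrete with a toy numeric sketch (all numbers invented for illustration): even when the robot is absolutely faster at every task, the human’s lower opportunity cost at one task still gives the human work.

    ```python
    # Toy illustration of comparative advantage (all numbers invented).
    # The robot is absolutely better at both tasks, yet the human still
    # gets work: what matters is relative (opportunity) cost, not output.

    # Units of output per hour for each worker at each task.
    output = {
        "robot": {"widgets": 100, "reports": 50},
        "human": {"widgets": 2,   "reports": 4},
    }

    def opportunity_cost(worker, task, other_task):
        """Units of `other_task` forgone per unit of `task` produced."""
        return output[worker][other_task] / output[worker][task]

    # Opportunity cost of one report, measured in widgets forgone:
    robot_cost = opportunity_cost("robot", "reports", "widgets")  # 100/50 = 2.0
    human_cost = opportunity_cost("human", "reports", "widgets")  # 2/4   = 0.5

    # The human's opportunity cost for reports is lower, so the human
    # specializes in reports even though the robot is faster at both tasks.
    assert human_cost < robot_cost
    ```

    Of course, as the footnotes note, comparative advantage only guarantees that humans get *some* work, not that the resulting wages are high.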

    • IMASBA

      “If the remaining work gets subdivided into a bunch of tasks that pay too low to maintain socially acceptable standards of living, then you’re in trouble. I’m not sure if that will happen, since we’ve also got a declining work-force size in most countries, but it’s possible.”

      It’s not only possible, it’s highly likely. At some point capitalism simply fails, just as it does when cybernetic or genetic enhancements come on the market, or in Hanson’s scenario where ems can buy more memory and processing power.

      Of course there are ways out (rationing cybernetic/genetic enhancements, shortening the workweek, recognizing the rights of AIs, etc.), but they require a fairly strong central government, which scares a lot of people on this blog. That’s just because most of us here are (upper) middle class, white male Americans, a minority subset of humanity, even in academic circles.

  • Pingback: When Will AI Be Created? - Machine Intelligence Research Institute

  • http://www.facebook.com/people/Theresa-Klein/1408551264 Theresa Klein

    The trouble with AI is that it is in need of a paradigm shift. The field is dominated by classical logic-based techniques, because that’s where the money is. But everyone already knows that, from a theoretical standpoint, we’re not going to get human-like intelligence from there. Even neural networks have evolved away from anything resembling human brains and have basically been absorbed by machine learning as just one of a million numerical approximation techniques. Computationalism has taken over the field, but computationalism is just a generalization of earlier ideas.
    The only hope lies in dynamicism, but that is a nascent field and it requires embodiment, so the “intelligence” has a world to interact with. Thus, true AI and robotics must evolve in parallel, and since robotics is physical this is hard and slow.

  • Pingback: Samir’s Selection 05/16/2013 (p.m.) | Samir's Selection

  • Pingback: Robots v aliens | The Enlightened Economist

  • https://plus.google.com/113643748896718381845/posts/ Antony

    I take issue with the statement “Our economic growth rates are limited by the rate at which we can grow labor.” Even if we ignore unemployment and assume that you meant “highly specialized and educated labor”, there is a much broader issue of resource exhaustion. The Global Footprint Network estimates that humanity is now consuming resources 50% faster than Earth can regenerate them. Regardless of how accurate this particular estimate is, there can be little doubt that there is an upper cap on growth imposed by the availability of resources used by the industrial society. Fossil fuels, topsoil, phosphorus, rare earth minerals, and many other essential inputs are not unlimited. Whereas intelligent people can reasonably debate how long a particular stock might last, the point is that none will last forever. And then there is the biosphere itself. At some point, rapid climate change, habitat destruction, pollution, and overexploitation may destabilize global food chains enough to put economic growth in reverse. (Worse scenarios are easy to come by as well.)

    The bottom line is that you cannot talk about growth in purely economic terms any more. Every theory is only an imprecise model of reality, underpinned by a set of assumptions. Earlier generations of economic theorists made an assumption of unlimited natural resources. This caused no problems so long as human population remained relatively small and industrial technology was in its infancy. It is time to revise these assumptions, face the new reality, and stop dreaming that technology will solve all our problems. Robots are not some deus ex machina.

    The problem is overconsumption. And the only way to reduce total consumption without decreasing the standard of living of individuals, is to have fewer individuals. This is not to suggest that war, disease, or some fascist mandatory eugenics program is the answer. Society must discourage breeding by subtler means, so as to effectuate a mental phase shift. It’s an uphill battle since the idea “multiply and replenish the earth” in all its variations is clearly a super-replicator, i.e. it promotes its own survival. Yet, it must be confronted all the same, or else we are all doomed.

    • mugasofer

      I would assume an em civilisation would colonise other planets.

      Then again, it’s pretty mind-boggling we haven’t done that already, hard though it is for meatbags.

  • Pingback: Robots are as smart as humans, but they won’t be taking our jobs – Quartz

  • Pingback: Robots will take our jobs, but it’s hard to say when | The Usual Sources

  • Steve Witham

    “AI software progress has been slow.”

    Google’s PageRank is a counterexample. Google gives good answers to plain-language questions. It’s a simple algorithm that “woke up” when given enough data and power. It’s also the biggest example of a capability that went from only-humans-can to just-ask-the-computer overnight. I think Google search is strangely neglected in discussions of AI.

    Software is also driving cars, translating paragraphs decently, turning spoken words into txt messages, recognizing faces in photographs, piloting quadrotors to play ping pong, designing electronic circuits by pure experiment, guiding doglike robots over rough terrain…

    We won’t know how fast AI progress has been till we get there. Maybe it seems slow right now because we’re just mopping up the last details.

    • IMASBA

      “Maybe it seems slow right now because we’re just mopping up the last details.”

      I can assure you that’s not the case. Current solutions to problems are patchworks of different algorithms; we have nothing resembling a child’s mind: a self-aware system that can learn pretty much anything using a limited set of algorithms.

      • Dermot Harnett

        What evidence do we have that a child’s mind uses a limited set of algorithms? Maybe the human brain just runs an even bigger, messier patchwork, with more hardware behind it.

      • IMASBA

        Possible, but unlikely. The amount of storage and processing power required would be stupendously large, especially when remembering how efficient even the tiny brains of fish and insects are at navigating and learning.

    • VV

      Google gives good answers to plain-language questions.

      It doesn’t. It just finds web pages that match the words in your query (accounting for synonyms and spelling errors) and presents them ranked according to an importance metric that happens to be sensible to us.

      It is one fundamentally innovative idea (the PageRank metric), some smaller ideas, and lots and lots of excellent engineering and fine-tuning.
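      For reference, the core of that one innovative idea is small enough to sketch in a few lines. This is a simplified power-iteration version of PageRank (the production system handles dangling pages, personalization, and web scale very differently):

      ```python
      # Minimal PageRank via power iteration (simplified sketch only).
      # A page's rank is the chance a "random surfer" lands on it, where
      # each step either follows a random outlink (probability = damping)
      # or jumps to a random page.

      def pagerank(links, damping=0.85, iterations=50):
          """links: dict mapping each page to the list of pages it links to."""
          pages = list(links)
          n = len(pages)
          rank = {p: 1.0 / n for p in pages}  # start uniform
          for _ in range(iterations):
              new_rank = {p: (1 - damping) / n for p in pages}
              for page, outlinks in links.items():
                  if not outlinks:
                      continue  # this sketch simply ignores dangling pages
                  share = damping * rank[page] / len(outlinks)
                  for target in outlinks:
                      new_rank[target] += share
              rank = new_rank
          return rank

      # Tiny example: page "a" is linked by both other pages, so it ranks
      # highest even though all pages have the same number of outlinks.
      graph = {"a": ["b"], "b": ["a"], "c": ["a"]}
      ranks = pagerank(graph)
      assert ranks["a"] == max(ranks.values())
      ```

      The notable thing is how much of Google’s perceived “intelligence” comes from this simple link-structure metric plus keyword matching, rather than from any understanding of the query.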

  • Dermot Harnett

    Wouldn’t copying a human brain almost inevitably give us enough information to build A.I.? It seems implausible that we could run a human intelligence on a computer, and yet still have no idea how to improve on it.

  • Pingback: Un mundo de robots | Maven Trap

  • VV

    Our economic growth rates are limited by the rate at which we can grow labor.

    How does that claim fit with the observation that all developed countries have significant unemployment?

  • Pingback: Robots: Seeking Jobs Or World Domination?

  • Ellen Blanchette

    The way I see it, all scientific discussion aside, the problem is robots don’t buy stuff. They may need some software upgrades which corporations could provide, and so a few people developing software for robots will still have jobs (unless you are envisioning a self-repairing and self-improving robot), but replacing human workers with robots is a losing deal in the long run.

    We already have robots building cars, with probably a tenth of the workforce it took to build them in the past. So all those former auto workers who now have low-paying jobs greeting people at Walmart are barely making ends meet. So I ask you, when the real jobs that pay a living wage have been replaced by robots, who will buy the stuff they make? You can already see the result of the combination of computerization and off-shoring of jobs in the last two decades.

    I could see a positive in bringing back some jobs to the U.S. by adding robots to other kinds of assembly-line work, but in the end, we have to find ways to provide good jobs to men and women with families to raise, or the nation as a whole will continue in what is now a continuous decline in the standard of living for millions of people. And, surprise, surprise, profits are down in all sectors that rely on consumers. Can anyone tell me how you get around this? I keep feeling that economists just ignore this reality. Unless the answer is the magical economy of the “developing world.” Yes, China and India have enormous populations, and as they grow in wealth they will become consumers, but they will also become big manufacturers. This is the road to ruin if this is the plan.