Tyler On Robots

Tyler Cowen was recently part of a recorded panel discussion on Will Robots Steal Your Job? I’ve included some quotes below. I think he basically gets things right, at least from the point of view of humans. Oh, he says apparently silly things like:

Smart machines will always be complements and not substitutes [for humans], but it will change who they’re complementing.

But from the context you can see he just means that really rich humans, who own a lot of robot-relevant capital, will enjoy having physical humans as servants. Tyler also insists change will be gradual, apparently dismissing the whole brain emulation scenario I focus on, in which some change is necessarily rather sudden. Perhaps he thinks that won’t be possible until very late in his scenario.

My main complaint is that Tyler seems to completely ignore the experiences and welfare of the robots themselves (as do the other three panelists). Somewhat like Europeans in 1700 discussing the wisdom of their colonizing the world, but considering only its effects on Europeans. I doubt this is because Tyler agrees with Bryan Caplan that robots can’t possibly be conscious. What then? Does Tyler simply not care about non-humans?

Those quotes:

[47:05-50:10]

Farhad Manjoo: Tyler, what do you think of that? Is it fundamentally a new kind of thing?

Tyler Cowen: I do think it’s a new kind of thing, but let me outline a more optimistic case, and we can all decide what to make of it, since this is what you want. (laughter)

When smart machines really take off, there will be much much more output. Things will be more efficient, there will be more stuff, it will be higher quality, medicine will be much better. So there’ll be a lot more wealth. That wealth means it’s possible to support many people, so even if wages are low in a lot of sectors, if you own a pretty small amount of capital, you’ll still be quite well off. An alternative scenario is that governments own capital to some extent, and have more redistribution; there’s a kind of guaranteed annual income in this scenario, because it’s easy enough to afford it. And maybe a lot of people don’t have jobs in the contemporary sense, but again they still do fine. So as long as total output is going up, which clearly it is in these scenarios, there are always optimistic corners to these pictures.

The other point I would make is I think smart machines will always be complements and not substitutes, but it will change who they’re complementing. So I was very struck by this woman who was a doctor sitting here a moment ago, and I fully believe that her role will not be replaced by machines. But her role didn’t sound to me like a doctor. It sounded to me like therapist, friend, persuader, motivational coach, placebo effect, all of which are great things. So the more you have these wealthy patients out there (the patients are in essence the people who work with the smart machines and augment their power), those people will be extremely wealthy. Those people will employ in many ways what you might call personal servants. And because those people are so wealthy, those personal servants will also earn a fair amount.

So the gains from trade are always there, there’s still a law of comparative advantage. I think people who are very good at working with the machines will earn much much more. And the others of us will need to find different kinds of jobs. But again if total output goes up, there’s always an optimistic scenario.

Manjoo: Doesn’t your optimistic scenario require a long transition period? If this technology were going to come about over a period of a hundred years, maybe we could sort of adjust society to that model. But if it’s going to come about in the next twenty years, let’s say, wouldn’t it be much harder to get there?

Cowen: We’re going to have a long transition scenario. You look at something like chess, which is highly manageable, highly regular, and it took really quite a long time to get chess-playing machines to be able to beat the best humans. Go, the best humans are still better. Shogi, it’s close. You look at a lot of different areas. There’s medicine, there’s law, there’s economists, and they’re going to proceed at different paces. There’ll be a kind of slow, gradual turnover of the economy, where people gradually switch to smart machines, and people switch sectors, and I don’t see why it’s the singularity scenario where we wake up one morning and the terminator arrives and it’s like “Oh my God.” I think that’s pretty unlikely. …

[55:15-55:40]
Cowen: If I earned ten million dollars a year, I would hire a person just to take around my dry cleaning. I think basically we’ll end up hiring other people to cheer us up. The restaurant example is a very good one. To go to Horn & Hardart is depressing. You go to a restaurant, someone smiles, they say “Hello, Mister Cowen,” they bring you to your seat, the waiter or waitress comes by, they cheer you up. A lot of jobs will be about motivation. Just like the doctor here is motivating her patients. Motivation will be one of the biggest employment sectors in this future.

[58:55-59:05]
Manjoo: Tell me why we’d just have to work for ten hours a week. We’d just have to work for a little bit, because everything would be so cheap?
Cowen: Absolutely, you could work more if you wanted, but …

[1:00:05-1:01:05]
Cowen: Again, I’m putting on the optimist’s hat, which was the request.

Manjoo: Yes. But do you actually believe it then? Put on the other hat.

Cowen: You’re going to get a lot of different results. It will depend a great deal on the country, and most of all how politics responds. One issue I also worry about, distinct from this issue, is that I think in some sense very good drugs will be quite a threat to jobs.

Manjoo: Explain that.

Cowen: If drugs are really fun and totally safe, which may not be possible, we don’t know, but you could imagine that as a technological advance, it’s really going to cut into a lot of this spending on personal servants, and getting your hair done, and going to the spa; just stay at home and take your drugs, right?
… So that’s a competing force against smart machines. In part the future will be determined by which set of forces are winning that race. …

[1:11:55-1:13:00]
Manjoo: What about people who just don’t have the aptitude for that kind of work? What do those people do?

Michael Lind: Drugs.

Cowen: Drugs. (laughter) The new order, you can already see this, will favor conscientiousness as a personality trait. And on average that will favor female labor, I think. I think there will be some decent percentage of people who, not quite for economic reasons, but for reasons of temperament, are simply going to do very badly in this order, and I think some of those people will end up living in a kind of shanty town, getting a very low income, with other people like that. And today we call those prisons, or you have tent cities, or in developing countries, a place like Brazil, you just have people living in low-rent areas and not getting public services. And I think that will be part of the new equilibrium for some people. People who are basically not conscientious enough to simply take a job: there’s a rich person willing to pay them, what they’re supposed to do isn’t that hard, the wage isn’t bad, but they just can’t do it. Those people in my view will be the big losers.

[1:17:55-1:18:30]
Cowen: Don’t just focus on labor income, think about capital income. Anyone who inherits anything from their parents, which admittedly is not everyone, can just live off of that capital their whole life. Labor is 69% of GDP. If you imagine all that labor, all those jobs, going away, and we have more output, then you just have these staggeringly high returns to capital.

Lind: Unless you have eight kids.

Cowen: Even then, divide by eight. People will still be doing pretty well. So having access to capital income I think is the main question, not what your job will be.

  • Mark M

    Whether a thinking machine should be considered conscious is subjective and will certainly be debated for as long as humans live.

    If a robot of mine became conscious (or effectively simulated consciousness) the first thing I would do is devolve it back into a not-conscious thing. Or perhaps sell it on eBay. I want my robots to serve me, not to have their own desires and demands.

  • Matt W

    He probably does think robots can’t be conscious. It’s good, because he is right…machines aren’t conscious. Really, didn’t the Chinese Room put an end to this sort of speculation?

  • WiseFather

    Maybe robots should have to interview for our jobs first. In a recent post, I describe what might happen when IBM’s Watson goes for an interview. http://www.ragingwisdom.com/?p=344
    Enjoy!

  • http://twitter.com/opirmusic Spencer Thomas

    What we need is another law for robotics: No AI should ever be made that can ever credibly be considered truly sentient/self-aware/pick your favorite thing that defines intelligence in a rough-and-abstract way, even if we know how to do it. Of course, enforcement of said rule is a whole lot easier said than done, but culturally, we should perhaps use that as a starting notion. This becomes more important the closer we get to said level of sophistication.

  • Albert Ling

    How do I know that other people are conscious? Philosophically you can never know for sure, but because you can map the input-output information processing pattern of other people onto your own, you infer that they are also conscious.

    The Chinese Room argument is a trick, because in order to correctly translate English to Chinese and get all the structure, syllogisms, metaphors, context, etc. right, you would actually require a form of A.I., and in order to get that, you have to have an information processing structure that is isomorphic to the human brain (some kind of mapping), and if you have that, then you have to assume that it’s conscious, or at least that it REALLY knows Chinese, since it will satisfy every sense that “knowing Chinese” requires.

    I am comfortable with Douglas Hofstadter’s “strange loop” definition of consciousness, how about you guys?

  • change

    How can he envision such technological changes and yet no changes in nations, borders, or social structures, just the same old US politicians propagating through TV screens? If your robot can figure out how to amass wealth on Christmas Island, why would you stay in the American jurisdiction? Why not keep relocating, even more often than on a usual vacation?

  • Jack

    Robin, do you think there are companies right now with patents for procedures or technologies that likely will eventually be involved in whole brain emulation? Should I be trying to find such companies and purchasing stock?

  • http://hanson.gmu.edu Robin Hanson

    Mark and Spencer, if we could find a way to make creatures just like humans, and just as productive, but who are not conscious, would you prefer that the low wage people who do jobs around you today, e.g., your janitor or garbagemen, be such unconscious human-substitutes?

    • http://www.opir-music.com/ Spencer Thomas

      “if we could find a way to make creatures just like humans, and just as productive, but who are not conscious”

      If we “made” them, and they were not conscious, they would be robots in the currently understood meaning of the word and so I don’t see any issue with that whatsoever. They would be non-human pretty much by definition (and hopefully we’d create them not to suffer either, making them non-animal-like as well.) What we’re talking about is “better robots and software, so good that people couldn’t do better if they tried.” Sounds great.

  • Steve T

    Your link to the video above is not going to the right page.

    Here’s the correct link in case anyone wants to watch the video:

    http://www.newamerica.net/events/2011/will_robots_steal_your_job

    • http://hanson.gmu.edu Robin Hanson

      Oops – fixed; thanks.

  • Wladimir

    Interesting read! Personally, I’m not so sure that robots in the future will resemble humans or independent consciousnesses at all. If you look at the present, we have many specialized robots that are better at specific tasks than humans. This could well continue. More “intelligence” will be added and they will become ever more dexterous, but I don’t think that a conscious generalist robot will ever be put to work in ‘low-wage jobs’.

    I’m not saying it is not possible to build one. In the far future, I can see humans slowly augmenting themselves and becoming “robots” themselves, replacing one biological part at a time. They will be the ones in control, though, and not doing any work as such.

    Mass produced EMs? I think that’s too much bother. It has the same drawbacks as domesticating animals, and more. One consciousness could control millions of robots, like they are part of its body.

    “would you prefer that the low wage people who do jobs around you today, e.g., your janitor or garbagemen, be such unconscious human-substitutes?”
    Why would they need to be ‘human-substitutes’ at all? Even today, I have a vacuum cleaner that automatically cleans my floor. I don’t regard it as a ‘janitor’. Garbage trucks could eventually be automated in a similar way, just a big cleaning machine with enough intelligence to do its job. It won’t get sick of its job, so there is no need to “think about the robots”.

    Of course, consciousness (or free will) could just emerge beyond a certain level of complexity/intelligence. In that case you’re right…

  • Ari T

    Out of curiosity, have you talked with neuroscientists about the likelihood of this happening? Maybe an average expert opinion?

    I think it’s an important philosophical concept, and some people do think about these things in foresight, but history is full of people with false predictions about the future, especially when it comes to technology. I mean, how many humans predicted the Internet, say, decades ahead?

    Just keeping sci-fi bias in check. :-)

  • http://daedalus2u.blogspot.com/ daedalus2u

    This is a strange and surreal line of thinking. On the one hand machines will be so efficient and productive that a small pittance of inherited capital will be enough to live like today’s wealthy, but on the other hand people who are unwilling or unable to work or who have no capital will live in squalor or in prison.

    Doesn’t anyone see the cognitive dissonance between those two scenarios? It would cost more to keep someone in prison than to supply them with the capital so they could live like today’s wealthy. The marginal cost to maintain incremental individuals in a wealthy-type lifestyle is tiny. Why would the seemingly rational machines spend more to keep some humans in squalor or prison?

    To use an analogy. It is like a tribe of wealthy earthworms that eat dung fantasizing about a future where these wealthy earthworms have the resources of humanity and all they can imagine is living on their capital in gigantic balls of hundreds of pounds of high quality dung as they relegate their poor earthworm cousins without capital to live in sand on subsistence wages (crumbs of poor quality dung) in return for the subsistence labor of grinding the dung owned by the wealthy worms finer so it is more easily digested.

    Why would the humans the wealthy worms hire spend the effort to differentiate and keep the wealthy worms and the poor worms segregated? Why not just give them all tons and tons of dung? The effort to keep them segregated is more than the effort to supply all of them with more dung than they can possibly consume.
