14 Comments

This is a strange and surreal line of thinking. On the one hand, machines will be so efficient and productive that a pittance of inherited capital will be enough to live like today's wealthy; on the other hand, people who are unwilling or unable to work, or who have no capital, will live in squalor or in prison.

Doesn't anyone see the cognitive dissonance between those two scenarios? It would cost more to keep someone in prison than to supply them with enough capital to live like today's wealthy. The marginal cost of maintaining each additional individual in a wealthy lifestyle is tiny. Why would the seemingly rational machines spend more to keep some humans in squalor or prison?
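A quick back-of-envelope sketch of that cost comparison (every figure below is an assumption supplied for illustration, not a number from the comment): capitalize both cost streams with the standard perpetuity formula, PV = C / r, and compare.

```python
# Back-of-envelope comparison (all numbers are illustrative assumptions):
# the capitalized cost of imprisoning someone forever, versus the one-time
# capital endowment whose returns fund a comfortable life forever.

annual_prison_cost = 35_000   # assumed yearly cost of incarceration
real_return = 0.05            # assumed real rate of return on capital

# Present value of a perpetual cost stream: C / r (perpetuity formula).
pv_prison = annual_prison_cost / real_return

# If machine productivity drives the cost of a comfortable life far below
# the cost of incarceration, the endowment becomes the cheaper option.
for annual_living_cost in (35_000, 10_000, 1_000):   # assumed scenarios
    endowment = annual_living_cost / real_return      # one-time gift needed
    cheaper = "endowment" if endowment < pv_prison else "neither"
    print(f"living at ${annual_living_cost:,}/yr: endowment ${endowment:,.0f} "
          f"vs. prison PV ${pv_prison:,.0f} -> cheaper: {cheaper}")
```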

To use an analogy: it is like a tribe of wealthy, dung-eating earthworms fantasizing about a future in which they hold the resources of humanity, and all they can imagine is living off their capital atop gigantic balls of high-quality dung weighing hundreds of pounds, while relegating their poor, capital-less earthworm cousins to live in the sand on subsistence wages (crumbs of poor-quality dung), earned by grinding the wealthy worms' dung finer so it is more easily digested.

Why would the humans the wealthy worms hire bother to tell the wealthy worms apart from the poor worms and keep them segregated? Why not just give them all tons and tons of dung? The effort of keeping them segregated exceeds the effort of supplying all of them with more dung than they can possibly consume.


"if we could find a way to make creatures just like humans, and just as productive, but who are not conscious"

If we "made" them, and they were not conscious, they would be robots in the currently understood meaning of the word and so I don't see any issue with that whatsoever. They would be non-human pretty much by definition (and hopefully we'd create them not to suffer either, making them non-animal-like as well.) What we're talking about is "better robots and software, so good that people couldn't do better if they tried." Sounds great.


Out of curiosity, have you talked with neuroscientists about the likelihood of this happening? Maybe gotten an average expert opinion?

I think it's an important philosophical concept, and some people do think about these things with foresight, but history is full of false predictions about the future, especially when it comes to technology. I mean, how many humans predicted the Internet, say, decades ahead?

Just keeping sci-fi bias in check. :-)


Interesting read! Personally, I'm not so sure that robots in the future will resemble humans or independent consciousnesses at all. If you look at the present, we have many specialized robots that are better at specific tasks than humans. This could well continue: more "intelligence" will be added and they will become ever more dexterous, but I doubt that a conscious generalist robot will ever be put to work in 'low-wage jobs'.

I'm not saying it is not possible to build one. In the far future, I can see humans slowly augmenting themselves and becoming "robots" themselves, replacing one biological part at a time. They will be the ones in control, though, and not doing any work as such.

Mass-produced EMs? I think that's too much bother. It has the same drawbacks as domesticating animals, and more. One consciousness could control millions of robots, as if they were part of its body.

"would you prefer that the low wage people who do jobs around you today, e.g., your janitor or garbagemen, be such unconscious human-substitutes?"

Why would they need to be 'human-substitutes' at all? Even today, I have a vacuum cleaner that automatically cleans my floor. I don't regard it as a 'janitor'. Garbage trucks could eventually be automated in a similar way: just a big cleaning machine with enough intelligence to do its job. It won't get sick of its job, so there is no need to "think about the robots".
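As a toy sketch of that point (the states and sensor names here are hypothetical, invented for illustration), a task-specific machine can be nothing more than a small finite state machine reacting to a few sensor readings; nothing mind-like is required:

```python
# Minimal sketch of a task-specific cleaning robot as a finite state
# machine: no goals beyond the hard-coded task, no general intelligence.
# All states and sensor names are hypothetical.

from enum import Enum, auto

class State(Enum):
    CLEANING = auto()
    RETURNING = auto()
    CHARGING = auto()

def step(state: State, battery: float, at_dock: bool) -> State:
    """One control tick: choose the next state from simple sensor readings."""
    if state is State.CLEANING and battery < 0.2:
        return State.RETURNING    # battery low: head back to the dock
    if state is State.RETURNING and at_dock:
        return State.CHARGING     # docked: start charging
    if state is State.CHARGING and battery > 0.95:
        return State.CLEANING     # fully charged: resume cleaning
    return state                  # otherwise keep doing what it was doing

# A few ticks of the control loop: the "robot" is just condition checks.
state = State.CLEANING
for battery, at_dock in [(0.5, False), (0.15, False), (0.10, True), (0.99, True)]:
    state = step(state, battery, at_dock)
    print(f"battery={battery:.2f} at_dock={at_dock} -> {state.name}")
```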

Of course, consciousness (or free will) could just emerge beyond a certain level of complexity/intelligence. In that case you're right...


Oops - fixed; thanks.


Your link to the video above does not go to the right page.

Here's the correct link in case anyone wants to watch the video:

http://www.newamerica.net/e...


Mark and Spencer, if we could find a way to make creatures just like humans, and just as productive, but who are not conscious, would you prefer that the low wage people who do jobs around you today, e.g., your janitor or garbagemen, be such unconscious human-substitutes?


Robin, do you think there are companies right now with patents for procedures or technologies that likely will eventually be involved in whole brain emulation? Should I be trying to find such companies and purchasing stock?


How can he envision such technological changes and yet no changes in nations, borders, or social structures? Just the same old US politicians propagating through TV screens? If your robot can figure out how to amass wealth on Christmas Island, why would you stay under American jurisdiction? Why not keep relocating, even more often than for the usual vacation?


How do I know that other people are conscious? Philosophically, you can never know for sure, but because you can map the input-output information-processing pattern of other people onto your own, you infer that they are also conscious.

The Chinese Room argument is a trick: in order to correctly translate English to Chinese, getting all the structure, syllogisms, metaphors, context, etc. right, you would actually require a form of AI. And to get that, you would need an information-processing structure that is isomorphic to the human brain (some kind of mapping). If you have that, then you have to assume that it's conscious, or at least that it REALLY knows Chinese, since it will satisfy every sense in which one can 'know Chinese'.
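To make the input-output-mapping point concrete, here is a toy contrast of my own (not from the comment): two systems with identical observable behavior over a shared domain, one that computes and one that merely looks answers up. Behavioral tests alone cannot tell them apart, which is exactly the lever the Chinese Room pulls on:

```python
# Toy illustration: identical input-output behavior from very different
# internals. From the outside the two "systems" are indistinguishable.

def add_by_computing(a: int, b: int) -> int:
    """'Understands' addition in the sense of actually performing it."""
    return a + b

# A pure lookup table over a small domain: at query time no arithmetic
# happens inside, yet over that domain the behavior is point-for-point
# identical to the computing version.
LOOKUP = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_lookup(a: int, b: int) -> int:
    """'Knows' the answers without ever doing addition."""
    return LOOKUP[(a, b)]

# Behavioral test: no input in the shared domain distinguishes them.
assert all(add_by_computing(a, b) == add_by_lookup(a, b)
           for a in range(10) for b in range(10))
print("Indistinguishable on every tested input.")
```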

I am comfortable with Douglas Hofstadter's "strange loop" definition of consciousness; how about you guys?
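For readers unfamiliar with the term: a "strange loop" centers on self-reference, a system that contains and acts on a representation of itself. The standard minimal code illustration is a quine, a program whose output is its own source (a well-known example, not something from the comment):

```python
# The two lines below form a quine: run them on their own and the program
# prints its own source code, a minimal case of computational self-reference.
s = 's = %r\nprint(s %% s)'
print(s % s)
```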


What we need is another law of robotics: no AI should ever be made that could credibly be considered truly sentient/self-aware/pick your favorite thing that defines intelligence in a rough-and-abstract way, even if we know how to do it. Of course, enforcing such a rule is far easier said than done, but culturally, we should perhaps use it as a starting notion. This becomes more important the closer we get to that level of sophistication.


Maybe robots should have to interview for our jobs first. In a recent post, I describe what might happen when IBM's Watson goes for an interview. http://www.ragingwisdom.com... Enjoy!


He probably does think robots can't be conscious. That's good, because he is right: machines aren't conscious. Really, didn't the Chinese Room put an end to this sort of speculation?


Whether a thinking machine should be considered conscious is subjective and will certainly be debated for as long as humans live.

If a robot of mine became conscious (or effectively simulated consciousness), the first thing I would do is devolve it back into a non-conscious thing. Or perhaps sell it on eBay. I want my robots to serve me, not to have their own desires and demands.
