My poll results are in, and they agree with yours: 12/17 picked Machine-like and the remaining 5/17 picked Human-like. (I did not offer a third option.) I phrased the questions exactly as you did, but left out the comment that I regarded as objectionable.

And a way to distinguish these two explanations would be a question that requires you to honestly dissent from a majority opinion. Since people seem to do better in Asch-style experiments when the answer is to be communicated privately, I'd say that's points for my hypothesis, if I'm understanding your proposed explanation correctly.

But why _does it_ only look bad to create monsters of the human variety? If you think that's because most people don't know/didn't pay attention to the details of what went on (because otherwise they'd all understand that robots ought to be lively too), then I propose an alternative explanation: readers too have an average attention span, miss important details ("oh, a robot, who cares"), and voice their feelings with the first impression in mind.

That is, I didn't think "I can get away with this because people don't naturally feel empathy for robots", I just didn't naturally feel empathy for the robots.

No to 1 and 2, yes to 3. (Assuming you got the cooperation of the zombie mothers.)

This post had a sequence of choices. Which of the other choices in this post would you make to prevent a less happy underclass?

Guilt for their condition isn't the only reason not to want to create an underclass. Another worry is that they won't be happy doing the low-status jobs and will seek to change the social order; this is why I voted for empty, machine-like.

I'm not sure how coherent it is to posit an ability to only experience positive emotions. If that were possible, lots of people might go for it, even those motivated by what I postulated.

'If some kind of robot is going to replace humans on most jobs, would you prefer it to be 1) empty machine-like robots w/ no feelings or inner life, or 2) lively human-like robots full of passion, & humor but with the capacity to suffer, or 3) lively human-like robots who can only experience positive emotions?'

EDIT: I'm tempted to add to 3 '... but who are no less capable of having interesting, meaningful conversations with other humans' to dispel the second issue in my original comment about the uncanny valley of unrelatably happy robots.

What survey would you suggest to test your theory relative to mine?

Although blame may be part of the issue here, I think it's actually a combination of blame and compassion, predicated on the uncertainty of what it would actually be like to be one of these supposedly lively human-like robots. I know plenty of 'lively human-like' humans who go through horrible experiences due to the stresses of their work, etc., and so 'lively human-like' becomes synonymous with 'vulnerable to emotional suffering'.

I think the real hesitation behind giving these robots lively human-like minds is that we just don't know whether the psychological pros would outweigh the cons of their unique conscious experience. On the whole, humans who work really shitty jobs find respite in their family time etc., but you've made no claims about where the good parts of a lively human-like robot's day would come from. If the solution is to wire them in such a way that they would genuinely enjoy the work that they do, no matter how menial, the new reason people might hesitate to support the creation of such robots is that their conscious experience is now more foreign to our own (what does it feel like to be the kind of person who enjoys cleaning toilets all day every day with no social interaction?). So we feel less confident signing off on granting conscious experience to such a robot, given that we only have our own conscious experiences to draw upon when it comes to bringing into the world something that has the capacity to suffer.

I also have the intuition that it's very possible for somebody to outwardly appear happy but actually be suffering quite a bit underneath. An additional hesitation this creates is that if we indeed create these robots and they indeed are somehow suffering underneath their programmed facade, we could never know.

So I don't think this is so much an issue about blame as it is:

1) An issue about the confidence with which a person can bring a conscious being into existence without knowing just how much it might suffer throughout its life.

2) In the case of a guarantee of zero suffering, an issue about what it means to bring a conscious being into existence whose consciousness operates in an alien way compared to our own.

It seems like you do understand me; you just disagree. For example, you apparently disagree that it is good to create more creatures with lives worth living. On abuse: you won't be blamed much for abusing a toaster, but you can be blamed for abusing something that looks very human.

"If you are focused on creating a better world, you’ll probably prefer the human-like robots, as that which choice results in more creatures who find their lives worth living. But if you are focused on avoiding blame, you’ll probably prefer the machine-like robots, as few will blame you for for that choice."

I really can't make heads or tails of this.

The first sentence presents the basic construction that more creatures that find their lives worth living = better world. This is a claim you could spend a whole book trying to prove and still come up short.

The second sentence makes the claim that most people would choose unfeeling automatons (to paraphrase) to take over almost all jobs as that would allow people to avoid blame for mistreating them.

Could you expand on how this is actually a dualism? We already create more creatures that find their lives worth living (babies). And we create machines that are expendable for abuse. Reality presents this as an "and" case, not an "or" case.

Good idea. I don't get much purchase on twitter; I get more on facebook. So I'll try both.

Results will be forthcoming in a week.

Weirder still: since people vary a lot in their tastes for interaction, the market might demand switchable-mode robots. But what would the ethics be of zombifying a previous non-zombie?

Reversing the argument, lively robots could make companions for lonely but sociable people.

And there is no reason why you should have only one kind of robot.

Would a conscious toaster be more productive? If not, we shouldn't make conscious toasters, because it would be wasting resources we could devote to supporting living, breathing conscious beings. Robots are poorly paid. In fact they aren't paid at all. They are our slaves. Would paying our slaves make them more productive? If conscious toasters were more productive and we could make them, we might well want to, but that would be an incredible belief. We would expect conscious toasters to cost us productivity, making fewer conscious beings possible.

Robots are more productive by definition or they wouldn't be adopted, though we do move production to developing countries where it can be more productive and support more conscious beings as well.
