“Human” Seems Low Dimensional
Imagine that there is a certain class of “core” mental tasks, where a single “IQ” factor explains most variance in ability on such tasks, and no other factors explain much variance. If one main factor explains most variation, and no other factors do, then variation in this area is basically one dimensional plus local noise. So to estimate performance on any one focus task, usually you’d want to average over abilities on many core tasks to estimate that one dimension of IQ, and then use that IQ estimate to predict ability on the focus task.
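This averaging logic can be sketched in a short simulation, under an assumed one-factor model; the factor variance, noise variance, and task counts below are illustrative choices, not from any dataset:

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

N_PEOPLE, N_TASKS = 2000, 20
# Each person's score on each core task = one shared "IQ" factor + task noise.
iq = [random.gauss(0, 1) for _ in range(N_PEOPLE)]
scores = [[g + random.gauss(0, 1) for _ in range(N_TASKS)] for g in iq]

focus = [row[0] for row in scores]                         # focus task A
one_task = [row[1] for row in scores]                      # a single other task
iq_est = [sum(row[1:]) / (N_TASKS - 1) for row in scores]  # average of the rest

r_single = corr(one_task, focus)  # theory says 0.5 under this model
r_avg = corr(iq_est, focus)       # theory says about 0.69: averaging wins
print(r_single, r_avg)
```

Averaging many noisy task scores washes out the task-specific noise, leaving a cleaner estimate of the one shared factor, which is why the composite predicts the focus task better than any single other task does.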
Now imagine that you are trying to evaluate someone on a core task A, and you are told that ability on core task B is very diagnostic. That is, even if a person is bad on many other random tasks, if they are good at B you can be pretty sure that they will be good at A. And even if they are good at many other tasks, if they are bad at B, they will be bad at A. In this case, you would know that this claim about B being very diagnostic on A makes the pair A and B unusual among core task pairs. If there were a big clump of tasks strongly diagnostic about each other, that would show up as another factor explaining a noticeable fraction of the total variance, making this world higher dimensional. So this claim about A and B might be true, but your prior is against it.
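The claim that a clump of mutually diagnostic tasks would surface as a second factor can also be checked in simulation. Here is a minimal sketch, assuming a made-up two-factor world (ten ordinary core tasks plus a five-task clump sharing an extra factor), with eigenvalues of the task correlation matrix standing in for factors:

```python
import random

random.seed(1)

N = 4000
CORE, CLUMP = 10, 5        # 10 ordinary core tasks + a 5-task correlated clump
T = CORE + CLUMP

# Each person: one general factor g; clump tasks also share a factor c.
rows = []
for _ in range(N):
    g, c = random.gauss(0, 1), random.gauss(0, 1)
    row = [g + random.gauss(0, 1) for _ in range(CORE)]
    row += [g + c + random.gauss(0, 0.5) for _ in range(CLUMP)]
    rows.append(row)

def standardize(col):
    m = sum(col) / N
    s = (sum((x - m) ** 2 for x in col) / N) ** 0.5
    return [(x - m) / s for x in col]

# Correlation matrix of the T tasks.
cols = [standardize([r[j] for r in rows]) for j in range(T)]
R = [[sum(a * b for a, b in zip(cols[i], cols[j])) / N for j in range(T)]
     for i in range(T)]

def top_eigenvalue(M):
    """Largest eigenvalue and unit eigenvector, by power iteration."""
    n = len(M)
    v = [1.0] * n
    for _ in range(500):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(M[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

lam1, v1 = top_eigenvalue(R)
# Deflate out the first factor, then find the second.
R2 = [[R[i][j] - lam1 * v1[i] * v1[j] for j in range(T)] for i in range(T)]
lam2, _ = top_eigenvalue(R2)

print(f"share of variance: factor 1 = {lam1/T:.2f}, factor 2 = {lam2/T:.2f}")
```

Under these assumed parameters the general factor explains a bit over half the variance, while the clump shows up as a clearly separate second factor well above the noise floor, which is just how extra dimensionality would reveal itself in data.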
Now consider the question of how “human-like” something is. Many indicators may be relevant to judging this, and one may draw many implications from such a judgment. In principle this concept of “human-like” could be high dimensional, so that there are many separate packages of indicators relevant for judging matching packages of implications. But anecdotally, humans seem to have a tendency to “anthropomorphize,” that is, to treat non-humans as if they were somewhat human in a simple low-dimensional way that doesn’t recognize many dimensions of difference. That is, things just seem more or less human. So the more ways in which something is human-like, the more you can reasonably guess that it will be human-like in other ways. This tendency appears in a wide range of ordinary environments, and its targets include plants, animals, weather, planets, luck, sculptures, machines, and software.
We feel more morally responsible for how we treat more human-like things. We are more inclined to anthropomorphize things that seem more similar to humans in their actions or appearance, when we more desire to make sense of our environment, and when we more desire social connection. When these conditions are less met, we are more inclined to “dehumanize,” that is, to treat human things as less than fully human. We also dehumanize to feel less morally responsible for our treatment of out-groups.
One study published in Science in 2007 asked 2400 people to make 78 pair-wise comparisons between 13 characters (a baby, chimp, dead woman, dog, fetus, frog, girl, God, man, vegetative man, robot, woman, you) on 18 mental capacities and 6 evaluation judgements. An “experience” factor explained 88% of capacity variation, being correlated with capacities for hunger, fear, pain, pleasure, rage, desire, personality, consciousness, pride, embarrassment, and joy. This factor had a strong 0.85 correlation with a desire to avoid harm to the character. A second “agency” factor explained 8% of the variance, being correlated with capacities for self-control, morality, memory, emotion recognition, planning, communication, and thought. This factor had a strong 0.82 correlation with a desire to punish for wrongdoing. Both factors correlated with liking a character, wanting it to be happy, and seeing it as having a soul (Gray et al. 2007).
Though it would be great to get more data, especially on more than 13 characters, this study does confirm the usual anecdotal description that anthropomorphizing is essentially a low dimensional phenomenon. And if true, this fact has implications for how biological humans would treat ems.
My colleague Bryan Caplan insists that because ems would not be made out of familiar squishy carbon-based biochemicals, humans would feel confident that ems have no conscious feelings, and thus eagerly enslave and harshly treat ems, as Bryan says that our moral reluctance is the main reason why most humans today are not harshly treated slaves. However, this in essence claims the existence of a big added factor explaining judgements related to “human-like”, a factor beyond those seen in the above survey.
After all, “consciousness” is already one of the items included in the above survey. But it was just one among many contributors to the main experience factor; it wasn’t overwhelming compared to the rest. And I’m pretty sure that if one tried to add being made of biochemicals as a predictor of this main factor, it would help but remain only one weak predictor among many. You might think that these survey participants are wrong, of course, but we are trying to estimate what typical people will think in the future, not what is philosophically correct.
I’m also pretty sure that while the “robot” in the study was rated low on experience, that was because it was rated low on capacities such as pain, pleasure, rage, desire, and personality. Ems, being more articulate and expressive than most humans, could quickly convince most biological humans that they act very much like creatures with such capacities. You might claim that humans will all insist on rating anything not made of biochemicals as very low on all such capacities, but that is not what we see in the above survey, nor what we see in how people react to fictional robot characters, such as those from Westworld or Battlestar Galactica. When such characters act very much like creatures with these key capacities, they are seen as creatures that we should avoid hurting. I offer to bet $10,000 at even odds that this is what we will see in an extended survey like the one above that includes such characters.
Bryan also says that an ability to select most ems from scans of the few best suited humans implies that ems are extremely docile. While today when we select workers we often value docility, we value many other features more, and tradeoffs between available features result in the most desired workers being far from the most docile. Bryan claims that such tradeoffs will disappear once you can select from among a billion or more humans. But today when we select the world’s best paid actors, musicians, athletes, and writers, a few workers can in fact supply the entire world in related product categories, and we can in fact select from everyone in the world to fill those roles. Yet those roles are not filled with extremely docile people. I don’t see why this tradeoff shouldn’t continue in an age of em.
Added July 17: Bryan rejects my bet because:
I don’t put much stock in any one academic paper, especially on a weird topic. .. Robin’s interpretation of the paper .. is unconvincing to me. .. How so? Unfortunately, we have so little common ground here I’d have to go through the post line-by-line just to get started. .. a survey .. is probably a “far” answer that wouldn’t predict much about concrete behavior.
That is, nothing anyone says can be trusted on this topic, except Bryan’s intuition. He instead proposes a bet where I pay him up front, and he might pay me back at the end of our lives.
Seems to me Bryan disagrees not just with me, but also with the authors of this Science paper, as well as its editors and referees at Science, about what the survey means. But he seems to accept that a similar survey would show what I claim. And since he’s on record saying there isn’t that much difference between a survey and a vote, it seems he must accept this for predicting vote outcomes.
Added July 19: I offer to bet anyone $10K at even odds that in the next published survey with a similar size and care to the one above, but with at least twice as many characters, over 80% of the variance will be explained by two factors, neither of which is focused on the substance (e.g., carbon, silicon) out of which a character is made.