My Caplan Turing Test

At lunch today Bryan Caplan and I dug a bit into our disagreement, and now I’ll try to summarize his point of view. He can of course correct me.

Bryan sees sympathy feelings as huge influences on social outcomes. Not just feelings between people who know each other well, but also distant feelings between people who have never met. For example, if not for feelings of sympathy:

  1. Law and courts would often favor different disputants.
  2. Free workers would more often face harsh evaluations, punishments, and firing.
  3. Firm owners and managers would know much better which workers were doing good jobs.
  4. The US would invade and enslave Canada tomorrow.
  5. At the end of most wars, the victors would enslave the losers.
  6. Modern slaves would earn their owners much more than they would have as free workers.
  7. In the past, domestic, artisan, and city slaves, who were treated better than field slaves, would have been treated much more harshly.
  8. The slave population would have fallen less via gifts or purchase of freedom.
  9. Thus most of the world population today would be slaves.

These views are, to me, surprisingly different from the impression I get from reading related economics literatures. Bryan says I may be reading the wrong ones, but he hasn’t yet pointed me to the correct ones. As I read them, these usual economics literatures give different impressions:

  • Law and economics literature suggests efficiency usually decides who wins, with sympathy distortions having a real but minor influence.
  • Organization theory literature suggests far more difficulties in motivating workers and measuring their performance.
  • Slavery literature suggests slaves doing complex jobs were treated less harshly for incentive reasons, and would not have earned much more if treated more harshly. Thus modern slaves would also not earn much more as slaves.

Of course even if Bryan were right about all these claims, he needn’t be right in his confident opinion that the vast majority of biological humans will have about as much sympathy for ems as they do for mammals, and thus treat ems as harshly as we treat most mammals.

This sympathy-driven view doesn’t by itself predict Caplan’s strong (and not much explained) view that ems would also be very robot-like. But perhaps we might add to it a passion for domination – people driven by feelings to treat nicely creatures they respect might also be driven by feelings to dominate creatures they do not respect. Such a passion for dominance might induce biological humans to force ems into ultra docility, even if that came at a productivity cost.

Added 28July2016: Caplan grades my summary of his position. I’m mostly in the ballpark, but he elaborates a bit on why he thinks em slaves would be docile:

Docile slaves are more profitable than slaves with attitude, because owners don’t have to use resources to torture and scare them into compliance. That’s why owners sent rebellious slaves to “breakers”: to transform rebellious slaves into docile slaves. Sci-fi is full of stories about humans genetically engineered to be model slaves. Whole brain emulation is a quicker route to the same destination. What’s the puzzle?

For docility to be such a huge priority, relative to other worker features, em rebellion must happen often and impose big frequent costs. Docility doesn’t seem to describe our most productive workers today well, nor does it seem well suited when you want workers to be creative, think carefully, take the initiative, or persuade and inspire others. Either way, frequent costly rebellions or extreme docility would create big disadvantages for slaves relative to free workers, and so argue against most ems being slaves.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    vast majority of biological humans will have about as much sympathy for ems as they do for mammals, and thus treat ems as harshly as we treat most mammals. – Caplan’s view.

    If the sociologists are right that human solidarity arises through interaction rituals involving actual human bodies in close proximity, then ems would be at a disadvantage.

    • Dave Lindbergh

      Historical abolitionists were not mainly people who had a lot of close proximity or interactions with slaves.

      On the contrary, those who had lots of close personal experience with slaves tended to be more tolerant of the vile institution. (If only by familiarity.)

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        The analogy would be, wouldn’t it, that a small minority of humans would adopt ems as a kind of cult object – rather than widespread solidarity with ems.

        [Slaves were outsiders to the interaction rituals of whites. The solidarity experienced was against them.]

  • Alfred Differ

    Hmm. I doubt the focus should just be on sympathy. There is an old body of literature regarding ethics that should be considered if ems really are copies of human beings. Our drives to belong to identity groups, be lovable in the sense Adam Smith described, and enforce justice systems should copy over. Many consider hope to be a virtue, so would we really try to squish it by enforcing slavery on docile ems?

    I suspect it depends entirely upon how much interaction there is. If we do it long enough to see their humanity, it might not matter whether we continue for the duration, but wouldn’t the ems know that? Couldn’t they plan for it fast enough to stay ahead of us? Seems to me like they’d assign a few million public relations people to manage the problem.

    Besides, our respect for our fellow mammals is growing I suspect. I won’t pretend it is high, but it seems to be getting easier to eat out as a vegetarian lately.

  • Brian Slesinsky

    The argument seems to be that ems are like humans, and humans work better if they’re treated fairly, so therefore the same should hold true of ems.

    But on the other hand, efficient machines are not much like humans. If ems could be modified to be more efficient, perhaps they would be modified to be more like machines, rather than to be more like slaves?

    The argument succeeds or fails based on what sort of modifications can easily be made on ems that would improve (or at least not harm) their efficiency. Since we don’t know how ems will work in detail (or even whether they will work at all), I think this is an unanswerable question. We might as well argue about whether Superman or Batman would win a fight – it depends on how you imagine them, not on anything resembling facts.

    • Vamair

      Even more so, as the efficiency of an em will probably be measured per computer operation, and therefore there will be large economic pressure to cut away the parts that are not needed for their jobs, even when it slightly hinders their efficiency as measured in real time.

    • http://overcomingbias.com RobinHanson

      I think we know an awful lot about how ems would work in detail.

    • Alfred Differ

      The making of a human into a mind that is more machine-like and efficient has often been considered in fiction. So many versions, in fact, that we can weed out the implausible ones and consider the rest as thought experiments. Consider Vernor Vinge’s version of the process, called Focus, described in A Deepness in the Sky. When I read the novel, I couldn’t decide if it was science fiction or horror.
      I suspect we DO know quite a bit of what might be possible. All we have to do is look in the right places.

    • http://don.geddis.org/ Don Geddis

      The real future may not involve ems. Some people think that designed software AIs are more likely (first). Essentially everyone agrees that once designed human-level (or better) AIs are possible, ems are no longer especially interesting.

      Robin’s scenario of ems posits copies of real human brains in computers, with details that are not understood by human civilization. The “modifications to be more efficient” are essentially the same as whatever you can do to real humans today. It is not expected that there are significant additional possible modifications solely because they are implemented in computers.

      Maybe the future won’t have ems. But IF it does, we actually know quite a bit (as Robin demonstrates in his book) about how they will work. They are (essentially) unmodified copies of existing humans.

      • Brian Slesinsky

        I think that even if we didn’t know exactly how ems work and could only make crude changes at first, the fact that we can run them on computers would make experiments much easier, and therefore a cornucopia of modifications would soon follow. For one thing, we can easily see what happens with a particular change, and then reverse it and try again – not so easy with real animals!

        So even if we didn’t understand minds all that well before ems came along (and I think that’s unlikely), we would quickly learn a lot more just because doing brain science got much easier.

        But even that is giving human ems too much of a head start. It seems likely that ems would work well enough to be scientifically useful long before they are fully working and safe enough to use “in production”. (That’s usually the case in science.) Caution about ethical issues would make more machine-like ems easier to deploy, and it seems likely that they’d become economically significant much sooner.

        So even if the science evolved in a direction that would eventually make human ems likely, it might still result in not much of an economic niche remaining for them by the time they arrive.

  • Eliezer Yudkowsky

    At least 2, 4, 5, and 8 strike me as likely to be true.

    Robin, remind me whether you think a paperclip maximizer with a large positional advantage, if we just take that part for granted for a second, would (a) trade with humans or (b) reuse their atoms to create more efficient paperclip-making machinery.

    In general, I think a hell of a lot of the world we see around us is driven by the equivalent of tipping in restaurants we’ll never visit again, and that a world of truly selfish people would look extremely different. Which I think is closely related to Caplan’s position here.

    • http://overcomingbias.com RobinHanson

      I agree that you and Caplan seem to share related views here, and that a sufficiently powerful paperclip maximizer may not have much interest in trading with humans.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        The question of what drives human conduct is distinct from what drives historical change. Humans are moral creatures, but morals that create inefficiency disappear in the long term.

        [Morals are fads. It’s easy to conceive of a rapid change to where tipping waiters would signify condescension rather than altruism.]

      • Alfred Differ

        Do we really know enough to support an argument that morals are fads?

  • free_agent

    I’m cautious about any assertion of the effects of “sympathy”. It’s notorious that sympathy in humans extends roughly to the people we recognize as human beings, and that in most times and places, that zone extends about as far as it’s likely that we are kin.

    As so perceptively stated in “Little Big Man”, “There is an endless supply of white men. There has always been a limited number of human beings.”

    Caplan seems to think that humans will automatically extend sympathy to ems, but it’s a mostly-European trait to extend sympathy even to humans who are considered “different”. Your 9 points seem historically *normal* to me.

  • mlhoheisel

    Evolutionary Psychology has something called an Evolutionarily Stable Strategy (ESS): genetic coding for a sort of behavior whose frequency moves within the gene pool between upper and lower bounds created by a feedback mechanism.

    Psychopaths may fit this ESS model and seem to be about 1% of the general population, though a large fraction of the prison population.

    It may be worth noting that when generalizing about sympathy, there may be more than one sort of human. Millions of people now may have no more sympathy for other humans than an alien AI, even while normal humans in fact base a lot of behavior on sympathy and altruism. Psychopaths have great advantages under some circumstances and are dysfunctional under others.

    If in fact there’s a distinctive hidden sub population that thinks quite differently it makes a difference to these issues. It may not be possible to just assume everyone’s reactions are the same. Even if most people are repulsed by violence and cruelty all people are not. For some it’s very natural and easy.