Tag Archives: Morality

Overconfidence From Moral Signaling

Tyler Cowen in Stubborn Attachments:

The real issue is that we don’t know whether our actions today will in fact give rise to a better future, even when it appears that they will. If you ponder these time travel conundrums enough, you’ll realize that the effects of our current actions are very hard to predict, …

While I think we often have good ways to guess which action is more likely to produce better outcomes, I agree with Tyler that we face great uncertainty. Once our actions get mixed up with a big complex world, it becomes quite likely that, no matter what we choose, things would in fact have turned out better had we made a different choice.

But for actions that take on a moral flavor, most people are reluctant to admit this:

If you knew enough history, you’d see >10% as the only reasonable answer for most any big historical counterfactual. But giving that answer to the above risks making you seem pro-South or pro-slavery. So most people express far more confidence. In fact, more than half give the max possible confidence!

I initially asked a similar question on whether the world would have been better off overall if the Nazis had won WWII, and for the first day I got very similar answers to the above. But I ran the above survey on the South for one day, while I gave the Nazi survey two days. And in its second day my Nazi survey was retweeted ~100 times, apparently attracting many actual pro-Nazis:

Yes, in principle the survey could have attracted wise historians, but the text replies to my tweet don’t support that theory. My tweet survey also attracted many people who denounced me in rude and crude ways as personally racist and pro-Nazi for even asking this question. Some even suggested I be fired. Sigh.

Added 13Dec: Many call my question ambiguous. Let’s use x to denote how well the world turns out. There is x0, how well the world actually turned out, and x|A, how well the world would have turned out given some counterfactual assumption A. Given this terminology, I’m asking for P(x > x0 | A). You may feel sure you know x0, but you should not feel sure about x|A; for that you should have a probability distribution.
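
To make this concrete, here is a minimal sketch of estimating P(x > x0 | A) by simulation. The welfare scale, the distribution, and all the numbers are invented for illustration, not taken from any survey:

```python
import random

# Hypothetical setup: x0 is how well the world actually turned out,
# on some made-up 0-100 welfare scale. Under counterfactual A we face
# wide uncertainty, so we model x|A as a probability distribution.
x0 = 60.0

def sample_x_given_A():
    # Assumed wide uncertainty: normal, centered a bit below x0.
    return random.gauss(55.0, 20.0)

n = 100_000
count = sum(1 for _ in range(n) if sample_x_given_A() > x0)
print(f"P(x > x0 | A) ~= {count / n:.2f}")
```

Note that even though this hypothetical distribution is centered below x0, the chance that the counterfactual world turns out better is roughly 40%, far above 10%. With wide enough uncertainty, near-zero answers are hard to justify.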


Moral Choices Reveal Preferences

Tyler Cowen has a new book, Stubborn Attachments. In my next post I’ll engage his book’s main claim. But in this post I’ll take issue with one point that is to him relatively minor, but is to me important: the wisdom of the usual economics focus on preferences:

Sometimes my fellow economists argue that “satisfying people’s preferences” is the only value that matters, because in their view it encapsulates all other relevant values. But that approach doesn’t work. It is not sufficiently pluralistic, as it also matters whether our overall society encompasses standards of justice, beauty, and other values from the plural canon. “What we want” does not suffice to define the good. Furthermore, we must often judge people’s preferences by invoking other values external to those preferences. …

Furthermore, if individuals are poorly informed, confused, or downright inconsistent— as nearly all of us are, at times— the notion of “what we want” isn’t always so clear. So while I am an economist, and I will use a lot of economic arguments, I won’t always side with the normative approach of my discipline, which puts too much emphasis on satisfying preferences at the expense of other ethical values. … We should not end civilization to do what is just, but justice does sometimes trump utility. And justice cannot be reduced to what makes us happy or to what satisfies our preferences. …

In traditional economics— at least prior to the behavioral revolution and the integration with psychology— it was commonly assumed that what an individual chooses, or would choose, is a good indicator of his or her welfare. But individual preferences do not always reflect individual interests very well. Preferences as expressed in the marketplace often appear irrational, intransitive, spiteful, or otherwise morally dubious, as evidenced by a wide range of vices, from cravings for refined sugar to pornography to grossly actuarially unfair lottery tickets. Given these human imperfections, why should the concept of satisfying preferences be so important? Even if you are willing to rationalize or otherwise defend some of these choices, in many cases it seems obvious that satisfying preferences does not make people happier and does not make the world a better place.

Tyler seems to use a standard moral framework here, one wherein we are looking at others and trying to agree among ourselves about what moral choices to make on their behalf. (Those others are not included in our conversation.) When we look at those other people, we can use the choices that they make to infer their wants (called “revealed preferences”), and then we can make our moral choices in part to help them get what they want.

In this context, Tyler accurately describes common morality, in the sense that the moral choices of most people do not depend only on what those other object people want. Common moral choices are instead often “paternalistic”, giving people less of what they want in order to achieve other ends and to satisfy other principles. We can argue about how moral such choices actually are, but they clearly embody a common attitude to morality.

However, if these moral choices that we are to agree on satisfy some simple consistency conditions, then formally they imply a set of “revealed preferences”.  (And if they do not actually satisfy these conditions, we can see them as resulting from consistent preferences plus avoidable error.) They are “our” preferences in this moral choice situation. Looked at this way, it is just not remotely true that “ ‘What we want’ does not suffice to define the good” or that “Justice cannot be reduced to … what satisfies our preferences.” Our concepts of the good and justice are in fact exactly described by our moral preferences, the preferences that are revealed by our various consistent moral choices. It is then quite accurate to say that our moral preferences encapsulate all our relevant moral values.
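
As a minimal illustration of this formal point, here is a sketch in which a set of pairwise moral choices, if they satisfy the simple consistency condition of having no cycles, reveals a single preference ordering. The options and choices are hypothetical:

```python
# Hypothetical pairwise moral choices, as (preferred, rejected) pairs.
choices = [("aid", "tax"), ("tax", "war"), ("aid", "war")]

options = {option for pair in choices for option in pair}
beats = {option: set() for option in options}
for winner, loser in choices:
    beats[winner].add(loser)

# Consistency check: repeatedly peel off an option that nothing
# remaining beats. If we ever get stuck, the choices are cyclic
# and reveal no consistent preferences.
ranking = []
remaining = set(options)
while remaining:
    maximal = [o for o in remaining
               if not any(o in beats[p] for p in remaining if p != o)]
    if not maximal:
        raise ValueError("cyclic choices: no consistent revealed preferences")
    ranking.append(maximal[0])
    remaining.remove(maximal[0])

print("Revealed moral preference order:", " > ".join(ranking))
# -> aid > tax > war
```

Inconsistent choices (say, adding ("war", "aid")) would trigger the error, which is the “consistent preferences plus avoidable error” case above.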

Furthermore, the usual economics framework is wise and insightful because we in fact quite often disagree about moral choices when we take moral action. This framework that Tyler seems to use above, wherein we first agree on which acts are moral and then we act, is based on an often quite unrealistic fiction. We instead commonly each take moral actions in the absence of agreement. In such cases we each have a different set of moral preferences, and must consider how to take moral action in the context of our differing preferences.

At this point the usual economists’ framework, wherein different agents have different preferences, becomes quite directly relevant. It is then useful to think about moral Pareto improvements, wherein we each get more of what we want morally, and moral deals, where we make verifiable agreements to achieve moral “gains from trade”. The usual economist tools for estimating and calculating our wants and the location of win-win improvements then seem quite useful and important.
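
Here is a minimal sketch of that idea; the policy options and moral utility numbers are all invented for illustration:

```python
# Two parties' moral utilities over policy options (hypothetical).
utilities = {
    "status quo":    (0, 0),
    "ban X":         (3, -1),  # party 2 loses: not a Pareto improvement
    "fund Y":        (2, 1),   # both gain: a moral Pareto improvement
    "ban X, fund Y": (1, 2),   # both gain: a moral Pareto improvement
}

u0 = utilities["status quo"]
improvements = [
    option for option, u in utilities.items()
    if option != "status quo" and all(ui >= u0i for ui, u0i in zip(u, u0))
]
print("Moral Pareto improvements:", improvements)

# A moral deal might then pick the improvement with the largest
# total "gains from trade" (here just the sum of utilities).
best = max(improvements, key=lambda option: sum(utilities[option]))
print("Deal chosen:", best)
```

The point is not the toy numbers but that, once differing moral preferences are admitted, the standard economist machinery of Pareto improvements and gains from trade applies directly.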

In this situation, we each seek to influence the resulting set of actual moral choices in order to achieve our differing moral preferences. We might try to achieve this influence via preaching, threats, alliances, wars, or deals; there are many possibilities. But whatever we do, we each want any analytical framework that we use to help us in this process to reflect our actual differing moral preferences. Yes, preferences can be complex, must be inferred from limited data on our choices, and yes we are often “poorly informed, confused, or downright inconsistent.” But we rarely say “why should the concept of satisfying [my moral] preferences be so important?”, and we are not at all indifferent to instead substituting the preferences of some other party, or the choice priorities of some deal analyst or assistant like Tyler. As much as possible, we seek to have the actual moral choices that result reflect our moral preferences, which we see as a very real and relevant thing, encapsulating all our relevant moral values.

And of course we should expect this sort of thing to happen all the more in a more inclusive conversation, one where the people about whom we are making moral choices become part of the moral “dealmaking” process. That is, when it is not just us trying to agree among ourselves about what we should do for them, but instead all of us talking together about what to do for us all. In this more political case, we don’t at all say “my preferences are poorly informed, confused, and inconsistent and hardly matter so they don’t deserve much consideration.” Instead we each focus on causing choices that better satisfy our moral preferences, as we understand them. In this case, the usual economist tools and analytical frameworks based on achieving preferences seem quite appropriate. They deserve to sit center stage in our analysis.


Avoiding Blame By Preventing Life

If morality is basically a package of norms, and if norms are systems for making people behave, then each individual’s main moral priority becomes: to avoid blame. While the norm system may be designed to on average produce good outcomes, when that system breaks, each individual has only weak incentives to fix it. They mainly seek to avoid blame according to the current broken system. In this post I’ll discuss an especially disturbing example, via a series of four hypothetical scenarios.

1. First, imagine we had a tech that could turn ordinary humans into productive zombies. Such zombies can still do most jobs effectively, but they no longer have feelings or an inner life, and from the outside they also seem dead inside, lacking passion, humor, and liveliness. Imagine that someone proposed to use this tech on a substantial fraction of the human population. That is, they propose to zombify those who do jobs that others see as boring, routine, and low status, like collecting garbage, cleaning bedpans, or sweeping floors. As in this scenario living people would be turned into dead zombies, this proposal would probably be widely seen as genocide, and soundly rejected.

2. Second, imagine someone else proposes the following variation: when a new child of a parent seems likely enough to grow up to take such a low status job, this zombie tech is applied very early, to the fetus. So no non-zombie humans are killed; they are just prevented from existing. Zombie kids are able to learn, and eventually learn to do those low status jobs. Thus technically this is not genocide, though it could be seen as the extermination of a class. And many parents would suffer from losing their chance to raise lively humans. Whoever proposed all this is probably considered evil, and their proposal rejected.

3. Third, imagine combining this proposal with another tech that can reliably induce identical twins. This will allow the creation of extra zombie kids. That is, each birth to low status parents is now of identical twins, one of which is an ordinary kid, and the other a zombie kid. If parents don’t want to raise zombie kids, some other organization will take over that task. So now the parents get to have all their usual lively kids, and the world gains a bunch of extra zombie kids who grow up to do low status jobs. Some may support this proposal, but surely many others will find it creepy. I expect that it would be pretty hard to create a political consensus to support this proposal.

While in the first scenario people were killed, and in the second scenario parents were deprived, this third scenario is designed to take away these problems. But this third proposal still has two remaining problems. First, if we have a choice between creating an empty zombie and a living feeling person who finds their life worth living, the second option seems to result in a better world. Which argues against zombies. Second, if zombies seem like monsters, supporters of this proposal might be blamed for creating monsters. And as the zombies look a lot like humans, many will see you as a bad person if you seem inclined to or capable of treating them badly. It looks bad to be willing to create a lower class, and to treat them like a disrespected lower class, if that lower class looks a lot like humans. So by supporting this third proposal, you risk being blamed.

4. My fourth and last scenario is designed to split apart these two problems with the third scenario, to make you choose which problem you care more about. Imagine that robots are going to take over most all human jobs, but that we have a choice about which kind of robot they are. We could choose human-like robots, who act lively with passion and humor, and who inside have feelings and an inner life. Or we could choose machine-like robots, who are empty inside and also look empty on the outside, without passion, humor, etc.

If you are focused on creating a better world, you’ll probably prefer the human-like robots, as that choice results in more creatures who find their lives worth living. But if you are focused on avoiding blame, you’ll probably prefer the machine-like robots. The creatures you create then look so little like humans that few will blame you for creating such creatures, or for treating them badly.

I recently ran a 24 hour poll on Twitter about this choice, a poll to which 700 people responded. Of those who made a choice, 77% picked the machine-like robots:

Maybe my Twitter followers are unusual, but I doubt that a majority of a more representative poll would pick the human-like option. Instead, I think most people prefer the option that avoids personal blame, even if it makes for a worse world.


Spaceship Earth Explores Culture Space

Space: the final frontier. These are the voyages of the starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before. (more)

Many love science fiction stories of brave crews risking their lives to explore strange new spaces, stories much like the older adventure stories about European explorers risking their lives centuries ago to explore new places on Earth. (Yes, often to conquer and enslave the locals.) Many lament that we don’t have as many real such explorer stories today, and they say that we should support more human space exploration now in order to create such real heroic exploration stories. Even though human space exploration is crazy expensive now, and offers few scientific, economic, or humanity-survival gains anytime soon. They say the good stories will be worth all that cost.

Since Henry George first invoked it in 1879, many have used the metaphor of Spaceship Earth to call attention to our common vulnerability and limited resources:

Spaceship Earth … is a world view encouraging everyone on Earth to act as a harmonious crew working toward the greater good. … “we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions” … “We travel together, passengers on a little space ship, dependent on its vulnerable reserves of air and soil.” (more)

In this post, I want to suggest that Spaceship Earth is in fact a story of a brave crew risking much to explore a strange new territory. But the space we explore is more cultural than physical.

During the industrial era, the world economy has doubled roughly every fifteen years. Each such doubling of output has moved us into new uncharted cultural territory. This growth has put new pressures on our environment, and has resulted in large and rapid changes to our culture and social organization.
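
As a quick check on scale, a fifteen-year doubling time corresponds to an annual growth rate of roughly 4.7%:

```python
import math

doubling_years = 15
# Discrete annual rate implied by doubling every 15 years.
print(f"annual growth: {2 ** (1 / doubling_years) - 1:.1%}")   # ~4.7%
# Equivalent continuously-compounded rate, ln(2)/15.
print(f"continuous rate: {math.log(2) / doubling_years:.1%}")  # ~4.6%
```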

This growth results mostly from innovation, and most innovations are small and well tested against local conditions, giving us little reason to doubt their local value. But all these small changes add up to big overall moves that are often entangled with externalities, coordination failures, and other reasons to doubt their net value.

So humanity continues to venture out into new untried and risky cultural spaces, via changes to cultural conditions with which we don’t have much experience, and which thus risk disaster and destruction. The good crew of Spaceship Earth should carefully weigh these risks when considering where and how fast to venture.

Consider seven examples:

  1. While humans seem to be adapting reasonably well to global warming, we risk big lumpy disruptive changes to Atlantic currents and Antarctic ice. Ecosystems also seem to be adapting okay, but we are risking big collapses to them as well.
  2. While ancient societies gave plenty of status and rewards to fertility, today high fertility behaviors are mostly seen as low status. This change is entwined with complex changes in gender norms and roles, but one result is that human fertility is falling toward below replacement in much of the world, and may fall much further. Over centuries this might produce a drastic decrease in world population, and productivity-threatening decreases in the scale of world production.
  3. While the world has become much more peaceful over the last century, this has been accompanied by big declines in cultural support for military action and tolerance for military losses. Is the world now more vulnerable to conquest by a new military power with more local cultural support and tolerance for losses?
  4. Farmer-era self-control and self-discipline have weakened over time, in part via weaker religion. This has weakened cultural support for work, and cultural suspicion of self-indulgence in sex, drugs, and media. So we now see less work and more drug addiction. How far will we slide?
  5. Via new media, we are exploring brave new worlds of how to make friends, form identities, achieve status, and learn about the world. As many have noted, these new ways risk many harms to happiness and social capital.
  6. Innovation was once greatly aided by tinkering, i.e., the ability to take apart and change familiar devices. Such tinkering is much less feasible in modern devices. Increasing regulation and risk aversion is also interfering with innovation. Are we as a result risking cultural support for innovation?
  7. Competition between firms has powered rapid growth, but winning bets on intangible capital is allowing leading firms to increasingly dominate industries. Does this undermine the competition that we’ve relied on so far to power growth?

The most common framing today for such issues is one of cultural war. You ask yourself which side feels right to you, commiserate with your moral allies, then puff yourself up with righteous indignation against those who see things differently, and go to war with them. But we might do better to frame these as reasonable debates on how much to risk as we explore culture space.

In a common scene from exploration stories, a crew must decide whether to take a big risk, or choose among several risks. Some in the crew see a risk as worth the potential reward, while others want to search longer for better options, or retreat to try again another day. They may disagree on the tradeoff, but they all agree that both the risks and the rewards are real. It is just a matter of tradeoff details.

We might similarly frame key “value” debates as reasonable differing judgements on what chances to take as spaceship Earth explores culture space. Those who love new changes could admit that we are taking some chances in adopting them so quickly, with so little data to go on, while those who are suspicious of recent changes could admit that many seem to like their early effects. Rather than focus on directly evaluating changes, we might focus more on setting up tracking systems to watch for potential problems, and arranging for repositories of old culture practices that might help us to reverse changes if things go badly. And we might all see ourselves as part of a grand heroic adventure story, wherein a mostly harmonious crew explores a great strange cosmos of possible cultures.


Sanctimonious Econ Critics

The New Yorker review of Elephant in the Brain raved about Cents and Sensibility, by Gary Saul Morson and Morton Schapiro, a book said to confirm that “intellectual overextension is often found in economics.” Others have similarly raved. But I don’t care much for this book, so let me explain why. (Be warned: this post is LONG.)

In its first sentence, the book declares its aim:

This book creates a dialogue between two fields that rarely have anything to say to each other: economics and the humanities. We mean to show how that dialogue could be conducted and why it has a great deal to contribute. (p.1)

Morson and Schapiro seem to want the sort of “dialogue” where one side talks and the other just listens. All but one chapter elaborates how economists should listen to the humanities, and the one remaining chapter is on how some parts of the humanities should listen to another part, not to economists. There’s only a two page section near the end on “What Humanists Can Learn From Economists,” which even then can’t resist talking more about what economists can learn:

Economists could learn from humanists the complexity of ethical issues, the need for stories, the importance of empathy, and the value of unformalizable good judgement. But humanists could also learn from economists how to think about scarce resources, about the nature of efficiency, and the importance of rational decision making. (p.261)

So what exactly can we economists learn?


Today, Ems Seem Unnatural

The main objections to “test tube babies” weren’t about the consequences for mothers or babies; they were about doing something “unnatural”:

Given the number of babies that have now been conceived through IVF — more than 4 million of them at last count — it’s easy to forget how controversial the procedure was during the time when, medically and culturally, it was new. … They weren’t entirely sure how IVF was different from cloning, or from the “ethereal conception” that was artificial insemination. They balked at the notion of “assembly-line fetuses grown in test tubes.” … For many, IVF smacked of a moral overstep — or at least of a potential one. … James Watson publicly decried the procedure, telling a Congressional committee in 1974 that … “All hell will break loose, politically and morally, all over the world.” (more)

Similarly, for most ordinary people, the problem with ems isn’t that the scanning process might kill the original human, or that the em might be an unconscious zombie due to its new hardware not supporting consciousness. In fact, people more averse to death have fewer objections to ems, as they see ems as a way to avoid death. The main objections to ems are just that ems seem “unnatural”:

In four studies (including pilot) with a total of 952 participants, it was shown that biological and cultural cognitive factors help to determine how strongly people condemn mind upload. … Participants read a story about a scientist who successfully transfers his consciousness (uploads his mind) onto a computer. … In the story, the scientist injects himself with nano-machines that enter his brain and substitute his neurons one-by-one. After a neuron has been substituted, the functioning of that neuron is copied (uploaded) on a computer; and after each neuron has been copied/uploaded the nano-machines shut down, and the scientist’s body falls on the ground completely limp. Finally, the scientist wakes up inside the computer.

The following variations made NO difference:

[In Study 1] we modified our original vignette by changing the target of mind upload to be either (1) a computer, (2) an android body, (3) a chimpanzee, or (4) an artificial brain. …

[In Study 2] we changed the story in a manner that the scientist merely ingests the nano-machines in a capsule form. Furthermore, we used a 2 × 2 experimental set-up to investigate whether the body dying on a physical level [heart stops or the brain stops] impacts the condemnation of the scientist’s actions. We also investigated whether giving participants information on how the transformation feels for the scientist once he is in the new platform has an impact on the results.

What did matter:

People who value purity norms and have higher sexual disgust sensitivity are more inclined to condemn mind upload. Furthermore, people who are anxious about death and condemn suicidal acts were more accepting of mind upload. Finally, higher science fiction literacy and/or hobbyism strongly predicted approval of mind upload. Several possible confounding factors were ruled out, including personality, values, individual tendencies towards rationality, and theory of mind capacities. (paper; summary; HT Stefan Schubert)

As with IVF, once ems are commonplace they will probably also come to seem less unnatural; strange never-before-seen possibilities evoke more fear and disgust than common things, unless those common things seem directly problematic.


Automatic Norm Lessons

Pity the modern human who wants to be seen as a consistently good person who almost never breaks the rules. For our distant ancestors, this was a feasible goal. Today, not so much. To paraphrase my recent post:

Our norm-inference process is noisy, and gossip-based convergence isn’t remotely up to the task given our huge diverse population and vast space of possible behaviors. Setting aside our closest associates and gossip partners, if we consider the details of most people’s behavior, we will find rule-breaking fault with a lot of it. As they would if they considered the details of our behavior. We seem to live in a Sodom and Gomorrah of sin, with most people getting away unscathed with most of it. At the same time, we also suffer so many overeager busybodies applying what they see as norms to what we see as our own private business where their social norms shouldn’t apply.

Norm application isn’t remotely as obvious today as our evolved habit of automatic norms assumes. But we can’t simply take more time to think and discuss on the fly, as others will then see us as violating the meta-norm, and infer that we are unprincipled blow-with-the-wind types. The obvious solution: more systematic preparation.

People tend to presume that the point of studying ethics and norms is to follow them more closely. Which is why most people are not interested for themselves, but think it is good for other people. But in fact such study doesn’t have that effect. Instead, there should be big gains to distinguishing which norms to follow more versus less closely. Whether for purely selfish purposes, or for grand purposes of helping the world, study and preparation can help one to better identify the norms that really matter, from the ones that don’t.

In each area of life, you could try to list many possibly relevant norms. For each one, you can try to estimate how expensive it is to follow, how much the world benefits from such following, and how likely others are to notice and punish violations. Studying norms together with others is especially useful for figuring out how many people are aware of each norm, or consider it important. All this can help you to prioritize norms, and make a plan for which ones to follow how eagerly. And then practice your plan until your new habits become automatic.
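
As a minimal sketch of such preparation, one could score each norm by the expected net value of following it, then sort. The norms, weights, and numbers below are all hypothetical:

```python
# For each norm: cost to follow it, benefit to the world from following,
# chance others notice a violation, and the penalty if they do.
norms = {
    "don't lie to close associates": dict(cost=1, world=8, p_catch=0.6, penalty=9),
    "keep small promises":           dict(cost=2, world=5, p_catch=0.5, penalty=6),
    "signal outrage on cue":         dict(cost=5, world=0, p_catch=0.3, penalty=2),
    "obscure etiquette rules":       dict(cost=3, world=1, p_catch=0.1, penalty=1),
}

def net_value(n, altruism=0.5):
    # Value of following vs. violating: expected punishment avoided,
    # plus a (weighted) benefit to the world, minus compliance cost.
    return n["p_catch"] * n["penalty"] + altruism * n["world"] - n["cost"]

for name in sorted(norms, key=lambda k: net_value(norms[k]), reverse=True):
    print(f"{net_value(norms[name]):+5.1f}  {name}")
```

On these made-up numbers, the first two norms are worth following eagerly and the last two are not; your own weights would of course differ.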

As a result, instead of just obeying each random rule that pops into your head in each random situation that you encounter, you can follow only the norms that you’ve decided are worth the bother. And if variation in norm following is a big part of variation in success, you may succeed substantially more.


“Human” Seems Low Dimensional

Imagine that there is a certain class of “core” mental tasks, where a single “IQ” factor explains most variance in such task ability, and no other factors explain much variance. If one main factor explains most variation, and no other factors do, then variation in this area is basically one dimensional plus local noise. So to estimate performance on any one focus task, usually you’d want to average over abilities on many core tasks to estimate that one dimension of IQ, and then use IQ to estimate ability on that focus task.

Now imagine that you are trying to evaluate someone on a core task A, and you are told that ability on core task B is very diagnostic. That is, even if a person is bad on many other random tasks, if they are good at B you can be pretty sure that they will be good at A. And even if they are good at many other tasks, if they are bad at B, they will be bad at A. In this case, you would know that this claim about B being very diagnostic on A makes the pair A and B unusual among core task pairs. If there were a big clump of tasks strongly diagnostic about each other, that would show up as another factor explaining a noticeable fraction of the total variance. Making this world higher dimensional. So this claim about A and B might be true, but your prior is against it.
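
A minimal simulation may help; the factor loading and noise levels are made up. Each person gets one general factor plus independent noise per task, and we compare how well a single core task B predicts focus task A versus how well an average over many core tasks does:

```python
import random

def person(n_tasks=20, loading=0.8):
    # One general "IQ" factor, plus independent local noise per task.
    iq = random.gauss(0, 1)
    return [loading * iq + random.gauss(0, (1 - loading**2) ** 0.5)
            for _ in range(n_tasks)]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

people = [person() for _ in range(20_000)]
A = [p[0] for p in people]                        # the focus task
B = [p[1] for p in people]                        # one other core task
avg = [sum(p[2:]) / len(p[2:]) for p in people]   # average of the rest

print("corr(A, B)       =", round(corr(A, B), 2))    # ~0.64 (= loading^2)
print("corr(A, average) =", round(corr(A, avg), 2))  # ~0.79, notably higher
```

In this one-factor world no particular task B is specially diagnostic of A; the best predictor is always the averaged estimate of the single factor. A pair like the A and B described above would require a second factor, which this setup rules out by construction.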

Now consider the question of how “human-like” something is. Many indicators may be relevant to judging this, and one may draw many implications from such a judgment. In principle this concept of “human-like” could be high dimensional, so that there are many separate packages of indicators relevant for judging matching packages of implications. But anecdotally, humans seem to have a tendency to “anthropomorphize,” that is, to treat non-humans as if they were somewhat human in a simple low-dimensional way that doesn’t recognize many dimensions of difference. That is, things just seem more or less human. So the more ways in which something is human-like, the more you can reasonably guess that it will be human-like in other ways. This tendency appears in a wide range of ordinary environments, and its targets include plants, animals, weather, planets, luck, sculptures, machines, and software.


On Homo Deus

Historian Yuval Harari’s best-selling book Sapiens mostly talked about history. His new book, Homo Deus, won’t be released in the US until February 21, but I managed to find a copy at the Istanbul airport – it came out in Europe last fall. This post is about the book, and it is long and full of quotes; you are warned.


My Caplan Turing Test

At lunch today Bryan Caplan and I dug a bit into our disagreement, and now I’ll try to summarize his point of view. He can of course correct me.

Bryan sees sympathy feelings as huge influences on social outcomes. Not just feelings between people who know each other well, but also distant feelings between people who have never met. For example, if not for feelings of sympathy:

  1. Law and courts would often favor different disputants.
  2. Free workers would more often face harsh evaluations, punishments, and firing.
  3. Firm owners and managers would know much better which workers were doing good jobs.
  4. The US would invade and enslave Canada tomorrow.
  5. At the end of most wars, the victors would enslave the losers.
  6. Modern slaves would earn their owners much more than they would have as free workers.
  7. In the past, domestic, artisan, and city slaves, who were treated better than field slaves, would have been treated much more harshly.
  8. The slave population would have fallen less via gifts or purchase of freedom.
  9. Thus most of the world population today would be slaves.

These views are, to me, surprisingly different from the impression I get from reading related economics literatures. Bryan says I may be reading the wrong ones, but he hasn’t yet pointed me to the correct ones. As I read them, these usual economics literatures give different impressions:

  • Law and economics literature suggests efficiency usually decides who wins, with sympathy distortions having a real but minor influence.
  • Organization theory literature suggests far more difficulties in motivating workers and measuring their performance.
  • Slavery literature suggests slaves doing complex jobs were treated less harshly for incentive reasons, and would not have earned much more if treated more harshly. Thus modern slaves would also not earn much more as slaves.

Of course even if Bryan were right about all these claims, he needn’t be right in his confident opinion that the vast majority of biological humans will have about as much sympathy for ems as they do for mammals, and thus treat ems as harshly as we treat most mammals.

This sympathy-driven view doesn’t by itself predict Caplan’s strong (and not much explained) view that ems would also be very robot-like. But perhaps we might add to it a passion for domination – people driven by feelings to treat nicely creatures they respect might also be driven by feelings to dominate creatures they do not respect. Such a passion for dominance might induce biological humans to force ems into ultra docility, even if that came at a productivity cost.

Added 28July2016: Caplan grades my summary of his position. I’m mostly in the ballpark, but he elaborates a bit on why he thinks em slaves would be docile:

Docile slaves are more profitable than slaves with attitude, because owners don’t have to use resources to torture and scare them into compliance. That’s why owners sent rebellious slaves to “breakers”: to transform rebellious slaves into docile slaves. Sci-fi is full of stories about humans genetically engineered to be model slaves. Whole brain emulation is a quicker route to the same destination. What’s the puzzle?

For docility to be such a huge priority, relative to other worker features, em rebellion must happen often and impose big frequent costs. Docility doesn’t seem to describe our most productive workers today well, nor does it seem well suited when you want workers to be creative, think carefully, take the initiative, or persuade and inspire others. Either way, both frequent costly rebellions and extreme docility create big disadvantages for slaves relative to free workers, and so argue against most ems being slaves.
