Tag Archives: Morality

Downfall

On Bryan Caplan’s recommendation, I just watched the movie Downfall. To me, it depicts an extremely repulsive and reprehensible group of people, certainly compared to any real people I’ve ever met. So much so that I wonder about its realism, though the sources I’ve found all seem to praise its realism. Thus I was quite surprised to hear that critics complained the movie didn’t portray its subjects as evil enough:

Downfall was the subject of dispute … with many concerned of Hitler’s role in the film as a human being with emotions in spite of his actions and ideologies. … The German tabloid Bild asked, “Are we allowed to show the monster as a human being?” in their newspaper. … Cristina Nord from Die Tageszeitung criticized the portrayal, and said that though it was important to make films about perpetrators, “seeing Hitler cry” had not informed her on the last days of the Third Reich. Some … felt the time was right to “paint a realistic portrait” of Hitler. Eichinger replied to the response from the film by stating that the “terrifying thing” about Hitler was that he was human and “not an elephant or a monster from Mars”. Ganz said that he was proud of the film; though he said people had accused him of “humanizing” Hitler. (more)

For example, the New Yorker:

But I have doubts about the way [the makers’] virtuosity has been put to use. By emphasizing the painfulness of Hitler’s defeat Ganz has certainly carried out the stated ambition … he has made the dictator into a plausible human being. Considered as biography, the achievement (if that’s the right word) of “Downfall” is to insist that the monster was not invariably monstrous—that he was kind to his cook and his young female secretaries, loved his German shepherd, Blondi, and was surrounded by loyal subordinates. We get the point: Hitler was not a supernatural being; he was common clay raised to power by the desire of his followers. But is this observation a sufficient response to what Hitler actually did? (more)

The conclusion I have to draw here is that no remotely realistic depiction of real bad people would satisfy these critics. Most people insist on having cartoonish mental images of their exemplars of evil, images that would be contradicted by any remotely realistic depiction of the details of their actual lives. I’d guess this is also a problem on the opposite end of the spectrum; any remotely realistic depiction of the details of the life of someone that people consider saintly, like Jesus Christ or Martin Luther King, would be seen by many as a disrespectful takedown.

This is probably the result of a signaling game wherein people strive to show how moral they are by thinking even more highly of standard exemplars of good and even more lowly of standard exemplars of bad, compared to ordinary people. This helps me to understand self-righteous internet mobs a bit better; once a target has been labeled evil, most mob members probably don’t want to look too close at that target’s details, for fear that such details would make him or her seem more realistic, and thus less evil. Once we get on our self-righteous high horse, we prefer to look up to our ideals in the sky, and not down at the complex details on the ground.

Added 11p: This attitude of course isn’t optimal for detecting and responding to real evil in the world. But we care more about showing off just how outraged we are at evil than we care about effective response to it.


Checkmate On Blackmail

Often in chess, at least among novices, one player doesn’t know that they’ve been checkmated. When the other player declares “checkmate”, this first player is surprised; that claim contradicts their intuitive impression of the board. So they have to check each of their possible moves, one by one, to see that none allow an escape.

The same thing sometimes happens in analysis of social policy. Many people intuitively want to support policy X, and they usually want to believe that this is due to the good practical consequences of X. But if the policy is simple enough, one may be able to iterate through all the possible consequential arguments for X and find that they all fail. Or perhaps more realistically, iterate through hundreds of the most promising actual consequential arguments that have been publicly offered so far, and both find them all wanting, and find that almost all of them are repetitions, suggesting that few new arguments are to be found.

That is, it is sometimes possible with substantial effort to say that policy X has been checkmated, at least in terms of known consequentialist supporting arguments. Yes, many social policy chess boards are big, and so it can take a lot of time and expertise to check all the moves. But sometimes a person has done that checking on policy X, and then frequently encounters others who have not so checked. Many of these others will defend X, basically randomly sampling from the many failed arguments that have been offered so far.

In chess, when someone says “checkmate”, you tend to believe them, even if you have enough doubt that you still check. But in public debates on social policy, few people accept a claim of “checkmate”, as few such debates ever go into enough depth to go through all the possibilities. Typically many people are willing to argue for X, even if they haven’t studied in great detail the many arguments for and against X, and even when they know they are arguing with someone who has studied such detail. Because X just feels right. When such a supporter makes a particular argument, and is then shown how that doesn’t work, they usually just switch to another argument, and then repeat that process until the debate clock runs out. Which feels pretty frustrating to the person who has taken the time to see that X is in fact checkmated.

We need a better social process for together identifying such checkmated policies X. Perhaps a way that a person can claim such a checkmate status, be tested sufficiently thoroughly on that claim, and then win a reward if they are right, and lose a stake if they are wrong. I’d be willing to help to create such a process. Of course we could still keep policies X on our books; we’d just have to admit we don’t have good consequential arguments for them.

As an example, let me offer blackmail. I’ve posted seven times on this blog on the topic, and in one of my posts I review twenty related papers that I’d read. I’ve argued many times with people on the topic, and I consistently hear them repeat the same arguments, which all fail. So I’ll defend the claim that not only don’t we have good strong consequential arguments against blackmail, but that this fact can be clearly demonstrated to smart reasonable people willing to walk through all the previously offered arguments.

To review and clarify, blackmail is a threat that you might gossip about someone on a particular topic, if they don’t do something else you want. The usual context is that you are allowed to gossip or not on this topic, and if you just mention that you know something, they are allowed to offer to compensate you to keep quiet, and you are allowed to accept that offer. You just can’t be the person who makes the first offer. In almost all other cases where you are allowed to do or not do something, at your discretion, you are allowed to make and accept offers that compensate you for one of these choices. And if a deal is legal, it rarely matters who proposes the deal. Blackmail is a puzzling exception to these general rules.

Most ancient societies simply banned salacious gossip against elites, but modern societies have deviated and allowed gossip. People today already have substantial incentives to learn embarrassing secrets about associates, in order to gain social rewards from gossiping about those to others. Most people suffer substantial harm from such gossip; it makes them wary about who they let get close to them, and induces them to conform more to social pressures regarding acceptable behaviors.

For most people, the main effect of allowing blackmail is to mildly increase the incentives to learn embarrassing secrets, and to not behave in ways that result in such secrets. This small effect makes it pretty hard to argue that for gossip incentives the social gains outweigh the losses, but that for the slightly stronger blackmail incentives, the losses outweigh the gains. However, for elites these incentive increases are far stronger, making elite dislike plausibly the main consequentialist force pushing to keep blackmail illegal.

In a few recent twitter surveys, I found that respondents declared themselves against blackmail at a 3-1 rate, evenly split between consequential and other reasons for this position. However, they said blackmail should be legal in many particular cases I asked about, depending on what exactly you sought in exchange for your keeping someone’s secret. For example, they 12-1 supported getting your own secret kept, 3-2 getting someone to treat you fairly, and 1-1 getting help with child care in a medical crisis.

These survey results are pretty hard to square with consequential justifications, as the consequential harm from blackmail should mainly depend on the secrets being kept, not on the kind of compensation gained by the blackmailer. Which suggests that non-elite opposition to blackmail is mainly because blackmailers look like they have bad motives, not because of social consequences to others. This seems supported by the observation that women who trash each other’s reputations via gossip tend to consciously believe that they are acting helpfully, out of concern for their target.

As examples of weak arguments, Tyler Cowen just offered four. First, he says even if blackmail has good consequences, given current world opinion it would look bad to legalize it. (We should typically not do the right thing if that looks bad?) Second, he says negotiating big important deals can be stressful. (Should most big deals be banned?) Third, it is bad to have social mechanisms (like gossip?) that help enforce common social norms on sex, gender and drugs, as those are mistaken. Fourth, making blackmail illegal somehow makes it easier for your immediate family to blackmail you, and that’s somehow better (both somehows are unexplained).

I’d say the fact that Tyler is pushed to such weak tortured arguments supports my checkmate claim: we don’t have good strong consequential arguments for making gossiper-initiated blackmail offers illegal, relative to making gossip illegal or allowing all offers.

Added 18Feb: Some say a law against negative gossip is unworkable. But note, not only did the Romans manage it, we now have slander/libel laws that do the same thing except we add an extra complexity that the gossip must be false, which makes those laws harder to enforce. We can and do make laws against posting nude pictures of a person who disapproves, or stealing info such as via hidden bugs or hacking into someone’s computer.


Do I Offend?

The last eight months have seen four episodes where many people on Twitter called me a bad offensive person, often via rude profanity, sometimes calling for me to be fired or arrested. These four episodes were: sex inequality and redistribution, chances of a delayed harassment complaint, morality-induced overconfidence on historical counterfactuals, and implicit harassment in A Star Is Born. While these topics have occupied only a small fraction of my thought over these months, and a much smaller fraction over my career, they may have disproportionate effects on my reputation. So I’ve tried to pay close attention to the reasons people give. 

I think I see a consistent story. While in these cases I have not made moral, value, or political claims, when people read small parts of what I’ve claimed or asked, they say they can imagine someone writing those words for the purpose of promoting political views they dislike. And not just mild views a bit on the other side of the political spectrum. No, they attribute to me the most extreme bad views imaginable, such as that I advocate rape, murder, slavery, and genocide. People say they are directly and emotionally traumatized by the offensive “creepy” feeling they get when they encounter someone with any prestige and audience seeming to publicly promote views with which they strongly disagree.

Some plausibly contributing factors here include my sometimes discussing sensitive topics, our increasing political polarization, the ease of making mobs and taking words out of context on Twitter, increasing ease of making new accusations similar to previous ones, and my terse and analytic writing style combined with my adding disclaimers re my allegiance to “correct” views. There’s also my following the standard poll practice of not telling those who answer polls the motives for those polls. And I’m a non-poor older white male associated with economics in general and GMU econ in particular; many see all these as indicators of bad political views. 

Digging a little deeper, trauma is plausibly increased by a poll format, which stokes fears that bad people will find out that they are not alone, and be encouraged to learn that many others share their views. I suspect this helps explain complaints that my poll population is not representative of my nation or planet.  

I also suspect bad faith. Long ago when I had two young kids, they would sometimes pick fights, for example on long car trips. One might start singing, to which the other would complain. We might agree that singing is too much for such a small space. Then the first might start to quietly hum, which we might decide is okay. Then the first might hum more loudly and triumphantly, while the second might writhe, cover their ears, and make a dramatic display of suffering.

Similarly, I suspect bad faith when some a) claim to experience “harassment” level suffering due to encountering political views with which they disagree, and yet are fine with high levels of sex, violence, and profanity in TV & movies, b) infer indirectly from my neutral analytical text that I promote the most extreme views imaginable, and c) do not notice that such claims are both a priori implausible and inconsistent with my large corpus of public writing; they either haven’t read much of it or purposely mischaracterize it. 

The idea of a large shared intellectual sphere wherein we can together analyze difficult topics holds a strong appeal to me. The main criterion for consideration in such a sphere should be the coherence and persuasiveness of specific relevant arguments. When evaluating each argument, there is usually little need to infer distantly related positions of those who offer arguments. Usually an argument either works or it doesn’t, regardless of who says it or why.

I try to live up to such ideals in how I write and talk. I hope that many who read and follow me share these ideals, and I appreciate their support. I’m thus not favorably inclined toward suggestions that I stop discussing sensitive topics, that I adopt a much more elaborate disclaimer style, or that I stop asking my followers questions, whether to prevent others from being traumatized by hearing their answers, or to keep followers from finding out that others share their opinions.

Added 29Dec:  I did 4 follow up polls to probe tendencies to take offense, focusing on the Nazi case. Respondents said the fraction of tweeters who actually wish Nazis had won WWII is tiny; 63% said it is <0.1%, though 4% gave >10%. And 79% said that this Nazi fraction is <3% among those “who mention `Nazis’ neutrally in a tweet, without explicitly praising or criticizing them, and who explicitly claim otherwise”, though 10% said >15%. Also, 58% said that for a tweet to be considered “offensive” or “harassment”, it would need to suggest a chance >50% that its author actually wishes Nazis had won WWII. However, 10% gave a threshold of <3% and 19% gave one <15%.

Finally, 43% gave a <3% “chance the author of a Twitter poll which asks about chance world would have been better off had Nazis won WWII, actually wishes that Nazis had won WWII”. However 20% gave a chance >50%, and 37% gave a chance >15%.

An obvious conclusion here is that, even among those who respond to my twitter polls, a substantial fraction have set hair-triggers for offense. For example, it seems >20% say that merely asking if the world would have been better off had Nazis won implies a high enough chance of a Nazi author to count as offensive. Explicit denials may help, but if the offended are much more vocal than others, a vocal choir of objection seems largely inevitable.

This makes me wonder again if the “silent majority” might benefit from juries or polls which show them that the vocal offended are a minority. Though that minority will likely also express offense re such juries or polls.

Added 28Jan: A recent burst of outrage on the A Star Is Born episode confirms this account to some extent.


Overconfidence From Moral Signaling

Tyler Cowen in Stubborn Attachments:

The real issue is that we don’t know whether our actions today will in fact give rise to a better future, even when it appears that they will. If you ponder these time travel conundrums enough, you’ll realize that the effects of our current actions are very hard to predict,

While I think we often have good ways to guess which action is more likely to produce better outcomes, I agree with Tyler that we face great uncertainty. Once our actions get mixed up with a big complex world, it becomes quite likely that, no matter what we choose, in fact things would have turned out better had we made a different choice.

But for actions that take on a moral flavor, most people are reluctant to admit this:

If you knew enough history you’d see >10% as the only reasonable answer, for most any big historical counterfactual. But giving that answer to the above risks making you seem pro-South or pro-slavery. So most people express far more confidence. In fact, more than half give the max possible confidence!

I initially asked a similar question on if the world would have been better off overall if Nazis had won WWII, and for the first day I got very similar answers to the above. But I made the above survey on the South for one day, while I gave two days for the Nazi survey. And in its second day my Nazi survey was retweeted ~100 times, apparently attracting many actual pro-Nazis:

Yes, in principle the survey could have attracted wise historians, but the text replies to my tweet don’t support that theory. My tweet survey also attracted many people who denounced me in rude and crude ways as personally racist and pro-Nazi for even asking this question. And suggested I be fired. Sigh.

Added 13Dec: Many call my question ambiguous. Let’s use x to denote how well the world turns out. There is x0, how well the world actually turned out, and x|A, how well the world would have turned out given some counterfactual assumption A. Given this terminology, I’m asking for P(x>x0|A). You may feel sure you know x0, but you should not feel sure about x|A; for that you should have a probability distribution.
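The point that x|A deserves a distribution, not a point estimate, can be sketched in a few lines of Python. This is only a toy illustration; the normal distribution and its parameters are my assumptions here, not anything from the post:

```python
import random

random.seed(0)

x0 = 0.0  # how well the world actually turned out (normalized baseline)

def sample_x_given_A():
    # Assumed toy model: under counterfactual A, world quality is very
    # uncertain, centered well below x0 but with a wide spread.
    return random.gauss(-2.0, 3.0)

samples = [sample_x_given_A() for _ in range(100_000)]
p_better = sum(x > x0 for x in samples) / len(samples)

# Even with a mean far below x0, wide uncertainty leaves a
# non-trivial P(x > x0 | A) -- here roughly a quarter.
print(p_better)
```

The takeaway matches the post: confidence that the actual outcome was better is compatible with assigning the counterfactual a probability well above 10%, once you model x|A as a distribution rather than a single number.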


Moral Choices Reveal Preferences

Tyler Cowen has a new book, Stubborn Attachments. In my next post I’ll engage his book’s main claim. But in this post I’ll take issue with one point that is to him relatively minor, but is to me important: the wisdom of the usual economics focus on preferences:

Sometimes my fellow economists argue that “satisfying people’s preferences” is the only value that matters, because in their view it encapsulates all other relevant values. But that approach doesn’t work. It is not sufficiently pluralistic, as it also matters whether our overall society encompasses standards of justice, beauty, and other values from the plural canon. “What we want” does not suffice to define the good. Furthermore, we must often judge people’s preferences by invoking other values external to those preferences. …

Furthermore, if individuals are poorly informed, confused, or downright inconsistent— as nearly all of us are, at times— the notion of “what we want” isn’t always so clear. So while I am an economist, and I will use a lot of economic arguments, I won’t always side with the normative approach of my discipline, which puts too much emphasis on satisfying preferences at the expense of other ethical values. … We should not end civilization to do what is just, but justice does sometimes trump utility. And justice cannot be reduced to what makes us happy or to what satisfies our preferences. …

In traditional economics— at least prior to the behavioral revolution and the integration with psychology— it was commonly assumed that what an individual chooses, or would choose, is a good indicator of his or her welfare. But individual preferences do not always reflect individual interests very well. Preferences as expressed in the marketplace often appear irrational, intransitive, spiteful, or otherwise morally dubious, as evidenced by a wide range of vices, from cravings for refined sugar to pornography to grossly actuarially unfair lottery tickets. Given these human imperfections, why should the concept of satisfying preferences be so important? Even if you are willing to rationalize or otherwise defend some of these choices, in many cases it seems obvious that satisfying preferences does not make people happier and does not make the world a better place.

Tyler seems to use a standard moral framework here, one wherein we are looking at others and trying to agree among ourselves about what moral choices to make on their behalf. (Those others are not included in our conversation.) When we look at those other people, we can use the choices that they make to infer their wants (called “revealed preferences”), and then make our moral choices in part to help them get what they want.

In this context, Tyler accurately describes common morality, in the sense that the moral choices of most people do not depend only on what those other object people want. Common moral choices are instead often “paternalistic”, giving people less of what they want in order to achieve other ends and to satisfy other principles. We can argue about how moral such choices actually are, but they clearly embody a common attitude to morality.

However, if these moral choices that we are to agree on satisfy some simple consistency conditions, then formally they imply a set of “revealed preferences”.  (And if they do not actually satisfy these conditions, we can see them as resulting from consistent preferences plus avoidable error.) They are “our” preferences in this moral choice situation. Looked at this way, it is just not remotely true that “ ‘What we want’ does not suffice to define the good” or that “Justice cannot be reduced to … what satisfies our preferences.” Our concepts of the good and justice are in fact exactly described by our moral preferences, the preferences that are revealed by our various consistent moral choices. It is then quite accurate to say that our moral preferences encapsulate all our relevant moral values.

Furthermore, the usual economics framework is wise and insightful because we in fact quite often disagree about moral choices when we take moral action. This framework that Tyler seems to use above, wherein we first agree on which acts are moral and then we act, is based on an often quite unrealistic fiction. We instead commonly each take moral actions in the absence of agreement. In such cases we each have a different set of moral preferences, and must consider how to take moral action in the context of our differing preferences.

At this point the usual economists’ framework, wherein different agents have different preferences, becomes quite directly relevant. It is then useful to think about moral Pareto improvements, wherein we each get more of what we want morally, and moral deals, where we make verifiable agreements to achieve moral “gains from trade”. The usual economist tools for estimating and calculating our wants and the location of win-win improvements then seem quite useful and important.
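The notion of a moral Pareto improvement can be made concrete with a small sketch. The utility numbers below are hypothetical, chosen only to illustrate the definition:

```python
def is_pareto_improvement(status_quo, deal):
    # A (weak) Pareto improvement leaves no party worse off by their own
    # moral preferences, and makes at least one party strictly better off.
    no_one_worse = all(d >= s for s, d in zip(status_quo, deal))
    someone_better = any(d > s for s, d in zip(status_quo, deal))
    return no_one_worse and someone_better

# Hypothetical: three parties' moral utilities before and after a deal.
status_quo = [3, 5, 2]
deal = [4, 5, 3]  # parties 1 and 3 gain, party 2 is unaffected

print(is_pareto_improvement(status_quo, deal))       # True
print(is_pareto_improvement(status_quo, [4, 4, 3]))  # party 2 loses -> False
```

Finding such deals is exactly the kind of win-win calculation that the usual economist tools are built for, here applied to differing moral preferences rather than ordinary market wants.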

In this situation, we each seek to influence the resulting set of actual moral choices in order to achieve our differing moral preferences. We might try to achieve this influence via preaching, threats, alliances, wars, or deals; there are many possibilities. But whatever we do, we each want any analytical framework that we use to help us in this process to reflect our actual differing moral preferences. Yes, preferences can be complex, must be inferred from limited data on our choices, and yes we are often “poorly informed, confused, or downright inconsistent.” But we rarely say “why should the concept of satisfying [my moral] preferences be so important?”, and we are not at all indifferent to instead substituting the preferences of some other party, or the choice priorities of some deal analyst or assistant like Tyler. As much as possible, we seek to have the actual moral choices that result reflect our moral preferences, which we see as a very real and relevant thing, encapsulating all our relevant moral values.

And of course we should expect this sort of thing to happen all the more in a more inclusive conversation, one where the people about whom we are making moral choices become part of the moral “dealmaking” process. That is, when it is not we trying to agree among ourselves about what we should do for them, but when instead we all talk together about what to do for us all. In this more political case, we don’t at all say “my preferences are poorly informed, confused, and inconsistent and hardly matter so they don’t deserve much consideration.” Instead we each focus on causing choices that better satisfy our moral preferences, as we understand them. In this case, the usual economist tools and analytical frameworks based on achieving preferences seem quite appropriate. They deserve to sit center stage in our analysis.


Avoiding Blame By Preventing Life

If morality is basically a package of norms, and if norms are systems for making people behave, then each individual’s main moral priority becomes: to avoid blame. While the norm system may be designed to on average produce good outcomes, when that system breaks then each individual has only weak incentives to fix it. They mainly seek to avoid blame according to the current broken system. In this post I’ll discuss an especially disturbing example, via a series of four hypothetical scenarios.

1. First, imagine we had a tech that could turn ordinary humans into productive zombies. Such zombies can still do most jobs effectively, but they no longer have feelings or an inner life, and from the outside they also seem dead inside, lacking passion, humor, and liveliness. Imagine that someone proposed to use this tech on a substantial fraction of the human population. That is, they propose to zombify those who do jobs that others see as boring, routine, and low status, like collecting garbage, cleaning bedpans, or sweeping floors. As in this scenario living people would be turned into dead zombies, this proposal would probably be widely seen as genocide, and soundly rejected.

2. Second, imagine someone else proposes the following variation: when a new child of a parent seems likely enough to grow up to take such a low status job, this zombie tech is applied very early to the fetus. So no non-zombie humans are killed; they are just prevented from existing. Zombie kids are able to learn, and eventually learn to do those low status jobs. Thus technically this is not genocide, though it could be seen as the extermination of a class. And many parents would suffer from losing their chance to raise lively humans. Whoever proposed all this is probably considered evil, and their proposal rejected.

3. Third, imagine combining this proposal with another tech that can reliably induce identical twins. This will allow the creation of extra zombie kids. That is, each birth to low status parents is now of identical twins, one of which is an ordinary kid, and the other is a zombie kid. If parents don’t want to raise zombie kids, some other organization will take over that task. So now the parents get to have all their usual lively kids, and the world gains a bunch of extra zombie kids who grow up to do low status jobs. Some may support this proposal, but surely many others will find it creepy. I expect that it would be pretty hard to create a political consensus to support this proposal.

While in the first scenario people were killed, and in the second scenario parents were deprived, this third scenario is designed to take away these problems. But this third proposal still has two remaining problems. First, if we have a choice between creating an empty zombie and a living feeling person who finds their life worth living, this second option seems to result in a better world. Which argues against zombies. Second, if zombies seem like monsters, supporters of this proposal might be blamed for creating monsters. And as the zombies look a lot like humans, many will see you as a bad person if you seem inclined to or capable of treating them badly. It looks bad to be willing to create a lower class, and to treat them like a disrespected lower class, if that lower class looks a lot like humans. So by supporting this third proposal, you risk being blamed.

4. My fourth and last scenario is designed to split apart these two problems with the third scenario, to make you choose which problem you care more about. Imagine that robots are going to take over most all human jobs, but that we have a choice about which kind of robot they are. We could choose human-like robots, who act lively with passion and humor, and who inside have feelings and an inner life. Or we could choose machine-like robots, who are empty inside and also look empty on the outside, without passion, humor, etc.

If you are focused on creating a better world, you’ll probably prefer the human-like robots, as that choice results in more creatures who find their lives worth living. But if you are focused on avoiding blame, you’ll probably prefer the machine-like robots, as few will blame you for that choice. With that choice the creatures you create look so little like humans that few will blame you for creating such creatures, or for treating them badly.

I recently ran a 24 hour poll on Twitter about this choice, a poll to which 700 people responded. Of those who made a choice, 77% picked the machine-like robots:

Maybe my Twitter followers are unusual, but I doubt that a majority of a more representative poll would pick the human-like option. Instead, I think most people prefer the option that avoids personal blame, even if it makes for a worse world.


Spaceship Earth Explores Culture Space

Space: the final frontier. These are the voyages of the starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before. (more)

Many love science fiction stories of brave crews risking their lives to explore strange new spaces, stories much like the older adventure stories about European explorers risking their lives centuries ago to explore new places on Earth. (Yes, often to conquer and enslave the locals.) Many lament that we don’t have as many real such explorer stories today, and they say that we should support more human space exploration now in order to create such real heroic exploration stories. Even though human space exploration is crazy expensive now, and offers few scientific, economic, or humanity-survival gains anytime soon. They say the good stories will be worth all that cost.

Since Henry George first invoked it in 1879, many have used the metaphor of Spaceship Earth to call attention to our common vulnerability and limited resources:

Spaceship Earth … is a world view encouraging everyone on Earth to act as a harmonious crew working toward the greater good. … “we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions” … “We travel together, passengers on a little space ship, dependent on its vulnerable reserves of air and soil.” (more)

In this post, I want to suggest that Spaceship Earth is in fact a story of a brave crew risking much to explore a strange new territory. But the space we explore is more cultural than physical.

During the industrial era, the world economy has doubled roughly every fifteen years. Each such doubling of output has moved us into new uncharted cultural territory. This growth has put new pressures on our environment, and has resulted in large and rapid changes to our culture and social organization.

This growth results mostly from innovation, and most innovations are small and well tested against local conditions, giving us little reason to doubt their local value. But all these small changes add up to big overall moves that are often entangled with externalities, coordination failures, and other reasons to doubt their net value.

So humanity continues to venture out into new untried and risky cultural spaces, via changes to cultural conditions with which we don’t have much experience, and which thus risk disaster and destruction. The good crew of Spaceship Earth should carefully weigh these risks when considering where and how fast to venture.

Consider seven examples:

  1. While humans seem to be adapting reasonably well to global warming, we risk big lumpy disruptive changes to Atlantic currents and Antarctic ice. Ecosystems also seem to be adapting okay, but we are risking big collapses to them as well.
  2. While ancient societies gave plenty of status and rewards to fertility, today high fertility behaviors are mostly seen as low status. This change is entwined with complex changes in gender norms and roles, but one result is that human fertility is falling toward below replacement in much of the world, and may fall much further. Over centuries this might produce a drastic decrease in world population, and productivity-threatening decreases in the scale of world production.
  3. While the world has become much more peaceful over the last century, this has been accompanied by big declines in cultural support for military action and tolerance for military losses. Is the world now more vulnerable to conquest by a new military power with more local cultural support and tolerance for losses?
  4. Farmer era self-control and self-discipline has weakened over time, in part via weaker religion. This has weakened cultural support for work and cultural suspicion of self-indulgence in sex, drugs, and media. So we now see less work and more drug addiction. How far will we slide?
  5. Via new media, we are exploring brave new worlds of how to make friends, form identities, achieve status, and learn about the world. As many have noted, these new ways risk many harms to happiness and social capital.
  6. Innovation was once greatly aided by tinkering, i.e., the ability to take apart and change familiar devices. Such tinkering is much less feasible in modern devices. Increasing regulation and risk aversion is also interfering with innovation. Are we as a result risking cultural support for innovation?
  7. Competition between firms has powered rapid growth, but winning bets on intangible capital is allowing leading firms to increasingly dominate industries. Does this undermine the competition that we’ve relied on so far to power growth?

The most common framing today for such issues is one of culture war. You ask yourself which side feels right to you, commiserate with your moral allies, then puff yourself up with righteous indignation against those who see things differently, and go to war with them. But we might do better to frame these as reasonable debates on how much to risk as we explore culture space.

In a common scene from exploration stories, a crew must decide whether to take a big risk, or choose among several risks. Some in the crew see a risk as worth the potential reward, while others want to search longer for better options, or retreat to try again another day. They may disagree on the tradeoff, but they all agree that both the risks and the rewards are real. It is just a matter of tradeoff details.

We might similarly frame key “value” debates as reasonable differing judgements on what chances to take as spaceship Earth explores culture space. Those who love new changes could admit that we are taking some chances in adopting them so quickly, with so little data to go on, while those who are suspicious of recent changes could admit that many seem to like their early effects. Rather than focus on directly evaluating changes, we might focus more on setting up tracking systems to watch for potential problems, and arranging for repositories of old culture practices that might help us to reverse changes if things go badly. And we might all see ourselves as part of a grand heroic adventure story, wherein a mostly harmonious crew explores a great strange cosmos of possible cultures.


Sanctimonious Econ Critics

The New Yorker review of Elephant in the Brain raved about Cents and Sensibility, by Gary Saul Morson and Morton Schapiro, a book said to confirm that “intellectual overextension is often found in economics.” Others have similarly raved. But I don’t care much for this book, so let me explain why. (Be warned: this post is LONG.)

In its first sentence, the book declares its aim:

This book creates a dialogue between two fields that rarely have anything to say to each other: economics and the humanities. We mean to show how that dialogue could be conducted and why it has a great deal to contribute. (p.1)

Morson and Shapiro seem to want the sort of “dialogue” where one side talks and the other just listens. All but one chapter elaborates how economists should listen to the humanities, and the one remaining chapter is on how some parts of the humanities should listen to another part, not to economists. There’s only a two page section near the end on “What Humanists Can Learn From Economists,” which even then can’t resist talking more about what economists can learn:

Economists could learn from humanists the complexity of ethical issues, the need for stories, the importance of empathy, and the value of unformalizable good judgement. But humanists could also learn from economists how to think about scarce resources, about the nature of efficiency, and the importance of rational decision making. (p.261)

So what exactly can we economists learn? Continue reading "Sanctimonious Econ Critics" »


Today, Ems Seem Unnatural

The main objections to “test tube babies” weren’t about the consequences for mothers or babies, they were about doing something “unnatural”:

Given the number of babies that have now been conceived through IVF — more than 4 million of them at last count — it’s easy to forget how controversial the procedure was during the time when, medically and culturally, it was new. … They weren’t entirely sure how IVF was different from cloning, or from the “ethereal conception” that was artificial insemination. They balked at the notion of “assembly-line fetuses grown in test tubes.” … For many, IVF smacked of a moral overstep — or at least of a potential one. … James Watson publicly decried the procedure, telling a Congressional committee in 1974 that … “All hell will break loose, politically and morally, all over the world.” (more)

Similarly, for most ordinary people, the problem with ems isn’t that the scanning process might kill the original human, or that the em might be an unconscious zombie due to their new hardware not supporting consciousness. In fact, people more averse to death have fewer objections to ems, as they see ems as a way to avoid death. The main objections to ems are just that ems seem “unnatural”:

In four studies (including pilot) with a total of 952 participants, it was shown that biological and cultural cognitive factors help to determine how strongly people condemn mind upload. … Participants read a story about a scientist who successfully transfers his consciousness (uploads his mind) onto a computer. … In the story, the scientist injects himself with nano-machines that enter his brain and substitute his neurons one-by-one. After a neuron has been substituted, the functioning of that neuron is copied (uploaded) on a computer; and after each neuron has been copied/uploaded the nano-machines shut down, and the scientist’s body falls on the ground completely limp. Finally, the scientist wakes up inside the computer.

The following variations made NO difference:

[In Study 1] we modified our original vignette by changing the target of mind upload to be either (1) a computer, (2) an android body, (3) a chimpanzee, or (4) an artificial brain. …

[In Study 2] we changed the story in a manner that the scientist merely ingests the nano-machines in a capsule form. Furthermore, we used a 2 × 2 experimental set-up to investigate whether the body dying on a physical level [heart stops or the brain stops] impacts the condemnation of the scientist’s actions. We also investigated whether giving participants information on how the transformation feels for the scientist once he is in the new platform has an impact on the results.

What did matter:

People who value purity norms and have higher sexual disgust sensitivity are more inclined to condemn mind upload. Furthermore, people who are anxious about death and condemn suicidal acts were more accepting of mind upload. Finally, higher science fiction literacy and/or hobbyism strongly predicted approval of mind upload. Several possible confounding factors were ruled out, including personality, values, individual tendencies towards rationality, and theory of mind capacities. (paper; summary; HT Stefan Schubert)

As with IVF, once ems are commonplace they will probably also come to seem less unnatural; strange never-before-seen possibilities evoke more fear and disgust than common things, unless those common things seem directly problematic.


Automatic Norm Lessons

Pity the modern human who wants to be seen as a consistently good person who almost never breaks the rules. For our distant ancestors, this was a feasible goal. Today, not so much. To paraphrase my recent post:

Our norm-inference process is noisy, and gossip-based convergence isn’t remotely up to the task given our huge diverse population and vast space of possible behaviors. Setting aside our closest associates and gossip partners, if we consider the details of most people’s behavior, we will find rule-breaking fault with a lot of it. As they would if they considered the details of our behavior. We seem to live in a Sodom and Gomorrah of sin, with most people getting away unscathed with most of it. At the same time, we also suffer so many overeager busybodies applying what they see as norms to what we see as our own private business where their social norms shouldn’t apply.

Norm application isn’t remotely as obvious today as our evolved habit of automatic norms assumes. But we can’t simply take more time to think and discuss on the fly, as others will then see us as violating the meta-norm, and infer that we are unprincipled blow-with-the-wind types. The obvious solution: more systematic preparation.

People tend to presume that the point of studying ethics and norms is to follow them more closely. Which is why most people are not interested for themselves, but think it is good for other people. But in fact such study doesn’t have that effect. Instead, there should be big gains to distinguishing which norms to follow more versus less closely. Whether for purely selfish purposes, or for grand purposes of helping the world, study and preparation can help one to better identify the norms that really matter, from the ones that don’t.

In each area of life, you could try to list many possibly relevant norms. For each one, you can try to estimate how expensive it is to follow, how much the world benefits from such following, and how likely others are to notice and punish violations. Studying norms together with others is especially useful for figuring out how many people are aware of each norm, or consider it important. All this can help you to prioritize norms, and make a plan for which ones to follow how eagerly. And then practice your plan until your new habits become automatic.
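To make the prioritization idea concrete, here is a minimal sketch of the kind of estimate described above: each norm gets a net score equal to its benefit to the world, plus the expected punishment avoided by following it, minus the cost of following it. All the norms and numbers are made-up illustrations, not data from the post.

```python
def norm_priority(cost_to_follow, world_benefit, punish_chance, punish_cost):
    """Net expected value of following a norm rather than violating it."""
    return world_benefit + punish_chance * punish_cost - cost_to_follow

# Hypothetical example norms with illustrative estimates.
norms = {
    "don't litter":
        norm_priority(cost_to_follow=1.0, world_benefit=2.0,
                      punish_chance=0.3, punish_cost=5.0),
    "always dress formally":
        norm_priority(cost_to_follow=4.0, world_benefit=0.5,
                      punish_chance=0.1, punish_cost=2.0),
}

# Plan: follow only the norms whose net value is positive,
# highest-value first; skip the rest.
plan = sorted((n for n, v in norms.items() if v > 0),
              key=lambda n: -norms[n])
print(plan)
```

On these made-up numbers, littering norms are worth following (net value 2.5) while formal dress is not (net value −3.3), so only the first makes the plan. The point is only the shape of the calculation, not the particular estimates.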

As a result, instead of just obeying each random rule that pops into your head in each random situation that you encounter, you can follow only the norms that you’ve decided are worth the bother. And if variation in norm following is a big part of variation in success, you may succeed substantially more.
