
Protecting Hypocritical Idealism

I’m told that soldiers act a lot more confident and brave when they are far from battle than when it looms immediately in front of them.

When presented with descriptions of how most citizens of Nazi Germany didn’t much resist or oppose the regime, most people claim they would have done differently. Which of course is pretty unlikely for most of them. But there’s an obvious explanation for this “social desirability bias”. Their subconscious expects a larger positive payoff from presenting an admirable view of themselves to associates, relative to the smaller negative payoff from making themselves more likely to actually do what they said, should they ever actually find themselves in a Nazi regime.

When the covid pandemic first appeared, elites and experts voiced their long-standing position that masks and travel restrictions were not effective in a pandemic. Which let them express their pro-inclusive global-citizen liberal attitudes. Their subconscious foresaw only a small chance that they’d actually face a real and big pandemic. And if that ever happened, they could and did lower the cost of this previous attitude by just suddenly and without explanation changing their minds.

For many decades it has been an article of faith among a large fraction of these same sorts of experts and elites that advanced aliens must be peaceful egalitarian eco-friendly non-expansionist powers, who would, if they saw us, scold and lecture us about our wars, nukes, capitalism, expansion, and eco-damage. Much as our descendants are presented to be in Star Trek or the Culture novels.

Because in this scenario aliens would be the highest status creatures around, and it is important to these humans that the highest in status agree with their politics. I confidently predict that their attitudes would quickly change if they were actually confronted with unknown but very real alien powers nearby.

This predictable hypocrisy could be exposed if people would back these beliefs with bets. But of course they don’t. They aren’t exactly sure why; most just feel “uncomfortable” with that. Visible and open betting market odds that disagreed with them would also expose this hypocrisy, but most such people also oppose allowing those markets, again mostly for vague “uncomfortable” reasons. Their unconscious knows better what those reasons are, but also knows not to tell.


Why We Fight Over Fiction

We tell stories with language, and so prefer to tell the kind of stories that ordinary language can describe well.

Consider how language can describe a space of physical stuff and how to navigate through that stuff. In a familiar sort of space, a few sparse words can evoke a vivid description, such as of a city street or a meadow. And a few words relating to landmarks in such a space can be effective at telling you how to navigate from one place to another.

But imagine an arbitrary space of partially-opaque swirling strangeness, in a highly curved 11-dimensional space. In principle our most basic and general spatial language could describe this too, and instruct navigation there. But in practice that would require a lot more words, and slow the story to a crawl. So few authors would try, though a filmmaker might try just using visuals.

Or consider stories with non-human minds. In principle those who study minds in the abstract can conceive of a vast space of possible minds, and can use a basic and general language of mental acts to describe how each such mind might make a decision, or send a communication, and what those might be. But in practice such descriptions would be long, boring, and unfamiliar to most readers.

So in practice even authors writing about aliens or AIs stick to describing human-like minds, where their usual language for describing what actors decide and say is fast, fluid, and relatable. Authors even prefer human characters with familiar minds, and so avoid characters who think oddly, such as those with autism.

Just as authors focus on telling stories in familiar spaces with familiar minds, they also focus on telling stories in familiar moral universes. This effect is, if anything, even stronger than the space and mind effects, as moral colors are even more central to our need for stories. Compared to other areas of our lives, we especially want our stories to help us examine and affirm our moral stances.

In a familiar moral universe, there may be competing considerations regarding which acts are moral, making it sometimes hard to decide if an act is moral. Other considerations may weigh against morality, and reader/viewers may not always sympathize most with the most moral characters, who may not win in the end. Moral characters may have unattractive features (like being ugly). There may even be conflicts between characters who see different familiar moral universes.

These are the familiar sorts of “moral ambiguity” in stories said to have that feature, such as The Sopranos or Game of Thrones. But you’ll note that these are almost all stories told in familiar moral universes. By which I mean that we are quite familiar with how to morally evaluate the sort of actions that happen there. The set of acts is familiar, as are their consequences, and the moral calculus used to judge them.

But there is another sort of “moral ambiguity” that reader/viewers hate, and so authors studiously avoid. And that is worlds where we find it hard to judge the morality of actions, even when those actions have big consequences for characters. Where our usual quick and dirty moral language doesn’t apply very well. Where even though in principle our most basic and general moral languages might be able to work out rough descriptions and evaluations, in practice that would be tedious and unsatisfying.

And, strikingly, the large complex social structures and organizations that dominate our world are mostly not familiar moral universes to most of us. For example, big firms, agencies, and markets. The worlds of Moral Mazes and of Pfeffer’s Power. (In fiction: Jobs.) Our stories thus tend to avoid such contexts, unless they happen to allow an especially clear moral calculus. Such as a firm polluting to cause cancer, or a boss sexually harassing a subordinate.

As I’ve discussed before, our social world has changed greatly over the last few centuries. Our language has changed fast enough to describe the new physical objects and spaces that have arisen, at least those with which ordinary people must deal, if not the many new strange objects and spaces behind the scenes that enable our new world. But we have not been remotely as fast at coming to agree on moral stances toward the new choices possible in such social structures.

This is why our stories tend to take place in relatively old fashioned social worlds. Consider the popularity of the Western, or of pop science fiction stories like Star Wars that are essentially Westerns with more gadgets. Stories that take place in modern settings tend to focus on personal, romantic, and family relations, as these remain to us relatively familiar moral universes. Or on artist biopics. Or on big conflicts like war or corrupt police or politicians. For which we have comfortable moral framings.

Stories we write today set in, say, the 1920s feel more comfortable to us than stories set in the 2020s, or than stories written in the 1920s and set in that time. That is because stories written today can inherit a century of efforts to work out clearer moral stances on which 1920s actions were more moral. For example, as female suffrage is to our eyes clearly good, we can present any characters from then who doubted it as clearly evil in the eyes of good characters. As clear as if they tortured kittens. To our eyes, their world now has clearer moral colors, and stories set there work better as stories for us.

This is also why science fiction tends to make most people more wary of anticipated futures. The easiest engaging stories to tell about strange futures are about acts there that seem to violate the rules of our current moral universe. Like how nuclear rockets would spread radioactivity near their launch sites, instead of the solar civilization they enable. It is much harder to describe how new worlds will induce new moral universes.

This highlights an important feature of our modern world, and an important process that continues within it. Our social world has changed a lot faster than has our shared moral evaluations of typical actions possible in our new world. And our telling stories, and coming to agree on which stories we embrace, is a big part of creating such a fluid language of shared moral evaluations.

This helps to explain why we invest so much time and energy into fiction, far more than did any of our ancestors. Why storytellers are given high and activist-like status, and why we fight so much to convince others to share our beliefs on which stories are best. Our moral evaluations of the main big actions that influence our world today, and that built our world from past worlds, are still up for grabs. And the more we build such shared evaluations, the more we’ll be able to tell satisfying stories set in the world in which we live, rather than in the fantasy and historical worlds with which we must now make do.

(This post is an elaboration of this Twitter thread.)


ALL Big Punishment Is “Cruel”

Cruel – willfully causing pain or suffering to others, or feeling no concern about it.

Cruelty is pleasure in inflicting suffering or inaction towards another’s suffering when a clear remedy is readily available. …affirmative violence is not necessary for an act to be cruel. … are four distinct conceptions of cruelty. … first … above in degree and beyond in type the [suffering] allowed by applicable norms. … second … fault of character consisting in deriving personal delight from causing and witnessing suffering … punishment or other violence is a means to restore the offset in the cosmic order of the universe caused by a wrongdoing. Anything that goes beyond what is necessary for this restoration, then, is cruel. … third … the pain or the sense of degradation and humiliation experienced … fourth … accumulation of all the prior conceptions. (More)

A great many things seem quite wrong with the U.S. legal system, especially in criminal law. I’ve tried to work out comprehensive solutions, but I should also identify more modest changes, more likely to be adopted. And one big way our criminal law seems broken is our huge prison population, which is near a world and historical peak of residents per capita.

Many people say we define too many acts as crimes, that we make it too easy for authorities to prosecute people, and that we punish many crimes too severely. And while those seem like fine issues to explore, I see an even clearer case that jail is usually the wrong way to punish crime. Let me explain.

In principle jail can serve many functions, such as education, reform, isolation, and punishment. But prison is now more expensive than college; few see it as a cost effective way to learn. And few believe that US jails actually reform many convicts. Yes, convicts do tend to commit fewer crimes over time, but that’s mainly due to age, not reform efforts. Jail cuts residents off from their prior social connections, such as jobs and family, and connects them instead to other criminals. Which seems bad for reform. 

Jail does isolate convicts, making it harder for them to commit many crimes. But we can isolate most convicts nearly as well and far more cheaply with curfews, travel limits, and ankle bracelets. Whole isolated towns might be set up for convicts. And if isolation were the main consideration in deciding whom we send to jail, and for how long, then for each person we put in jail we’d keep them there until we saw a substantial decline in our estimate of the harm they might do if released.

But in fact the median time served in state prison is 1.3 years, way too short a time to usually expect to see much change. And even if there is a substantial decline soon after a peak crime age, we don’t vary sentence lengths in this way with age.

Furthermore, exile offers a much cheaper way to isolate. Let convicts leave the nation for a specified period if any other nation will take them. Not everyone would be taken, but each one who is represents a big savings. Worried about them sneaking back unseen? Just set severe punishments for that. Maybe even make them post a bond on it.

So if education, reform, and isolation are poor explanations of jail, that leaves punishment. Ancient societies used fines more often, which they often took from family members if the convict couldn’t pay, and they often enslaved convicts to make them pay. But as we aren’t willing to do these things, we can’t get much money out of most convicts, which is why we need other punishments. 

Note that I’m not saying that jail could not in principle achieve other ends, nor even that jails do not to some degree achieve other ends. I’m saying instead that the widespread popular support for using jails today mainly comes from a widespread perception that jails achieve punishment, which most see as a desirable end. 

The classic logic of criminal punishment is that most people are less likely to commit a crime if they anticipate a substantially higher chance that doing so will result in their experiencing a “punishment” that they will dislike. (Relative to the chance if they don’t commit the crime.) Yes, this effect may be weak, but most people aren’t convinced of other approaches, and they aren’t willing to give up on this approach. 

But a big problem with using jail to punish is that our jails are terribly expensive, relative to feasible alternatives. For example, most of our jails are relatively “nice” and comfortable, with nice food, beds, climate control, entertainment, etc. At least compared to other jails in history. But typically X years in a nice jail gives the same expected punishment (i.e., anticipated dislike) as Y years in a mean jail, for Y < X. So if a mean jail costs no more per year than a nice jail, this is a cost savings.

Our history and the world today clearly demonstrate that it is possible to create jails that are less nice than ours. Furthermore, corporal punishment (often called “torture”) is even cheaper than mean jails. This was quite common in ancient societies, and is still used in some places today. For any sentence of X years in jail, there is some amount of corporal punishment, e.g., N lashings, that gives the same expected punishment at a far lower cost. 

Some say that torture and mean jails are more “cruel” than nice jails, and thus immoral, and thus forbidden. But as you can see from the above definitions, when the amounts of these things are adjusted to produce the same amount of anticipated dislike for each, then some of them cannot be more “cruel” than others in the sense of the dislike convicts expect to experience. The only grounds then offered for saying that some are more “cruel” is that some might induce more inappropriate “delight from causing and witnessing suffering”. 

Yet I can find no evidence suggesting that observers achieve more inappropriate delight from criminal punishments in the form of mean jails or corporal punishment. Yes, we can see many people taking delight today in the suffering of convicts in the jails that we now have. And those people would likely switch their design to focus on other forms of suffering, if those happened instead. But I see little reason to think that those who today do not delight in seeing convicts suffer from existing prisons would start to so delight after a switch to other punishments. 

An interesting way to vividly show everyone that our jails today are in fact just as “cruel” ways to punish as these alternatives would be to give convicts a choice. A convict who is sentenced to X years in ordinary jail might be offered the choice to switch to Y years in mean jail, or to N lashings (or other corporal punishment). Or perhaps even to E years in exile. A simple standard mapping function between X and Y,N,E could be used, one that is adjusted continuously to get pre-determined fractions of convicts to choose each option.

(With enough data, these mappings might depend on age, gender, etc. Some small fraction of convicts might be forced into each option, to induce reliable data on option effects on convicts. Over time, the pre-determined fractions could be adjusted toward the cheaper options if their outcomes continue to seem acceptable relative to costs.) 
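The continuous rate adjustment just described can be sketched in a few lines. This is only an illustrative toy, not a proposal from the post: the option names, exchange rates, fractions, and the multiplicative update rule are all assumptions made up for the example.

```python
# Toy sketch of the adaptive mapping idea: each alternative punishment has an
# "exchange rate" per year of ordinary jail, and rates are nudged so that the
# fraction of convicts choosing each option drifts toward a pre-determined
# target. All names and numbers here are hypothetical.

def update_rates(rates, chosen_fracs, target_fracs, step=1.0):
    """Multiplicatively adjust each option's exchange rate.

    An option chosen more often than targeted is too attractive, so its
    rate (harshness per jail-year) rises; one chosen less often falls.
    """
    return {
        opt: rate * (1.0 + step * (chosen_fracs.get(opt, 0.0) - target_fracs[opt]))
        for opt, rate in rates.items()
    }

# Hypothetical rates: lashings per jail-year, mean-jail years per jail-year.
rates = {"lashings": 10.0, "mean_jail_years": 0.5}
chosen = {"lashings": 0.6, "mean_jail_years": 0.4}   # observed choice fractions
target = {"lashings": 0.5, "mean_jail_years": 0.5}   # pre-determined targets
new_rates = update_rates(rates, chosen, target)
# Lashings were over-chosen, so that rate rises; mean jail was under-chosen,
# so that rate falls.
```

A real system would of course need smaller steps, smoothing over noisy choice data, and the age/gender conditioning just mentioned; the point is only that a simple feedback rule suffices to hold the choice fractions near their targets.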

Under this system, it seems harder to complain that it is more cruel to give convicts the option to choose something other than ordinary jail, relative to just forcing convicts into ordinary jail. And the fact that many convicts do choose other options should yell a big loud lesson to all: convicts suffer a lot in ordinary jail. They lose big chunks of their lives, including their careers, friends, and families. If you see a person suffering under torture, and you realize that this person chose torture over ordinary jail, that tells you just how much they hate and dislike ordinary jail. It tells you that you should not, on empathy-for-them grounds, feel good about yourself for instead forcing them into the option from which they ran.

Yes, when each convict picks their favorite punishment option from a set, that will on average reduce their expected punishment. But not usually by a lot, and the X in sentences of X years of ordinary jail could be adjusted up a bit to compensate. In this situation, one reason to exclude an option is if we are much more uncertain about how much each person dislikes that option, relative to other options. It is better to know how much we are punishing a convict. 

But I see no reason to think that we are now much more certain about dislike for ordinary jail, relative to mean jail, corporal punishment, or exile. When the judges who sentence convicts do know something about the option preferences of a particular convict, they might on that basis exclude some of those options for that particular convict. For example, exile might be a weaker punishment for someone who recently lived abroad.

Yes, there’d be a tendency by those who embrace a criminal culture to take the “toughest” punishment option available, to signal toughness to associates. But this would on average hurt them, and isn’t that different from many other things they do to show toughness. Doesn’t seem a big problem to me.

So that’s my pitch. Let’s stop wasting so much on expensive jails, when we could instead produce the same punishments at a lower cost by giving convicts a choice between types of punishment. This would also show everyone just how cruel we have been by putting convicts in our current jails. A majority of those who answer my Twitter polls approve; what about you?

So why do people see the other options as more cruel? My guess is that the representativeness heuristic is at fault: people imagine a random moment from within the punishment, and neglect how many such moments there are in each punishment.


A Perfect Storm of Inflexibility

Most biological species specialize for particular ecological niches. But some species are generalists, “specializing” in doing acceptably well in a wider range of niches, and thus also in rapidly changing niches. Generalist species tend to be more successful at generating descendant species. Humans are such a generalist species, in part via our unusual intelligence.

Today, firms in rapidly changing environments focus more on generality and flexibility. For example, CEO Andy Grove focused on making Intel flexible:

In Only the Paranoid Survive, Grove reveals his strategy for measuring the nightmare moment every leader dreads–when massive change occurs and a company must, virtually overnight, adapt or fall by the wayside–in a new way.

A focus on flexibility is part of why tech firms tend more often to colonize other industries today, rather than vice versa.

War is an environment that especially rewards generality and flexibility. “No plan survives contact with the enemy,” they say. Militaries often lose by preparing too well for the last war, and not adapting flexibly enough to new context. We usually pay extra for military equipment that can function in a wider range of environments, and train soldiers for a wider range of scenarios than we train most workers.

Centralized control has many costs, but one of its benefits is that it promotes rapid thoughtful coordination. Which is why most wars are run from a center.

Familiar social institutions tend to be run by those who have run parts of them well recently. As a result, long periods of peace and stability tend to promote specialists, who have learned well how to win within a relatively narrow range of situations. And those people tend to change our rules and habits to suit themselves.

Thus rule and habit changes tend to improve performance for rulers and their allies within the usual situations, often at the expense of flexibility for a wider range of situations. As a result, long periods of peace and stability tend to produce fragility, making us more vulnerable to big sudden changes. This is in part why software rots, and why institutions rot as well. (Generality is also often just more expensive.)

Through most of the farming era, war was the main driver pushing generality and flexibility. Societies that became too specialized and fragile lost the next big war, and were replaced by more flexible competitors. Revolutions and pandemics also contributed.

As the West has been peaceful and stable for a long time now, alas we must expect that our institutions and culture have been becoming more fragile, and more vulnerable to big unexpected crises. Such as this current pandemic. And in fact the East, which has been adapting to a lot more changes over the last few decades, including similar pandemics, has been more flexible, and is doing better. Being more authoritarian and communitarian also helps, as it tends to help in war-like times.

In addition to these two considerations, longer peace/stability and more democracy, we have two more reasons to expect problems with inflexibility in this crisis. The first is that medical experts tend to think less generally. To put it bluntly, most are bad at abstraction. I first noticed this when I was an RWJF social science health policy scholar and, under an exchange program, attended the RWJF medical science health policy scholar conference.

Biomed scholars are amazing in managing enormous masses of details, and bringing up just the right examples for any one situation. But most find it hard to think about probabilities, cost-benefit tradeoffs, etc. In my standard talk on my book Age of Em, I show this graph of the main academic fields, highlighting the fields I’ve studied:

Academia is a ring of fields where all the abstract ones are on one side, far from the detail-oriented biomed fields on the other side. (I’m good at and love abstractions, but have limited tolerance or ability for mastering masses of details.) So to the extent pandemic policy is driven by biomed academics, don’t expect it to be very flexible or abstractly reasoned. And my personal observation is that, of the people I’ve seen who have had insightful things to say recently about this pandemic, most are relatively flexible and abstract polymaths and generalists, not lost-in-the-weeds biomed experts.

The other reason to expect a problem with flexibility in responding to this pandemic is: many of the most interesting solutions seem blocked by ethics-driven medical regulations. As communities have strong needs to share ethical norms, and most people aren’t very good at abstraction, ethical norms tend to be expressed relatively concretely. Which makes it hard to change them when circumstances change rapidly. Furthermore we actually tend to punish the exceptional people who reason more abstractly about ethics, as we don’t trust them to have the right feelings.

Now humans do seem to have a special wartime ethics, which is more abstract and flexible. But we are quite reluctant to invoke that without war, even if millions seem likely to die in a pandemic. If billions seemed likely to die, maybe we would. We instead seem inclined to invoke the familiar medical ethics norm of “pay any cost to save lives”, which has pushed us into apparently endless and terribly expensive lockdowns, which may well end up doing more damage than the virus. And which may not actually prevent most from getting infected, leading to a near worst possible outcome. In which we would pay a terrible cost for our med ethics inflexibility.

When a sudden crisis appears, I suspect that generalists tend to know that this is a potential time for them to shine, and many of them put much effort into seeing if they can win respect by using their generality to help. But I expect that the usual rulers and experts, who have specialized in the usual ways of doing things, are well aware of this possibility, and try all the harder to close ranks, shutting out generalists. And much of the public seems inclined to support them. In the last few weeks, I’ve heard far more people say “don’t speak on pandemic policy unless you have a biomed Ph.D.” than I’ve ever in my lifetime heard people say “don’t speak on econ policy without an econ Ph.D.” (And the study of pandemics is obviously a combination of medical and social science topics; social scientists have much relevant expertise.)

The most likely scenario is that we will muddle through without actually learning to be more flexible and reason more generally; the usual experts and rulers will maintain control, and insist on all the usual rules and habits, even if they don’t work well in this situation. There are enough other things and people to blame that our inflexibility won’t get the blame it should.

But there are some more extreme scenarios here where things get very bad, and then some people somewhere are seen to win by thinking and acting more generally and flexibly. In those scenarios, maybe we do learn some key lessons, and maybe some polymath generalists do gain some well-deserved glory. Scenarios where this perfect storm of inflexibility washes away some of our long-ossified systems. A dark cloud’s silver lining.


Plot Holes & Blame Holes

We love stories, and the stories we love the most tend to support our cherished norms and morals. But our most popular stories also tend to have many gaping plot holes. These are acts which characters could have done instead of what they did do, to better achieve their goals. Not all such holes undermine the morals of these stories, but many do.

Logically, learning of a plot hole that undermines a story’s key morals should make us like that story less. And for a hole that most everyone actually sees, that would in fact happen. This also tends to happen when we notice plot holes in obscure unpopular stories.

But this happens much less often for widely beloved stories, such as Star Wars, if only a small fraction of fans are aware of the holes. While the popularity of the story should make it easier to tell most fans about holes, fans in fact try not to hear, and punish those who tell them. (I’ve noticed this re my sf reviews; fans are displeased to hear beloved stories don’t make sense.)

So most fans remain ignorant of holes, and even fans who know mostly remain fans. They simply forget about the holes, or tell themselves that there probably exist easy hole fixes – variations on the story that lack the holes yet support the same norms and morals. Of course such fans don’t usually actually search for such fixes, they just presume they exist.

Note how this behavior contrasts with typical reactions to real world plans. Consider when someone points out a flaw in our tentative plan for how to drive from A to B, how to get food for dinner, how to remodel the bathroom, or how to apply for a job. If the flaw seems likely to make our plan fail, we seek alternate plans, and are typically grateful to those who point out the flaw. At least if they point out flaws privately, and we haven’t made a big public commitment to plans.

Yes, we might continue with our basic plan if we had good reasons to think that modest plan variations could fix the found flaws. But we wouldn’t simply presume that such variations exist, regardless of flaws. Yet this is mostly what we do for popular story plot holes. Why the different treatment?

A plausible explanation is that we like to love the same stories as others; loving stories is a coordination game. Which is why 34% of movie budgets were spent on marketing in ’07, compared to 1% for the average product. As long as we don’t expect a plot hole to put off most fans, we don’t let it put us off either. And a plausible partial reason to coordinate to love the same stories is that we use stories to declare our allegiance to shared norms and morals. By loving the same stories, we together reaffirm our shared support for such morals, as well as other shared cultural elements.

Now, another way we show our allegiance to shared norms and morals is when we blame each other. We accuse someone of being blameworthy when their behavior fits a shared blame template. Well, unless that person is so allied to us or prestigious that blaming them would come back to hurt us.

These blame templates tend to correlate with destructive behavior that makes for a worse (local) world overall. For example, we blame murder and murder tends to be destructive. But blame templates are not exactly and precisely targeted at making better outcomes. For example, murderers are blamed even when their act makes a better world overall, and we also fail to blame those who fail to murder in such situations.

These deviations make sense if blame templates must have limited complexity, due to being socially shared. To support shared norms and morals, blame templates must be simple enough so most everyone knows what they are, and can agree on if they match particular cases. If the reality of which behaviors are actually helpful versus destructive is more complex than that, well then good behavior in some detailed “hole” cases must be sacrificed, to allow functioning norms/morals.

These deviations between what blame templates actually target, and what they should target to make a better (local) world, can be seen as “blame holes”. Just as a plot may seem to make sense on a quick first pass, with thought and attention required to notice its holes, blame holes are typically not noticed by most who only work hard enough to try to see if a particular behavior fits a blame template. While many are capable of understanding an explanation of where such holes lie, they are not eager to hear about them, and they still usually apply hole-plagued blame templates even when they see their holes. Just like they don’t like to hear about plot holes in their favorite stories, and don’t let such holes keep them from loving those stories.

For example, a year ago I asked a Twitter poll on the chances that the world would have been better off overall had Nazis won WWII. 44% said that chance was over 10% (the highest category offered). My point was that history is too uncertain to be very sure of the long term aggregate consequences of such big events, even when we are relatively sure about which acts tend to promote good.

Many then said I was evil, apparently seeing me as fitting the blame template of “says something positive about Nazis, or enables/encourages others to do so.” I soon after asked a poll that found only 20% guessing it was more likely than not that the author of such a poll actually wishes Nazis had won WWII. But the other 80% might still feel justified in loudly blaming me, if they saw my behavior as fitting a widely accepted blame template. I could be blamed regardless of the factual truth of what I said or intended.

Recently many called Richard Dawkins evil for apparently fitting the template “says something positive about eugenics” when he said that eugenics on humans would “work in practice” because “it works for cows, horses, pigs, dogs & roses”. To many, he was blameworthy regardless of the factual nature or truth of his statement. Yes, we might do better to instead use the blame template “endorses eugenics”, but perhaps too few are capable in practice of distinguishing “endorses” from “says something positive about”. At least maybe most can’t reliably do that in their usual gossip mode of quickly reading and judging something someone said.

On reflection, I think a great deal of our inefficient behavior and policies can be explained via limited-complexity blame templates. For example, consider the template:

Blame X if X interacts with Y on dimension D, Y suffers on D, no one should suffer on D, and X “could have” interacted so as to reduce that suffering more.

So, blame X who hires Y for a low wage, risky, or unpleasant job. Blame X who rents a high price or peeling paint room to Y. Blame food cart X that sells unsavory or unsafe food to Y. Blame nation X that lets in immigrant Y who stays poor afterward. Blame emergency room X who failed to help arriving penniless sick Y. Blame drug dealer X who sells drugs to poor, sick, or addicted Y. Blame client X who buys sex, an organ, or a child from Y who would not sell it if they were much richer.
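The limited-complexity point can be made concrete: the template is just a conjunction of a few crude conditions, with no term at all for Y’s overall welfare. A minimal sketch (the function and argument names here are my own illustrative choices, not the author’s):

```python
# A minimal sketch of a limited-complexity blame template, as a predicate.
# All names are illustrative assumptions, not from the post.

def blame_template(x_interacts_with_y: bool,
                   y_suffers_on_d: bool,
                   suffering_on_d_is_unacceptable: bool,
                   x_could_have_reduced_suffering: bool) -> bool:
    """Blame X iff all four simple conditions hold; note there is no
    input for whether the interaction left Y better off overall."""
    return (x_interacts_with_y
            and y_suffers_on_d
            and suffering_on_d_is_unacceptable
            and x_could_have_reduced_suffering)

# Example: employer X hires Y at a low wage (Y "suffers" on the wage
# dimension), and X "could have" paid more, so the template assigns
# blame, even if the job leaves Y better off than no job at all.
print(blame_template(True, True, True, True))   # True: blame assigned
print(blame_template(True, True, True, False))  # False: no blame
```

The gap between what this crude predicate checks and Y’s actual welfare is exactly the kind of “blame hole” described above.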

So a simple blame template can help explain laws on min wages, max rents, job & room quality regs, food quality rules, hospital care rules, and laws prohibiting drugs, organ sales, and prostitution. Yes, by learning simple economics many are capable of seeing that these rules can actually make targets Y worse off, via limiting their options. But if they don’t expect others to see this, they still tend to apply the usual blame templates. Because blame templates are socially shared, we each tend to be punished for deviating from them, either by violating them or by failing to disapprove of violators.

In another post soon I hope to say more about the role of, and limits on, simplified blame templates. For this post, I’m content to just note their central causal roles.

Added 8am: Another key blame template appears in hierarchical organizations. When something bad seems to happen to a division, the current leader takes all the blame, even if they only recently replaced the prior leader. Rising stars gain by pushing short-term gains at the expense of long-term losses, and by being promoted fast enough to avoid being blamed for those losses.

Re my deliberate exposure proposal, many endorse a norm under which those who propose policies that mix good and bad effects should immediately suffer the worst of those bad effects personally, even if their proposal is never implemented. Poll majorities, however, don’t support such norms.


Defrock Deregulation Economists?

Recent economics Nobel prize winner Paul Romer is furious that economists have sometimes argued for deregulation; he wants them “defrocked” and cast from the profession:

New generation of economists argued that tweaks … would enable the market to regulate itself, obviating the need for stringent government oversight. … To regain the public’s trust, economists should … emphasize the limits of their knowledge … even if it requires them to publicly expel from their ranks any member of the community who habitually overreaches. …

Consider the rapid spread of cost-benefit analysis … Lacking clear guidance from voters, legislators, regulators, and judges turned to economists, who resolved the uncertainty by [estimating] … the amount that society should spend to save a life. … [This] seems to have worked out surprisingly well … The trouble arose when the stakes were higher … it is all too easy for a firm … to arrange for a pliant pretend economist to … [defend them] with a veneer of objectivity and scientific expertise. …

Imagine making the following proposal in the 1950s: Give for-profit firms the freedom to develop highly addictive painkillers and to promote them via … marketing campaigns targeted at doctors. Had one made this pitch to [non-economists] back then, they would have rejected it outright. If pressed to justify their decision, they [would have said] … it is morally wrong to let a company make a profit by killing people … By the 1990s, … language and elaborate concepts of economists left no opening for more practically minded people to express their values plainly. …

Until the 1980s, the overarching [regulatory] trend was toward restrictions that reined in these abuses. … United States [has since been] going backward, and in many cases, economists—even those acting in good faith—have provided the intellectual cover for this retreat. …

In their attempt to answer normative questions that the science of economics could not address, economists opened the door to economic ideologues who lacked any commitment to scientific integrity. Among these pretend economists, the ones who prized supposed freedom (especially freedom from regulation) over all other concerns proved most useful …  When the stakes were high, firms sought out these ideologues to act as their representatives and further their agenda. And just like their more reputable peers, these pretend economists used the unfamiliar language of economics to obscure the moral judgments that undergirded their advice. …

Throughout his entire career, Greenspan worked to give financial institutions more leeway … If economists continue to let people like him define their discipline, the public will send them back to the basement, and for good reason. …

The alternative is to make honesty and humility prerequisites for membership in the community of economists. The easy part is to challenge the pretenders. The hard part is to say no when government officials look to economists for an answer to a normative question. Scientific authority never conveys moral authority. No economist has a privileged insight into questions of right and wrong, and none deserves a special say in fundamental decisions about how society should operate. Economists who argue otherwise and exert undue influence in public debates about right and wrong should be exposed for what they are: frauds. (more)

Oddly, Romer is famous for advocating “charter city” experiments, which can be seen as a big way to escape from the usual regulations.

So how does Romer suggest we identify “pretend” economists who are to be “exposed as frauds” and “publicly expelled from economists’ ranks”? He seems to say they are problematic on big but not small issues because firms bribe them, but he admits some are well-meaning, and doesn’t accuse Greenspan of taking bribes. So I doubt he’d settle for expelling only those who are clearly bribed. 

That seems to leave only the fact that they argue for less regulation when common moral intuitions call for more. (Especially when they mention “freedom”.) Perhaps he wants economists to be expelled when they argue for deregulation, or perhaps when they offer economic analysis contrary to moral intuitions. Both sound terrible to me as intellectual standards.

Look, people quite often express “moral” opinions that are combinations of simple moral intuitions together with intuitions about how social systems work. If they are mistaken about that second part, and if we can gain separate estimates on their moral intuitions, then economic analysis has the potential to produce superior combinations.

This is exactly what economists try to do when applying value of life estimates, and this can also be done regarding deregulation. The key point is that when people act on their moral intuitions, then we can use their actions to estimate their morals, and thus include their moral weights in our analysis.

In particular, I don’t find it obviously wrong to let for-profit firms market drugs to doctors, nor do I think it remotely obvious that this is the main cause of a consistent four-decade rise in drug deaths.

Yes of course, it is a problem if professionals can be bribed to give particular recommendations. But in most of these disputes parties on many sides are willing to offer such distorting rewards. My long-standing recommendation is to use conditional betting markets to induce more honest advice from such professionals, but so far few support that.


End War Or Mosquitoes?

Malaria may have killed half of all the people that ever lived. (more)

Over one million people die from malaria each year, mostly children under five years of age, with 90% of malaria cases occurring in Sub-Saharan Africa. (more)

378,000 people worldwide died a violent death in war each year between 1985 and 1994. (more)

Over the last day I’ve done two Twitter polls, one of which was my most popular poll ever. Each poll was on whether, if we had the option, we should try to end a big old nemesis of humankind. One was on mosquitoes, the other on war:

In both cases the main con argument is a worry about unintended side effects. Our biological and social systems are both very complex, with each part having substantial and difficult to understand interactions with many other parts. This makes it hard to be sure that an apparently bad thing isn’t actually causing good things, or preventing other bad things.

Poll respondents were about evenly divided on ending mosquitoes, but over 5 to 1 in favor of ending war. Yet mosquitoes kill many more people than do wars, mosquitoes are only a small part of our biosphere with only modest identifiable benefits, and war is a much larger part of key social systems with much easier to identify functions and benefits. For example, war drives innovation, deposes tyrants, and cleans out inefficient institutional cruft that accumulates during peacetime. All these considerations favor ending mosquitoes, relative to ending war.

Why then is there so much more support for ending war, relative to mosquitoes? The proximate cause seems obvious: in our world, good people oppose both war and also ending species. Most people probably aren’t thinking this through, but are instead just reacting to this surface ethical gloss. Okay, but why is murderous nature so much more popular than murderous features of human systems? Perhaps in part because we are much more eager to put moral blame on humans, relative to nature. Arguing to keep war makes you seem like an ally of deeply evil humans, while arguing to keep mosquitoes only makes you an ally of an indifferent nature, which makes you far less evil by association.



On Bryan Caplan’s recommendation, I just watched the movie Downfall. To me, it depicts an extremely repulsive and reprehensible group of people, certainly compared to any real people I’ve ever met. So much so that I wonder about its realism, though the sources I’ve found all seem to praise its realism. Thus I was quite surprised to hear that critics complained the movie didn’t portray its subjects as evil enough!:

Downfall was the subject of dispute … with many concerned of Hitler’s role in the film as a human being with emotions in spite of his actions and ideologies. … The German tabloid Bild asked, “Are we allowed to show the monster as a human being?” in their newspaper. … Cristina Nord from Die Tageszeitung criticized the portrayal, and said that though it was important to make films about perpetrators, “seeing Hitler cry” had not informed her on the last days of the Third Reich. Some … felt the time was right to “paint a realistic portrait” of Hitler. Eichinger replied to the response from the film by stating that the “terrifying thing” about Hitler was that he was human and “not an elephant or a monster from Mars”. Ganz said that he was proud of the film; though he said people had accused him of “humanizing” Hitler. (more)

For example, the New Yorker:

But I have doubts about the way [the makers’] virtuosity has been put to use. By emphasizing the painfulness of Hitler’s defeat Ganz has certainly carried out the stated ambition … he has made the dictator into a plausible human being. Considered as biography, the achievement (if that’s the right word) of “Downfall” is to insist that the monster was not invariably monstrous—that he was kind to his cook and his young female secretaries, loved his German shepherd, Blondi, and was surrounded by loyal subordinates. We get the point: Hitler was not a supernatural being; he was common clay raised to power by the desire of his followers. But is this observation a sufficient response to what Hitler actually did? (more)

The conclusion I have to draw here is that no remotely realistic depiction of real bad people would satisfy these critics. Most people insist on having cartoonish mental images of their exemplars of evil, images that would be contradicted by any remotely realistic depiction of the details of their actual lives. I’d guess this is also a problem at the opposite end of the spectrum; any remotely realistic depiction of the details of the life of someone that people consider saintly, like Jesus Christ or Martin Luther King, would be seen by many as a disrespectful takedown.

This is probably the result of a signaling game wherein people strive to show how moral they are by thinking even more highly of standard exemplars of good and even more lowly of standard exemplars of bad, compared to ordinary people. This helps me to understand self-righteous internet mobs a bit better; once a target has been labeled evil, most mob members probably don’t want to look too close at that target’s details, for fear that such details would make him or her seem more realistic, and thus less evil. Once we get on our self-righteous high horse, we prefer to look up to our ideals in the sky, and not down at the complex details on the ground.

Added 11p: This attitude of course isn’t optimal for detecting and responding to real evil in the world. But we care more about showing off just how outraged we are at evil than we care about effective response to it.


Checkmate On Blackmail

Often in chess, at least among novices, one player doesn’t know that they’ve been checkmated. When the other player declares “checkmate”, this first player is surprised; that claim contradicts their intuitive impression of the board. So they have to check each of their possible moves, one by one, to see that none allow an escape.

The same thing sometimes happens in analysis of social policy. Many people intuitively want to support policy X, and they usually want to believe that this is due to the good practical consequences of X. But if the policy is simple enough, one may be able to iterate through all the possible consequential arguments for X and find that they all fail. Or perhaps more realistically, iterate through hundreds of the most promising actual consequential arguments that have been publicly offered so far, and both find them all wanting, and find that almost all of them are repetitions, suggesting that few new arguments remain to be found.

That is, it is sometimes possible with substantial effort to say that policy X has been checkmated, at least in terms of known consequentialist supporting arguments. Yes, many social policy chess boards are big, and so it can take a lot of time and expertise to check all the moves. But sometimes a person has done that checking on policy X, and then frequently encounters others who have not so checked. Many of these others will defend X, basically randomly sampling from the many failed arguments that have been offered so far.

In chess, when someone says “checkmate”, you tend to believe them, even if you have enough doubt that you still check. But in public debates on social policy, few people accept a claim of “checkmate”, as few such debates ever go into enough depth to go through all the possibilities. Typically many people are willing to argue for X, even if they haven’t studied in great detail the many arguments for and against X, and even when they know they are arguing with someone who has studied such detail. Because X just feels right. When such a supporter makes a particular argument, and is then shown how that doesn’t work, they usually just switch to another argument, and then repeat that process until the debate clock runs out. Which feels pretty frustrating to the person who has taken the time to see that X is in fact checkmated.

We need a better social process for together identifying such checkmated policies X. Perhaps a way that a person can claim such a checkmate status, be tested sufficiently thoroughly on that claim, and then win a reward if they are right, and lose a stake if they are wrong. I’d be willing to help to create such a process. Of course we could still keep policies X on our books; we’d just have to admit we don’t have good consequential arguments for them.

As an example, let me offer blackmail. I’ve posted seven times on this blog on the topic, and in one of my posts I review twenty related papers that I’d read. I’ve argued many times with people on the topic, and I consistently hear them repeat the same arguments, which all fail. So I’ll defend the claim that not only don’t we have good strong consequential arguments against blackmail, but that this fact can be clearly demonstrated to smart reasonable people willing to walk through all the previously offered arguments.

To review and clarify, blackmail is a threat that you might gossip about someone on a particular topic, if they don’t do something else you want. The usual context is that you are allowed to gossip or not on this topic, and if you just mention that you know something, they are allowed to offer to compensate you to keep quiet, and you are allowed to accept that offer. You just can’t be the person who makes the first offer. In almost all other cases where you are allowed to do or not do something, at your discretion, you are allowed to make and accept offers that compensate you for one of these choices. And if a deal is legal, it rarely matters who proposes the deal. Blackmail is a puzzling exception to these general rules.

Most ancient societies simply banned salacious gossip against elites, but modern societies have deviated and allowed gossip. People today already have substantial incentives to learn embarrassing secrets about associates, in order to gain social rewards from gossiping about those to others. Most people suffer substantial harm from such gossip; it makes them wary about who they let get close to them, and induces them to conform more to social pressures regarding acceptable behaviors.

For most people, the main effect of allowing blackmail is to mildly increase the incentives to learn embarrassing secrets, and to not behave in ways that produce such secrets. This small effect makes it pretty hard to argue that for gossip incentives the social gains outweigh the losses, but that for the slightly stronger blackmail incentives, the losses outweigh the gains. However, for elites these incentive increases are far stronger, making elite dislike plausibly the main consequentialist force pushing to keep blackmail illegal.

In a few recent Twitter surveys, I found that respondents declared themselves against blackmail at a 3-1 rate, evenly split between consequential and other reasons for this position. However, they said blackmail should be legal in many particular cases I asked about, depending on what exactly you sought in exchange for keeping someone’s secret. For example, they supported 12-1 getting your own secret kept, 3-2 getting someone to treat you fairly, and 1-1 getting help with child care in a medical crisis.

These survey results are pretty hard to square with consequential justifications, as the consequential harm from blackmail should mainly depend on the secrets being kept, not on the kind of compensation gained by the blackmailer. Which suggests that non-elite opposition to blackmail is mainly because blackmailers look like they have bad motives, not because of social consequences to others. This seems supported by the observation that women who trash each other’s reputations via gossip tend to consciously believe that they are acting helpfully, out of concern for their target.

As examples of weak arguments, Tyler Cowen just offered four. First, he says even if blackmail has good consequences, given current world opinion it would look bad to legalize it. (We should typically not do the right thing if that looks bad?) Second, he says negotiating big important deals can be stressful. (Should most big deals be banned?) Third, it is bad to have social mechanisms (like gossip?) that help enforce common social norms on sex, gender and drugs, as those are mistaken. Fourth, making blackmail illegal somehow makes it easier for your immediate family to blackmail you, and that’s somehow better (both somehows are unexplained).

I’d say the fact that Tyler is pushed to such weak tortured arguments supports my checkmate claim: we don’t have good strong consequential arguments for making gossiper-initiated blackmail offers illegal, relative to making gossip illegal or allowing all offers.

Added 18Feb: Some say a law against negative gossip is unworkable. But note, not only did the Romans manage it, we now have slander/libel laws that do the same thing except we add an extra complexity that the gossip must be false, which makes those laws harder to enforce. We can and do make laws against posting nude pictures of a person who disapproves, or stealing info such as via hidden bugs or hacking into someone’s computer.


Do I Offend?

The last eight months have seen four episodes where many people on Twitter called me a bad offensive person, often via rude profanity, sometimes calling for me to be fired or arrested. These four episodes were: sex inequality and redistribution, chances of a delayed harassment complaint, morality-induced overconfidence on historical counterfactuals, and implicit harassment in A Star Is Born. While these topics have occupied only a small fraction of my thought over these months, and a much smaller fraction over my career, they may have disproportionate effects on my reputation. So I’ve tried to pay close attention to the reasons people give. 

I think I see a consistent story. While in these cases I have not made moral, value, or political claims, when people read small parts of what I’ve claimed or asked, they say they can imagine someone writing those words to promote political views they dislike. And not just mild views just a bit on the other side of the political spectrum. No, they attribute to me the most extreme bad views imaginable, such as that I advocate rape, murder, slavery, and genocide. People say they are directly and emotionally traumatized by the offensive “creepy” feeling they get when they encounter someone with any prestige and audience seeming to publicly promote views with which they strongly disagree.

Some plausibly contributing factors here include my sometimes discussing sensitive topics, our increasing political polarization, the ease of making mobs and taking words out of context on Twitter, the increasing ease of making new accusations similar to previous ones, and my terse and analytic writing style combined with my not adding disclaimers re my allegiance to “correct” views. There’s also my following the standard poll practice of not telling those who answer polls the motives for those polls. And I’m a non-poor older white male associated with economics in general and GMU econ in particular; many see all these as indicators of bad political views.

Digging a little deeper, trauma is plausibly increased by a poll format, which stokes fears that bad people will find out that they are not alone, and be encouraged to learn that many others share their views. I suspect this helps explain complaints that my poll population is not representative of my nation or planet.  

I also suspect bad faith. Long ago when I had two young kids, they would sometimes pick fights, for example on long car trips. One might start singing, to which the other would complain. We might agree that singing is too much for such a small space. Then the first might start to quietly hum, which we might decide is okay. Then the first might hum more loudly and triumphantly, while the second might writhe, cover their ears, and make a dramatic display of suffering.

Similarly, I suspect bad faith when some a) claim to experience “harassment” level suffering due to encountering political views with which they disagree, and yet are fine with high levels of sex, violence, and profanity in TV & movies, b) infer indirectly from my neutral analytical text that I promote the most extreme views imaginable, and c) do not notice that such claims are both a priori implausible and inconsistent with my large corpus of public writing; they either haven’t read much of it or purposely mischaracterize it. 

The idea of a large shared intellectual sphere wherein we can together analyze difficult topics holds a strong appeal to me. The main criteria for consideration in such a sphere should be the coherence and persuasiveness of specific relevant arguments. When evaluating each argument, there is usually little need to infer distantly related positions of those who offer arguments. Usually an argument either works or it doesn’t, regardless of who says it or why.

I try to live up to such ideals in how I write and talk. I hope that many who read and follow me share these ideals, and I appreciate their support. I’m thus not favorably inclined toward suggestions that I stop discussing sensitive topics, that I adopt a much more elaborate disclaimer style, or that I stop asking my followers questions, whether to prevent others from being traumatized by hearing their answers, or to keep followers from finding out that others share their opinions.

Added 29Dec: I did four follow-up polls to probe tendencies to take offense, focusing on the Nazi case. Respondents said the fraction of tweeters who actually wish Nazis had won WWII is tiny; 63% said it is <0.1%, though 4% gave >10%. And 79% said that this Nazi fraction is <3% among those “who mention ‘Nazis’ neutrally in a tweet, without explicitly praising or criticizing them, and who explicitly claim otherwise”, though 10% said >15%. Also, 58% said that for a tweet to be considered “offensive” or “harassment”, it would need to suggest a chance >50% that its author actually wishes Nazis had won WWII. However, 10% gave a threshold of <3% and 19% gave one <15%.

Finally, 43% gave a <3% “chance the author of a Twitter poll which asks about chance world would have been better off had Nazis won WWII, actually wishes that Nazis had won WWII”. However 20% gave a chance >50%, and 37% gave a chance >15%.

An obvious conclusion here is that, even among those who respond to my Twitter polls, a substantial fraction have set hair-triggers for offense. For example, it seems >20% say that merely asking whether the world would have been better off had Nazis won WWII implies a high enough chance of a Nazi author to count as offensive. Explicit denials may help, but if the offended are much more vocal than others, a vocal choir of objection seems largely inevitable.

This makes me wonder again if the “silent majority” might benefit from juries or polls which show them that the vocal offended are a minority. Though that minority will likely also express offense re such juries or polls.

Added 28Jan: A recent burst of outrage over the A Star Is Born episode confirms this account to some extent.
