Tag Archives: Hypocrisy

Why Crime Discretion?

Our criminal law system gives discretion to many actors to, in effect, pardon criminals or vary their punishment. Police officers and their bosses can choose not to arrest, or to charge a lower crime; prosecutors and their bosses can choose not to prosecute, to prosecute for a lower crime, or to settle on a lower crime; judges and juries can choose not to convict, and to make sentences mild or severe; governors and presidents can pardon criminals; and prisons can parole them.

If you were the victim of a crime, you might be disturbed to see that so many people can in effect pardon the criminal who hurt you. Also, as these parties are paid far less to deal with that criminal than the amount that criminal stands to suffer, you could reasonably worry about bribes and other forms of bias and corruption. Even if you think there should be some discretion in the system, you might think it should be limited more, such as to only the judge. Why do we have so much discretion in our system?

To find out, I did this Twitter poll:

I also did two other polls, the same except that “speeding” was replaced by “trespassing” and “in general”. In all three polls, by a roughly 3-1 ratio, respondents thought that discretion would favor them personally. And in all cases, there is a substantial correlation between thinking that discretion benefits you and thinking that it benefits society. However, for speeding, the case where they should have the most personal knowledge of the consequences of discretion, they were split evenly, about 1-1, on whether discretion helps cut net social harm. And in the other two cases, where they personally know much less, they guessed about 3-2 that discretion cuts net social harm.

To me, the obvious interpretation here is this: the main reason most people favor crime law discretion is that they expect to personally benefit from it. They are willing to presume that it benefits society in areas they don’t know much about, but they admit that it doesn’t in the areas they know best. This seems analogous to people estimating much higher accuracy for media reports in areas they don’t know about, compared to areas in which they’ve seen how media coverage compares to personal knowledge.


Why Weakly Enforced Rules?

While some argue that we should change our laws to open our borders, it is more common for pro-immigrant folks to argue for weaker enforcement of anti-immigration laws. They want fewer government agencies to be authorized to help enforcement, fewer resources to go into finding violators, and weaker punishment of violators. Similar things happen regarding prostitution and adultery; many complain about enforcement of such laws, and yet don’t support eliminating them.

The recently celebrated “criminal justice reform” didn’t make fewer things illegal, or substitute more efficient forms of punishment (e.g., torture, exile) for less efficient prison. It mainly just reduced jail sentence durations. When I probed supporters, they confirmed that they didn’t want fewer things illegal or more efficient enforcement.

The policing reforms that many want are not to substitute more cost-effective enforcers such as bounty hunters, or stronger punishments against police misconduct, but to instead just have police do less: pull over fewer drivers, investigate fewer suspects, etc.

When I claim that stronger norm enforcement is a big advantage of legalized blackmail, many people say that’s exactly the problem; they want less enforcement of common norms. For example, Scott Sumner:

Great literature and great films often turn people violating society’s norms into sympathetic characters, especially when they are ground down by “the machine”. I suspect that the almost universal public opposition to legalizing blackmail reflects society’s view (subconscious to be sure) that enforcing these norms (especially for non-criminal activities) requires a “light touch”, and that turning shaming into a highly profitable industry will do more harm than good. It will turn society into a mean, backstabbing culture. The people hurt most will be sensitive good people who made a mistake, not callous gang members who don’t care if others think they are evil.

On the surface, all of these positions seem puzzling to me; if a norm or law isn’t worth enforcing well, why not eliminate it? Some possible explanations:

  1. People like the symbolism of being against things they don’t really want to stop. It is more about wanting to look like the sort of person who doesn’t fully approve of such things.
  2. Having more rules that are only weakly enforced gives the usual systems more ways to arbitrarily punish some folks via selective enforcement. You might like this if you share such systems’ tastes regarding whom to arbitrarily punish. Or if you want to signal submission to authorities who want to use such power.
  3. If these things were actually legal and licit, people might sometimes publicly suggest that you are engaging in them. But if they are illicit or illegal, there’s a norm against accusing someone of doing them without substantial evidence. So if you want to discourage others from lightly accusing you of such things, you may want those activities to be officially disapproved, even if you don’t actually want to discourage them.
  4. We mainly want these norms and laws to help us deal with some disliked “criminal class” out there, a class that we don’t actually interact with much. So when we see real cases in our familiar world, they seem like they are not in that class, and thus we don’t want our norms or laws to apply to them. We only want less enforcement for folks in our world.
  5. What else?

Added 26Feb: I clearly didn’t communicate well in this post, as many commenters and this responding post saw me as arguing that all punishment, conditional on being caught and convicted, should either be zero or max extreme (eg death). Yes of course it is often reasonable to use intermediate punishments.

But enforcement also includes a chance of being caught, not just a degree of punishment, and there are issues of the cost-effectiveness of the processes used to catch and punish people. Many want less punishment if caught, and less chance of being caught, for almost all offenses, and don’t want more cost-effective catching or punishment, for fear that this might lead to more catching or punishing. To me, this seems hard to explain via just thinking that we’ve overestimated the optimal punishment level for some particular offenses.

Added 3Mar: A striking example: in WWI, recruits were supposed to be age 19 or older, but it was easy to lie and get in at younger ages, and almost everyone knew of someone who had done this. We tsk-tsk about child soldiers elsewhere, but don’t seem much ashamed of our own.


Dominance Hides in Prestige Clothing

21 months ago, I said: 

We like to give others the impression that we personally mainly want prestige in ourselves and our associates, and that we only grant others status via the prestige they have earned. But let me suggest that, compared to this ideal, we actually want more dominance in ourselves and our associates than we like to admit, and we submit more often to dominance. In the following, I’ll offer three lines of evidence for this claim. First consider that we like to copy the consumer purchases of people that we envy, but not of people we admire for being “warm” and socially responsible. … Second, consider the fact that when our bosses or presidents retire and leave office, their legitimate prestige should not have diminished much. … Yet others usually show far less interest in associating with such retirees. … For my third line of evidence, … for long term mates we more care about prestige features that are good for the group, but for short term mates, we care more about dominance features that are more directly useful to us personally. (more)

Today I’ll describe a fourth line of evidence: when ranking celebrities, we don’t correct much for the handicaps that people face. Let me explain.

Dominance is about power, while prestige is about ability. Now on average having more ability does tend to result in having more power. But there are many other influences on power besides individual ability. For example, there’s a person’s family’s wealth and influence, and the power they gained via associating with powerful institutions and friends.  

As I know the world of intellectuals better than other worlds, let me give examples from there. Intellectuals who go to more prestigious schools and get better jobs at more prestigious institutions have clear advantages in this world. And those whose parents were intellectuals, or who grew up in more intellectual cultures, had advantages. Having more financial support and access to better students to work with are also big helps. But when we consider which intellectuals to most praise and admire (e.g., who deserves a Nobel prize), we mainly look at the impact they’ve had, without correcting much for these many advantages and obstacles.

Oh sure, when it is we ourselves who are judged, we are happy to argue that our handicaps should be corrected for. After all, most of us don’t have as many advantages as the most successful people do. And we are sometimes willing to endorse correcting for the handicaps of politically allied groups. So if we feel allied with the religious and politically conservative, we may note that they tend to face more obstacles in intellectual worlds today. And if we feel allied with women or ethnic minorities, we may also endorse taking into account the extra obstacles that they often face.

But these corrections are often half-hearted, and they seem the exceptions that prove a rule: when we pick our intellectual heroes, we don’t correct much for all these handicaps and advantages. We mainly just want powerful dominant heroes. 

In acting, music, and management, being good-looking is a big advantage. But while we tend to say that we disapprove of this advantage, we don’t correct for it much when evaluating such people. Oscar awards mostly go to the pretty actors, for example.


News Accuracy Bonds

Fake news is a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media. This false information is mainly distributed by social media, but is periodically circulated through mainstream media. Fake news is written and published with the intent to mislead in order to damage an agency, entity, or person, and/or gain financially or politically, often using sensationalist, dishonest, or outright fabricated headlines to increase readership, online sharing, and Internet click revenue. (more)

One problem with news is that sometimes readers who want truth instead read (or watch) and believe news that is provably false. That is, a news article may contain claims that others are capable of proving wrong to a sufficiently expert and attentive neutral judge, and some readers may be fooled against their wishes into believing such news.

Yes, news can have other problems. For example, there can be readers who don’t care much about truth, and who promote false news and its apparent implications. Or readers who do care about truth may be persuaded by writing whose mistakes are too abstract or subtle to prove wrong now to a judge. I’ve suggested prediction markets as a partial solution to this; such markets could promote accurate consensus estimates on many topics which are subtle today, but which will eventually become sufficiently clear.

In this post, however, I want to describe what seems to me the simple obvious solution to the more basic problem of truth-seekers believing provably-false news: bonds. Those who publish or credential an article could offer bonds payable to anyone who shows their article to be false. The larger the bond, the higher their declared confidence in their article. With standard icons for standard categories of such bonds, readers could easily note the confidence associated with each news article, and choose their reading and skepticism accordingly.

That’s the basic idea; the rest of this post will try to work out the details.

While articles backed by larger bonds should be more accurate on average, the correlation would not be exact. Statistical models built on the dataset of bonded articles, some of which eventually pay bonds, could give useful rough estimates of accuracy. To get more precise estimates of the chance that an article will be shown to be in error, one could create prediction markets on the chance that an individual article will pay a bond, with initial prices set at statistical model estimates.

Of course the same article should have a higher chance of paying a bond when its bond amount is larger. So even better estimates of article accuracy would come from prediction markets on the chance of paying a bond, conditional on a large bond amount being randomly set for that article (for example) a week after it is published. Such conditional estimates could be informative even if only one article in a thousand is chosen for such a very large bond. However, since there are now legal barriers to introducing prediction markets, and none to introducing simple bonds, I return to focusing on simple bonds.

Independent judging organizations would be needed to evaluate claims of error. A limited set of such judging organizations might be certified to qualify an article for any given news bond icon. Someone who claimed that a bonded article was in error would have to submit their evidence, and be paid the bond only after a valid judging organization endorsed their claim.

Bond amounts should be held in escrow or guaranteed in some other way. News firms could limit their risk by buying insurance, or by limiting how many bonds they’d pay on all their articles in a given time period. Say no more than two bonds paid on each day’s news. Another option is to have the bond amount offered be a function of the (posted) number of readers of an article.

As a news article isn’t all true or false, one could distinguish degrees of error. A simple approach could go sentence by sentence. For example, a bond might pay according to some function of the number of sentences (or maybe sentence clauses) in an article shown to be false. Alternatively, sentence level errors might be combined to produce categories of overall article error, with bonds paying different amounts to those who prove each different category. One might excuse editorial sentences that do not intend to make verifiable newsy claims, and distinguish background claims from claims central to the original news of the article. One could also distinguish degrees of error, and pay proportional to that degree. For example, a quote that is completely made up might be rated as completely false, while a quote that is modified in a way that leaves the meaning mostly the same might count as a small fractional error.
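As one illustration of how such degree-weighted, sentence-level errors might aggregate into a payout (the specific weighting rule and all function and variable names here are my own assumptions, not part of the proposal):

```python
def bond_payout(bond_amount, error_degrees, total_sentences):
    """Illustrative payout rule: each proven sentence error counts
    by its assessed degree, from 0.0 (accurate) to 1.0 (completely
    false), and the bond pays in proportion to the error-weighted
    fraction of the article's verifiable, newsy sentences."""
    if total_sentences == 0:
        return 0.0
    weight = sum(error_degrees)  # each degree in [0.0, 1.0]
    return bond_amount * min(1.0, weight / total_sentences)

# A 20-sentence article backed by a $1000 bond, with one fabricated
# quote (degree 1.0) and one slightly altered quote (degree 0.5):
payout = bond_payout(1000, [1.0, 0.5], 20)
```

Under a rule like this, the fabricated quote costs the publisher twice what the altered quote does, which matches the intent of concentrating payouts on more serious errors.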

To the extent that it is possible to verify partisan slants across large sets of articles, for example in how people or organizations are labeled, publishers might also offer bonds payable to those who can show that a publisher has taken a consistent partisan slant.

A subtle problem is: who pays the cost to judge a claim? On the one hand, judges can’t just offer to evaluate all claims presented to them for free. But on the other hand, we don’t want to let big judging fees stop people from claiming errors when errors exist. To make a reasonable tradeoff, I suggest a system wherein claim submissions include a fee to pay for judging, a fee that is refunded double if that claim is verified.

That is, each bond specifies a maximum amount it will pay to judge that bond, and which judging organizations it will accept.  Each judging organization specifies a max cost to judge claims of various types. A bond is void if no acceptable judge’s max is below that bond’s max. Each submission asking to be paid a bond then submits this max judging fee. If the judges don’t spend all of their max judging fee evaluating this case, the remainder is refunded to the submission. It is the amount of the fee that the judges actually spend that will be refunded double if the claim is supported. A public dataset of past bonds and their actual judging fees could help everyone to estimate future fees.
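The fee flow just described can be sketched as follows (the function and argument names are my own; the refund of unspent fees and the double refund of spent fees on a verified claim are from the proposal):

```python
def settle_claim(max_judging_fee, fee_spent, claim_verified, bond_amount):
    """Settle one error claim. The claimant deposits max_judging_fee
    up front; judges spend fee_spent (at most the max). The unspent
    remainder is always refunded. If judges verify the claim, the
    spent fee is refunded double and the bond is paid out as well."""
    assert 0 <= fee_spent <= max_judging_fee
    unspent = max_judging_fee - fee_spent
    if claim_verified:
        return unspent + 2 * fee_spent + bond_amount
    return unspent

# Claimant posts a $100 max judging fee; judges spend $60 of it.
verified_total = settle_claim(100, 60, True, 1000)   # $40 + $120 + $1000
rejected_total = settle_claim(100, 60, False, 1000)  # $40 unspent back
```

So a successful claimant nets the spent fee plus the bond, while a failed claimant loses only what the judges actually spent, which keeps frivolous claims costly while leaving genuine ones worth filing.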

Those are the main subtleties that I’ve considered. While there are ways to set up such a system better or worse, the basic idea seems robust: news publishers who post bonds payable if their news is shown to be wrong thereby credential their news as more accurate. This can allow readers to more easily avoid believing provably-false news.

A system like the one I’ve just proposed has long been feasible; why hasn’t it been adopted already? One possible theory is that publishers don’t offer bonds because doing so would remind readers of typical high error rates:

The largest accuracy study of U.S. papers was published in 2007 and found one of the highest error rates on record — just over 59% of articles contained some type of error, according to sources. Charnley’s first study [70 years ago] found a rate of roughly 50%. (more)

If bonds paid mostly for small errors, then bond amounts per error would have to be very small, and calling reader attention to a bond system would mostly remind them of high error rates, and discourage them from consuming news.

However, it seems to me that it should be possible to aggregate individual article errors into measures of overall article error, and to focus bond payouts on the most mistaken “fake news” type articles. That is, news error bonds should mostly pay out on articles that are wrong overall, or at least quite misleading regarding their core claims. Yes, a bit more judgment might be required to set up a system that can do this. But it seems to me that doing so is well within our capabilities.

A second possible theory to explain the lack of such a system today is the usual idea that innovation is hard and takes time. Maybe no one ever tried this with sufficient effort, persistence, or coordination across news firms. So maybe it will finally take some folks who try this hard, long, and wide enough to make it work. Maybe, and I’m willing to work with innovation attempts based on this second theory.

But we should also keep a third theory in mind: that most news consumers just don’t care much about accuracy. As we discuss in our book The Elephant in the Brain, the main function of news in our lives may be to offer “topics in fashion” that we can each riff on in our local conversations, to show off our mental backpacks of tools and resources. For that purpose, it doesn’t much matter how accurate such news is. In fact, it might be easier to show off with more fake news in the mix, as we can then show off by commenting on which news is fake. In this case, news bonds would be another example of an innovation designed to give us more of what we say we want, which is not adopted because at some level we know that we have hidden motives and actually want something else.


A Coming Hypocralypse?

Many people have been working hard for a long time to develop tech that helps to read people’s feelings. They are working on ways to read facial expressions, gazes, word choices, tones of voice, sweat, skin conductance, gait, nervous habits, and many other body features and motions. Over the coming years, we should expect this tech to consistently get cheaper and better at reading subtler feelings of more people, in more kinds of contexts, more reliably.

Much of this tech will be involuntary. While your permission and assistance may help such tech to read you better, others will often be able to read you using tech that they control, on their persons or in the buildings around you. They can use tech integrated with other complex systems, and thus hard to monitor and regulate. Yes, some defenses are possible, such as wearing dark sunglasses or burqas, or electronically modulating your voice. But such options seem rather awkward, and I doubt most people will be willing to use them much in most familiar social situations. And I doubt that regulation will greatly reduce the use of this tech. The overall trend seems clear: our true feelings will become more visible to the people around us.

We are often hypocritical about our feelings. That is, we pretend to some degree to have certain acceptable public feelings, while actually harboring different feelings. Most people know that this happens often, but our book The Elephant in the Brain suggests that we still vastly underestimate typical levels of hypocrisy. We all mask our feelings a lot, quite often from ourselves. (See our book for many more details.)

These two facts, better tech for reading feelings and widespread hypocrisy, seem to me to be on a collision course. As a result, within a few decades, we may see something of a “hypocrisy apocalypse”, or “hypocralypse”, wherein familiar ways to manage hypocrisy become no longer feasible, and collide with common norms, rules, and laws. In this post I want to outline some of the problems we face.

Long ago, I was bullied as a child. And so I know rather well that one of the main defenses children develop to protect themselves against bullies is to learn to mask their feelings. Bullies tend to see kids who are visibly scared or distraught as openly inviting bullying. Similarly, many adults protect themselves from salespeople and sexual predators by learning to mask their feelings. Masked feelings also help us avoid conflict with rivals at work and in other social circles. For example, we learn not to visibly insult or disrespect big people in rowdy bars if we don’t want to get beaten up.

Tech that unmasks feelings threatens to weaken the protections that masked feelings provide. That big guy in a rowdy bar may use new tech to see that everyone else there can see that you despise him, and take offense. Your bosses might see your disrespect for them, or your skepticism regarding their new initiatives. Your church could see that you aren’t feeling very religious at church service. Your school and nation might see that your pledge of allegiance was not heartfelt. And so on.

While these seem like serious issues, change will be mostly gradual, and so we may have time to flexibly search the space of possible adaptations. We can try changing with whom we meet, how, and for what purposes, and what topics we consider acceptable to discuss where. We can be more selective about who we make more visible, and how.

I worry more about collisions between better tech for reading feelings and common social norms, rules, and laws. Especially norms and laws that we adopt for more symbolic purposes, instead of to actually manage our interactions. These things tend to be less responsive to changing conditions.

For example, today we often consider it to be unacceptable “sexual harassment” to repeatedly and openly solicit work associates for sex, especially after they’ve clearly rejected the solicitor. We typically disapprove not just of direct requests, but also of less direct but relatively clear invitation reminders, such as visible leers, sexual jokes, and calling attention to your “junk”. And of course such rules make a great deal of sense.

But what happens when tech can make it clearer who is sexually attracted, and how much, to whom? If the behavior that led to these judgements were completely out of each person’s control, it might be hard to blame anyone. We might then socially pretend that it doesn’t exist, though we might eagerly check it out privately. Unfortunately, our behavior will probably continue to modulate the processes that produce such judgements.

For example, the systems that judge how attracted you are to someone might focus on the moments when you directly look at that person, when your face is clearly visible to some camera, under good lighting. Without your wearing sunglasses or a burqa. So the longer you spend directly looking at someone under such conditions, the better the tech will be able to see your attraction. As a result, your choice to spend more time looking directly at them under favorable reading conditions might be seen as an intentional act, a choice to send the message that you are sexually attracted to them. And thus your continuing to do so after they have clearly rejected you might be seen as sexual harassment.

Yes, a reasonable world might adjust rules on sexual harassment to account for many complex changing conditions. But we may not live in a reasonable world. I’m not making any specific claims about sexual harassment rules, but symbolic purposes influence many of the norms and laws that we adopt. That is, we often support such rules not because of the good consequences of having them, but because we like the way that our personal support for such rules makes us look personally. For example, many support laws against drugs and prostitution even when they believe that such laws do little to discourage such things. They want to be personally seen as publicly taking a stand against such behavior.

Consider rules against expressing racism and sexism. And remember that the usual view is that everyone is at least a bit racist and sexist, in part because they live in a racist and sexist society. What happens when we can collect statistics on each person regarding how their visible evaluations of the people around them correlate with the race and sex of those people? Will we then punish white males for displaying statistically-significantly low opinions of non-whites and non-males via their body language? (That’s like a standard we often apply to firms today.) As with sexual harassment, the fact that people can moderate these readings via their behaviors may make these readings seem to count as intentional acts. Especially since they can be tracking the stats themselves, to see the impression they are giving off. To some degree they choose to visibly treat certain people around them with disrespect. And if we are individually eager to show that we personally disapprove of racism and sexism, we may publicly support strict application of such rules even if that doesn’t actually deal well with real problems of racism and sexism in the world.

Remember that this tech should improve gradually. So for the first cases that set key precedents, the tech will be weak and thus flag very few people as clearly harassers or racists or sexists. And those few exceptions are much more likely to be people who actually did intend to harass and express racism or sexism, and who embody extreme versions of such behavior. While they will also probably tend to be people who are weird and non-conformist in other ways, this tech for reading feelings may initially seem to do well at helping us identify and deal with problematic people. For example, we may be glad that tech can identify the priests who most clearly lust after the young boys around them.

But as the tech gets better it will slowly be able to flag more and more people as sending disapproved messages. The rate will drift upward from one person in ten thousand to one in a thousand to one percent and so on. People may then start to change their behavior in bigger ways, to avoid being flagged, but that may be too little too late, especially if large video, etc. libraries of old behaviors are available to process with new methods.

At this point we may reach a “hypocralypse”, where rules that punish hypocrisy collide in a big way with tech that can expose hypocrisy. That is, where tech that can involuntarily show our feelings intersects with norms and laws that punish the expression of common but usually hidden feelings. Especially when such rules are in part symbolically motivated.

What happens then, I don’t know. Do white males start wearing burqas, do we regulate this tech heavily, or do we tone down and relax our many symbolic rules? I’ll hope for the best, but I still fear the worst.


Dalio’s Principles

When I write and talk about hidden motives, many respond by asking how they could be more honest about their motives. I usually emphasize that we have limited budgets for honesty, and that it is much harder to be honest about yourself than others. And it is especially hard to be honest about the life areas that are the most sacred to you. But some people insist on trying to be very honest, and our book can make them unhappy when they see just how far they have to go.

It is probably easier to be honest if you have community support for honesty. And that makes it interesting to study the few groups who have gone the furthest in trying to create such community support. An interesting example is the hedge fund Bridgewater, as described in Dalio’s book Principles:

An idea meritocracy where people can speak up and say what they really think. (more)

#1 New York Times Bestseller … Ray Dalio, one of the world’s most successful investors and entrepreneurs, shares the unconventional principles that he’s developed, refined, and used over the past forty years to create unique results in both life and business—and which any person or organization can adopt to help achieve their goals. … Bridgewater has made more money for its clients than any other hedge fund in history and grown into the fifth most important private company in the United States. … Along the way, Dalio discovered a set of unique principles that have led to Bridgewater’s exceptionally effective culture. … It is these principles … that he believes are the reason behind his success. … are built around his cornerstones of “radical truth” and “radical transparency,” … “baseball cards” for all employees that distill their strengths and weaknesses, and employing computerized decision-making systems to make believability-weighted decisions. (more)

This book seems useful if you were the absolute undisputed ruler of a firm, so that you could push a culture of your choice and fire anyone who seems to resist. And were successful enough to have crowds eager to join, even after you’d fired many. And didn’t need to coordinate strongly with customers, suppliers, investors, and complementors. Which I guess applies to Dalio.

But he has little advice to offer those who don’t sit in an organization or social network that consistently rewards “radical truth.” He offers no help in thinking about how to trade honesty against the other things your social contexts will demand of you. Dalio repeatedly encourages honesty, but he admits that it is often painful, and that many aren’t suited for it. He mainly just says to push through the pain, to get rid of people who resist it, and that these big visible up-front costs will all be worth it in the long run.

Dalio also seems to equate conflict and negative opinions with honesty. That is, he seeks a culture where people can say things that others would rather not hear, but doesn’t seem to consider that such negative opinions need not be “honest” opinions. The book makes hundreds of claims, but doesn’t cite outside sources, nor compare itself to other writings on the subject. Dalio doesn’t point to particular evidence in support of particular claims, nor give them any differing degrees of confidence, nor credit particular people as the source of particular claims. It is all just stuff he’s all sure of, that he endorses, all supported by the evidence of his firm’s success.

I can believe that the firm Bridgewater is full of open conflict, with negative opinions being frequently and directly expressed. And it would be interesting to study social behavior in such a context. I accept that this firm functions doing things this way. But I can’t tell if it succeeds because of or in spite of this open conflict. Yes this firm succeeds, but then so do many others with very different cultures. The fact that the top guy seems pretty self-absorbed and not very aware of the questions others are likely to ask of his book is not a good sign.

But if it’s a bad sign, it’s not much of one; plenty of self-absorbed people have built many wonderful things. What he has helped to build might in fact be wonderful. It’s just too bad that we can’t tell much about that from his book.

25May2019: Someone who wishes to remain anonymous just wrote to me saying:

I just happen to read your article from last year on Dalios Principles. I was struck by the quality of your observations. In Bridgewater terms I am ‘believable’ in assessing this as I was there for a number of years. Your inferences are especially insightful given you did not work there. Well done on your article.

Skip Value Signals

Consider the following two polls I recently held on Twitter:

As writers, these respondents think that readers won’t engage their arguments for factual claims on policy-relevant topics unless shown that the author shares the values of their particular political faction. But as readers they think they need no signal of shared values to convince them to engage such an argument. If these readers and writers are the same group, then they believe themselves to be hypocritical. They uphold an ideal that value signals should not be needed, but they do not live up to this ideal.

This seems to me part of a larger ideal worth supporting. The ideal is of a community of conversation where everything is open for discussion, people write directly and literally, and people respond mostly analytically to the direct and literal meanings of what people say. People make direct claims and explicit arguments, and refer to dictionaries for disputes about what words mean. There’s little need for or acceptance of discussion of what people really meant, and any such claims are backed up by direct explicit arguments based on what people actually and directly said. Even when you believe there is subtext, your text should respond to their text, not to their subtext. Autists may be especially at home in such a community, but many others can find a congenial home there.

A simple way to promote these norms is to skip value signals. Just make your claims, but avoid adding extra signals of shared values. If people who respond leap to the conclusion that you must hold opposing values, calmly correct them, pointing out that you neither said nor implied such a thing. Have your future behavior remain consistent with that specific claim, and with the larger claim that you follow these norms. Within a context, the more who do this, and the more who support them, the more reluctant others will become to publicly accuse people of saying things that they did not directly say, especially due to missing value signals.

Of course this is unlikely to become the norm in all human conversation. But it can be the norm within particular intellectual communities. Being a tenured professor who has and needs little in the way of grants or other institutional support, I am in an especially strong position to take such a stance, to promote these norms in my conversation contexts. To make it a bit easier for others to follow. And so I do. You are welcome.


Mysterious Motivation

Our lives are full of evidence that we don’t understand what motivates us. Kevin Simler and I recently published a book arguing that even though we humans are built to readily and confidently explain our motivations regarding pretty much everything we do, we in fact greatly misjudge our motives in ten big specific areas of life. For example, even though we think we choose medical treatments mainly to improve our health, we actually use medicine more to show concern about others, and to let them show concern about us. But a lot of other supporting evidence also suggests that we don’t understand our motivations.

For example, when advertisers and sales-folk try to motivate us to buy products and services, they pay great attention to many issues that we would deny are important to us. We often make lists of the features we want in friends, lovers, homes, and jobs, and then find ourselves drawn to options that don’t score well on these lists. Managers struggle to motivate employees, and often attend to issues different from those employees say motivate them.

While books on how to write fiction say motivation is central to characters and plot, most fiction attempts focused on the motives we usually attribute to ourselves fall flat, and feel unsatisfying. We are bothered by scenes showing just one level of motivation, such as a couple simply enjoying a romantic meal without subtext, as we expect multiple levels. 

While most people see their own lives as having meaning, they also find it easy to see lives different from theirs as empty and meaningless, without motivation. Teens often see this about most adult lives, and adults often see retired folks this way. Many see the lives of those with careers that don’t appeal to them, such as accounting, as empty and meaningless. Artists see non-artists this way. City dwellers often see those who live in suburbia this way, and many rural folks see city folks this way. Many modern people see the lives of most everyone before the industrial era as empty. We even sometimes see our own lives as meaningless, when our lives seem different enough from the lives we once had, or hoped to have.

Apparently, an abstract description of a life can easily seem empty. Lives seem meaningful, with motivation, when we see enough concrete details about them that we can relate to, either via personal experience or compelling stories. I think this is why many have called the world I describe in Age of Em a hell, even though to me it seems an okay world compared to most in history. They just don’t see enough relatable detail.

Taken together, this all suggests great error in our abstract thinking about motivations. We find motivation in our own lives and in some fictional lives. And if our subconscious minds can pattern-match with enough detail of a life description, we might see it as similar enough to what we would find motivating to agree that such a life is likely motivating. But without sufficiently detailed pattern-matching, few abstract life descriptions seem motivating or meaningful to us. In the abstract, we just don’t understand why people with such lives get up in the morning, or don’t commit suicide. 

Motivation is pretty central to human behavior. If you don’t know the point of what you do, how can you calculate whether to do more or less, or something different? And how can you offer useful advice to others on what to do if you don’t know why they do what they do? So being told that you don’t actually understand your motives and those of others should be pretty shocking, and grab your attention. But in fact, it usually doesn’t.

It seems that, just as we are built to assume that we automatically know local norms, without needing much thought, we are also built to presume that we know our motives. We make decisions and, if asked, we have motives to which we attribute our behavior. But we don’t care much about abstract patterns of discrepancies between the two. We care about specific discrepancies, which could make us vulnerable to specific accusations that our motives violate norms in specific situations. Otherwise, as long as we believe that our behavior is achieving our actual motives, we don’t much care what those motives are. Whatever we want must be a good thing to want, and following intuition is good enough to get it; we don’t need to consciously think about it.  

I guess I’m weird, because I find the idea that I don’t know my motives, or what would motivate myself or others, quite disturbing.


How Best Help Distant Future?

I greatly enjoyed Charles Mann’s recent book The Wizard and the Prophet. It contained the following stat, which I find to be pretty damning of academia:

Between 1970 and 1989, more than three hundred academic studies of the Green Revolution appeared. Four out of five were negative. p.437

Mann just did a related TED talk, which I haven’t seen, and posted this related article:

The basis for arguing for action on climate change is the belief that we have a moral responsibility to people in the future. But this is asking one group of people to make wrenching changes to help a completely different set of people to whom they have no tangible connection. Indeed, this other set of people doesn’t exist. There is no way to know what those hypothetical future people will want.

Picture Manhattan Island in the 17th century. Suppose its original inhabitants, the Lenape, could determine its fate, in perfect awareness of future outcomes. In this fanciful situation, the Lenape know that Manhattan could end up hosting some of the world’s great storehouses of culture. All will give pleasure and instruction to countless people. But the Lenape also know that creating this cultural mecca will involve destroying a diverse and fecund ecosystem. I suspect the Lenape would have kept their rich, beautiful homeland. If so, would they have wronged the present?

Economists tend to scoff at these conundrums, saying they’re just a smokescreen for “paternalistic” intellectuals and social engineers “imposing their own value judgments on the rest of the world.” (I am quoting the Harvard University economist Martin Weitzman.) Instead, one should observe what people actually do — and respect that. In their daily lives, people care most about the next few years and don’t take the distant future into much consideration. …

Usually economists use 5 percent as a discount rate — for every year of waiting, the price goes down 5 percent, compounded. … The implications for climate change are both striking and, to many people, absurd: at a 5 percent discount rate, economist Graciela Chichilnisky has calculated, “the present value of the earth’s aggregate output discounted 200 years from now is a few hundred thousand dollars.” … Chichilnisky, a major figure in the IPCC, has argued that this kind of thinking is not only ridiculous but immoral; it exalts a “dictatorship of the present” over the future.

Economists could retort that people say they value the future, but don’t act like it, even when the future is their own. And it is demonstrably true that many — perhaps most — men and women don’t set aside for retirement, buy sufficient insurance, or prepare their wills. If people won’t make long-term provisions for their own lives, why should we expect people to bother about climate change for strangers many decades from now? …

In his book, Scheffler discusses Children of Men … The premise of both book and film is that humanity has become infertile, and our species is stumbling toward extinction. … Our conviction that life is worth living is “more threatened by the prospect of humanity’s disappearance than by the prospect of our own deaths,” Scheffler writes. The idea is startling: the existence of hypothetical future generations matters more to people than their own existence. What this suggests is that, contrary to economists, the discount rate accounts for only part of our relationship to the future. People are concerned about future generations. But trying to transform this general wish into specific deeds and plans is confounding. We have a general wish for action but no experience working on this scale, in this time-frame. …

Overall, climate change asks us to reach for higher levels on the ladder of concern. If nothing else, the many misadventures of foreign aid have shown how difficult it is for even the best-intentioned people from one culture to know how to help other cultures. Now add in all the conundrums of working to benefit people in the future, and the hurdles grow higher. Thinking of all the necessary actions across the world, decade upon decade — it freezes thought. All of which indicates that although people are motivated to reach for the upper rungs, our efforts are more likely to succeed if we stay on the lower, more local rungs.

I side with economists here. The fact that we can relate emotionally to Children of Men hardly shows that people would actually react as it depicts. Fictional reactions often differ greatly from real ones. And I’m skeptical of Mann’s theory that we really do care greatly about helping the distant future, but are befuddled by the cognitive complexity of the task. Consider two paths to helping the distant future:

  1. Lobby via media and politics for collective strategies to prevent global warming now.
  2. Save resources personally now to be spent later to accommodate any problems then.

The saving path seems much less cognitively demanding than the lobby path, and in fact quite feasible cognitively. Resources will be useful later no matter what the actual future problems and goals turn out to be. Yes, the saving path faces agency costs, to control distant future folks tasked with spending your savings. But the lobby path also has agency costs, to control government as an agent.

Yes, the value of the saving path relative to the lobby path is reduced to the degree that prevention is cheaper than accommodation, or collective action more effective than personal action. But the value of the saving path increases enormously with time, as investments typically grow about 5% per year. And cognitive complexity costs of the lobby path also increase exponentially with time, as it becomes harder to foresee the problems and values of the distant future. (Ems wouldn’t be grateful for your global warming prevention, for example.)
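The compounding arithmetic behind this comparison is easy to check directly. A minimal sketch, using the 5% annual rate and 200-year horizon quoted above (the dollar framing is just an illustration, not a claim about actual returns):

```python
# Two sides of the same 5% compounding coin:
# how much $1 saved now grows over 200 years, and how little
# $1 of future value is worth today at that discount rate.

rate = 0.05   # annual return / discount rate from the text
years = 200   # horizon used in Chichilnisky's example

growth_factor = (1 + rate) ** years       # roughly 17,000x growth
present_value_factor = 1 / growth_factor  # future value shrinks by the same factor

print(f"$1 saved today grows to about ${growth_factor:,.0f} in {years} years")
print(f"$1 received in {years} years is worth about ${present_value_factor:.6f} today")
```

The same factor that makes discounted future output look tiny to economists is what makes saved resources loom so large: compounding cuts in both directions.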

Wait long enough to help and the relative advantage of the saving path should become overwhelming. So the fact that we see far more interest in the lobby path, relative to the saving path, really does suggest that people just don’t care that much about the distant future, and that global warming concern is a smokescreen for other policy agendas. No matter how many crocodile tears people shed regarding fictional depictions.

Added 5a: The posited smokescreen motive would be hidden, and perhaps unconscious.

Added 6p: I am told that in a half dozen US states it is cheap to create trusts and foundations that can accumulate assets for centuries, and then turn to helping with problems then, all without paying income or capital gains taxes on the accumulating assets.


A Salute To Median Calm

It is a standard trope of fiction that people often get angry when they suffer life outcomes well below what they see as their justified expectations. Such sore losers are tempted to retaliate against the individuals and institutions they blame for their loss, causing increasing damage until others agree to fix the unfairness.

Most outcomes, like income or fame, are distributed with mean outcomes well above median outcomes. As a result, well over half of everyone gets an outcome below what they could have reasonably expected. So if this sore loser trope were true, there’d be a whole lot of angry folks causing damage. Maybe even most people would be this angry. Hard to see how civilization could function here. This scenario is often hoped-for by those who seek dramatic revolutions to fix large scale social injustices.
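The mean-above-median point is easy to illustrate with any right-skewed distribution, such as the lognormal, a common rough model for income. A minimal sketch (the parameters are arbitrary illustration values, not fitted to real income data):

```python
import random
import statistics

random.seed(0)  # make the illustration reproducible

# Simulate a right-skewed "income" distribution.
incomes = [random.lognormvariate(mu=0.0, sigma=1.0) for _ in range(100_000)]

mean = statistics.fmean(incomes)      # theory: e^(sigma^2/2), about 1.65
median = statistics.median(incomes)   # theory: e^mu = 1
below_mean = sum(x < mean for x in incomes) / len(incomes)

print(f"mean = {mean:.3f}, median = {median:.3f}")
print(f"fraction below the mean = {below_mean:.2%}")  # well over half
```

With these parameters roughly 69% of simulated outcomes fall below the mean, so anyone who anchored expectations on the average is more likely than not to feel shortchanged.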

Actually, however, even though most people might plausibly see themselves as unfairly assigned to be losers, few become angry enough to cause much damage. Oh most people will have resentments and complaints, and this may lead on occasion to mild destruction, but most people are mostly peaceful. In the words of the old song, while they may not get what they want, they mostly get what they need.

Not only do most people achieve much less than the average outcomes, they achieve far less than the average outcomes that they see in media and fiction. Furthermore, most people eventually realize that the world is often quite hypocritical about the qualities it rewards. That is, early in life people are told that certain admired types of efforts and qualities are the ones with the best chance to lead to high outcomes. But later people learn that in fact other less cooperative or fair strategies are often rewarded more. They may thus reasonably conclude that the game was rigged, and that they failed in part because they were fooled for too long.

Given all this, we should be somewhat surprised, and quite grateful, to live in such a calm world. Most people fall below the standard of success set by average outcomes, and far below that set by typical media-visible outcomes. And they learn that their losses are caused in part by winners taking illicit strategies and lying to them about the rewards to admired strategies. Yet contrary to the common fictional trope, this does not induce them to angrily try to burn down our shared house of civilization.

So dear mostly-calm near-median person, I respectfully salute you. Without you and your stoic acceptance, civilization would not be possible. Perhaps I should salute men a bit more, as they are more prone to violent anger, and suffer higher variance and thus higher mean to median outcome ratios. And perhaps the old a bit more too, as they see more of the world’s hypocrisy, and can hope much less for success via big future reversals. But mostly, I salute you all. Humans are indeed amazing creatures.
