Tag Archives: Hypocrisy

Dreamtime Social Games

Ten years ago, I posted one of my most popular essays: “This is the Dreamtime.” In it, I argued that, because we are rich,

Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialed to the max.

Today I want to talk about dreamtime social games.

For at least a million years, our ancestors wandered the Earth in small bands of 20-50 people. These groups were so big that they ran out of food if they stayed in one place, which is why they wandered. But such groups were also big and smart enough to spread individual risks well, and to be relatively safe from predators.

So in good times at least, the main environment that mattered to our forager ancestors was each other. That is, they succeeded or failed mostly based on winning social games. Those who achieved higher status in their group gained more food, protection, lovers, and kids. And so, while foragers pretended that they were all equal, they actually spent much of their time and energy trying to win such status games. They tried to look impressive, to join respected alliances, to undermine rival alliances, and so on. Usually in the context of grand impractical leisure and play.

As I described recently, status is usually based on a wide range of clues regarding one’s impressiveness, and the relative weight on these clues does vary across cultures. But there are many generic clues that tend to be important in most all cultures, including strength, courage, intelligence, wit, art, loyalty, social support etc.

When an ability was important for survival in a local environment, cultural selection tended to encourage societies to put more weight on that ability in local status ratings, especially when their society felt under threat. So given famine, hunters gain status; given war, warriors gain status; and when searching for a new home, explorers gain status.

But when the local environment seemed less threatening, humans have tended to revert back to a more standard human social game, focused on less clearly useful abilities. And the more secure a society, and the longer it has felt secure, the more strongly it reverts. So across history the social worlds of comfortable elites have been remarkably similar. In social worlds such as Versailles, the Tale of Genji, or Google today, we see less emphasis on abilities that help win in a larger, harsher world, or that protect this smaller world from larger worlds, and more emphasis on complex internal politics based on beauty, wit, abstract ideas, artistic tastes, political factions, and who likes who.

That is, as people feel safer, local status metrics and social institutions drift toward emphasizing likability over effectiveness, popularity and impressiveness over useful accomplishment, and art and design over engineering. And as our world has been getting richer and safer for many centuries now, our culture has long been moving toward emphasizing such forager values and attitudes. (Though crises like wars often push us back temporarily.)

“Liberals” tend to have moved further on this path than “conservatives”, as indicated by typical jobs:

jobs that lean conservative … [are] where there are rare big bad things that can go wrong, and you want workers who can help keep them from happening. … Conservatives are more focused on fear of bad things, and protecting against them. … Jobs that lean liberal… [have] small chances that a worker will cause a rare huge success … [or] people who talk well.

Also, “conservative” attitudes toward marriage have focused on raising kids and on a division of labor in production, while “liberal” attitudes have focused on sex, romance, and sharing leisure activities.

Rather than acknowledging that our status priorities change as we feel safer, humans often give lip service to valuing useful outcomes, while actually more valuing the usual social game criteria. So we pretend to go to school to learn useful class material, but we actually gain prestige while learning little that is useful. We pretend that we pick lawyers who win cases, yet don’t bother to publish track records and mainly pick lawyers based on institutional prestige. We pretend we pick doctors to improve health, but also don’t publish track records and mainly pick via institutional prestige, and don’t notice that there’s little correlation between health and medicine. We pretend to invest in hedge funds to gain higher returns, but really gain status via association with impressive fund managers, and pay via lower average returns.

I recently realized that, alas, my desire to move our institutions more toward “paying for results” is at odds with this strong social trend. Our institutions could be much more effective at getting us the things we say we want out of them, but we seem mostly content to let them be run by the usual social status games. We put high status people in charge and give them a lot of discretion, as long as they give lip service to our usual practical goals. It feels to most people like a loss in collective status if they let their institutions actually focus too much on results.

A focus on results would probably result in the rise to power of less impressive looking people who manage to get more useful things done. That is what we’ve seen when firms have adopted prediction markets. At first firms hope that such markets may help them identify their best informed employees. But they are disappointed to learn that the winners tend not to look socially impressive; they are more often nerdy, difficult, inarticulate contrarians. Not the sort they actually want to promote.

Paying more for results would feel to most people like having to invite less suave and lower class engineers or apartment supers to your swanky parties because they are useful as associates. Or having to switch from dating hip hunky Tinder dudes to reliable practical guys with steady jobs. In status terms, that all feels less like admiring prestige and more like submitting to domination, which is a forager no-no. Paying for results is the sort of thing that poor practical people have to do, not rich prestigious folks like you.

Of course our society is full of social situations where practical people get enough rewards to keep them doing practical things. So that the world actually works. People sometimes try to kill such things, but then they suffer badly and learn to stop. But most folks who express interest in social reforms seem to care more about projecting their grand hopes and ideals, relative to making stuff work better. Strong emotional support for efficiency-driven reform must come from those who have deeply felt the sting of inefficiency. Perhaps regarding crime?

Ordinary human intuitions work well for playing the usual social status games. You can just rely on standard intuitions re who you like and are impressed by, and who you should say what to. In contrast, figuring out how to actually and effectively pay for results is far more complex, and depends more on the details of your world. So good solutions there are unlikely to be well described by simple slogans, and are not optimized for showing off one’s good values. Which, alas, seems another big obstacle to creating better institutions.


How Idealists Aid Cheaters

Humans have long used norms to great advantage to coordinate behavior. Each norm requires or prohibits certain behavior in certain situations, and the norm system requires that others who notice norm violations call attention to those violations and coordinate to discourage or punish them.

This system is powerful, but not infinitely so. If a small enough group of people notice a minor enough norm violation, and are friendly enough with each other and with the violator, they often coordinate instead to not enforce the norm, and yet pretend that they did so. That is, they let cheaters get away with it.

To encourage norm enforcement, our social systems make many choices about how many people typically see each behavior or its signs. We pair up police in squad cars, and decide how far away in the police organizational structure internal affairs sits. Many kinds of work are double-checked by others, sometimes from independent agencies. Schools declare honor codes that justify light checking. At times, we “measure twice and cut once.”

These choices of how much to check are naturally tied to our estimates of how strongly people tend to enforce norms. If even small groups who observe violations will typically enforce them, we don’t need to check as much or as carefully, or to punish as much when we catch cheaters. But if large diverse groups commonly manage to coordinate to evade norm enforcement, then we need frequent checks by diverse people who are widely separated organizationally, and we need to punish cheaters more when we catch them.

I’ve been reading the book Moral Mazes for the last few months; it is excellent, but also depressing, which is why it takes so long to read. It makes a strong case, through many detailed examples, that in typical business organizations, norms are actually enforced far less than members pretend. The typical level of checking is in fact far too little to effectively enforce common norms, such as those against self-dealing, bribery, accounting lies, unfair evaluation of employees, and treating similar customers differently. Combining this data with other things I know, I’m convinced that this applies not only in business, but in human behavior more generally.

We often argue about this key parameter of how hard or necessary it is to enforce norms. Cynics tend to say that it is hard and necessary, while idealists tend to say that it is easy and unnecessary. This data suggests that cynics tend more to be right, even as idealists tend to win our social arguments.

One reason idealists tend to win arguments is that they impugn the character and motives of cynics. They suggest that cynics can more easily see opportunities for cheating because cynics in fact intend to and do cheat more, or that cynics are losers who seek to make excuses for their failures, by blaming the cheating of others. Idealists also tend to say that while other groups may have norm enforcement problems, our group is better, which suggests that cynics are disloyal to our group.

Norm enforcement is expensive, but worth it if we have good social norms that discourage harmful behaviors. Yet if we under-estimate how hard norms are to enforce, we won’t check enough, and cheaters will get away with cheating, canceling much of the benefit of the norm. People who privately know this fact will gain by cheating often, as they know they can get away with it. Conversely, people who trust norm enforcement to work will be cheated on, and lose.

When confronted with data, idealists often argue, successfully, that it is good if people tend to overestimate the effectiveness of norm enforcement, as this will make them obey norms more, to everyone’s benefit. They give this as a reason to teach this overestimate in schools and in our standard public speeches. And so that is what societies tend to do. Which benefits those who, even if they give lip service to this claim in public, are privately selfish enough to know it is a lie, and are willing to cheat on the larger pool of gullible victims that this policy creates.

That is, idealists aid cheaters.

Added 26Aug: In this post, I intended to define the words “idealist” and “cynic” in terms of how hard or necessary it is to enforce norms. The use of those words has distracted many. Not sure what are better words though.


Paternalism Is About Status

… children, whom he finds delightful and remarkably self-sufficient from the age of 4. He chalks this up to the fact that they are constantly lied to, can go anywhere and in their first years of life are given pretty much anything they please. If the baby wants the butcher knife, the baby gets the butcher knife. This novel approach may not sound like appropriate parenting, but Kulick observes that the children acquire their self-sufficiency by learning to seek out their own answers and by carefully navigating their surroundings at an early age. … the only villagers whom he’s ever seen beat their children are the ones who left to attend Catholic school. (more)

Bofi forager parenting is quite permissive and indulgent by Western standards. Children spend more time in close physical contact with parents, and are rarely directed or punished by parents. Children are allowed to play with knives, machetes, and campfires without the warnings or interventions of parents; this permissive parenting style has been described among other forager groups as well. (more)

Much of the literature on paternalism (including my paper) focuses on justifying it: how much can a person A be helped by allowing a person B to prohibit or require particular actions in particular situations? Such as parents today often try to do with their children. Most of this literature focuses on various deviations from simple rational agent models, but my paper shows that this is not necessary; B can help A even when both are fully rational. All it takes is for B to sometimes know things that A does not.

However, this focus on justification distracts from efforts to explain the actual variation in paternalism that we see around us. Sometimes third parties endorse and support the ability of B to prohibit or require actions by A, and sometimes third parties oppose and discourage such actions. How can we best explain which happens where and when?

First let me set aside situations where A authorizes B to, at some future date, limit or require actions by A. People usually justify this in terms of self-control, i.e., where A today disagrees with future A’s preferences. To me this isn’t real paternalism, which I see as more essentially about the extra info that B may hold.

Okay, let’s start with a quick survey of some of the main observed correlates of paternalism.


Advice Wiki

People often give advice to others; less often, they request advice from others. And much of this advice is remarkably bad. For example, the advice to “never settle” in pursuing your career dreams.

When A takes advice from B, that is often seen as raising the status of B and lowering that of A. As a result, people often resist listening to advice, they ask for advice as a way to flatter and submit, and they give advice as a way to assert their status and goodness. For example, advisors often tell others to do what they did, as a way to affirm that they have good morals, and achieved good outcomes via good choices.

These hidden motives understandably detract from the average quality of advice as a guide to action. And the larger is this quality reduction, the more potential there is for creating value via alternative advice institutions. I’ve previously suggested using decision markets for advice in many contexts. In this post, I want to explore a simpler/cheaper approach: a wiki full of advice polls. (This is like something I proposed in 2013.)

Imagine a website where you could browse a space of decision contexts, connected to each other by the subset relation. For example, under “picking a career plan after high school”, there’s “picking a college attendance plan”, and under that there’s “picking a college” and “picking a major”. For each decision context, people can submit proposed decision advice, such as “go to the highest ranked college you can get into” for “pick a college”. Anyone could then vote to say which advice they endorse in which contexts, and see the current voter distribution over advice options.

Assume participants can be anonymous if they so choose, but can also be labelled with their credentials. Assume that they can change their votes at anytime, and that the record of each vote notes which options were available at the time. From such voting records, we might see not just the overall distribution of opinion regarding some kind of decision, but also how that distribution varies with quality indicators, such as how much success a person has achieved in related life areas. One might also see how advice varies with level of abstraction in the decision space; is specific advice different from general advice?
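
To make this concrete, here is a minimal sketch, in Python, of one way the underlying data might be organized: decision contexts linked by the subset relation, proposed advice per context, and time-stamped votes that record which options were available when each vote was cast. All class, field, and example names here are my own illustrative choices, not a spec.

```python
# A minimal sketch of the advice-wiki data model described above. All class,
# field, and example names are illustrative assumptions, not an existing API.
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionContext:
    name: str                                      # e.g. "picking a college"
    parents: list = field(default_factory=list)    # broader contexts (subset relation)
    advice: list = field(default_factory=list)     # proposed advice options
    votes: list = field(default_factory=list)      # (voter, advice, options then, time)

    def add_advice(self, text):
        if text not in self.advice:
            self.advice.append(text)

    def vote(self, voter_id, advice_text):
        # Record which options were available at the time of the vote.
        self.votes.append(
            (voter_id, advice_text, tuple(self.advice), datetime.now(timezone.utc))
        )

    def distribution(self):
        # Current distribution of voter opinion over the advice options.
        return Counter(v[1] for v in self.votes)

# Usage: a tiny context tree, a couple of advice options, and a few votes.
career = DecisionContext("picking a career plan after high school")
college = DecisionContext("picking a college", parents=[career])
college.add_advice("go to the highest ranked college you can get into")
college.add_advice("go wherever the cost and fit look best")
college.vote("anon-1", "go wherever the cost and fit look best")
college.vote("anon-2", "go to the highest ranked college you can get into")
print(college.distribution())
```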

Of course such poll results aren’t plausibly as accurate as those resulting from decision markets, at least given the same level of participation. But they should also be much easier to produce, and so might attract far more participation. The worse are our usual sources of advice, the higher the chance that these polls could offer better advice. Compared to asking your friends and family, these distributions of advice suffer less from particular people pushing particular agendas, and anonymous advice may suffer less from efforts to show off. At least it might be worth a try.

Added 1Aug: Note that decision context can include features of the decision maker, and that decision advice can include decision functions, which map features of the decision context to particular decisions.


Beware Nggwal

Consider the fact that this was a long standing social equilibrium:

During an undetermined time period preceding European contact, a gargantuan, humanoid spirit-God conquered parts of the Sepik region of Papua New Guinea. … Nggwal was the tutelary spirit for a number of Sepik horticulturalist societies, where males of various patriclans were united in elaborate cult systems including initiation grades and ritual secrecy, devoted to following the whims of this commanding entity. …

a way of maintaining the authority of the older men over the women and children; it is a system directed against the women and children, … In some tribes, a woman who accidentally sees the [costumed spirit or the sacred paraphernalia] is killed. … it is often the responsibility of the women to provide for his subsistence … During the [secret] cult’s feasts, it is the senior members who claim the mantle of Nggwal while consuming the pork for themselves. …

During the proper ritual seasons, Ilahita Arapesh men would wear [ritual masks/costumes], and personify various spirits. … move about begging small gifts of food, salt, tobacco or betelnut. They cannot speak, but indicate their wishes with various conventional gestures, …
Despite the playful, Halloween-like aspects of this practice … 10% of the male masks portrayed [violent spirits], and they were associated with the commission of ritually sanctioned murder. These murders committed by the violent spirits were always attributed to Nggwal.

The costumes of the violent spirits would gain specific insignia after committing each killing, … “Word goes out that Nggwal has “swallowed” another victim; the killer remains technically anonymous, even though most Nggwal members know, or have a strong inkling of, his identity.” … are universally feared, and nothing can vacate a hamlet so quickly as one of these spooks materializing out of the gloom of the surrounding jungle. … Nggwal benefits some people at the expense of others. Individuals of the highest initiation level within the Tambaran cult have increased status for themselves and their respective clans, and they have exclusive access to the pork of the secret feasts that is ostensibly consumed by Nggwal. The women and children are dominated severely by Nggwal and the other Tambaran cult spirits, and the young male initiates must endure severe dysphoric rituals to rise within the cult. (more)

So in these societies, top members of secret societies could, by wearing certain masks, literally get away with murder. These societies weren’t lawless; had these men committed murder without the masks, they would have been prosecuted and punished.

Apparently many societies have had an official legal system that was supposed to fairly punish anyone for hurting others, alongside less visible but quite real systems whereby some elites could far more easily get away with murder. Has this actually been the usual case in history?


Why Crime Discretion?

Our criminal law system gives discretion to many actors who can, in effect, pardon criminals or vary their punishment. Police officers and their bosses can choose not to arrest, or to charge with a lesser crime; prosecutors and their bosses can choose not to prosecute, to prosecute for a lesser crime, or to settle on a lesser crime; judges and juries can choose not to convict, and can impose mild or severe sentences; governors and presidents can pardon criminals; and prisons can parole them.

If you were the victim of a crime, you might be disturbed to see that so many people can in effect pardon the criminal who hurt you. Also, as these parties are paid far less to deal with that criminal than the punishment at stake is worth to that criminal, you could reasonably worry about bribes and other forms of bias and corruption. Even if you think there should be some discretion in the system, you might think it should be limited more, such as to only the judge. Why do we have so much discretion in our system?

To find out, I did this Twitter poll:

I also did two other polls, the same except “speeding” was replaced by “trespassing” and “in general”. In all three polls, by a roughly 3-1 ratio, respondents thought that discretion would favor them personally. And in all cases, there is a substantial correlation between thinking that discretion benefits you and thinking that it benefits society. However, for speeding, which is the case where they should have the most personal knowledge of the consequences of discretion, they were split evenly, about 1-1, on whether discretion cuts net social harm. And in the other two cases, where they personally know much less, they guessed about 3-2 that discretion cuts net social harm.

To me, the obvious interpretation here is this: the main reason most people favor crime law discretion is that they expect to personally benefit from it. They are willing to presume that it benefits society in areas they don’t know much about, but they admit that it doesn’t in the areas they know best. This seems analogous to people estimating much higher accuracy for media reports in areas they don’t know about, compared to areas in which they’ve seen how media coverage compares to personal knowledge.


Why Weakly Enforced Rules?

While some argue that we should change our laws to open our borders, it is more common for pro-immigrant folks to argue for weaker enforcement of anti-immigration laws. They want fewer government agencies to be authorized to help enforcement, fewer resources to go into finding violators, and weaker punishment of violators. Similar things happen regarding prostitution and adultery; many complain about enforcement of such laws, and yet don’t support eliminating them.

The recently celebrated “criminal justice reform” didn’t make fewer things illegal, or substitute more efficient forms of punishment (eg torture, exile) for less efficient prison. It mainly just reduces jail sentence durations. When I probed supporters, they confirmed they didn’t want fewer things illegal or more efficient enforcement.

The policing reforms that many want are not to substitute more cost-effective enforcers such as bounty hunters, or stronger punishments against police misconduct, but to instead just have police do less: pull over fewer drivers, investigate fewer suspects, etc.

When I claim that stronger norm enforcement is a big advantage of legalized blackmail, many people say that’s exactly the problem; they want less enforcement of common norms. For example, Scott Sumner:

Great literature and great films often turn people violating society’s norms into sympathetic characters, especially when they are ground down by “the machine”. I suspect that the almost universal public opposition to legalizing blackmail reflects society’s view (subconscious to be sure) that enforcing these norms (especially for non-criminal activities) requires a “light touch”, and that turning shaming into an highly profitable industry will do more harm than good. It will turn society into a mean, backstabbing culture. The people hurt most will be sensitive good people who made a mistake, not callous gang members who don’t care if others think they are evil.

On the surface, all of these positions seem puzzling to me; if a norm or law isn’t worth enforcing well, why not eliminate it? Some possible explanations:

  1. People like the symbolism of being against things they don’t really want to stop. It is more about wanting to look like the sort of person who doesn’t fully approve of such things.
  2. Having more rules that are only weakly enforced allows the usual systems more ways to arbitrarily punish some folks via selective enforcement. You might like this if you share such system’s tastes re who to arbitrarily punish. Or if you want to signal submission to authorities who want to use such power.
  3. If these things were actually legal and licit, people might sometimes publicly suggest that you are engaging in them. But if they are illicit or illegal, there’s a norm against accusing someone of doing them without substantial evidence. So if you want to discourage others from lightly accusing you of such things, you may want those activities to be officially disapproved, even if you don’t actually want to discourage them.
  4. We mainly want these norms and laws to help us deal with some disliked “criminal class” out there, a class that we don’t actually interact with much. So when we see real cases in our familiar world, they seem like they are not in that class, and thus we don’t want our norms or laws to apply to them. We only want less enforcement for folks in our world.
  5. What else?

Added 26Feb: I clearly didn’t communicate well in this post, as many commenters and this responding post saw me as arguing that all punishment, conditional on being caught and convicted, should either be zero or max extreme (eg death). Yes of course it is often reasonable to use intermediate punishments.

But enforcement also includes a chance of being caught, not just a degree of punishment, and there are issues of the cost-effectiveness of the processes to catch and punish people. There are many who want less punishment if caught, and less chance of catching, for most all offenses, and don’t want more cost effective catching or punishment, for fear that this might lead to more catching or punishing. To me, this seems hard to explain via just thinking that we’ve overestimated the optimal punishment level for some particular offenses.

Added 3Mar: A striking example is how in WWI recruits were supposed to be age 19 or older, but it was easy to lie and get in at younger ages, and most everyone knew of someone who had done this. We tsk tsk about child soldiers elsewhere, but don’t seem much ashamed of our own.


Dominance Hides in Prestige Clothing

21 months ago, I said: 

We like to give others the impression that we personally mainly want prestige in ourselves and our associates, and that we only grant others status via the prestige they have earned. But let me suggest that, compared to this ideal, we actually want more dominance in ourselves and our associates than we like to admit, and we submit more often to dominance. In the following, I’ll offer three lines of evidence for this claim. First consider that we like to copy the consumer purchases of people that we envy, but not of people we admire for being “warm” and socially responsible. … Second, consider the fact that when our bosses or presidents retire and leave office, their legitimate prestige should not have diminished much. … Yet others usually show far less interest in associating with such retirees. … For my third line of evidence, … for long term mates we more care about prestige features that are good for the group, but for short term mates, we care more about dominance features that are more directly useful to us personally. (more)

Today I’ll describe a fourth line of evidence: when ranking celebrities, we don’t correct much for the handicaps that people face. Let me explain.

Dominance is about power, while prestige is about ability. Now on average having more ability does tend to result in having more power. But there are many other influences on power besides individual ability. For example, there’s a person’s family’s wealth and influence, and the power they gained via associating with powerful institutions and friends.  

As I know the world of intellectuals better than other worlds, let me give examples from there. Intellectuals who go to more prestigious schools and who get better jobs at more prestigious institutions have clear advantages in this world. And those whose parents were intellectuals, or who grew up in more intellectual cultures, had advantages. Having more financial support and access to better students to work with are also big helps. But when we consider which intellectuals to most praise and admire (e.g., who deserves a Nobel prize), we mainly look at the impact they’ve had, without correcting much for these many advantages and obstacles.

Oh sure, when it is we ourselves who are judged, we are happy to argue that our handicaps should be corrected for. After all, most of us don’t have as many advantages as do the most successful people. And we are sometimes willing to endorse correcting for the handicaps of politically allied groups. So if we feel allied with the religious and politically conservative, we may note that they tend to face more obstacles in intellectual worlds today. And if we feel allied with women or ethnic minorities, we may also endorse taking into account the extra obstacles that they often face.

But these corrections are often half-hearted, and they seem the exceptions that prove a rule: when we pick our intellectual heroes, we don’t correct much for all these handicaps and advantages. We mainly just want powerful dominant heroes. 

In acting, music, and management, being good looking is a big advantage. But while we tend to say that we disapprove of this advantage, we don’t correct for it much when evaluating such people. Oscar awards mostly go to the pretty actors, for example.


News Accuracy Bonds

Fake news is a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media. This false information is mainly distributed by social media, but is periodically circulated through mainstream media. Fake news is written and published with the intent to mislead in order to damage an agency, entity, or person, and/or gain financially or politically, often using sensationalist, dishonest, or outright fabricated headlines to increase readership, online sharing, and Internet click revenue. (more)

One problem with news is that sometimes readers who want truth instead read (or watch) and believe news that is provably false. That is, a news article may contain claims that others are capable of proving wrong to a sufficiently expert and attentive neutral judge, and some readers may be fooled against their wishes into believing such news.

Yes, news can have other problems. For example, there can be readers who don’t care much about truth, and who promote false news and its apparent implications. Or readers who do care about truth may be persuaded by writing whose mistakes are too abstract or subtle to prove wrong now to a judge. I’ve suggested prediction markets as a partial solution to this; such markets could promote accurate consensus estimates on many topics which are subtle today, but which will eventually become sufficiently clear.

In this post, however, I want to describe what seems to me the simple obvious solution to the more basic problem of truth-seekers believing provably-false news: bonds. Those who publish or credential an article could offer bonds payable to anyone who shows their article to be false. The larger the bond, the higher their declared confidence in their article. With standard icons for standard categories of such bonds, readers could easily note the confidence associated with each news article, and choose their reading and skepticism accordingly.

That’s the basic idea; the rest of this post will try to work out the details.

While articles backed by larger bonds should be more accurate on average, the correlation would not be exact. Statistical models built on the dataset of bonded articles, some of which eventually pay bonds, could give useful rough estimates of accuracy. To get more precise estimates of the chance that an article will be shown to be in error, one could create prediction markets on the chance that an individual article will pay a bond, with initial prices set at statistical model estimates.

Of course the same article should have a higher chance of paying a bond when its bond amount is larger. So even better estimates of article accuracy would come from prediction markets on the chance of paying a bond, conditional on a large bond amount being randomly set for that article (for example) a week after it is published. Such conditional estimates could be informative even if only one article in a thousand is chosen for such a very large bond. However, since there are now legal barriers to introducing prediction markets, and none to introducing simple bonds, I return to focusing on simple bonds.

Independent judging organizations would be needed to evaluate claims of error. A limited set of such judging organizations might be certified to qualify an article for any given news bond icon. Someone who claimed that a bonded article was in error would have to submit their evidence, and be paid the bond only after a valid judging organization endorsed their claim.

Bond amounts should be held in escrow or guaranteed in some other way. News firms could limit their risk by buying insurance, or by limiting how many bonds they’d pay on all their articles in a given time period. Say no more than two bonds paid on each day’s news. Another option is to have the bond amount offered be a function of the (posted) number of readers of an article.

As a news article isn’t all true or false, one could distinguish degrees of error. A simple approach could go sentence by sentence. For example, a bond might pay according to some function of the number of sentences (or maybe sentence clauses) in an article shown to be false. Alternatively, sentence level errors might be combined to produce categories of overall article error, with bonds paying different amounts to those who prove each different category. One might excuse editorial sentences that do not intend to make verifiable newsy claims, and distinguish background claims from claims central to the original news of the article. One could also distinguish degrees of error, and pay proportional to that degree. For example, a quote that is completely made up might be rated as completely false, while a quote that is modified in a way that leaves the meaning mostly the same might count as a small fractional error.
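
As a toy illustration of how such sentence-level judgments might be aggregated into an overall bond payout, here is a short Python sketch; the weights, the editorial exemption, and the proportional payout rule are all just my own illustrative assumptions, not part of the proposal.

```python
# Toy sketch of aggregating sentence-level error judgments into an overall bond
# payout, roughly along the lines suggested above. The weights, the editorial
# exemption, and the proportional payout rule are illustrative assumptions only.

def article_payout(sentences, bond_amount):
    """sentences: list of dicts like
    {"central": bool, "editorial": bool, "error_degree": float in [0, 1]}"""
    scored = [s for s in sentences if not s["editorial"]]  # excuse editorial sentences
    if not scored:
        return 0.0
    # Weight errors in central newsy claims more heavily than background claims.
    weighted_error = sum(
        s["error_degree"] * (2.0 if s["central"] else 1.0) for s in scored
    )
    max_possible = sum(2.0 if s["central"] else 1.0 for s in scored)
    # Pay in proportion to the overall degree of error found.
    return bond_amount * (weighted_error / max_possible)

# Example: one central claim fully wrong, one background quote slightly altered,
# and one editorial sentence that is excused from scoring.
example = [
    {"central": True,  "editorial": False, "error_degree": 1.0},
    {"central": False, "editorial": False, "error_degree": 0.1},
    {"central": False, "editorial": True,  "error_degree": 0.0},
]
print(article_payout(example, bond_amount=1000.0))  # 700.0
```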

To the extent that it is possible to verify partisan slants across large sets of articles, for example in how people or organizations are labeled, publishers might also offer bonds payable to those who can show that a publisher has taken a consistent partisan slant.

A subtle problem is: who pays the cost to judge a claim? On the one hand, judges can’t just offer to evaluate all claims presented to them for free. But on the other hand, we don’t want to let big judging fees stop people from claiming errors when errors exist. To make a reasonable tradeoff, I suggest a system wherein claim submissions include a fee to pay for judging, a fee that is refunded double if that claim is verified.

That is, each bond specifies a maximum amount it will pay to judge that bond, and which judging organizations it will accept. Each judging organization specifies a max cost to judge claims of various types. A bond is void if no acceptable judge’s max is below that bond’s max. Each submission asking to be paid a bond then submits this max judging fee. If the judges don’t spend all of their max judging fee evaluating this case, the remainder is refunded to the submitter. It is the amount of the fee that the judges actually spend that will be refunded double if the claim is supported. A public dataset of past bonds and their actual judging fees could help everyone to estimate future fees.
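
Here is a rough Python sketch of that bookkeeping, just to make the flow explicit: the validity check against judges’ posted costs, the refund of unspent fees, and the double refund of spent fees when a claim is upheld. The function names and dollar figures are hypothetical.

```python
# Rough sketch of the judging-fee bookkeeping described above; the function
# names and the example numbers are hypothetical, not a reference design.

def bond_is_valid(bond_max_judging_fee, acceptable_judges, judge_max_costs):
    """A bond is void if no acceptable judge will work within its max judging fee."""
    return any(judge_max_costs[j] <= bond_max_judging_fee for j in acceptable_judges)

def settle_claim(submitted_fee, fee_spent, claim_upheld, bond_amount):
    """Return (payout to the claim submitter, payment to the judges)."""
    to_submitter = submitted_fee - fee_spent       # unspent judging fee is refunded
    if claim_upheld:
        to_submitter += 2 * fee_spent              # the spent fee is refunded double
        to_submitter += bond_amount                # plus the bond itself
    return to_submitter, fee_spent

# Example: a $100 judging fee, of which judges spend $60, on a $1000 bond.
print(bond_is_valid(80, ["JudgeCo"], {"JudgeCo": 60}))  # True: 60 <= 80
print(settle_claim(100, 60, True, 1000))    # (1160, 60): claim upheld
print(settle_claim(100, 60, False, 1000))   # (40, 60): claim rejected
```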

Those are the main subtleties that I’ve considered. While there are ways to set up such a system better or worse, the basic idea seems robust: news publishers who post bonds payable if their news is shown to be wrong thereby credential their news as more accurate. This can allow readers to more easily avoid believing provably-false news.

A system like that I’ve just proposed has long been feasible; why hasn’t it been adopted already? One possible theory is that publishers don’t offer bonds because that would remind readers of typical high error rates:

The largest accuracy study of U.S. papers was published in 2007 and found one of the highest error rates on record — just over 59% of articles contained some type of error, according to sources. Charnley’s first study [70 years ago] found a rate of roughly 50%. (more)

If bonds paid mostly for small errors, then bond amounts per error would have to be very small, and calling reader attention to a bond system would mostly remind them of high error rates, and discourage them from consuming news.

However, it seems to me that it should be possible to aggregate individual article errors into measures of overall article error, and to focus bond payouts on the most mistaken “fake news” type articles. That is, news error bonds should mostly pay out on articles that are wrong overall, or at least quite misleading regarding their core claims. Yes, a bit more judgment might be required to set up a system that can do this. But it seems to me that doing so is well within our capabilities.

A second possible theory to explain the lack of such a system today is the usual idea that innovation is hard and takes time. Maybe no one ever tried this with sufficient effort, persistence, or coordination across news firms. So maybe it will finally take some folks who try this hard, long, and wide enough to make it work. Maybe, and I’m willing to work with innovation attempts based on this second theory.

But we should also keep a third theory in mind: that most news consumers just don’t care much for accuracy. As we discuss in our book The Elephant in the Brain, the main function of news in our lives may be to offer “topics in fashion” that we each can all riff on in our local conversations, to show off our mental backpacks of tools and resources. For that purpose, it doesn’t much matter how accurate is such news. In fact, it might be easier to show off with more fake news in the mix, as we can then show off by commenting on which news is fake. In this case, news bonds would be another example of an innovation designed to give us more of what we say we want, which is not adopted because we at some level know that we have hidden motives and actually want something else.


A Coming Hypocralypse?

Many people have been working hard for a long time to develop tech that helps to read people’s feelings. They are working on ways to read facial expressions, gazes, word choices, tones of voice, sweat, skin conductance, gait, nervous habits, and many other body features and motions. Over the coming years, we should expect this tech to consistently get cheaper and better at reading subtler feelings of more people in more kinds of contexts more reliably.

Much of this tech will be involuntary. While your permission and assistance may help such tech to read you better, others will often be able to read you using tech that they control, on their persons or in the buildings around you. They can use tech integrated with other complex systems that is thus hard to monitor and regulate. Yes, some defenses are possible, such as wearing dark sunglasses or burqas, and electronically modulating your voice. But such options seem rather awkward, and I doubt most people will be willing to use them much in most familiar social situations. And I doubt that regulation will greatly reduce the use of this tech. The overall trend seems clear: our true feelings will become more visible to the people around us.

We are often hypocritical about our feelings. That is, we pretend to some degree to have certain acceptable public feelings, while actually harboring different feelings. Most people know that this happens often, but our book The Elephant in the Brain suggests that we still vastly underestimate typical levels of hypocrisy. We all mask our feelings a lot, quite often from ourselves. (See our book for many more details.)

These two facts, better tech for reading feelings and widespread hypocrisy, seem to me to be on a collision course. As a result, within a few decades, we may see something of a “hypocrisy apocalypse”, or “hypocralypse”, wherein familiar ways to manage hypocrisy become no longer feasible, and collide with common norms, rules, and laws. In this post I want to outline some of the problems we face.

Long ago, I was bullied as a child. And so I know rather well that one of the main defenses that children develop to protect themselves against bullies is to learn to mask their feelings. Bullies tend to see kids who are visibly scared or distraught as openly inviting them to bully. Similarly, many adults protect themselves from salespeople and sexual predators by learning to mask their feelings. Masked feelings also help us avoid conflict with rivals at work and in other social circles. For example, we learn not to visibly insult or disrespect big people in rowdy bars if we don’t want to get beaten up.

Tech that unmasks feelings threatens to weaken the protections that masked feelings provide. That big guy in a rowdy bar may use new tech to see that everyone else there can see that you despise him, and take offense. Your bosses might see your disrespect for them, or your skepticism regarding their new initiatives. Your church could see that you aren’t feeling very religious at church service. Your school and nation might see that your pledge of allegiance was not heartfelt. And so on.

While these seem like serious issues, change will be mostly gradual, and so we may have time to flexibly search the space of possible adaptations. We can try changing with whom we meet, how, and for what purposes, and what topics we consider acceptable to discuss where. We can be more selective about who we make ourselves more visible to, and how.

I worry more about collisions between better tech for reading feelings and common social norms, rules, and laws. Especially norms and laws that we adopt for more symbolic purposes, instead of to actually manage our interactions. These things tend to be less responsive to changing conditions.

For example, today we often consider it to be unacceptable “sexual harassment” to repeatedly and openly solicit work associates for sex, especially after they’ve clearly rejected the solicitor. We typically disapprove not just of direct requests, but also of less direct but relatively clear invitation reminders, such as visible leers, sexual jokes, and calling attention to your “junk”. And of course such rules make a great deal of sense.

But what happens when tech can make it clearer who is sexually attracted how much to whom? If the behavior that led to these judgements was completely out of each person’s control, it might be hard to blame anyone. We might then socially pretend that it doesn’t exist, though we might eagerly check it out privately. Unfortunately, our behavior will probably continue to modulate the processes that produce such judgements.

For example, the systems that judge how attracted you are to someone might focus on the moments when you directly look at that person, when your face is clearly visible to some camera, under good lighting. Without your wearing sunglasses or a burqa. So the longer you spend directly looking at someone under such conditions, the better the tech will be able to see your attraction. As a result, your choice to spend more time looking directly at them under favorable reading conditions might be seen as an intentional act, a choice to send the message that you are sexually attracted to them. And thus your continuing to do so after they have clearly rejected you might be seen as sexual harassment.

Yes, a reasonable world might adjust rules on sexual harassment to account for many complex changing conditions. But we may not live in a reasonable world. I’m not making any specific claims about sexual harassment rules, but symbolic purposes influence many of the norms and laws that we adopt. That is, we often support such rules not because of the good consequences of having them, but because we like the way that our personal support for such rules makes us look personally. For example, many support laws against drugs and prostitution even when they believe that such laws do little to discourage such things. They want to be personally seen as publicly taking a stand against such behavior.

Consider rules against expressing racism and sexism. And remember that the usual view is that everyone is at least a bit racist and sexist, in part because they live in a racist and sexist society. What happens when we can collect statistics on each person regarding how their visible evaluations of the people around them correlate with the race and sex of those people? Will we then punish white males for displaying statistically-significantly low opinions of non-whites and non-males via their body language? (That’s like a standard we often apply to firms today.) As with sexual harassment, the fact that people can moderate these readings via their behaviors may make these readings seem to count as intentional acts. Especially since they can be tracking the stats themselves, to see the impression they are giving off. To some degree they choose to visibly treat certain people around them with disrespect. And if we are individually eager to show that we personally disapprove of racism and sexism, we may publicly support strict application of such rules even if that doesn’t actually deal well with real problems of racism and sexism in the world.

Remember that this tech should improve gradually. So for the first cases that set key precedents, the tech will be weak and thus will flag very few people as clearly harassers or racists or sexists. And those few exceptions are much more likely to be people who actually did intend to harass and express racism or sexism, and who embody extreme versions of such behavior. While they will also probably tend to be people who are weird and non-conformist in other ways, this tech for reading feelings may initially seem to do well at helping us identify and deal with problematic people. For example, we may be glad that tech can identify the priests who most clearly lust after the young boys around them.

But as the tech gets better it will slowly be able to flag more and more people as sending disapproved messages. The rate will drift upward from one person in ten thousand to one in a thousand to one percent and so on. People may then start to change their behavior in bigger ways, to avoid being flagged, but that may be too little too late, especially if large video, etc. libraries of old behaviors are available to process with new methods.

At this point we may reach a “hypocralypse”, where rules that punish hypocrisy collide in a big way with tech that can expose hypocrisy. That is, where tech that can involuntarily show our feelings intersects with norms and laws that punish the expression of common but usually hidden feelings. Especially when such rules are in part symbolically motivated.

What happens then, I don’t know. Do white males start wearing burqas, do we regulate this tech heavily, or do we tone down and relax our many symbolic rules? I’ll hope for the best, but I still fear the worst.
