Tag Archives: Hypocrisy

Dealism

We economists, and also other social scientists and policy specialists, are often criticized as follows:

You recommend some policies over others, and thus make ethical choices. Yet your analyses are ethically naive and impoverished, including only a tiny fraction of the relevant considerations known to professional ethicists. Stop it, learn more on ethics, or admit you make only preliminary rough guesses.

My response is “dealism”:

The world is full of competent and useful advisors (doctors, lawyers, therapists, gardeners, realtors, hairstylists, etc.) similarly ignorant on ethics. Yes, much advice says “given options O, choose X to achieve purpose P”, but when they don’t specify purpose P the usual default is not P = “act the most ethically”, but instead P = “get what you want”.

Economists’ policy recommendations are usually designed to help relatively large groups make better social “deals”, via identifying their “Pareto frontier” (within option subspaces). This frontier is the set of options where some can get more of what they want only via others getting less. We infer what people want via the “revealed preferences” of models that fit their prior choices.
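The Pareto frontier idea can be sketched in a few lines of code. This is just an illustrative toy, not anything from the economics literature; the function name and the "deals" data are invented for the example, and each option is assumed to be scored by a payoff per person:

```python
def pareto_frontier(options):
    """Return the options on the Pareto frontier.

    Each option is a tuple of payoffs, one per person. An option is off
    the frontier (dominated) if some other option gives everyone at
    least as much, and at least one person strictly more.
    """
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    return [o for o in options
            if not any(dominates(other, o) for other in options)]

# Toy "deals" scored for two people: (payoff to A, payoff to B).
deals = [(3, 1), (2, 2), (1, 3), (1, 1), (2, 1)]
print(pareto_frontier(deals))  # -> [(3, 1), (2, 2), (1, 3)]
```

On the frontier, one person can gain only if another loses; the dominated options, like (1, 1), are the "bad deals" an advisor can help clients avoid.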

As people can be expected to seek out advice they expect to help them to get what they want, we economists branding ourselves in this way can induce more to seek our advice. We can reasonably want to fill this role. Doing so does not commit us to taking on all possible clients, nor to making any ethical claims whatsoever.

Yes, if people are hypocritical, and pretend to want morality more than they do, they may prefer advisors who similarly pretend. In which case we economists can also pretend that our clients want that, to help preserve their pretensions. But we wouldn’t need to know more about ethics than our clients do, and beneath that veneer of morality, clients likely prefer our advice to be targeted mostly at getting them what they want.

Yes, there are many ways one might argue that this economists’ practice is ethically good. But I make no such arguments here.

Yes, there are other possible ways to help people. Helping them identify deals is not the only way, and often not the best way, to help or advise people.

Most people want in part to be moral, and they think that what they and others want is relevant to what acts are moral. It is just that these two concepts are not identical. If in fact what people want is only and wholly to be ethical, then the difference between being ethical and getting what you want collapses. But even so, this econ approach remains useful, and in this case our advice now also becomes ethical.

The same arguments apply if we replace “be ethical” with “do what you have good reasons to do”. If there is a difference, then others should seek our advice more if it is on what they want, relative to what they have reasons to do.

What if the process of hearing our advice, or following it, can change what people want? (The advice might include a sermon, and doing something can change how you feel about it.) In this case, people will most seek out our advice when those changes in wants match their meta-wants regarding such changes. And those meta-wants are revealed in part via how they choose advisors.

For example, when people choose advisors retrospectively, based on who seems to have been pleased with the advice that they were given, that reveals a preference for changes in wants that make them pleased after the fact. In that case, you’d want to give the advice that resulted in a combination of outcomes and want changes that made them pleased later. In this case they wouldn’t mind changes to their wants, as long as those resulted in their being more pleased.

In contrast, when people choose advisors prospectively, based on how pleased they are now with the outcomes that they expect to result from your advice, then you would only want to offer advice which clients expect to change their wants if such clients expect to be pleased by such changes. So you’d want to offer advice that seemed to promote the want changes that they aspire to, but prevent the want changes that they fear or despise.

And that’s it. Many presume that policy discussions are about morality. But as a policy advisor, you can reasonably take the stance that your advice is not about morality, and that economic analysis is well-suited to the advice role that you have chosen.

Hidden Motives In Law

In our book The Elephant in the Brain: Hidden Motives in Everyday Life, Kevin Simler and I first review the reasons to expect humans to often have hidden motives, and then we describe our main hidden motives in each of ten areas of life. In each area, we start with the usual claimed motive, identify puzzles that don’t fit well with that story, and then describe another plausible motive that fits better.

We hoped to inspire others to apply our method to more areas of life, but we have so far largely failed there. So it’s past time for me to take up that task. And as law & economics is the class I teach most often, that’s a natural first place to start. So what are our motives regarding our official systems for dispute resolution?

Saying the word “justice” doesn’t help much; what does that mean? But the field of law and economics has a standard answer that looks reasonable: economic efficiency. Which in law translates to encouraging cost-benefit-optimal levels of commitment, reliance, care, and activity. And the substantial success of law and economics scholarship suggests that this is in fact an important motive in law. Furthermore, as most everyone can get behind it, this is plausibly our most overt motive regarding law. But we also see many puzzles in law not well explained by this approach. Which suggests to me three other motives.

Back in the forager era, before formal law, disputes were resolved by mobs. That is, the local band talked informally about accusations of norm violations, came to a consensus about what to do, and then implemented that themselves. As this mob justice system has many known failure modes, we probably added law as a partial replacement in order to cut such failures. Thus a plausible secondary motive in law is to try to minimize the common failings of mob justice, and to insulate the legal system from mob influence.

The main failure of mob justice is plausibly a rush to judgment; each person in a gossip network has local incentives to accept the stance of whomever first reports an accusation to them. And the most interested parties are far more likely than average to be the first source of the first report someone hears. In response, law seeks to make legal decision makers independent and disconnected from the disputants and their gossip network, and to make such decision makers listen to all the evidence before making their decision. The rule against hearsay evidence is also plausibly to limit the influence of gossip on trials.

Leaders of the legal system often express concerns about its perceived legitimacy, and this makes sense as a third motive of the legal system. And as the most common threat to such legitimacy is widespread criticism of particular legal decisions, many features of law can be understood as ways to avoid such criticism. For example, criticism is likely cut via having legal personnel, venues, and demeanors be maximally prestigious and deferential to legal authorities.

Also, the more complex are legal language and arguments, the harder it becomes for mobs to question them. The longer the delay before final legal decisions, the less passion will remain to challenge them. Finally, the more expensive is the legal process, the fewer rulings there will be to question. Our most official legal systems differ from all our other less official dispute resolution systems in all of these ways. They are slower, more expensive, less understandable, and more prestigious.

The last hidden motive that I think I see is that each legal jurisdiction wants to look good to outsiders. So most every jurisdiction has laws against widely disapproved behaviors, such as adultery, prostitution, or drinking alcohol on the street, even though such laws are often quite weakly enforced. Most set high standards of proof and adopt the usual rules constraining what evidence can be presented at trial, even though there’s little evidence that these rules help on net.

Most jurisdictions pretend to enforce all laws equally on everyone, but actually give police differential priorities; some locations, suspects, and victims count a lot more than others. It would be quite feasible, and probably a lot more efficient, to use a bounty hunting system to enforce laws, and most locals are well aware of these varying priorities. But that would require admitting such differential priorities to outsiders, via explicit differences in the bounties paid. So most jurisdictions prefer government employees, who can be more hypocritical.

Similarly, our usual form of criminal punishment, nice jail, is less efficient than all the other forms, including mean jail, exile, corporal punishment, and fines. Holding constant how averse a convict is to suffering each punishment, nice jail costs the most. Alas, the world has fallen into an equilibrium where any jurisdiction that allows any punishment other than nice jail is declared to be cruel and unjust. Even giving the convict the choice between such punishments is called unjust. So the strong desire to avoid such accusations pushes most jurisdictions into using the least efficient form of punishment.

In sum, I see four big motives in law: encouraging commitment and care, avoiding failings of mob justice, preserving system legitimacy via avoiding clear decisions, and hindering distant observers from accusing a jurisdiction of injustice, even if most locals are not fooled.

One can of course postulate many more possible motives, including diverting revenue and status to legal authorities, preserving and increasing existing inequalities, giving civil authorities more arbitrary powers, and empowering busybodies to meddle in the lives of others. But it isn’t clear to me that these add much more explanatory power, given the above motives.

What Hypocrisy Feels Like

Our book The Elephant in the Brain argues that there are often big differences between the motives by which we sincerely explain our behavior, and the motives that more drive and shape that behavior. But even if this claim seems plausible to you in the abstract, you might still not feel fully persuaded, if you find it hard to see this contrast clearly in a specific example.

That is, you might want to see what hypocrisy feels like up close. To see the two different kinds of motives in you in a particular case, and see that you are inclined to talk and think in terms of the first, but see your concrete actions being more driven by the second.

If so, consider the example of utopia, or heaven. When we talk about an ideal world, we are quick to talk in terms of the usual things that we would say are good for a society overall. Such as peace, prosperity, longevity, fraternity, justice, comfort, security, pleasure, etc. A place where everyone has the rank and privileges that they deserve. We say that we want such a society, and that we would be willing to work and sacrifice to create or maintain it.

But our allegiance to such a utopia is paper thin, and is primarily to a utopia described in very abstract terms. Our abstract thoughts about utopia generate very little emotional energy in us, and our minds quickly turn to other topics. In addition, as soon as someone tries to describe a heaven or utopia in vivid concrete terms, we tend to be put off or repelled. Even if such a description satisfies our various abstract good-society features, we find reasons to complain. No, that isn’t our utopia, we say. Even if we are sure to go to heaven if we die, we don’t want to die.

And this is just what near-far theory predicts. Our near and far minds think differently, with our far minds presenting a socially desirable image to others, and our near minds more in touch with what we really want. Our far minds are more in charge when we are prompted to think abstractly and hypothetically, but our near minds are more in charge when we privately make real concrete choices.

Evolved minds like ours really want to win the evolutionary game. And when there are status hierarchies tied to evolutionary success, we want to rise in those hierarchies. We want to join a team, and help that team win, as long as that team will then in turn help us to win. And we see all this concretely in the data; we mainly care about our social rank:

The outcome of life satisfaction depends on the incomes of others only via income rank. (Two followup papers find the same result for outcomes of psychological distress and nine measures of health.) They looked at 87,000 Brits, and found that while income rank strongly predicted outcomes, neither individual (log) income nor an average (log) income of their reference group predicted outcomes, after controlling for rank (and also for age, gender, education, marital status, children, housing ownership, labor-force status, and disabilities). (more)
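The cited result can be illustrated with a toy simulation (synthetic numbers, not the actual British survey data): if satisfaction in fact depends only on income rank, then a regression including both rank and log income will load on rank and leave income with little to add, which is the pattern those papers report:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
log_income = rng.normal(10.0, 1.0, n)
# Income rank within the sample, scaled to [0, 1].
rank = log_income.argsort().argsort() / (n - 1)
# Toy assumption: satisfaction depends only on rank, plus noise.
satisfaction = 2.0 * rank + rng.normal(0.0, 0.5, n)

# OLS with an intercept and both predictors.
X = np.column_stack([np.ones(n), rank, log_income])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print(beta)  # rank coefficient near 2, log-income coefficient near 0
```

Of course, in the toy the answer is built in by construction; the cited papers' contribution is showing that real survey data fit this rank-only pattern.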

But this isn’t what we want to think, or to say to others. With our words, and with other very visible cheap actions, we want to be pro-social. That is, we want to say that we want to help society overall. Or at least to help our society. While we really crave fights by which we might rise relative to others, we want to frame those fights in our minds and words as fighting for society overall, such as by fighting for justice against the bad guys.

And so when the subject of utopia comes up, framed abstractly and hypothetically, we first react with our far minds: we embrace our abstract ideals. We think we want them embodied in a society, and we think we want to work to create that society. And our thoughts remain this way as long as the discussion remains abstract, and we aren’t at much risk of actually incurring substantial supporting personal costs.

But the more concrete the discussion gets, and the closer to asking for concrete supporting actions, the more we recoil. We start to imagine a real society in detail wherein we don’t see good opportunities for our personal advancement over others. And where we don’t see injustices which we could use as excuses for our fights. And our real motivations, our real passions, tell us that they have reservations; this isn’t the sort of agenda that we can get behind.

So there it is: your hypocrisy up close and personal, in a specific case. In the abstract you believe that you like the idea of utopia, but you recoil at most any concrete example. You assume you have a good pro-social reason for your recoil, and will mention the first candidate that comes to your head. But you don’t have a good reason, and that’s just what hypocrisy feels like. Utopia isn’t a world where you can justify much conflict, but conflict is how you expect to win, and you really really want to win. And you expect to win mainly at others’ expense. That’s you, even if you don’t like to admit it.

Yay Argument Orientation

Long ago I dove into science studies, which includes history, sociology, and philosophy of science. (Got a U. Chicago M.A. in it in 1983.) I concluded at the time that “science” doesn’t really have a coherent meaning, beyond the many diverse practices of many groups that called themselves “science”. But reflecting on my recent foray into astrophysics suggests to me that there may be a simple related core concept after all.

Imagine you are in an organization with a boss who announces a new initiative, together with supporting arguments. Also imagine that you are somehow forced to hear a counter-argument against this initiative, offered by a much lower status person, expressed in language and using methods that are not especially high status. In most organizations, most people would not be much tempted to support this counter-argument; they’d rather pretend that they never heard of it.

More generally, imagine there is a standard claim, which is relevant enough to important enough topics to be worth consideration. This claim is associated with some status markers, such as the status of its supporters and their institutions, and the status of the language and methods used to argue for it. And imagine further that a counter-claim is made, with an associated argument, and also associated status markers of its supporters, languages, and methods.

The degree to which (status-weighted) people in a community would be inclined to support this counter-claim (or even to listen to supporting arguments offered) would depend on the relative strengths of both the arguments and the status markers on both sides. (And on the counter claim’s degree of informativeness and relevance regarding topics seen as important.) I’ll say that such a community is more “argument-oriented” to the degree that the arguments’ logical or Bayesian strengths are given more priority over the claims’ status strengths.
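This notion of "argument orientation" can be stated as a one-line toy model, in which a community's inclination to back a claim mixes argument strength with status strength, and the weight w is its degree of argument orientation. All names and numbers here are invented for illustration:

```python
def support(arg_strength, status_strength, w):
    """Toy inclination of a community to back a claim.

    arg_strength, status_strength: relative strengths, in [0, 1], of the
    claim's arguments and of its status markers, versus a counter-claim.
    w: the community's "argument orientation", i.e. the weight it puts
    on argument strength over status strength.
    """
    return w * arg_strength + (1 - w) * status_strength

# A strong argument (0.9) from low-status sources (0.1):
print(support(0.9, 0.1, w=0.9))  # argument-oriented community: ~0.82
print(support(0.9, 0.1, w=0.1))  # status-oriented community: ~0.18
```

In the argument-oriented community the strong-but-low-status counter-claim wins support; in the status-oriented one it is mostly ignored.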

Even though almost everyone in most all communities feels obligated to offer supporting arguments for their claims, very few communities are actually very argument-oriented. You usually don’t contradict the boss in public, unless you can find pretty high status allies for your challenge; you know that the strength of your argument doesn’t count for much compared to having allies. So it is remarkable, and noteworthy, that there are at least some communities that are unusually argument-oriented. These include big areas of math, and smaller areas of philosophy and physics. And, alas, they include even smaller areas of most human and social sciences. So there really is a sense in which some standard disciplines are more “scientific”.

Note that most people are especially averse to claims with especially low status markers. For example, when an argument made for a position is expressed using language that evokes in many people vague illicit associations, such as with racism, sexism, ghosts, or aliens. Or when the people who support a claim are thought to have had such associations on other topics. As such expressions are less likely to happen near topics in math, math is more intrinsically supportive of argument-oriented communities.

But even with supportive topic areas, argument-orientation is far from guaranteed. So let us try to identify and celebrate the communities and topic areas where it is more common, and perhaps find better ways to shame the others into becoming more argument-oriented. Such an orientation is plausibly a strong causal factor explaining variation in accuracy and progress across different communities and areas.

There are actually a few simple ways that academic fields could try to be and seem more argument-oriented. For example, while peer review is one of the main places where counter-arguments are now expressed, such reviews are usually private. Making peer review public might induce higher quality counter-arguments. Similarly, higher priority could be given to publishing articles that focus more on elaborating counter-arguments to other arguments. And communities might more strongly affirm their focus on the literal meanings of expressions, relative to drawing inferences from vague language associations.

(Note that being “argumentative” is not very related to being “argument-oriented”. You can bluster and fight without giving much weight to logical and Bayesian strengths of arguments. And you can collect and weigh arguments in a consensus style without focusing on who disagrees with whom.)

Protecting Hypocritical Idealism

I’m told that soldiers act a lot more confident and brave when they are far from battle, relative to when it looms immediate in front of them.

When presented with descriptions of how most citizens of Nazi Germany didn’t resist or oppose the regime much, most people claim they would have done differently. Which of course is pretty unlikely for most of them. But there’s an obvious explanation of this “social desirability bias”. Their subconscious expects a larger positive payoff from presenting an admirable view of themselves to associates, relative to the smaller negative payoff from making themselves more likely to actually do what they said, should they actually find themselves in a Nazi regime.

When the covid pandemic first appeared, elites and experts voiced their long-standing position that masks and travel restrictions were not effective in a pandemic. Which let them express their pro-inclusive global-citizen liberal attitudes. Their subconscious foresaw only a small chance that they’d actually face a real and big pandemic. And if that ever happened, they could and did lower the cost of this previous attitude by just suddenly and without explanation changing their minds.

For many decades it has been an article of faith among a large fraction of these same sort of experts and elites that advanced aliens must be peaceful egalitarian eco-friendly non-expansionist powers, who would if they saw us scold and lecture us about our wars, nukes, capitalism, expansion, and eco-damage. Like our descendants are presented to be in Star Trek or the Culture novels.

Because in this scenario aliens would be the highest status creatures around, and it is important to these humans that the highest in status agree with their politics. I confidently predict that their attitudes would quickly change if they were actually confronted with unknown but very real alien powers nearby.

This predictable hypocrisy could be exposed if people would back these beliefs with bets. But of course they don’t. They aren’t exactly sure why, but most just feel “uncomfortable” with that. Visible and open betting market odds that disagreed with them would also expose this hypocrisy, but most such people also oppose allowing those, mostly also for vague “uncomfortable” reasons. Their unconscious knows better what those reasons are, but also knows not to tell.

Skirting UFO Taboos

Since before I was born, elites have maintained a severe taboo against taking seriously the hypothesis that UFOs are aliens. As I’ve discussed, elite-aspiring UFO researchers have themselves embraced this taboo. They seem to figure that if we look carefully at all the other hypotheses, and see how inadequate they are, then the taboo against UFOs as aliens must collapse.

For elite pundits, this taboo is a problem when UFOs as possibly aliens are the topic of the day. Because elite pundits are also supposed to comment on the topic of the day. Their obvious solution: talk only about the fact that other people seem to be taking UFOs as aliens seriously.

For example, here is Ezra Klein (@ezraklein) in a 1992-word New York Times article:

Even if You Think Discussing Aliens Is Ridiculous, Just Hear Me Out

I really don’t know what’s behind these videos and reports, and I relish that. … Even if you think all discussion of aliens is ridiculous, it’s fun to let the mind roam over the implications. … Imagine, tomorrow, an alien craft crashed down in Oregon. … we are faced with the knowledge that we’re not alone, that we are perhaps being watched, and we have no way to make contact. How does that change human culture and society? …

One immediate effect, I suspect, would be a collapse in public trust. … Governments would be seen as having withheld a profound truth from the public. … “Instead of a land grab, it would be a narrative grab,” … There would be enormous power — and money — in shaping the story humanity told itself. … “An awful lot of people would basically shrug and it’d be in the news for three days,” …

how evidence of alien life would shake the world’s religions… many people would simply say, “of course.” … nation-states fall to fighting over the debris, … fractious results. … “Russians and Chinese would never believe us and frankly large numbers of Americans would be much more likely to believe that Russia or China was behind it,” … difficulty of uniting humanity …

knowledge that there were other space-faring societies might make us more desperate to join them or communicate with them. … might lead us to take more care with what we already have, and the sentient life we already know. … “inspire us to be the best examples of intelligent life that we could be.”

Note how Klein very clearly signals that he doesn’t believe, and that this is all about how people who believed would react; he never crosses the line to himself consider aliens.

Here is Tyler Cowen (@tylercowen) in a 746-word Bloomberg article:

Now that the Pentagon takes UFOs seriously, it’s perhaps appropriate to consider some more mundane aspects of the phenomenon — namely, what it means for markets. UFO data will probably remain murky and unresolved, but if UFOs of alien origin become somewhat more likely (starting, to be clear, from a low base rate), which prices will change?

My first prediction is that most market prices won’t move very much. In the short run, VIX might rise, … But … would probably [quickly] return to normal levels. … I would bet on defense stocks to rise, … alien drone probes … might be observing with the purpose of rendering judgment. If they are offended by our militaristic tendencies, the quality of our TV shows and our inability to adopt the cosmopolitan values of “Star Trek” over the next 30 years, maybe they will zap us into oblivion. But … after such an act of obliteration, neither gold nor Bitcoin will do you any good.

Note that Cowen touches on a crucial issue, what if they judge us, but with a flippant tone and only for the purpose of predicting asset prices, which are set by other investors. If he were to directly and seriously consider that issue, he’d have violated the key taboo.

Here is Megan McArdle (@asymmetricinfo) in an 835-word Washington Post article:

These are all major, important stories, stories that lives and futures depend upon. And yet they’re almost irrelevant compared to the question that isn’t anywhere in my Twitter feed right now: Are we being watched by alien technology? …

Other humans … would not will the death of our entire species. Aliens might. … Whether we’re being visited, and what they might be up to, is the most important question of anyone’s lifetime, because, if so, everything that currently obsesses us, including the pandemic, will retreat to a historical footnote. …

So I’ve been surprised to find that the story of unexplained sightings, which has now been percolating for years, has been mostly a subplot to more ordinary human politics and folly. … it seems to be mostly fodder for jokes.  …Why is this particular unknowable getting such short shrift? …

One possibility is that UFOs have a social status problem; historically, they are associated with cranks … Thus, most … reflexively refuse to take the topic seriously. … But the third option is that we understand at some level that aliens would be a Very Big Deal — and that most of the possibilities for alien contact are pretty unpleasant. … the alternative is so horrible that I suspect for many of us, it simply doesn’t bear thinking about.

This is like all those long calls for a “conversation on race” that can’t seem to find the space to actually start conversing on race. (Because there is very little that one can safely say.) Here McArdle talks at length about being puzzled that we aren’t talking about the key issue, about which she doesn’t actually say much. In response to my complaint she tweeted “I did my best in 800 words!”

I’m pretty sure that any of these authors could have directly addressed the big “elephant in the room” alien issues here, if they had so desired. I’ve tried to do better.

The Debunking of Debunking

In a new paper in Journal of Social Philosophy, Nicholas Smyth offers a “moral critique” of “psychological debunking”, by which he means “a speech‐act which expresses the proposition that a person’s beliefs, intentions, or utterances are caused by hidden and suspect psychological forces.” Here is his summary:

There are several reasons to worry about psychological debunking, which can easily counterbalance any positive reasons that may exist in its favor:

1. It is normally a form of humiliation, and we have a presumptive duty to avoid humiliating others.
2. It is all too easy to offer such stories without acquiring sufficient evidence for their truth,
3. We may aim at no worthy social or individual goals,
4. The speech‐act itself may be a highly inefficient means for achieving worthy goals, and
5. We may unwittingly produce bad consequences which strongly outweigh any good we do achieve, or which actually undermine our good aims entirely.

These problems … are mutually reinforcing. For example, debunking stories would not augment social tensions so rapidly if debunkers were more likely to provide real evidence for their causal hypotheses. Moreover, if we weren’t so caught up in social warfare, we’d be much less likely to ignore the need for evidence, or to ignore the need to make sure that the values which drive us are both worthy and achievable.

That is, people may actually have hidden motives, these might in fact explain their beliefs, and critics and audiences may have good reasons to consider that possibility. Even so, Smyth says that it is immoral to humiliate people without sufficient reason, and we in fact do tend to humiliate people for insufficient reasons when we explain their beliefs via hidden motives. Furthermore, we tend to lower our usual epistemic standards to do so.

This sure sounds to me like Smyth is offering a psychological debunking of psychological debunking! That is, his main argument against such debunking is via his explaining this common pattern, that we explain others’ beliefs in terms of hidden motives, by pointing to the hidden motives that people might have to offer such explanations.

Now Smyth explicitly says that he doesn’t mind general psychological debunking, only that offered against particular people:

I won’t criticize high‐level philosophical debunking arguments, because they are distinctly impersonal: they do not attribute bad or distasteful motives to particular persons, and they tend to be directed at philosophical positions. By contrast, the sort of psychological debunking I take issue with here is targeted at a particular person or persons.

So presumably Smyth doesn’t have an issue with our book The Elephant in the Brain: Hidden Motives in Everyday Life, as it also stays at the general level and doesn’t criticize particular people. And so he also thinks his debunking is okay, because it is general.

However, I don’t see how staying with generalities saves Smyth from his own arguments. Even if general psychological debunking humiliates large groups all at once, instead of individuals one at a time, it is still humiliation. Which he might still do, yet should avoid, because of inadequate reasons, lowered epistemic standards, better ways to achieve his goals, and the risk of unwittingly producing bad consequences. Formally his arguments work just as well against general as against specific debunking.

I’d say that if you have a general policy of not appearing to pick fights, then you should try to avoid arguing by blaming your opponents’ motives if you can find other arguments sufficient to make your case. But that’s just an application of the policy of not visibly picking fights when you can avoid them. And many people clearly seem to be quite willing and eager to pick fights, and so don’t accept this general policy of avoiding fights.

If your policy were just to speak the most relevant truth at each point, to most inform rational audience members at that moment on a particular topic, then you probably should humiliate many people, because in fact hidden motives are quite common and relevant to many debates. But this speak-the-most-truth policy tends to lose you friends and associates over the longer run, which is why it is usually not such a great strategy.


Subtext Shows Status

When we talk, we say things that are explicit and direct, on the surface of the text, and we also say things that are hidden and indirect, expressed in more deniable ways via subtext. Imagine that there was a “flattext” type of talk (or writing) in which subtext was much harder to reliably express and read. Furthermore, imagine that it was easy to tell that a speaker (or writer) was using this type of talk. So that by talking in this way you were verifiably not saying as much subtext.

Yes, it seems very hard to make subtext infinitely hard to express, but flattext could have value without going to that extreme. Some have claimed that the artificial language Lojban is in some ways such a talk type.

So who would use flattext? A Twitter poll finds that respondents expect that on average they’d use flattext about half of the time, so they must expect many reasons to want to deny that they use subtext. Another such poll finds that they on average expect official talk to be required to be flattext. Except they are sharply divided between a ~40% who think it would be required >80% of the time, and another ~40% who think it would be required <20% of the time.

The obvious big application of flattext is people and organizations who are often accused of saying bad things via subtext. Such as people accused of illicit flirting, or of sexual harassment. Or people accused of “dog-whistling” disliked allegiances. Or firms accused of over-promising to, or under-warning, customers, employees, or investors.

As people are quite willing to accuse for-profit firms of bad subtext, I expect they’d be the most eager users. As would people like myself who are surrounded by hostile observers eager to identify particular texts as showing evil subtext. You might think that judges and officials speaking to the public in their official voice would prefer flattext, as it better matches their usual tone and style which implicitly claims that they are just speaking clearly and simply. But that might be a hypocrisy, and they may reject flattext so that they can continue to say subtext.

Personal servants and slaves of centuries ago were required to speak in a very limited and stylized manner which greatly limited subtext. They could suffer big bad consequences for ever being accused of a tone of voice or manner that signaled anything less than full respect and deference to their masters.

Putting this all together, it seems that the ability to regularly and openly use subtext is a sign of status and privilege. We “put down” for-profit firms in our society by discouraging their use of subtext, and mobs do similarly when they hound enemies using hair-trigger standards ready to accuse them of bad subtext. And once low status people and organizations are cowed into avoiding subtext, then others can complain that they lack humanity, as they don’t show a sense of humor, which is more clear evidence that they are evil.

So I predict that if flattext were actually available, it would be used mainly by low status people and organizations to protect themselves from accusations of illicit subtext, as our enforcement of anti-subtext rules is very selective. Very risk averse government agencies might use it, but not high status politicians.


Who Wants Good Advice?

Bryan Caplan:

1. Finish high school. 2. Get a full-time job once you finish school. 3. Get married before you have children. ….
While hardly anyone explicitly uses [this] success sequence to argue that we underrate the blameworthiness of the poor for their own troubles, critics still hear this argument loud and clear – and vociferously object. … Everyone – even the original researchers – insists that the success sequence sheds little or no light on who to blame for poverty. … talking about the success sequence so agitates the critics.

A scene from the excellent documentary Minding the Gap:

Bing: Do you, do you feel, like, concerned that [your young son] Elliot’s going to grow up, like, messed up?
Zack: Sigh. I’m 50/50 about it.
Lately I have been concerned over my influence on him, and as he gets older, how he’s gonna look at the difference between the [middle class] way his family lives and the [lower class] way I live. And.
A lot of people grow up and they are [starts a denigrating head wiggle and affected speaking style] nununu, fucking, I’m gonna play football, and I’m gonna go to college and I’m gonna get this nice office job and start a family and have 2.5 kids and a car and a garage and everything’s just gonna be nice. And I’ll buy a boat and a snow mobile. [end nodding and affected style]
I’m like ‘Fuck you, you piece of shit.’ Like, just cause you’re too fucking weak to make your own decisions and decide what you want to do with your own life, doesn’t mean everyone else has got to be like you.
Ha, ha, I don’t know, fuck, ha ha. I, ah, ask me another question. (1:10:52-1:12:00)

Zack seems to have long been well aware that he flouted the usual life advice. He lashes out at those who follow it, and he seems quite sensitive about the issue. Much like all those sociologists sensitive about discussing or recommending the success sequence.

Many people, including myself and Bryan, think it is a shame that so many seem worse off from making poor lifestyle choices, and so are inclined to recommend that good advice be spread more widely. However, what if most everyone who makes poor choices is actually well aware of the usual good advice when they make their poor choices? And what if they like having the option to later pretend that they were unaware, to gain sympathy and support for their resulting predicaments? Such people might then resent the wider spreading of the good advice, seeing it as an effort to take away their excuse, to blame them for their problems, and to reduce their sympathy and support.

That’s my best guess interpretation of the crazy paranoid excuses I’ve heard to oppose my free agents for all proposal. (If you doubt me, follow those links.) It would cost nothing to give everyone an agent who gets ~15% of their income, and so has a strong incentive to advise and promote them. Yet I mainly hear complaints like that such agents would: force clients to work in oppressive company towns, censor media to cut any anti-work messages, lobby for higher taxes, or send out minions to undermine promising artistic careers. Even though becoming an agent gives you no added powers; you can only persuade.

In a poll, most respondents opposed even a test of the idea.

My conclusion: most people are well aware of a lot of advice, widely interpreted as good advice, that they don’t intend to follow. So they don’t actually want agents to give them good advice, as others would hear about that and then later give them less sympathy for not following the good advice that they have no intention of following. Yes, their children and other people in the world might benefit from such advice, but for this issue they are too focused on themselves to care.

Note that this theory is similar to my standard theory of why firm managers don’t want prediction markets on their deadlines. Early market estimates take away their favorite excuse if they miss a deadline: that all was going well until something came out of left field and knocked them flat. It’s so rare a problem that it couldn’t have been foreseen, and will never happen again, so there’s no need to hold anyone responsible.


Social Proof, But of What?

People tend to (say they) believe what they expect that others around them will soon (say they) believe. Why? Two obvious theories:
A) What others say they believe embodies info about reality,
B) Key audiences respect us more when we agree with them

Can data distinguish these theories? Consider a few examples.

First, consider that in most organizations, lower level folks eagerly seek “advice” from upper management. Except that when such managers announce their plan to retire soon, lower folks immediately become less interested in their advice. Manager wisdom stays the same, but the consensus on how much others will defer to what they say collapses immediately.

Second, consider that academics are reluctant to cite papers that seem correct, and which influenced their own research, if those papers were not published in prestigious journals, and seem unlikely to be so published in the future. They’d rather cite a less relevant or influential paper in a more prestigious journal. This is true not only for strangers to the author, but also for close associates who have long known the author, and cited that author’s other papers published in prestigious journals. And this is true not just for citations, but also for awarding grants and jobs. As others will mainly rely on journal prestige to evaluate paper quality, that’s what academics want to use in public as well, regardless of what they privately know about quality.

Third, consider the fact that most people will not accept a claim on topic area X that conflicts with what MSM (mainstream media) says about X. But that could be because they consider the media more informed than other random sources, right? However, they will also not accept this claim on X when made by an expert in X. But couldn’t that be because they are not sure how to judge who is an expert on X? Well let’s consider experts in Y, a related but different topic area from X. Experts in Y should know pretty well how to tell who is an expert in X, and know roughly how much experts can be trusted in general in areas X and Y.

Yet even experts in Y are also reluctant to endorse a claim made by an expert in X that differs from what MSM says about X. As the other experts in Y whose respect they seek also tend to rely on MSM for their views on X, our experts in Y want to stick with those MSM views, even if they have private info to the contrary.

These examples suggest that, for most people, the beliefs that they are willing to endorse depend more on what they expect their key audiences to endorse, relative to their private info on belief accuracy. I see two noteworthy implications.

First, it is not enough to learn something, and tell the world about it, to get the world to believe it. Not even if you can offer clear and solid evidence, and explain it so well that a child could understand. You need to instead convince each person in your audience that the other people who they see as their key audiences will soon be willing to endorse what you have learned. So you have to find a way to gain the endorsement of some existing body of experts that your key audiences expect each other to accept as relevant experts. Or you have to create a new body of experts with this feature (such as say a prediction market). Not at all easy.

Second, you can use these patterns to see which of your associates think for themselves, versus aping what they think their audiences will endorse. Just tell them about one of the many areas where experts in X disagree with MSM stories on X (assuming their main audience is not experts in X). Or see if they will cite a quality never-to-be-prestigiously-published paper. Or see if they will seek out the advice of a soon-to-be-retired manager. See not only if they will admit in private which is more accurate, but if they will say so when their key audience is listening.

And I’m sure there must be more examples that can be turned into tests (what are they?).
