Tag Archives: Hypocrisy

Yay Argument Orientation

Long ago I dove into science studies, which includes history, sociology, and philosophy of science. (Got a U. Chicago M.A. in it in 1983.) I concluded at the time that “science” doesn’t really have a coherent meaning, beyond the many diverse practices of many groups that called themselves “science”. But reflecting on my recent foray into astrophysics suggests to me that there may be a simple related core concept after all.

Imagine you are in an organization with a boss who announces a new initiative, together with supporting arguments. Also imagine that you are somehow forced to hear a counter-argument against this initiative, offered by a much lower status person, expressed in language and using methods that are not especially high status. In most organizations, most people would not be much tempted to support this counter-argument; they’d rather pretend that they never heard of it.

More generally, imagine there is a standard claim, which is relevant enough to important enough topics to be worth consideration. This claim is associated with some status markers, such as the status of its supporters and their institutions, and the status of the language and methods used to argue for it. And imagine further that a counter-claim is made, with an associated argument, and also associated status markers of its supporters, languages, and methods.

The degree to which (status-weighted) people in a community would be inclined to support this counter-claim (or even to listen to supporting arguments offered) would depend on the relative strengths of both the arguments and the status markers on both sides. (And on the counter claim’s degree of informativeness and relevance regarding topics seen as important.) I’ll say that such a community is more “argument-oriented” to the degree that the arguments’ logical or Bayesian strengths are given more weight than the claims’ status strengths.
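To make this weighting a bit more concrete, here is a minimal toy sketch in Python, with made-up numbers of my own (nothing here comes from the post itself): it just treats a community’s inclination to back a claim as a weighted mix of argument strength and status strength, where the weight on argument strength is the community’s degree of argument-orientation.

```python
# Toy illustration with made-up numbers: how inclined a community is to back a
# counter-claim, modeled as a weighted mix of argument strength and status markers.

def support(arg_strength, status_strength, orientation):
    """orientation in [0, 1]: 1 = only argument strength counts,
    0 = only status markers count."""
    return orientation * arg_strength + (1 - orientation) * status_strength

# A strong argument from low-status sources vs. a weak argument from high-status ones.
counter_arg, counter_status = 0.9, 0.2
standard_arg, standard_status = 0.4, 0.9

for orientation in (0.1, 0.5, 0.9):
    backs_counter = (support(counter_arg, counter_status, orientation)
                     > support(standard_arg, standard_status, orientation))
    print(f"orientation={orientation}: community backs the counter-claim? {backs_counter}")
```

In this sketch only the most argument-oriented community ends up backing the strong but low-status counter-claim, which is the pattern the paragraph above describes.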

Even though almost everyone in most all communities feels obligated to offer supporting arguments for their claims, very few communities are actually very argument-oriented. You usually don’t contradict the boss in public, unless you can find pretty high status allies for your challenge; you know that the strength of your argument doesn’t count for much as an ally. So it is remarkable, and noteworthy, that there are at least some communities that are unusually argument-oriented. These include big areas of math, and smaller areas of philosophy and physics. And, alas, they include even smaller areas of most human and social sciences. So there really is a sense in which some standard disciplines are more “scientific”.

Note that most people are especially averse to claims with especially low status markers. For example, when an argument made for a position is expressed using language that evokes in many people vague illicit associations, such as with racism, sexism, ghosts, or aliens. Or when the people who support a claim are thought to have had such associations on other topics. As such expressions are less likely to happen near topics in math, math is more intrinsically supportive of argument-oriented communities.

But even with supportive topic areas, argument-orientation is far from guaranteed. So let us try to identify and celebrate the communities and topic areas where it is more common, and perhaps find better ways to shame the others into becoming more argument-oriented. Such an orientation is plausibly a strong causal factor explaining variation in accuracy and progress across different communities and areas.

There are actually a few simple ways that academic fields could try to be and seem more argument-oriented. For example, while peer review is one of the main places where counter-arguments are now expressed, such reviews are usually private. Making peer review public might induce higher quality counter-arguments. Similarly, higher priority could be given to publishing articles that focus more on elaborating counter-arguments to other arguments. And communities might more strongly affirm their focus on the literal meanings of expressions, relative to drawing inferences from vague language associations.

(Note that being “argumentative” is not very related to being “argument-oriented”. You can bluster and fight without giving much weight to logical and Bayesian strengths of arguments. And you can collect and weigh arguments in a consensus style without focusing on who disagrees with who.)


Protecting Hypocritical Idealism

I’m told that soldiers act a lot more confident and brave when they are far from battle, relative to when it looms immediate in front of them.

When presented with descriptions of how most citizens of Nazi Germany didn’t resist or oppose the regime much, most people claim they would have acted differently. Which of course is pretty unlikely for most of them. But there’s an obvious explanation of this “social desirability bias”. Their subconscious expects a larger positive payoff from presenting an admirable view of themselves to associates, relative to the smaller negative payoff from making themselves more likely to actually do what they said, should they actually find themselves in a Nazi regime.

When the covid pandemic first appeared, elites and experts voiced their long-standing position that masks and travel restrictions were not effective in a pandemic. Which let them express their pro-inclusive global-citizen liberal attitudes. Their subconscious foresaw only a small chance that they’d actually face a real and big pandemic. And if that ever happened, they could and did lower the cost of this previous attitude by just suddenly and without explanation changing their minds.

For many decades it has been an article of faith among a large fraction of these same sort of experts and elites that advanced aliens must be peaceful egalitarian eco-friendly non-expansionist powers, who would, if they saw us, scold and lecture us about our wars, nukes, capitalism, expansion, and eco-damage. Like our descendants are presented to be in Star Trek or the Culture novels.

Because in this scenario aliens would be the highest status creatures around, and it is important to these humans that the highest in status agree with their politics. I confidently predict that their attitudes would quickly change if they were actually confronted with unknown but very real alien powers nearby.

This predictable hypocrisy could be exposed if people would back these beliefs with bets. But of course they don’t. They aren’t exactly sure why, but most just feel “uncomfortable” with that. Visible and open betting market odds that disagreed with them would also expose this hypocrisy, but most such people also oppose allowing those, mostly also for vague “uncomfortable” reasons. Their unconscious knows better what those reasons are, but also knows not to tell.


Skirting UFO Taboos

Since before I was born, elites have maintained a severe taboo against taking seriously the hypothesis that UFOs are aliens. As I’ve discussed, elite-aspiring UFO researchers have themselves embraced this taboo. They seem to figure that if we look carefully at all the other hypotheses, and see how inadequate they are, then the taboo against UFOs as aliens must collapse.

For elite pundits, this taboo is a problem when UFOs as possible aliens are the topic of the day. Because elite pundits are also supposed to comment on the topic of the day. Their obvious solution: talk only about the fact that other people seem to be taking UFOs as aliens seriously.

For example, here is Ezra Klein (@ezraklein) in a 1992-word New York Times article:

Even if You Think Discussing Aliens Is Ridiculous, Just Hear Me Out

I really don’t know what’s behind these videos and reports, and I relish that. … Even if you think all discussion of aliens is ridiculous, it’s fun to let the mind roam over the implications. … Imagine, tomorrow, an alien craft crashed down in Oregon. … we are faced with the knowledge that we’re not alone, that we are perhaps being watched, and we have no way to make contact. How does that change human culture and society? …

One immediate effect, I suspect, would be a collapse in public trust. … Governments would be seen as having withheld a profound truth from the public. … “Instead of a land grab, it would be a narrative grab,” … There would be enormous power — and money — in shaping the story humanity told itself. … “An awful lot of people would basically shrug and it’d be in the news for three days,” …

how evidence of alien life would shake the world’s religions… many people would simply say, “of course.” … nation-states fall to fighting over the debris, … fractious results. … “Russians and Chinese would never believe us and frankly large numbers of Americans would be much more likely to believe that Russia or China was behind it,” … difficulty of uniting humanity …

knowledge that there were other space-faring societies might make us more desperate to join them or communicate with them. … might lead us to take more care with what we already have, and the sentient life we already know. … “inspire us to be the best examples of intelligent life that we could be.”

Note how Klein very clearly signals that he doesn’t believe, and that this is all about how people who believed would react; he never crosses the line to consider aliens himself.

Here is Tyler Cowen (@tylercowen) in a 746-word Bloomberg article:

Now that the Pentagon takes UFOs seriously, it’s perhaps appropriate to consider some more mundane aspects of the phenomenon — namely, what it means for markets. UFO data will probably remain murky and unresolved, but if UFOs of alien origin become somewhat more likely (starting, to be clear, from a low base rate), which prices will change?

My first prediction is that most market prices won’t move very much. In the short run, VIX might rise, … But … would probably [quickly] return to normal levels. … I would bet on defense stocks to rise, … alien drone probes … might be observing with the purpose of rendering judgment. If they are offended by our militaristic tendencies, the quality of our TV shows and our inability to adopt the cosmopolitan values of “Star Trek” over the next 30 years, maybe they will zap us into oblivion. But … after such an act of obliteration, neither gold nor Bitcoin will do you any good.

Note that Cowen touches on a crucial issue, what if they judge us, but with a flippant tone and only for the purpose of predicting asset prices, which are set by other investors. If he were to directly and seriously consider that issue, he’d have violated the key taboo.

Here is Megan McArdle (@asymmetricinfo) in an 835-word Washington Post article:

These are all major, important stories, stories that lives and futures depend upon. And yet they’re almost irrelevant compared to the question that isn’t anywhere in my Twitter feed right now: Are we being watched by alien technology? …

Other humans … would not will the death of our entire species. Aliens might. … Whether we’re being visited, and what they might be up to, is the most important question of anyone’s lifetime, because, if so, everything that currently obsesses us, including the pandemic, will retreat to a historical footnote. …

So I’ve been surprised to find that the story of unexplained sightings, which has now been percolating for years, has been mostly a subplot to more ordinary human politics and folly. … it seems to be mostly fodder for jokes.  …Why is this particular unknowable getting such short shrift? …

One possibility is that UFOs have a social status problem; historically, they are associated with cranks … Thus, most … reflexively refuse to take the topic seriously. … But the third option is that we understand at some level that aliens would be a Very Big Deal — and that most of the possibilities for alien contact are pretty unpleasant. … the alternative is so horrible that I suspect for many of us, it simply doesn’t bear thinking about.

This is like all those long calls for a “conversation on race” that can’t seem to find the space to actually start conversing on race. (Because there is very little that one can safely say.) Here McArdle talks at length about being puzzled that we aren’t talking about the key issue, about which she doesn’t actually say much. In response to my complaint she tweeted “I did my best in 800 words!”

I’m pretty sure that any of these authors could have directly addressed the big “elephant in the room” alien issues here, if they had so desired. I’ve tried to do better.


The Debunking of Debunking

In a new paper in Journal of Social Philosophy, Nicholas Smyth offers a “moral critique” of “psychological debunking”, by which he means “a speech‐act which expresses the proposition that a person’s beliefs, intentions, or utterances are caused by hidden and suspect psychological forces.” Here is his summary:

There are several reasons to worry about psychological debunking, which can easily counterbalance any positive reasons that may exist in its favor:

1. It is normally a form of humiliation, and we have a presumptive duty to avoid humiliating others.
2. It is all too easy to offer such stories without acquiring sufficient evidence for their truth,
3. We may aim at no worthy social or individual goals,
4. The speech‐act itself may be a highly inefficient means for achieving worthy goals, and
5. We may unwittingly produce bad consequences which strongly outweigh any good we do achieve, or which actually undermine our good aims entirely.

These problems … are mutually reinforcing. For example, debunking stories would not augment social tensions so rapidly if debunkers were more likely to provide real evidence for their causal hypotheses. Moreover, if we weren’t so caught up in social warfare, we’d be much less likely to ignore the need for evidence, or to ignore the need to make sure that the values which drive us are both worthy and achievable.

That is, people may actually have hidden motives, these might in fact explain their beliefs, and critics and audiences may have good reasons to consider that possibility. Even so, Smyth says that it is immoral to humiliate people without sufficient reason, and we in fact do tend to humiliate people for insufficient reasons when we explain their beliefs via hidden motives. Furthermore, we tend to lower our usual epistemic standards to do so.

This sure sounds to me like Smyth is offering a psychological debunking of psychological debunking! That is, his main argument against such debunking is via his explaining this common pattern, that we explain others’ beliefs in terms of hidden motives, by pointing to the hidden motives that people might have to offer such explanations.

Now Smyth explicitly says that he doesn’t mind general psychological debunking, only that offered against particular people:

I won’t criticize high‐level philosophical debunking arguments, because they are distinctly impersonal: they do not attribute bad or distasteful motives to particular persons, and they tend to be directed at philosophical positions. By contrast, the sort of psychological debunking I take issue with here is targeted at a particular person or persons.

So presumably Smyth doesn’t have an issue with our book The Elephant in the Brain: Hidden Motives in Everyday Life, as it also stays at the general level and doesn’t criticize particular people. And so he also thinks his debunking is okay, because it is general.

However, I don’t see how staying with generalities saves Smyth from his own arguments. Even if general psychological debunking humiliates large groups all at once, instead of individuals one at a time, it is still humiliation. Which he may still do, yet should avoid, given his inadequate reasons, his lowering of epistemic standards, the better ways available to achieve his goals, and the bad consequences he may unwittingly produce. Formally, his arguments work just as well against general as against specific debunking.

I’d say that if you have a general policy of not appearing to pick fights, then you should try to avoid arguing by blaming your opponents’ motives if you can find other arguments sufficient to make your case. But that’s just an application of the policy of not visibly picking fights when you can avoid them. And many people clearly seem to be quite willing and eager to pick fights, and so don’t accept this general policy of avoiding fights.

If your policy were just to speak the most relevant truth at each point, to most inform rational audience members at that moment on a particular topic, then you probably should humiliate many people, because in fact hidden motives are quite common and relevant to many debates. But this speak-the-most-truth policy tends to lose you friends and associates over the longer run, which is why it is usually not such a great strategy.


Subtext Shows Status

When we talk, we say things that are explicit and direct, on the surface of the text, and we also say things that are hidden and indirect, said in more deniable ways via subtext. Imagine that there were a “flattext” type of talk (or writing) in which subtext was much harder to reliably express and read. Furthermore, imagine that it was easy to tell that a speaker (or writer) was using this type of talk. So that by talking in this way you were verifiably not saying as much subtext.

Yes, it seems very hard to make expressing subtext infinitely hard, but flattext could have value without going to that extreme. Some have claimed that the artificial language Lojban is in some ways such a talk type.

So who would use flattext? A Twitter poll finds that respondents expect that on average they’d use flattext about half of the time, so they must expect many reasons to want to deny that they use subtext. Another such poll finds that they on average expect official talk to be required to be flattext. Except they are sharply divided between a ~40% that thinks it would be required >80% of the time, and another ~40% who thinks it would be required <20% of the time.

The obvious big application of flattext is people and organizations who are often accused of saying bad things via subtext. Such as people accused of illicit flirting or sexual harassment. Or people accused of “dogwhistling” disliked allegiances. Or firms accused of over-promising or under-warning to customers, employees, or investors.

As people are quite willing to accuse for-profit firms of bad subtext, I expect they’d be the most eager users. As would people like myself who are surrounded by hostile observers eager to identify particular texts as showing evil subtext. You might think that judges and officials speaking to the public in their official voice would prefer flattext, as it better matches their usual tone and style which implicitly claims that they are just speaking clearly and simply. But that might be a hypocrisy, and they may reject flattext so that they can continue to say subtext.

Personal servants and slaves from centuries ago were required to speak in a very limited and stylized manner which greatly limited subtext. They could suffer big bad consequences for ever being accused of a tone of voice or manner that signaled anything less than full respect and deference to their masters.

Putting this all together, it seems that the ability to regularly and openly use subtext is a sign of status and privilege. We “put down” for-profit firms in our society by discouraging their use of subtext, and mobs do similarly when they hound enemies using hair-trigger standards ready to accuse them of bad subtext. And once low status people and organizations are cowed into avoiding subtext, then others can complain that they lack humanity, as they don’t show a sense of humor, which is more clear evidence that they are evil.

So I predict that if flattext were actually available, it would be mainly used by low status people and organizations to protect themselves from accusations of illicit subtext. As our enforcement of anti-subtext rules is very selective. Very risk averse government agencies might use it, but not high status politicians.


Who Wants Good Advice?

Bryan Caplan:

1. Finish high school. 2. Get a full-time job once you finish school. 3. Get married before you have children. ….
While hardly anyone explicitly uses [this] success sequence to argue that we underrate the blameworthiness of the poor for their own troubles, critics still hear this argument loud and clear – and vociferously object. … Everyone – even the original researchers – insists that the success sequence sheds little or no light on who to blame for poverty. … talking about the success sequence so agitates the critics.

A scene from the excellent documentary Minding the Gap:

Bing: Do you, do you feel, like, concerned that [your young son] Elliot’s going to grow up, like, messed up?
Zack: Sigh. I’m 50/50 about it.
Lately I have been concerned over my influence on him, and as he gets older, how he’s gonna look at the difference between the [middle class] way his family lives and the [lower class] way I live. And.
A lot of people grow up and they are [starts a denigrating head wiggle and affected speaking style] nununu, fucking, I’m gonna play football, and I’m gonna go to college and I’m gonna get this nice office job and start a family and have 2.5 kids and a car and a garage and everything’s just gonna be nice. And I’ll buy a boat and a snow mobile. [end nodding and affected style]
I’m like ‘Fuck you, you piece of shit.’ Like, just cause you’re too fucking weak to make your own decisions and decide what you want to do with your own life, doesn’t mean everyone else has got to be like you.
Ha, ha, I don’t know, fuck, ha ha. I, ah, ask me another question. (1:10:52-1:12:00)

Zack seems to have long been well aware that he flouted the usual life advice. He lashes out at those who do follow it, and he seems quite sensitive about the issue. Much like all those sociologists sensitive about discussing or recommending the success sequence.

Many people, including myself and Bryan, think it is a shame that so many seem worse off from making poor lifestyle choices, and so are inclined to recommend that good advice be spread more widely. However, what if most everyone who makes poor choices is actually well aware of the usual good advice when they make their poor choices? And what if they like having the option to later pretend that they were unaware, to gain sympathy and support for their resulting predicaments? Such people might then resent the wider spreading of the good advice, seeing it as an effort to take away their excuse, to blame them for their problems, and to reduce their sympathy and support.

That’s my best-guess interpretation of the crazy paranoid excuses I’ve heard to oppose my free-agents-for-all proposal. (If you doubt me, follow those links.) It would cost nothing to give everyone an agent who gets ~15% of their income, and so has a strong incentive to advise and promote them. Yet I mainly hear complaints that such agents would: force clients to work in oppressive company towns, censor media to cut any anti-work messages, lobby for higher taxes, or send out minions to undermine promising artistic careers. Even though becoming an agent gives you no added powers; you can only persuade.

In a poll, most opposed even a test of the idea.

My conclusion: most people are well aware of a lot of advice, widely interpreted as good advice, that they don’t intend to follow. So they don’t actually want agents to give them good advice, as others would hear about that and then later give them less sympathy for not following the good advice that they have no intention of following. Yes, their children and other people in the world might benefit from such advice, but on this issue they are too focused on themselves to care.

Note this theory is similar to my standard theory of why firm managers don’t want prediction markets on their deadlines. Early market estimates take away their favorite excuse if they miss a deadline, that all was going well until something came out of left field and knocked them flat. It’s so rare a problem that it couldn’t be foreseen, and will never happen again, so no need to hold anyone responsible.


Social Proof, But of What?

People tend to (say they) believe what they expect that others around them will soon (say they) believe. Why? Two obvious theories:
A) What others say they believe embodies info about reality,
B) Key audiences respect us more when we agree with them

Can data distinguish these theories? Consider a few examples.

First, consider that in most organizations, lower level folks eagerly seek “advice” from upper management. Except that when such managers announce their plan to retire soon, lower folks immediately become less interested in their advice. Manager wisdom stays the same, but the consensus on how much others will defer to what they say collapses immediately.

Second, consider that academics are reluctant to cite papers that seem correct, and which influenced their own research, if those papers were not published in prestigious journals, and seem unlikely to be so published in the future. They’d rather cite a less relevant or influential paper in a more prestigious journal. This is true not only for strangers to the author, but also for close associates who have long known the author, and cited that author’s other papers published in prestigious journals. And this is true not just for citations, but also for awarding grants and jobs. As others will mainly rely on journal prestige to evaluate paper quality, that’s what academics want to use in public as well, regardless of what they privately know about quality.

Third, consider the fact that most people will not accept a claim on topic area X that conflicts with what MSM (mainstream media) says about X. But that could be because they consider the media more informed than other random sources, right? However, they will also not accept this claim on X when made by an expert in X. But couldn’t that be because they are not sure how to judge who is an expert on X? Well let’s consider experts in Y, a related but different topic area from X. Experts in Y should know pretty well how to tell who is an expert in X, and know roughly how much experts can be trusted in general in areas X and Y.

Yet even experts in Y are also reluctant to endorse a claim made by an expert in X that differs from what MSM says about X. As the other experts in Y whose respect they seek also tend to rely on MSM for their views on X, our experts in Y want to stick with those MSM views, even if they have private info to the contrary.

These examples suggest that, for most people, the beliefs that they are willing to endorse depend more on what they expect their key audiences to endorse, relative to their private info on belief accuracy. I see two noteworthy implications.

First, it is not enough to learn something, and tell the world about it, to get the world to believe it. Not even if you can offer clear and solid evidence, and explain it so well that a child could understand. You need to instead convince each person in your audience that the other people who they see as their key audiences will soon be willing to endorse what you have learned. So you have to find a way to gain the endorsement of some existing body of experts that your key audiences expect each other to accept as relevant experts. Or you have to create a new body of experts with this feature (such as say a prediction market). Not at all easy.

Second, you can use these patterns to see which of your associates think for themselves, versus aping what they think their audiences will endorse. Just tell them about one of the many areas where experts in X disagree with MSM stories on X (assuming their main audience is not experts in X). Or see if they will cite a quality never-to-be-prestigiously-published paper. Or see if they will seek out the advice of a soon-to-be-retired manager. See not only if they will admit which is more accurate in private, but if they will say so when their key audience is listening.

And I’m sure there must be more examples that can be turned into tests (what are they?).


Contra-Counting Coalitions Value Variety

These events probably happened in the reverse order, but imagine humans inventing counting after herding. That is, imagine a community long ago which herded animals, and where having a better herd was a big mark of higher status. Since they could not count, these humans gossiped about who had the better herd. For example, they traded anecdotes about times when someone’s herd had seemed especially awe-inspiring or dingy. And via gossip (and its implicit coalition politics), they formed a rough consensus on who had the best herds. A consensus where the opinions of high status folks tended to count for more.

Then someone invented counting and said “This will help us ensure that we aren’t missing stragglers when we bring our herds back from grazing”, and “Now we can objectively measure who has the larger flock”. While this community might be grateful for that first feature, I predict that they would hate the second one.

Folks would point out that size isn’t the only factor that matters for a better herd, that counting mistakes are possible, and that gossip about herd counts might inform herd thieves about who to target. Some say this won’t stop people from gossiping lots about whose herd is better, while others say that it will cut gossiping but that’s bad as gossip is good. Better to ban counting, they all say.

Don’t believe me? Consider these poll results (and attached comments).


What Do Workers Want?

I’m old enough to remember that within a society pushing more traditional gender roles, men often asked each other “what do women want?” It was widely believed, and I think then true, that it was much easier (for men) to predict what men wanted. Men would tell you what they wanted, and would in fact be relatively content, at least for a while, if they got what they had asked for. In contrast, while women would often express opinions on what they might like, it was harder to predict how content women might be with getting various things.

As a negotiation strategy, I think this kinda made sense for women as a response to their having less direct and overt control within traditional male-female relations. A man with more power to make the official choices for the couple might be tempted to try to figure out the minimum he needed to spend to satisfy his woman, after which he could spend all the rest on himself. Her evasiveness and ambiguity re what it would take to satisfy her let her extract a larger fraction of their joint surplus. She could keep him in real doubt as to whether she might become very unhappy and tempted to take extreme actions.

Our gender roles today do not have men being as strongly dominant. But such strong dominance does continue in employee-employer relations. Employees can quit, but if they don’t they mostly have to do what their employers say. In this situation, employees may also feel (perhaps mistakenly) that they benefit from evasiveness and ambiguity about what they want, and what it takes to satisfy them.

I just did two sets of polls that seem to confirm this. I asked people in two different ways about the importance of eight different features of jobs/careers: money, control, respect, time, health, flow, happiness, and meaning. Here are the weights, relative to money, via asking to choose between four options (N = 376-432), and via (a median lognormal fit to) asking for a weight number (N = 170-218).
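For readers curious what a “median lognormal fit” to stated weights might look like, here is a minimal sketch using hypothetical responses rather than the actual poll data; the real analysis may well have differed in detail.

```python
import numpy as np

# Hypothetical elicited weights for one job feature, stated relative to money
# (e.g. "2" means twice as important as money). Not the actual poll data.
responses = np.array([0.5, 1.0, 1.0, 2.0, 3.0, 0.8, 1.5, 4.0, 0.7, 2.5])

# Fit a lognormal by estimating the mean and spread of the log-responses;
# the median of a lognormal is exp(mu), which resists being dragged upward
# by a few very large stated weights.
log_r = np.log(responses)
mu, sigma = log_r.mean(), log_r.std(ddof=1)
median_weight = np.exp(mu)

print(f"fitted lognormal: mu={mu:.2f}, sigma={sigma:.2f}")
print(f"median weight relative to money: {median_weight:.2f}")
```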

Both methods found a lot of individual variation, but only weak and inconsistent differences in aggregate importance. And I just don’t believe the low priority put here on respect.

This looks to me like people just don’t like to be pinned down on which of these factors are more important to them. So either they do not know what they prefer, or they don’t want what they prefer to be clearly known to others. Worker lists or scoresheets of ideal job features seem no more realistic or useful than lists or scoresheets of ideal romantic partner features, and probably fail for similar reasons.

What do workers want? I’m sure you’d love to know, wouldn’t you boss-man. Which is why I won’t tell. And may not know. I won’t give you the satisfaction of knowing just how much you could demand from me before I’d quit. On that, I want you to remain forever uncertain. Even if that comes at the cost of my not getting what I want, because I don’t really know what I want.

Alas, this worker reluctance to say directly what they want is probably an obstacle to widespread adoption of career agents. And note that this is a different mechanism for producing hidden motives from those I’ve discussed before: trying to present good motives or evading norm enforcement.


Why Not Clearer Legitimacy?

In political science, legitimacy is the right and acceptance of an authority, usually a governing law or a regime, … a system of government, … without which a government will suffer legislative deadlock(s) and collapse. … Unpopular régimes survive because they are considered legitimate by a small, influential élite. …

In moral philosophy, the term legitimacy is often positively interpreted as the normative status conferred by a governed people upon their governors’ institutions, offices, and actions, based upon the belief that their government’s actions are appropriate uses of power by a legally constituted government. (More)

Legitimacy is a common belief among the governed that they prefer their current system of government to possible alternatives. This is widely seen as a good thing, and in its absence many say that violent revolt or foreign influence is justified. So you might think that regimes would be eager to show their legitimacy to those they govern, and to the world.

Now the absence of a recent violent revolt is evidence for some degree of legitimacy. But let us define the degree of legitimacy of a regime as the cost that its governed would be willing to pay to keep that regime from changing. In this case, the absence of recent revolt only places a rather low and negative lower bound on the degree of legitimacy. So you might think regimes would be eager to show much higher degrees of legitimacy. Perhaps even positive degrees.

A second way to show legitimacy is to offer an official way to change the system. Many regimes have a constitution that can in principle be changed if enough people lobby hard and long enough to trigger the various official acts required by that constitution to effect change. But while this sets a higher (negative) lower bound than does the absence of revolt, honestly it isn’t usually that much higher. The governed could still strongly prefer an alternative system of government, and yet not care enough to coordinate to sufficiently push the usual constitutional process.

A third way to show legitimacy is to advertise the results of polls of the governed on the topic. But not only are such polls almost never done, observers can reasonably question their neutrality and relevance. Who is trusted to do them, and how well do citizen responses to random questions on the subject out of the blue indicate what they’d say if they thought about the topic more?

Regular referenda seem like a more informative approach. Hold elections at standard intervals wherein the governed are asked to endorse either the status quo or change. (In the system, not the people.) In this case, discussion leading up to the election could induce more thought, and give change advocates a better chance to make their case and persuade voters.

Voters might be asked to pick one of several directions of change, or they might just initiate a process that will soon generate more concrete alternatives and then offer them to the electorate. I’m sure that a lot could be said about the best way to run such referenda, but for today my focus is on the fact that almost no regimes ever hold such referenda. Not even bad ones intended to prevent regime change and produce the appearance of more legitimacy than actually exists.

Regimes the world over give lip service to the idea of regime legitimacy, saying both that it is important for regimes to have high legitimacy, and claiming that they in particular have high legitimacy. Yet in fact the most that regimes usually do is to include in their constitutions very slow difficult processes for regime change, processes that are rarely ever actually invoked. Regimes point to that plus the lack of recent revolts as sufficient evidence of their legitimacy. They do not institute regular legitimacy referenda.

Of course most ordinary people are not very upset about this fact. If they were to demand such referenda, then politicians might run on platforms which support them, and they might happen. Yet if asked these same ordinary people would also probably claim that it is important for regimes to have high legitimacy. Especially their own. It seems that both the governed and their governors pretend to care more about legitimacy than they do.
