Tag Archives: Hypocrisy

Why Info Push Dominates

Some phenomena to ponder:

  1. Decades ago I gave talks about how the coming world wide web (which we then called “hypertext publishing”) could help people find more info. Academics would actually reply “I don’t need any info tools; my associates will personally tell me about any research worth knowing about.”
  2. Many said the internet would bring a revolution of info pull, where people pay to get the specific info they want, to supplant the info push of ads, where folks pay to get their messages heard. But even Google gets most revenue from info pushers, and our celebrated social media mainly push info too.
  3. Blog conversations put a huge premium on arguments that appear quickly after other arguments. Mostly, arguments that appear by themselves a few weeks later might as well not exist, for all they’ll influence future expressed opinions.
  4. When people hear negative rumors about others, they usually believe them, and rarely ask the accused directly for their side of the story. This makes it easy to slander folks who aren’t well connected enough to have friends who will tell them who said what about them.
  5. We usually don’t seem to correct well for “independent” confirming clues that actually come from the same source a few steps back. We also tolerate higher status folks dominating meetings and other communication channels, thereby counting their opinions more. So ad campaigns often have time-correlated channel-redundant bursts with high status associations.

Overall, we tend to wait for others to push info onto us, rather than taking the initiative to pull info in, and we tend to gullibly believe such pushed clues, especially when they come from high status folks, come redundantly, and come correlated in time.

A simple explanation of all this is that our mental habits were designed to get us to accept the opinions of socially well-connected folks. Such opinions may be more likely to be true, but even if not they are more likely to be socially convenient. Pushed info tends to come with the meta clues of who said it when and via what channel. In contrast, pulled info tends to drop many such meta clues, making it harder to covertly adopt the opinions of the well-connected.

Rejection Via Advice

We get status in part from the status of our associates, which is a credible signal of how others see us. Because of this, we prefer to associate with high status folks. But it looks bad to be overt about this. So we try to hide this motive, and to pretend that other motives dominate our choices of associates.

This would be easier to do if status were very stable. Then we could take our time setting up plausible excuses for wanting to associate with particular high status folks, and for rejecting association bids by particular low status folks. But in fact status fluctuates, which can force us to act quickly. We want to quickly associate more with folks who rise in status, and to quickly associate less with those who fall in status. But the coincidence in time between their status change and our association change may make our status motives obvious.

Since association seems a good thing in general, trying to associate with anyone seems a “nice” act, requiring fewer excuses. In contrast, weakening an existing association seems less nice. So we mainly need good excuses for pushing away those whose status has recently fallen. Such opportunistic rejection, just when our associates most need us, seems especially wrong and mean. So how do we manage it?

One robust strategy is to offer random specific advice. You acknowledge their problems, express sympathy, and then take extra time to “help” them by offering random specific advice about how to prevent or reverse their status fall. Especially advice that will sound good if quoted to others, but is hard for them to actually follow, and is unlikely to be the same as what other associates advise.

If different associates offer different advice, then this person with fallen status simply must fail to follow most of that advice. Which then gives all those folks whose advice was not followed an excuse to distance themselves from this failure. And those whose advice was followed, well, at least they get the status mark of power – a credible claim that they have influence over others. Either way, the falling status person loses even more status.

Unless of course the advice followed is actually useful. But what are the chances of that?

Added 27Dec: A similar strategy would be useful if your status were to rise, and you wanted to drop associates in order to make room for higher status associates.

Wyden Puff Piece Errors

In the latest New Yorker, Ryan Lizza writes on “State of Deception: Why won’t the President rein in the intelligence community?” Which would be an interesting topic. Alas, Lizza says little about it. Instead he summarizes the history of NSA spying on US citizens, supported via misleading statements and tortured legal interpretations, and talks the most about one Senator Ron Wyden’s heroic fight against the NSA.

Even though Wyden hasn’t actually succeeded at much. Lizza tells us that Wyden attached sunset provisions to the Patriot Act (which he supported), and asked the question at a Senate hearing where the NSA head’s answer was later shown to be misleading. Lizza speculates that Wyden’s many secret memos “repeatedly challenging the NSA’s contention that [a particular] program was effective” caused the NSA to drop that program. Oh and Wyden voted against some bills that passed, introduced bills that didn’t pass, and argued with Obama.

Here is the concrete Wyden accomplishment for which Lizza gives the most detail:

Three months later, the Defense Department started a new program with the Orwellian name Total Information Awareness. T.I.A. was based inside the Pentagon’s Information Awareness Office, which was headed by Admiral John Poindexter. In the nineteen-eighties, Poindexter had been convicted, and then acquitted, of perjury for his role in the Iran-Contra scandal. He wanted to create a system that could mine a seemingly infinite number of government and private-sector databases in order to detect suspicious activity and preëmpt attacks. The T.I.A. system was intended to collect information about the faces, fingerprints, irises, and even the gait of suspicious people. In 2002 and 2003, Wyden attacked the program as a major affront to privacy rights and urged that it be shut down.

In the summer of 2003, while Congress debated a crucial vote on the future of the plan, Wyden instructed an intern to sift through the Pentagon’s documents about T.I.A. The intern discovered that one of the program’s ideas was to create a futures market in which anonymous users could place bets on events such as assassinations and terrorist attacks, and get paid on the basis of whether the events occurred. Wyden called Byron Dorgan, a Democratic senator from North Dakota, who was also working to kill the program. “Byron, we’ve got what we need to win this,” he told him. “You and I should make this public.” Twenty-four hours after they exposed the futures-market idea at a press conference, Total Information Awareness was dead. Poindexter soon resigned.

It was Wyden’s first real victory on the Intelligence Committee. (more)

That “futures market” program mentioned was called the Policy Analysis Market (PAM). As I was a chief architect, I happen to know that this discussion is quite misleading:

  1. TIA was a DARPA research project to develop methods for integrating masses of info; it wasn’t an actual program to handle such info masses.
  2. I’ve been told by several sources that TIA research didn’t stop, it just moved elsewhere. PAM, in contrast, did stop.
  3. PAM was not part of TIA; the only relation is that both were among the score of research programs under Poindexter in the DARPA management hierarchy.
  4. Though Wyden called it “Terrorism Futures,” PAM was mainly about forecasting geopolitical instability in the MidEast. The claim that it was about terrorism rested on a single sample concept screen, shown in background material on the project website, which included a small miscellaneous section listing the events “Arafat assassinated” and “North Korea missile strike.”

All those errors in just two paragraphs of a 12,500-word article. Makes me wonder how many more errors are in the rest.

It is hard to believe that Lizza’s article didn’t get a lot of input from Wyden. So Wyden is likely responsible for most of these errors. Thus to fight the NSA’s spying supported by lying, Wyden eagerly lied about an unrelated research program, in order to kill a research program with a symbolic tangential relation to NSA spying. Which wasn’t actually killed. Seems a bit underwhelming as a reason to make Wyden the main actor in a story on NSA spying. I see better candidates.

Graeme Wood on Futarchy

At the end of his article on the deaths of Intrade and its founder John Delaney, Graeme Wood considers futarchy:

It’s perhaps no great surprise that we haven’t embraced Hanson’s “futarchy.” Our current political system resists dramatic change, and has resisted it for 237 years. More traditional modes of prediction have proved astonishingly bad, yet they continue to run our economic and political worlds, often straight into the ground. Bubbles do occur, and we can all point to examples of markets getting blindsided. But if prediction markets are on balance more accurate and unbiased, they should still be an attractive policy tool, rather than a discarded idea tainted with the odor of unseemliness. As Hanson asks, “Who wouldn’t want a more accurate source?”

Maybe most people. What motivates us to vote, opine, and prognosticate is often not the desire for efficacy or accuracy in worldly affairs—the things that prediction markets deliver—but instead the desire to send signals to each other about who we are. Humans remain intensely tribal. We choose groups to associate with, and we try hard to show everybody which groups we belong to. We don’t join the Tea Party because we have exhaustively studied and rejected monetarism, and we don’t pay extra for organic food because we have made a careful cost-benefit analysis based on research about its relative safety. We do these things because doing so says something that we want to convey to others. Nor does the accuracy of our favorite talking heads matter that much to us. More than we like accuracy, we like listening to talkers on our side, and identifying them as being on our team—the right team.

“We continue to have consistent results and evidence that markets are accurate,” Hanson says. “If the question is, ‘Do these things predict well?,’ we have an answer: They do. But that story has to be put up against the idea that people never really wanted more accurate sources.”

On this theory, the techno-libertarian enthusiasts got the technology right, and the humanity wrong. Whenever John Delaney showed up on CNBC, hawking his Intrade numbers and describing them as the most accurate and impartial around, he was also selling a future that people fundamentally weren’t interested in buying. (more)

I don’t much disagree — I raised these issues with Wood when he interviewed me. As usual, our hopes for idealistic outcomes mostly depend on finding ways to shame people into actually supporting what they pretend to support, by making the difference too obvious to ignore.

More specifically, I hope prediction markets within firms may someday gain a status like cost accounting today. In a world where no one else did cost accounting, proposing that your firm do it would basically suggest that someone was stealing there. Which would look bad. But in a world where everyone else does cost accounting, suggesting that your firm not do it would suggest that you want to steal from it. Which also looks bad.

Similarly, in a world where few other firms use prediction markets, suggesting that your firm use them on your project suggests that your project has an unusual problem in getting people to tell the truth about it via the usual channels. Which looks bad. But in a world where most firms use prediction markets on most projects, suggesting that your project not use prediction markets would suggest you want to hide something. That is, you don’t want a market to predict if your project will make its deadline because you don’t want others to see that it won’t make the deadline. Which would look bad.

Once prediction markets were a standard accepted practice within firms, it would be much easier to convince people to use them in government as well.

The Coalition Politics Hypothesis

Game theory lets us analyze precise models of social situations. While each model leaves out much that is important, the ability to see how an entire set of payoffs, info, and acts work together can give powerful insights into social behavior. But it does matter a lot which games we think apply best to which real situations.

Today the game most often used as a metaphor for general social instincts is the public goods game, where individuals contribute personal efforts to benefit everyone in a group. This is seen as a variation on the prisoner’s dilemma. With this metaphor in mind, people see most social instincts as there to detect and reward contributions, and to punish free-riders. Many social activities that on the surface appear to have other purposes are said to be really about this. Here, “pro-social” is good for the group, while “anti-social” is bad. Institutions or policies that undercut traditional social instincts are suspect.

While this metaphor does give insight, the game I see as a better metaphor for general social instincts is this:

Divide The Dollar Game … There are three players … 1, 2, 3. The players wish to divide 300 units of money among themselves. Each player can propose a payoff such that no player’s payoff is negative and the sum of all the payoffs does not exceed 300. … Players get 0 unless there is some pair of players {1, 2}, {2, 3}, or {1, 3} who propose the same allocation, in which case they get this allocation. …

It turns out that in any equilibrium of this game, there is always at least one pair of players who would both do strictly better by jointly agreeing to change their strategies together. …

Suppose the negotiated agreements are tentative and non-binding. Thus a player who negotiates in a sequential manner in various coalitions can nullify his earlier agreements and reach a different agreement with a coalition that negotiates later. Here the order in which negotiations are made and nullified will have a bearing on the final outcome. … It is clear that coalitions that get to negotiate later hold the advantage in this scheme. (more)

That is, most social behavior is about shifting coalitions that change how group benefits are divided, and social instincts are mostly about seeing what coalitions to join and how to get others to want you in their coalitions. Such “social” behavior isn’t good for the group as a whole, though it can be good for your coalition. Because coalition politics can be expensive, institutions or policies that undercut it can be good overall.
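The instability claimed in the quoted passage is easy to verify directly. Here is a minimal sketch (in Python; the function name and sample allocations are my illustration, not from the quoted source) that, for any proposed split of the 300 units, searches for a pair of players who can jointly deviate so that both strictly gain:

```python
from itertools import combinations

TOTAL = 300

def improving_pair(alloc):
    """Given a 3-player allocation summing to at most TOTAL, return a
    pair (i, j) plus a new allocation making both strictly better off,
    or None if no such pair exists."""
    for i, j in combinations(range(3), 2):
        # What the pair could claim by cutting out the third player.
        surplus = TOTAL - alloc[i] - alloc[j]
        if surplus >= 2:  # enough to give each pair member a strict gain
            new = list(alloc)
            new[3 - i - j] = 0  # the excluded player gets nothing
            new[i] = alloc[i] + surplus // 2
            new[j] = alloc[j] + surplus - surplus // 2
            return (i, j), new
    return None

# Sample allocations of the 300 units: each is unstable, since some
# pair can always do strictly better by cutting out the third player.
for alloc in [(100, 100, 100), (150, 150, 0), (300, 0, 0), (0, 120, 180)]:
    pair, new = improving_pair(alloc)
    assert new[pair[0]] > alloc[pair[0]] and new[pair[1]] > alloc[pair[1]]
```

Any allocation that leaves money on the table, or gives the excluded player anything, admits such a joint deviation, which is why the quoted passage stresses that the order of negotiation, and the ability to renege, matter so much to the final outcome.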

In this view of social behavior, we expect to see great efforts to infer each person’s threat point – how much they and a coalition would lose if they leave that coalition. We also expect even greater efforts to infer each person’s loyalty – what coalitions they are likely to prefer and help. And we expect great efforts to signal desirable loyalties and threat points. When shifting coalitions are important, we expect lots of efforts to go into seeing and changing the focal points people use to coordinate which new coalitions form, and to seeing who will be pivotal in those changes.

At a meta level, people would also try to infer what other people think about these things. That is, folks will want to know what others think about various loyalties, threat points, and focal points, and in response those others will try to signal their opinions on such things. In other words, people will want to know how well others can track and influence changing fashions on these topics. At a higher meta level, people will want to know what others think that still others think about these things, i.e., they’ll want to know who is seen to be good at tracking fashion. And so on up the meta hierarchy.

When people talk, we expect them to say some things directly and clearly to all, to influence overall focal points. But we expect many other messages to be targeted to particular audiences, like “Let’s dump that guy from our coalition.” When such targeted messages might be overheard, or quoted to others, we expect talking to be indirect, using code words that obscure meanings, or at least give plausible deniability.

A social world dominated by shifting coalitions would spend modest efforts to influence temporary policies, such as how to divide up today’s spoils, and more efforts on rare chances to change longer term policies that more permanently divide spoils. Even more effort would be spent on rare chances to change who is possible as a coalition partner. For our forager ancestors, killing someone, or letting a new person live nearby, could change the whole game. In a firm today, hiring or firing someone can have similar effects.

This view of social behavior as mostly about shifting coalitions raises the obvious question: why doesn’t most social behavior and conversation seem on the surface to be about such things? And the obvious homo hypocritus answer is that we do such things indirectly to avoid admitting that this is what we are doing. Since coalition politics is socially destructive, we have long had social norms to discourage it, such as the usual norms against gossip. So we do these things indirectly, to get plausible deniability.

This can explain why we place such a high premium on spontaneity and apparent randomness in conversation and other leisure behavior. And also why we seem so uninterested in systematic plans to prioritize our efforts in charity and other good causes. And why we drop names so often. When we manage our shifting coalitions, we prefer to stay free to quickly shift our conversations and priorities to adapt to the changing fashions. If you ever wonder why the news, public discourse, and academia seem so uninterested in the topics most everyone would agree are really important, this is plausibly why.

Boss Hypocrisy

In our culture, we are supposed to resent and dislike bosses. Bosses get paid too much, are mad with power, seek profits over people, etc. In fiction, we are mainly willing to see bosses as good when they run a noble work group, like a police, military, medicine, music, or sport group. In such rare cases, it is ok to submit to boss domination to achieve the noble cause. Or a boss can be good if he helps subordinates fight a higher bad boss. Otherwise, a good person resents and resists boss domination. For example:

The [TV trope of the] Benevolent Boss is that rarity in the Work [Sit]Com: a superior who is actually superior, a nice guy who listens to employee problems and really cares about the issues of those beneath him. … A character that is The Captain is likely, but not required, to be a Benevolent Boss.
Contrast with Bad Boss and Stupid Boss. Compare Reasonable Authority Figure. In more fantastic works, this character usually comes in the form of Big Good. On the other hand, an Affably Evil character can be a benevolent boss with his mooks, as well.
In The Army, he is often The Captain, Majorly Awesome, Colonel Badass, The Brigadier, or even the Four Star Badass and may be A Father to His Men.
For some lucky workers, this is Truth in Television. For a lot of other people, this is some sort of malicious fantasy. (more)

But here is a 2010 (& 2011) survey of 1000 workers (30% bosses, half blue collar):

Agree or completely agree with:

  • You respect your boss 91%
  • You think your boss trusts you 91%
  • You think your boss respects you 91%
  • You trust your boss 86%
  • If your job was on the line, your boss would go to bat for you 78%
  • You consider your boss a friend 61%
  • You would not change a thing about your boss 59%
  • Your boss has more education than you 53%
  • You think you are smarter than your boss 37%
  • You aspire to have the boss’s job 30%
  • You work harder than your boss 28%
  • You feel pressure to conform to your boss’s hobbies/interests in order to get ahead 20% (more; more; more)

In reality most people respect and trust their bosses, see them as a friend, and so on. Quite a different picture than the one from fiction.

Foragers had strong norms against domination, and bosses regularly violate such norms. We retain a weak allegiance to forager norms in fiction and when we talk politics. But we also have deeper more ancient mammalian instincts to submit to powers above us. And also, our competitive economy probably tends to make real bosses be functional and useful, and we spend enough time on our jobs to see that.

Many other of our cultural presumptions are probably similar. We give lip service to them in the far modes of fiction and politics, but we quickly reject them in the near mode of concrete decisions that matter to us.

`Best’ Is About `Us’

Why don’t we express and follow clear principles on what sort of inequality is how bad? Last week I suggested that we want the flexibility to use inequality as an excuse to grab resources when grabbing is easy, but don’t want to obligate ourselves to grab when grabbing is hard.

It seems we prefer similar flexibility on who are the “best” students to admit to elite colleges. Not only do inside views of the admissions process seem to show careful efforts to avoid clarity on criteria; ordinary people also seem to support such flexibility:

Half [of whites surveyed] were simply asked to assign the importance they thought various criteria should have in the admissions system of the University of California. The other half received a different prompt, one that noted that Asian Americans make up more than twice as many undergraduates proportionally in the UC system as they do in the population of the state. When informed of that fact, the white adults favor a reduced role for grade and test scores in admissions—apparently based on high achievement levels by Asian-American applicants. (more)

Matt Yglesias agrees:

This is further evidence that there’s no stable underlying concept of “meritocracy” undergirding the system. But rather than dedicating the most resources to the “best” students and then fighting over who’s the best, we should be allocating resources to the people who are most likely to benefit from additional instructional resources.

But this seems an unlikely strategy for an elite coalition to use to entrench itself. If we were willing to admit the students who would benefit most by objective criteria like income or career success, we could use prediction markets. The complete lack of interest in this suggests that isn’t really the agenda.

Much of law is like this, complex and ambiguous enough to let judges usually draw their desired conclusions. People often say the law needs this flexibility to adapt to complex local conditions. I’m skeptical.

Inequality Talk Is About Grabbing

The US today has about 425 billionaires, over 1/3 of the world’s total. Many folks say these billionaires are unfairly unequal, and so we should tax them lots more.

People usually become billionaires via having “super-powers,” i.e., very unusual abilities, at least within some context. But what if most billionaires had super-powers of the traditional comic book sort, like x-ray vision or an ability to fly, etc.? That is, what if people with physical super-powers earned billions in the labor market by selling the use of these powers? Would folks be just as eager to tax them to reduce unfair inequality?

My guess is no, most would be less eager to tax billionaires with physical super-powers. And I offer this prediction as a test of my favored theory of expressed inequality concerns: that inequality talk is usually a covert way of coordinating who to maybe grab stuff from. Let me explain.

As I’ve discussed before, while people usually justify their inequality concerns by noting that inequality can make lower folks feel bad, that justification can apply equally to a great many sorts of inequality. Yet concern is actually only voiced about a very particular sort: financial inequality at a given time between the families of a nation. The puzzle in need of explaining is: why is so little concern expressed about all the other sorts of inequality?

My favored theory is an application of homo hypocritus: our forager ancestors developed the ability to express and enforce social norms, and then developed rich and subtle abilities to coordinate to evade those norms. One of those norms was that foragers weren’t supposed to grab stuff from each other just because they wanted the stuff, or just because that stuff was easy to grab. But they did have norms favoring sharing and equal treatment, and so it was ok to talk about who might be violating such norms, and what punishments to apply to violators.

But they all knew, at least subconsciously, that some groups would be quite effective at retaliating against such suggestions. The accused might physically resist the attempted punishment, or might retaliate with contrary accusations. So foragers needed ways not only to overtly accuse folks of violating norms, and to officially propose to take stuff away as punishment, but also to covertly discuss who might have especially nice stuff to take, and who they could most easily get away with grabbing from.

I suggest that most talk about the problems of inequality actually invokes this ancient hypocritical ability to covertly discuss where to find lots of nice easy-to-grab stuff. We don’t discuss inequalities across time, because it is hard to grab much more than we do from the past or the future. We don’t much discuss the inequality of rich foreigners, because it is much harder to grab their stuff. We don’t much discuss inequality of those with unusual artistic abilities or sexual attractiveness, because we can’t directly grab their advantages and while we might try to grab their material goods to compensate, they don’t have that much, and the grabbing would be hard. (Also, such folks have more social status to resist with. For foragers, status counted lots more than material goods for influence.)

A few people within our nation who each have lots and lots of material goods, however, seem to make a great target for grabbing. So people discover they have a deep moral concern about that particular inequality, and ponder what oh what could we possibly do to rectify this situation? Anyone have an idea? Anyone?

But if those few very rich folks had real physical super-powers, we would be a lot more afraid of their simple physical retaliation. They might be very effective at physically resisting our attempts to take their stuff. So somehow, conveniently, we just wouldn’t find that their unequal wealth evoked as much deeply felt important-social-issue-in-need-of-discussing moral concern in us. Because, I hypothesize, in reality those feelings only arise as a cover to excuse our grabbing, when such grabs seem worth the bother.

Impressive Power

Monday I attended a conference session on the metrics academics use to rate and rank people, journals, departments, etc.:

Eugene Garfield developed the journal impact factor a half-century ago based on a two-year window of citations. And more recently, Jorge Hirsch invented the h-index to quantify an individual’s productivity based on the distribution of citations over one’s publications. There are also several competing “world university ranking” systems in wide circulation. Most traditional bibliometrics seek to build upon the citation structure of scholarship in the same manner that PageRank uses the link structure of the web as a signal of importance, but new approaches are now seeking to harness usage patterns and social media to assess impact. (agenda; video)
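The h-index mentioned in the quote has a simple definition: the largest h such that h of one’s papers each have at least h citations. A minimal sketch in Python (my illustration, not from the session):

```python
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts:
    the largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times give h = 4: the top four
# papers each have at least four citations, but not five have five.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

The journal impact factor is similarly simple in form (citations in a given year to a journal’s items from the prior two years, divided by the count of citable items in those years); the harder question, as the session speakers discussed, is what such citation-based metrics actually measure.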

Session speakers discussed such metrics in an engineering mode, listing good features metrics should have, and searching for metrics with many good features. But it occurred to me that we can also discuss metrics in social science mode, i.e., as data to help us distinguish social theories. You see, many different conflicting theories have been offered about the main functions of academia, and about the preferences of academics and their customers, such as students, readers, and funders. And the metrics that various people prefer might help us to distinguish between such theories.

For example, one class of theories posits that academia mainly functions to increase innovation and intellectual progress valued by the larger world, and that academics are well organized and incentivized to serve this function. (Yes such theories may also predict individuals favoring metrics that rate themselves highly, but such effects should wash out as we average widely.) This theory predicts that academics and their customers prefer metrics that are good proxies for this ultimate outcome.

So instead of just measuring the influence of academic work on future academic publications, academics and customers should strongly prefer metrics that also measure wider influence on the media, blogs, business practices, ways of thinking, etc. Relative to other kinds of impact, such metrics should focus especially on relevant innovation and intellectual progress. This theory also predicts that, instead of just crediting the abstract thinkers and writers in an academic project, there are strong preferences for also crediting supporting folks who write computer programs, built required tools, do tedious data collection, give administrative support, manage funding programs, etc.

My preferred theory, in contrast, is that academia mainly functions to let outsiders affiliate with credentialed impressive power. Individual academics show exceptional impressive abstract mental abilities via their academic work, and academic institutions credential individual people and works as impressive in this way, by awarding them prestigious positions and publications. Outsiders gain social status in the wider world via their association with such credentialed-as-impressive folks.

Note that I said “impressive power,” not just impressiveness. This is the new twist that I’m introducing in this post. People clearly want academics to show not just impressive raw abilities, but also to show that they’ve translated such abilities into power over others, especially over other credentialed-as-impressive folks. I think we also see similar preferences regarding music, novels, sports, etc. We want people who make such things to show not only that they have impressive abilities in music, writing, athletics, etc., but also that they have translated such abilities into substantial power to influence competitors, listeners, readers, spectators, etc.

My favored theory predicts that academics will be uninterested in and even hostile to metrics that credit the people who contributed to academic projects without thereby demonstrating exceptional abstract mental abilities. This theory also predicts that while there will be some interest in measuring the impact of academic work outside academia, this interest will be mild relative to measuring impact on other academics, and will focus mostly on influence on other credentialed-as-impressives, such as pundits, musicians, politicians, etc. This theory also predicts little extra interest in measuring impact on innovation and intellectual progress, relative to just measuring a raw ability to change thoughts and behaviors. This is a theory of power, not progress.

Under my preferred theory of academia, innovation and intellectual progress are mainly side-effects, not main functions. They may sometimes be welcome side effects, but they mostly aren’t what the institutions are designed to achieve. Thus proposals that would tend to increase progress, like promoting more inter-disciplinary work, are rejected if they make it substantially harder to credential people as mentally impressive.

You might wonder: why would humans tend to seek signals of the combination of impressive abilities and power over others? Why not signal these things separately? I think this is yet another sign of homo hypocritus. For foragers, directly showing off one’s power is quite illicit, and so foragers had to show power indirectly, with strong plausible deniability. We humans evolved to lust after power and those who wield power, but to pretend our pursuit of power is accidental; we mainly just care about beauty, stories, exciting contests, and intellectual progress. Or so we say.

So does anyone else have different theories of academia, with different predictions about which metrics academics and their customers will prefer? I look forward to the collection of data on who prefers which metrics, to give us sharper tests of these alternative theories of the nature and function of academia. And theories of music, stories, sport, etc.


Suspecting Truth-Hiders

Tyler against bets:

On my side of the debate I claim a long history of successful science, corporate innovation, journalism, and also commentary of many kinds, mostly not based on personal small bets, sometimes banning them, and relying on various other forms of personal stakes in ideas, and passing various market tests repeatedly. I don’t see comparable evidence on the other side of this debate, which I interpret as a preference for witnessing comeuppance for its own sake (read Robin’s framing or Alex’s repeated use of the mood-affiliated word “bullshit” to describe both scientific communication and reporting). The quest for comeuppance is a misallocation of personal resources. (more)

My translation:

Most existing social institutions tolerate lots of hypocrisy, and often don’t try to expose people who say things they don’t believe. When competing with alternatives, the disadvantages such institutions suffer from letting people believe more falsehoods are likely outweighed by other advantages. People who feel glee from seeing the comeuppance of bullshitting hypocrites don’t appreciate the advantages of hypocrisy.

Yes existing institutions deserve some deference, but surely we don’t believe our institutions are the best of all possible worlds. And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself. Yes it is possible that such squelching is all for the best, but isn’t it just as likely that some folks are trying to hide things for private, not social, gains? Isn’t this a major reason we often rightly mood-affiliate with those who gleefully expose bullshit?

For example, if you were inspecting a restaurant and they seemed to be trying to hide some things from your view, wouldn’t you suspect they were doing that for private gain, not to make the world a better place? If you were put in charge of a new organization and subordinates seemed to be trying to hide some budgets and activities from your view, wouldn’t you suspect that was also for private gain instead of to make your organization better? Same for if you were trying to rate the effectiveness of a charity or government agency, or evaluate a paper for a journal. The more that people and habits seemed to be trying to hide something and evade incentives for accuracy, the more suspicious you would rightly be that something inefficient was going on.

Now I agree that people do often avoid speaking uncomfortable truths, and coordinate to punish those who violate norms against such speaking. But we usually do this when we have a decent guess of what the truth actually is that we don’t want to hear.

If it were just bad in general to encourage more accurate expressions of belief, then it seems pretty dangerous to let academics and bloggers collect status by speculating about the truth of various important things. If that is a good idea, why are more bets a bad idea? And in general, how can we judge well when to encourage accuracy and when to let the truth be hidden, from the middle of a conversation where we know lots of accuracy has been sacrificed for unknown reasons?
