Tag Archives: Hypocrisy

Impressive Power

Monday I attended a conference session on the metrics academics use to rate and rank people, journals, departments, etc.:

Eugene Garfield developed the journal impact factor a half-century ago based on a two-year window of citations. And more recently, Jorge Hirsch invented the h-index to quantify an individual’s productivity based on the distribution of citations over one’s publications. There are also several competing “world university ranking” systems in wide circulation. Most traditional bibliometrics seek to build upon the citation structure of scholarship in the same manner that PageRank uses the link structure of the web as a signal of importance, but new approaches are now seeking to harness usage patterns and social media to assess impact. (agenda; video)
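
For concreteness, here is a minimal sketch of how the h-index mentioned above is computed: it is the largest h such that one has h papers with at least h citations each. The code and the example citation counts are illustrative only, not from the session.

```python
# Minimal sketch: computing Hirsch's h-index from a list of
# per-paper citation counts (illustrative example, assumed numbers).

def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give h = 3:
# three papers each have at least 3 citations.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```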

Session speakers discussed such metrics in an engineering mode, listing good features metrics should have, and searching for metrics with many good features. But it occurred to me that we can also discuss metrics in social science mode, i.e., as data to help us distinguish social theories. You see, many different conflicting theories have been offered about the main functions of academia, and about the preferences of academics and their customers, such as students, readers, and funders. And the metrics that various people prefer might help us to distinguish between such theories.

For example, one class of theories posits that academia mainly functions to increase innovation and intellectual progress valued by the larger world, and that academics are well organized and incentivized to serve this function. (Yes such theories may also predict individuals favoring metrics that rate themselves highly, but such effects should wash out as we average widely.) This theory predicts that academics and their customers prefer metrics that are good proxies for this ultimate outcome.

So instead of just measuring the influence of academic work on future academic publications, academics and customers should strongly prefer metrics that also measure wider influence on the media, blogs, business practices, ways of thinking, etc. Relative to other kinds of impact, such metrics should focus especially on relevant innovation and intellectual progress. This theory also predicts that, instead of just crediting the abstract thinkers and writers in an academic project, there are strong preferences for also crediting supporting folks who write computer programs, build required tools, do tedious data collection, give administrative support, manage funding programs, etc.

My preferred theory, in contrast, is that academia mainly functions to let outsiders affiliate with credentialed impressive power. Individual academics show exceptional impressive abstract mental abilities via their academic work, and academic institutions credential individual people and works as impressive in this way, by awarding them prestigious positions and publications. Outsiders gain social status in the wider world via their association with such credentialed-as-impressive folks.

Note that I said “impressive power,” not just impressiveness. This is the new twist that I’m introducing in this post. People clearly want academics to show not just impressive raw abilities, but also to show that they’ve translated such abilities into power over others, especially over other credentialed-as-impressive folks. I think we also see similar preferences regarding music, novels, sports, etc. We want people who make such things to show not only that they have impressive abilities in music, writing, athletics, etc., but also that they have translated such abilities into substantial power to influence competitors, listeners, readers, spectators, etc.

My favored theory predicts that academics will be uninterested in and even hostile to metrics that credit the people who contributed to academic projects without thereby demonstrating exceptional abstract mental abilities. This theory also predicts that while there will be some interest in measuring the impact of academic work outside academia, this interest will be mild relative to measuring impact on other academics, and will focus mostly on influence on other credentialed-as-impressives, such as pundits, musicians, politicians, etc. This theory also predicts little extra interest in measuring impact on innovation and intellectual progress, relative to just measuring a raw ability to change thoughts and behaviors. This is a theory of power, not progress.

Under my preferred theory of academia, innovation and intellectual progress are mainly side-effects, not main functions. They may sometimes be welcome side effects, but they mostly aren’t what the institutions are designed to achieve. Thus proposals that would tend to increase progress, like promoting more inter-disciplinary work, are rejected if they make it substantially harder to credential people as mentally impressive.

You might wonder: why would humans tend to seek signals of the combination of impressive abilities and power over others? Why not signal these things separately? I think this is yet another sign of homo hypocritus. For foragers, directly showing off one’s power is quite illicit, and so foragers had to show power indirectly, with strong plausible deniability. We humans evolved to lust after power and those who wield power, but to pretend our pursuit of power is accidental; we mainly just care about beauty, stories, exciting contests, and intellectual progress. Or so we say.

So does anyone else have different theories of academia, with different predictions about which metrics academics and their customers will prefer? I look forward to the collection of data on who prefers which metrics, to give us sharper tests of these alternative theories of the nature and function of academia. And theories of music, stories, sport, etc.

Suspecting Truth-Hiders

Tyler against bets:

On my side of the debate I claim a long history of successful science, corporate innovation, journalism, and also commentary of many kinds, mostly not based on personal small bets, sometimes banning them, and relying on various other forms of personal stakes in ideas, and passing various market tests repeatedly. I don’t see comparable evidence on the other side of this debate, which I interpret as a preference for witnessing comeuppance for its own sake (read Robin’s framing or Alex’s repeated use of the mood-affiliated word “bullshit” to describe both scientific communication and reporting). The quest for comeuppance is a misallocation of personal resources. (more)

My translation:

Most existing social institutions tolerate lots of hypocrisy, and often don’t try to expose people who say things they don’t believe. When competing with alternatives, the disadvantages such institutions suffer from letting people believe more falsehoods are likely outweighed by other advantages. People who feel glee from seeing the comeuppance of bullshitting hypocrites don’t appreciate the advantages of hypocrisy.

Yes existing institutions deserve some deference, but surely we don’t believe our institutions are the best of all possible worlds. And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself. Yes it is possible that such squelching is all for the best, but isn’t it just as likely that some folks are trying to hide things for private, not social, gains? Isn’t this a major reason we often rightly mood-affiliate with those who gleefully expose bullshit?

For example, if you were inspecting a restaurant and they seemed to be trying to hide some things from your view, wouldn’t you suspect they were doing that for private gain, not to make the world a better place? If you were put in charge of a new organization and subordinates seemed to be trying to hide some budgets and activities from your view, wouldn’t you suspect that was also for private gain instead of to make your organization better? Same for if you were trying to rate the effectiveness of a charity or government agency, or evaluate a paper for a journal. The more that people and habits seemed to be trying to hide something and evade incentives for accuracy, the more suspicious you would rightly be that something inefficient was going on.

Now I agree that people do often avoid speaking uncomfortable truths, and coordinate to punish those who violate norms against such speaking. But we usually do this when we have a decent guess of what the truth actually is that we don’t want to hear.

If it were just bad in general to encourage more accurate expressions of belief, then it seems pretty dangerous to let academics and bloggers collect status by speculating about the truth of various important things. If that is a good idea, why are more bets a bad idea? And in general, how can we judge well when to encourage accuracy and when to let the truth be hidden, from the middle of a conversation where we know lots of accuracy has been sacrificed for unknown reasons?

Thought Crime Hypocrisy

Philip Tetlock’s new paper on political hypocrisy re thought crimes:

The ability to read minds raises the specter of punishment of thought crimes and preventive incarceration of those who harbor dangerous thoughts. … Our participants were highly educated managers participating in an executive education program who had extensive experience inside large business organizations and held diverse political views. … We asked participants to suppose that scientists had created technologies that can reveal attitudes that people are not aware of possessing but that may influence their actions nonetheless.

In the control condition, the core applications of these technologies (described as a mix of brain-scan technology and the IAT’s reaction-time technology) were left unspecified. In the two treatment conditions, these technologies were to be used … to screen employees for evidence of either unconscious racism (UR) against African Americans or unconscious anti-Americanism (UAA). … Liberals were consistently more open to the technology, and to punishing organizations that rejected its use, when the technology was aimed at detecting UR among company managers; conservatives were consistently more open to the technology, and to punishing organizations that rejected its use, when the technology was aimed at detecting UAA among American Muslims.

Virtually no one was ready to abandon that [harm] principle and endorse punishing individuals for unconscious attitudes per se. … When directly asked, few respondents saw it as defensible to endorse the technology for one type of application but not for the other—even though there were strong signs from our experiment that differential ideological groups would do just that when not directly confronted with this potential hypocrisy. …

Liberal participants were [more] reluctant to raise concerns about researcher bias as a basis for opposition, a reluctance consistent with [the] finding that citizens tend to believe that scientists hold liberal rather than conservative political views. …

This experiment confronted the more extreme participants with a choice between defending a double standard (explaining why one application is more acceptable) and acknowledging that they may have erred initially (reconsidering their support for the ideologically agreeable technology). … Those with more extreme views were more disposed to … backtrack from their initial position. (more; ungated)

So if we oppose thought crime in general, but support it when it serves our partisan purposes, that probably means that we will have it in the long run. There will be thought crime.

Your Honesty Budget

Kira Newman runs The Honesty Experiment:

30 days. Complete honesty. Can they survive it? — Follow their journey and read about honesty in life, love, and business.

She interviewed me recently. One excerpt:

Honesty Experiment: How do we solve this conundrum?

Hanson: I think the first thing you’ll have to come to terms with is wondering why you think you want to be otherwise. We’re clearly built to be two-faced – we’re built to, on one level, sincerely want to and believe that we are following these standard norms – and at the other level, actually evading them whenever it’s in our interest to get away with it. And since we are built that way, you should expect to have a part of yourself that feels like it sincerely wants to follow the norms, and you should expect another part of you that consistently avoids having to do that.

And so, if you observe this part of yourself that wants to be good (according to the norms), that’s what you should expect to see. It’s not evidence that you’re different from everybody else. So a real hard question is: how different do you want to be, actually? How different are your desires to be different? … Overall, you should expect yourself to be roughly as hypocritical as everybody else.

I later recommend compromise:

It would be simply inhuman to actually try to be consistently honest, because we’re so built for hypocrisy on so many levels. But what you can hope for is perhaps a better compromise between the parts of you that want to be honest and the parts of you that don’t. Think more in terms of: you have a limited budget of honesty, and where you should spend it.

Sleep Signaling

We sleep less well when we sleep together:

Our collective weariness is the subject of several new books, some by professionals who study sleep, others by amateurs who are short of it. David K. Randall’s “Dreamland: Adventures in the Strange Science of Sleep” belongs to the latter category. It’s a good book to pick up during a bout of insomnia. …

Research studies consistently find … that adults “sleep better when given their own bed.” One such study monitored couples over a span of several nights. Half of these nights they spent in one bed and the other half in separate rooms. When the subjects woke, they tended to say that they’d slept better when they’d been together. In fact, on average they’d spent thirty minutes more a night in the deeper stages of sleep when they were apart. (more)

In 2001, the National Sleep Foundation reported that 12% of American couples slept apart with that number rising to 23% in 2005. … Couples experience up to 50% more sleep disturbances when sleeping with their spouse. (more)

Why do we choose to sleep together, and claim that we sleep better that way, when in fact we sleep worse? This seems an obvious example of signaling aided by self-deception. It looks bad to your spouse to want to sleep apart. In the recent movie Hope Springs, sleeping apart is seen as a big sign of an unhealthy relation; most of us have internalized this association. So to be able to send the right sincere signal, we deceive ourselves into thinking we sleep better.

Hypocritical Fairness

Arvind Narayanan puzzled over this fact:

Online price discrimination is suspiciously absent in directly observable form, even though covert price discrimination is everywhere. … The differential treatment isn’t made explicit — e.g., by not basing it directly on a customer attribute — and thereby avoiding triggering the perception of unfairness or discrimination. (more)

So he read up on fairness:

I decided to dig deeper into the literature in psychology, marketing, and behavioral economics on the topic of price fairness and understand where this perception comes from. What I found surprised me.

First, the fairness heuristic is quite elaborate and complex. … A particularly impressive and highly cited 2004 paper reviews the literature and proposes an elaborate framework with four different classes of inputs to explain how people decide if pricing is fair or unfair in various situations. …

Sounds like we have a well-honed and sophisticated decision procedure, then? Quite the opposite, actually. The fairness heuristic seems to be rather fragile, even if complex. … More generally, every aspect of our mental price fairness assessment heuristic seems similarly vulnerable to hijacking by tweaking the presentation of the transaction without changing the essence of price discrimination. …

The perception of fairness, then, can be more properly called the illusion of fairness. … Given that the prime impediment to pervasive online price discrimination is a moral principle that is fickle and easily circumventable, one can expect companies to do exactly that. (more)

Of course all of our perceptions are subject to framing to some degree. But Narayanan seems to be saying that fairness perceptions are much more subject to framing than usual. And I agree. But then the key question is: why are fairness perceptions so much more fragile and subject to framing?

A homo hypocritus perspective accounts for this nicely I think. If humans evolved the habit of pretending to follow social norms while covertly coordinating to evade them and use them to social advantage, we should expect the psychology of social norms to be flexibly able to come to whatever conclusions a winning covert coalition desires.

What does 2+2 equal in fairness? The main question we privately ask is, what do we want it to equal?

Morality as though it really mattered

A large share of the public, and even an outright majority of professional philosophers, claim to be ‘moral realists’. Presumably, if this means anything, it means that there are objective rules out there that any being ought to follow, and that doing the ‘right thing’ is about more than just doing what you want.

Whatever surveys say, my impression is that almost nobody acts as though they were actually realists. If you really believed that there were objective rules that we should follow, that would make it crucial to work out what those rules actually were. If you failed to pick the right rules, you could spend your life doing things that were worthless, or maybe even evil. And if those are the rules that everyone necessarily ought to be following, nothing could be worse than failing to follow them. If most acts or consequences are not the best, as seems likely, then the chances of you stumbling on the right ones by chance are very low.

Does this imply that you should spend your entire life studying morality? Not exactly. If you became sufficiently confident about what was good, it would then be more valuable to go out and do that thing, rather than continue studying. On the other hand, it does imply a lot more effort than most people put into this question today. The number of ethicists with a public profile could be counted on one hand. Research on ethics, let alone meta-ethics, is largely ignored by the public and considered of ‘academic interest’, if that. To a realist, nothing could be further from the truth. It is impossible to go about forming other life plans confidently until you have worked out what is morally right!

Simple probing using questions well known to philosophers usually reveals a great deal of apparent inconsistency in people’s positions on moral issues. This has been known for thousands of years, but we are scarcely more consistent now than in the past. If we assume that any of the rules we ought to follow will be consistent with one another, this is a disaster and calls for us to down tools until right and wrong can be clarified. In other cases, popular intuitive positions simply do not make sense.

A moral realist should also be trying to spread their bets to account for ‘moral uncertainty’. Even if you think you have the right moral code, there is always the possibility you are mistaken and in fact a different set of rules is correct. Unless you are extremely confident in the rules you consider most likely, this ought to affect your behaviour. This is easily explained through an example which occurred to me recently concerning the debate over the ‘person-affecting view’ of morality. According to this view, it would only be good to prevent a catastrophe that caused the extinction of humanity because such a catastrophe would affect people alive now, not because it ensures countless future generations never get to live. People who could exist in the future but don’t are not well-defined, and so do not qualify for moral consideration. The case for putting enormous resources into ensuring humanity does not collapse is weaker if future people do not count. But how much weaker? Let’s say the number of (post-)humans we expect to live in the future, in the absence of any collapse, is a modest 1 trillion. The real number is probably much larger. If you thought there were just a 10% chance that people who weren’t alive now did in fact deserve moral consideration, that would still mean collapse prevented the existence of 100 billion future people in ‘expected value’ terms. This still dwarfs the importance of the 7 billion people alive today, and makes the case for focussing on such threats many times more compelling than otherwise. Note that incorporating moral uncertainty is unlikely to make someone stop focussing on collapse risk, because the consequences of being wrong in the other direction aren’t so bad.
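
To make that arithmetic explicit, here is the same back-of-the-envelope calculation in code. Both inputs are the post’s illustrative assumptions, not data.

```python
# Back-of-the-envelope expected-value version of the example above.
# Both inputs are illustrative assumptions, not estimates.
p_count = 0.10        # credence that future people deserve moral consideration
future_people = 1e12  # modest guess at future (post-)humans absent collapse

expected_people_lost = p_count * future_people
print(expected_people_lost)  # -> 1e+11, i.e. 100 billion, vs. ~7e9 alive today
```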

This demonstrates that a moral realist with some doubt they have picked the right rules will want to a) hedge their bets, and b) focus disproportionate attention on plausible rules under which their choices have a bigger potential impact on the desirability of outcomes. This is just the same as uncertainty around matters of fact: we take precautions in case our model of how the world works is wrong, especially against those errors under which our preferred choice could lead to a relative disaster. Despite this being a natural and important consideration for all moral realists, moral uncertainty is only talked about by a handful of moral philosophers.

Uncertainty about moral issues is scarcely a fringe concern, because the quality of available evidence is so poor. Most moral reasoning, when we dig down, relies on nothing more than the competing intuitions of different people. The vast majority of people I know think the moral intuitions of the billions of people who lived in the past on matters such as racism, gender, sex, torture, slavery, the divine right of monarchs, animal cruelty and so on, were totally wrong. Furthermore, intuitive disagreement on moral questions remains vast today. Without a compelling reason to think our intuitions are better than those of others – and I don’t see one – the chances that we have all the right intuitions are frighteningly low.

I would go further and say there is no obvious reason for our moral intuitions to be tethered to what is really right and wrong full stop. It is almost certain that humans came about through the process of evolution. Evolution will give us the ability to sense the physical world in order to be able to respond to it, survive and reproduce. It will also give us good intuitions about mathematics, insofar as that helps us make predictions about the world around us, survive and reproduce. But why should natural selection provide us with instinctive knowledge of objective moral rules? There is no necessary reason for such knowledge to help a creature survive – indeed, most popular moral theories are likely to do the opposite. For this reason our intuitions, even where they agree, are probably uninformative.

I think this shows that most people who profess moral realism are in fact not realists. This is yet another obvious example of human hypocrisy. Professing objective morality is instrumentally useful for individuals and societies, and our minds can be easily shielded from what this implies. For anyone who actually does want to follow through on a realist position, I can see two options:

  • Hit the books and put more work into doing the right thing.
  • Concede that you have almost no chance of working out what is right and wrong, and could not gain much by trying. Moral skepticism would get you off the hook.

Personally, I would like to think I take doing the right thing seriously, so I am willing to offer a monetary prize of £300 to anyone who can either a) change my mind about whether I ought to place a significant probability on moral realism being correct, or b) help me see that I seriously misunderstand what I subjectively value. Such insights would be a bargain!

Breeding happier livestock: no futuristic tech required

I talk to a lot of people who are enthusiastic about the possibility that advanced technologies will provide more humane sources of meat. Some have focused on in vitro meat, a technology which investor Peter Thiel has backed. Others worry that in vitro meat would reduce the animal population, and hope to use futuristic genetic engineering to produce animals that feel more pleasure and less pain.

But would it really take radical new technologies to produce happy livestock? I suspect that some of these enthusiasts have been distracted by a shiny Far sci-fi solution of genetic engineering, to the point of missing the presence of a powerful, long-used mundane agricultural version: animal breeding.

Modern animal breeding is able to shape almost any quantitative trait with significant heritable variation in a population. One carefully measures the trait in different animals, and selects sperm for the next generation on that basis. So far this has not been done to reduce animals’ capacity for pain, or to increase their capacity for pleasure, but it has been applied to great effect elsewhere.

One could test varied behavioral measures of fear response, and physiological measures like cortisol levels, and select for them. As long as the measurements in aggregate tracked one’s conception of animal welfare closely enough, breeders could easily generate immense increases in livestock welfare, many standard deviations, initially at low marginal cost in other traits.
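
For a rough sense of scale, here is a toy calculation using the standard breeder’s equation from quantitative genetics, R = h²S: per-generation response equals narrow-sense heritability times the selection differential. The parameter values below are assumptions chosen for illustration, not estimates for any actual welfare trait.

```python
# Toy sketch of cumulative selection response via the breeder's equation
# R = h^2 * S, where R is the per-generation gain, h2 the trait's
# narrow-sense heritability, and s the selection differential.
# Simplification: assumes heritability and trait variance stay constant
# across generations, which real breeding programs would not.

def cumulative_response(h2: float, s: float, generations: int) -> float:
    """Total expected gain, in phenotypic standard deviations."""
    return h2 * s * generations

# Assumed values: heritability 0.3, parents selected ~1 SD above the
# population mean, 20 generations of selection.
print(cumulative_response(h2=0.3, s=1.0, generations=20))  # -> 6.0 SD
```

Even with these modest assumed values, the cumulative gain runs to several standard deviations, which is the scale of change claimed above.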

Just how powerful are ordinary animal breeding techniques? Consider cattle:

In 1942, when my father was born, the average dairy cow produced less than 5,000 pounds of milk in its lifetime. Now, the average cow produces over 21,000 pounds of milk. At the same time, the number of dairy cows has decreased from a high of 25 million around the end of World War II to fewer than nine million today. This is an indisputable environmental win as fewer cows create less methane, a potent greenhouse gas, and require less land.

Wired has an impressive chart of turkey weight over time.
Why Admire Brags?

A few days ago Rob Wiblin complained about our admiration of anonymous charity:

Even those who are open about their good deeds are likely to hold a special admiration for anyone they discover has been secretly helping others for years. … This norm exists because when you go on about your altruism, … perhaps you made that donation just to be able to show off your virtue and wealth to everyone else. … [But] a culture of ‘private altruism’ has some seriously perverse effects. … We are less inclined to talk about … which causes are most valuable. … Altruistic acts … will tend to be crowded out by alternatives that are unavoidably conspicuous – impressive cars, holidays, degrees and so forth. … Someone who really cared about helping others … would want to bring up the fact whenever they could get away with it, in order to draw attention to the merits of their cause and prompt others to join in. (more)

Charity has an overt and a covert purpose. The overt purpose is to help those who can’t trade to get the help they need. To understand the covert purpose, let’s review some basics about showing that we care.

Your associates care about how helpful you are to them. Sometimes they can see very clearly how helpful you are. For example, they might see you hold a door open, or answer a direct question. But most of the time their vision is obscured. So they have to look for clues in what they can see, to infer things unseen. For example, if they see you helping a similar associate in a situation where that associate can’t see the help, they might guess that you help them in similar situations where they can’t see. Conversely if they see you make fun of someone not in the room, they might wonder if you do the same to them when they are absent.

If they see you helping someone in need who can’t much help you back, they might guess that you would similarly help them if they were in similar need, but couldn’t help you back. And if they see you helping someone in a situation where you might reasonably guess that no one could see your help, they might think you would help them in a situation where you’d guess no one could see. There is thus a close functional association, and complementarity, between charity, helping people who can’t help you back much, and anonymity, helping when the recipient and others can’t see the help.

Given this complementarity between charity and anonymity, for the purpose of signaling, it makes sense that people recommend giving anonymously, and admire folks who do so. Sure, that may end up helping distant others in need less, but we all know that we don’t care much about that.

Imagine that after one person told another “I love your new dress, it makes you look thin,” you shouted “Liar. I know you don’t like dresses like that, and anyone can see this dress doesn’t make her look thin.” Do you think either of them would appreciate your comment? They probably both know the speaker exaggerates, but still appreciate the exchange as a signal of friendship and loyalty. You are rudely insulting them both, because they did something they admire.

You’ll seem similarly tone deaf if you point out that charity givers are not giving in ways to maximally benefit recipients. The giver and the audience both admire the gift as a signal of loyalty and caring, which they see as good things, and in addition a third party benefits from the process. Yet there you are complaining that they aren’t doing even more. They can quite reasonably see you as rude, hostile, and ungrateful. Who made you the spokesperson for the recipients of their charity? Don’t you see how white lies smooth the social fabric?

Significance and motivation

Over at philosophical disquisitions, John Danaher is discussing Aaron Smuts’ response to Bernard Williams’ argument that immortality would be tedious. Smuts’ thesis, in Danaher’s words, is a familiar one:

Immortality would lead to a general motivational collapse because it would sap all our decisions of significance.

This is interestingly at odds with my observations, which suggest that people are much more motivated to do things that seem unimportant, and have to constantly press themselves to do important things once in a while. Most people have arbitrary energy for reading unimportant online articles, playing computer games, and talking aimlessly. Important articles, serious decisions, and momentous conversations get put off.

Unsurprisingly, then, people also seem to take more joy from apparently long-run insignificant events. Actually I thought this was the whole point of such events. For instance people seem to quite like cuddling and lazing in the sun and eating and bathing and watching movies. If one had any capacity to get bored of these things, I predict it would happen within the first century. While significant events also bring joy, they seem to involve a lot more drudgery in the preceding build-up.

So it seems to me that living forever could only take the pressure off and make people more motivated and happy. Except inasmuch as the argument is faulty in other ways, e.g. impending death is not the only time constraint on activities.

Have I missed something?
