Tag Archives: Hypocrisy

Bias Is A Red Queen Game

It takes all the running you can do, to keep in the same place. The Red Queen.

In my last post I said that as “you must allocate a very limited budget of rationality”, we “must choose where to focus our efforts to attend carefully to avoiding possible biases.” Some objected, seeing the task of overcoming bias as like lifting weights to build muscles. Scott Alexander compared it to developing habits of good posture and lucid dreaming:

If I can train myself to use proper aikido styles of movement even when I’m doing something stupid like opening a door, my body will become so used to them that they will be the style I default to when my mind is otherwise occupied. … Lucid dreamers offer some techniques for realizing you’re in a dream, and suggest you practice them even when you are awake, especially when you are awake. The goal is to make them so natural that you could (and literally will) do them in your sleep. (more)

One might also compare with habits like brushing your teeth regularly, or checking that your fly isn’t unzipped. There are indeed many possible good habits, and some related to rationality. And I encourage you all to develop good habits.

What I object to is letting yourself think that you have sufficiently overcome bias by collecting a few good mental habits. My reason: the task of overcoming bias is a Red Queen game, i.e., one against a smart, capable, and determined rival, not a simple dumb obstacle.

There are few smart determined enemies trying to dirty your teeth, pull your fly down, mess your posture, weaken your muscles, or keep you unaware that you are dreaming. Nature sometimes happens to block your way in such cases, but because it isn’t trying hard to do so, it takes only modest effort to overcome such obstacles. And as these problems are relatively simple and easy, an effective strategy to deal with them doesn’t have to take much context into account.

For a contrast, consider the example of trying to invest to beat the stock market. In that case, it isn’t enough to just be reasonably smart and attentive, and avoid simple biases like not deciding when very emotional. When you speculate in stocks, you are betting against other speculators, and so can only expect to win if you are better than others. If you can’t reasonably expect to have better info and analysis than the average person on the other side of your trades, you shouldn’t bet at all, but instead just take the average stock return, by investing in index funds.

Trying to beat the stock market is a Red Queen game against a smart determined opponent who is quite plausibly more capable than you. Other examples of Red Queen games are poker, and most competitive contests like trying to win at sports, music, etc. The more competitive a contest, the more energy and attention you have to put in to have a chance at winning, and the more you have to expect to specialize to have a decent chance. You can’t just develop good general athletic habits to win at all sports, you have to pick the focus sport where you are going to try to win. And for all the non-focus sports, you might play them for fun sometimes, but you shouldn’t expect to win against the best.

Overcoming bias is also a Red Queen game. Your mind was built to be hypocritical, with more conscious parts of your mind sincerely believing that they are unbiased, and other less conscious parts systematically distorting those beliefs, in order to achieve the many functional benefits of hypocrisy. This capacity for hypocrisy evolved in the context of conscious minds being aware of bias in others, suspecting it in themselves, and often sincerely trying to overcome such bias. Unconscious minds evolved many effective strategies to thwart such attempts, and they usually handily win such conflicts.

Given this legacy, it is hard to see how your particular conscious mind has much of a chance at all. So if you are going to create a fighting chance, you will need to try very hard. And this trying hard should include focusing a lot, so you can realize gains from specialization. Just as you’d need to pay close attention and focus well to have much of a chance at beating the hedge funds and well-informed expert speculators who you compete with in stock markets.

In stock markets, the reference point for “good enough” is set by the option to just take the average via an index fund. If using your own judgement will do worse than an index fund, you might as well just take that fund. In overcoming bias, a reference point is set by the option to just accept the estimates of others who are also trying to overcome bias, but who focus on that particular topic.
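The index-fund baseline can be made concrete with a toy Monte Carlo (all numbers here are made up for illustration): a speculator with no informational edge earns the same expected gross return as the index, so any trading cost is pure drag.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_stocks = 5000, 50
# Hypothetical annual returns: 7% mean, 18% volatility, per stock.
returns = rng.normal(0.07, 0.18, (n_years, n_stocks))

# Index fund: hold every stock, pay no trading costs.
index = returns.mean(axis=1)

# No-edge speculator: picks one stock per year on pure noise, pays 2%/year in costs.
cost = 0.02
picks = rng.integers(0, n_stocks, size=n_years)
picker = returns[np.arange(n_years), picks] - cost

print(f"index {index.mean():.3f}  picker {picker.mean():.3f}")
```

Since the picker's choice carries no information, their expected gross return equals the index's; only a real informational advantage over the counterparties on the other side of each trade can overcome the cost term.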

Yes you might do better than you otherwise would have if you use a few good habits of rationality. But doing a bit better in a Red Queen game is like bringing a knife to a gunfight. If those good habits make you think “I’m a rationalist,” you might think too highly of yourself, and be reluctant to just take the simple option of relying on the estimates of others who try to overcome their biases and focus on those particular topics. After all, refusing to defer to others is one of our most common biases.

Remember that the processes inside you that bias your beliefs are many, varied, subtle, and complex. They express themselves in different ways on different topics. It is far from sufficient to learn a few simple generic tricks that avoid a few simple symptoms of bias. Your opponent is putting a lot more work into it than that, and you will need to do so as well if you are to have much of a chance. When you play a Red Queen game, go hard or go home.

How Deep The Rabbit Hole?

You take the red pill – you stay in Wonderland and I show you how deep the rabbit-hole goes. The Matrix

A new article in Evolutionary Psychology by Andrew Gersick and Robert Kurzban details the many ways that one can credibly show good features via covert signals. Covert signals are more subtle and complicated, and so signal intelligence and social savvy. By the details of your covert signals, you can show your awareness of details of social situations, of the risks and attitudes of the people to whom you signal, of the size and chances of the punishments you may suffer if your covert signals are uncovered, and of how much you are willing to risk such punishment:

Flirting is a class of courtship signaling that conveys the signaler’s intentions and desirability to the intended receiver while minimizing the costs that would accompany an overt courtship attempt. … Individuals who are courting [in this way] should vary the intensity of their signals to suit the level of risk attached to the particular social configuration, and receivers may assess this flexible matching of signal to context as an indicator of the signaler’s broader behavioral flexibility and social intelligence. …

Simply producing or interpreting implicature is challenging cognitive work. Moreover, the complexity—and consequent showiness—of implicature is clear in its essential structure. Whereas direct speech merely reports informational content, implicature manipulates meaning by playing that content off of the implicit knowledge shared between speaker and audience.

General intelligence is not the only quality one can demonstrate through indirect speech. Signaling subtly in appropriate situations can convey the signaler’s social awareness and adeptness, his cognizance of the potential costs attached to the sort of transaction he is proposing, his ability to skillfully reduce those costs, and, therefore, his worthiness as a partner. A discreetly offered bribe not only opens a negotiation but shows that the aspiring briber knows how to avoid attracting attention. By the same token, the suitor who subtly approaches a woman with a jealous boyfriend does more than simply protect himself from physical assault. He shows his sensitivity to his target’s circumstances. … A slightly more transparent sexual signal might be optimal if the suitor wants to convey not only that he has the social intelligence to be moderately subtle, but also the implicit physical confidence to take on the risk of a fight with the boyfriend. …

Courtship signals that are marked by … poor quality … [include] the highly overt, socially inappropriate signaling that we call boorishness (e.g., making crude advances to a friend’s partner). Another sort of bad match … is signaling weakly when the risks attached to a sexual advance are quite low, as in the shy mumbling of a high-schooler who knows his current companion is interested in him but still can’t manage to make a move. … A lowly waiter might feel empowered to flirt more openly with a rich customer’s wife if he were younger, taller and better looking than the husband. Calibrating one’s signal-intensity to the right pitch of flirtatiousness may require a blend of social awareness and behavioral flexibility. (more)

Note the reason for covertness here is not peculiar to mating – there are many other situations where a wider audience may object to or punish one for cooperating with particular others in particular ways. The more partially-enforced social norms that a society has, the more reasons its members have to develop ways to covertly coordinate to evade those norms.

Note also that while it so happens that we are often consciously aware that we are flirting, or that others are flirting with us, this need not always apply. We can often more credibly and sincerely deny our covert signals, and prevent their detection, when we are not consciously aware of such signals. Yes, doing such things unconsciously may cost us some in how carefully we can adapt those signals to the details of particular situations, if conscious minds are useful in such adaptation. Even so, being unconscious of covert signals may often be a net gain.

And here is where madness lies — where the rabbit hole you’ve fallen down opens into a vast black hole. Because once you realize that your unconscious mind might be doing a lot of covert talking with the unconscious minds of others, you have to realize that you may not actually know that much about what you are doing much of the time, or why you are doing it. Your conscious reasoning about what you should do, based on what you know about your conscious motivations and acts, could be quite flawed.

So the more that your conscious reasoning actually influences your actions, instead of being after the fact rationalizations, the more important it becomes to get some handle on this. Just how often are we how wrong about what we are doing and why? How could we find this out, and do we really want to?

Info As Excuse

When we try to justify our actions, we prefer to do so by citing a common general good that results from our actions. But of course we often have other stronger motives for our actions, motives that we are less eager to highlight.

One big category of examples here are info justifications. When we endorse a policy, we often point out how it may tend to encourage info to be generated, spread, or aggregated. After all, who could be against more info? But the details of the policies we endorse often belie that appearance, as we pick details that reduce and discourage info. Because we have other agendas.

For example:

  1. We say free speech is to elicit more and better info, but for that it should instead be free hearing.
  2. We say meetings are to gain info, but they are more to show who is in control, and who is allied with whom.
  3. We say we hire college grads because of all they’ve learned, but they don’t learn much there.
  4. We say court proceedings are to get info to decide guilt, but then rules of evidence cut out info.
  5. We say managers are to collect info to make key decisions, but they are more motivators and politicians.
  6. We say diverse groups are good as they get diverse info, but most kinds don’t, they just make distance.
  7. We say voting is to get info on better policies, but the better informed don’t get more votes.
  8. We say voting is to get info on better policies, but we don’t use random juries of voters, who would get more info.
  9. We say we travel to learn, but we can usually learn lots cheaper at home.
  10. We say we read news to gain useful info, but very little of it has much use to us.

Have more good examples?

Socializers Clump

Imagine that this weekend you and others will volunteer time to help tend the grounds at some large site – you’ll trim bushes, pull weeds, plant bulbs, etc. You might have two reasons for doing this. First, you might care about the cause of the site. The site might hold an orphanage, or a historical building. Second, you might want to socialize with others going to the same event, to reinforce old connections and to make new ones.

Imagine that instead of being assigned to work in particular areas, each person was free to choose where on the site to work. These different motives for being there are likely to reveal themselves in where people spend their time grounds-tending. The more that someone wants to socialize, the more they will work near where others are working, so that they can chat while they work, and while taking breaks from work. Socializing workers will tend to clump together.

On the other hand, the more someone cares about the cause itself, the more they will look for places that others have neglected, so that their efforts can create maximal value. These will tend to be places away from where socially-motivated workers are clumped. Volunteers who want more to socialize will tend more to clump, while volunteers who want more to help will tend more to spread out.

This same pattern should also apply to conversation topics. If your main reason for talking is to socialize, you’ll want to talk about whatever everyone else is talking about. Like say the missing Malaysia Airlines plane. But if instead your purpose is to gain and spread useful insight, so that we can all understand more about things that matter, you’ll want to look for relatively neglected topics. You’ll seek topics that are important and yet little discussed, where more discussion seems likely to result in progress, and where you and your fellow discussants have a comparative advantage of expertise.

You can use this clue to help infer the conversation motives of the people you talk with, and of yourself. I expect you’ll find that almost everyone mainly cares more about talking to socialize, relative to gaining insight.

Why Broken Evals?

This review article published 36 years ago shows that it was well known back then that teacher evaluations by college students are predictably influenced by time of day, class size, course level, course electivity, and more. Thus one could get more reliable teacher evaluations by building a statistical model to predict student evaluations using these features plus who taught what, and then using each teacher coefficient as that teacher’s evaluation. Yet colleges almost never do this. Why?
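The correction described above is an ordinary fixed-effects regression. Here is a minimal sketch on synthetic data (all coefficients, confounds, and class counts are hypothetical): fit evaluations on teacher dummies plus the known confounds, and read each teacher's quality off their dummy coefficient rather than their raw average.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_classes = 10, 40
true_quality = rng.normal(0.0, 1.0, n_teachers)

rows, y = [], []
for t in range(n_teachers):
    for _ in range(n_classes):
        # Confound: higher-numbered teachers systematically get bigger classes.
        size = 20 + 10 * t + rng.normal(0, 5)
        morning = rng.integers(0, 2)
        score = true_quality[t] - 0.05 * size + 0.3 * morning + rng.normal(0, 0.5)
        dummies = np.zeros(n_teachers)
        dummies[t] = 1.0
        rows.append(np.concatenate([dummies, [size, morning]]))
        y.append(score)
X, y = np.array(rows), np.array(y)

# Least-squares fit: the teacher-dummy coefficients are the adjusted evaluations.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[:n_teachers]
raw = np.array([y[X[:, t] == 1.0].mean() for t in range(n_teachers)])

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"raw vs true: {corr(raw, true_quality):.2f}  "
      f"adjusted vs true: {corr(adjusted, true_quality):.2f}")
```

Because class size here is correlated with which teacher you are, raw mean scores mix teacher quality with scheduling luck; the regression coefficients strip the confounds back out.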

Actually, most orgs also use known-to-be broken worker evaluation systems:

There is a lot of systematic evidence on the connections between job performance and career outcomes. … The data shows that performance doesn’t matter that much for what happens to most people in most organizations. That includes the effect of your accomplishments on those ubiquitous performance evaluations and even on your job tenure and promotion prospects. …

[For example,] supervisors who were actively involved in hiring people whom they favored rated those subordinates more highly on performance appraisals than they did those employees they inherited or the ones they did not initially support. In fact, whether or not the supervisor had been actively engaged in the selection process had an effect on people’s performance evaluations even when objective measures of job performance were statistically controlled. (more)

So why don’t firms correct employee evaluations for this who-hired-you bias? And it isn’t just this one bias; there are lots:

Extensive research on promotions in organizations, with advancement measured either by changes in position, increases in salary, or both, also reveals the modest contribution of job performance in accounting for the variation in what happens to people. In 1980, economists … observed that salaries in companies were more strongly related to age and organizational tenure than they were to job performance. Ensuing research has confirmed and extended their findings, both in the United States and elsewhere. … One meta-analysis of chief executive compensation found that firm size accounted for more than 40 percent of the variation in pay while performance accounted for less than 5 percent. (more)

An obvious explanation here is that coalition politics dominates worker evaluations. Coalitions like being able to ignore job performance to favor their allies and punish their rivals. Winning coalitions tend to be those benefiting from the current broken rules. But, you might ask, why don’t people at the top put a stop to this? Doesn’t allowing politics such free rein hurt overall org performance? This story hints at an answer:

A few years ago, Bob, the CEO of a private, venture-backed human capital software company, invited me to serve on the board of directors as the company began a transition to a new product platform and sought to increase its growth rate and profitability. Not long after I joined the board, in the midst of an upgrading in management talent, the CEO hired a new chief financial officer, Chris. Chris was an ambitious, hardworking, articulate individual who had big plans for the company— and himself. Chris asked Bob to make him chief operating officer. Bob agreed. Chris asked to join the board of directors. Bob agreed. I could see what was coming next, so I called Bob and said, “Chris is after your job.” Bob’s reply was that he was only interested in what was best for the company, would not stoop to playing politics, and thought that the board had seen his level of competence and integrity and would do the right thing. You can guess how this story ended— Bob’s gone, Chris is the CEO. What was interesting was the conference call in which the board discussed the moves. Although there was much agreement that Chris’s behavior had been inappropriate and harmful to the company, there was little support for Bob. If he was not going to put up a fight, no one was going to pick up the cudgel on his behalf. (more)

People at the top play coalition politics as hard as anyone. Rules to limit politics at lower levels can hurt lower level allies of top people, and can set expectations that limit politics at higher levels. When mob bosses who are best at violence rise to the top of a competition for boss-hood, why should they and their allies favor non-violent criteria for how to pick bosses?

Why Info Push Dominates

Some phenomena to ponder:

  1. Decades ago I gave talks about how the coming world wide web (which we then called “hypertext publishing”) could help people find more info. Academics would actually reply “I don’t need any info tools; my associates will personally tell me about any research worth knowing about.”
  2. Many said the internet would bring a revolution of info pull, where people pay to get the specific info they want, to supplant the info push of ads, where folks pay to get their messages heard. But even Google gets most revenue from info pushers, and our celebrated social media mainly push info too.
  3. Blog conversations put a huge premium on arguments that appear quickly after other arguments. Arguments that appear by themselves a few weeks later mostly might as well not exist, for all they’ll influence future expressed opinions.
  4. When people hear negative rumors about others, they usually believe them, and rarely ask the accused directly for their side of the story. This makes it easy to slander folks who aren’t well connected enough to have friends who will tell them who said what about them.
  5. We usually don’t seem to correct well for “independent” confirming clues that actually come from the same source a few steps back. We also tolerate higher status folks dominating meetings and other communication channels, thereby counting their opinions more. So ad campaigns often have time-correlated channel-redundant bursts with high status associations.
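The double-counting failure in point 5 is easy to state in Bayesian terms (a toy calculation, with made-up numbers): if three "independent" reports all trace back to one original observation, they carry one observation's worth of evidence, not three.

```python
def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = 1.0  # 50/50 prior on the rumor being true
lr = 4.0          # likelihood ratio of one genuinely informative report

# Naive: treat three repetitions as three independent reports.
naive = odds_to_prob(prior_odds * lr ** 3)

# Correct: all three trace back to one source, so only one update is licensed.
correct = odds_to_prob(prior_odds * lr ** 1)

print(f"naive {naive:.3f}  correct {correct:.3f}")
```

The naive update lands near 98% confidence while the correct one stops at 80% — the gap is pure redundancy mistaken for corroboration.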

Overall, we tend to wait for others to push info onto us, rather than taking the initiative to pull info in, and we tend to gullibly believe such pushed clues, especially when they come from high status folks, come redundantly, and come correlated in time.

A simple explanation of all this is that our mental habits were designed to get us to accept the opinions of socially well-connected folks. Such opinions may be more likely to be true, but even if not they are more likely to be socially convenient. Pushed info tends to come with the meta clues of who said it when and via what channel. In contrast, pulled info tends to drop many such meta clues, making it harder to covertly adopt the opinions of the well-connected.

Rejection Via Advice

We get status in part from the status of our associates, which is a credible signal of how others see us. Because of this, we prefer to associate with high status folks. But it looks bad to be overt about this. So we try to hide this motive, and to pretend that other motives dominate our choices of associates.

This would be easier to do if status were very stable. Then we could take our time setting up plausible excuses for wanting to associate with particular high status folks, and for rejecting association bids by particular low status folks. But in fact status fluctuates, which can force us to act quickly. We want to quickly associate more with folks who rise in status, and to quickly associate less with those who fall in status. But the coincidence in time between their status change and our association change may make our status motives obvious.

Since association seems a good thing in general, trying to associate with anyone seems a “nice” act, requiring fewer excuses. In contrast, weakening an existing association seems less nice. So we mainly need good excuses for pushing away those whose status has recently fallen. Such opportunistic rejection, just when our associates most need us, seems especially wrong and mean. So how do we manage it?

One robust strategy is to offer random specific advice. You acknowledge their problems, express sympathy, and then take extra time to “help” them by offering random specific advice about how to prevent or reverse their status fall. Especially advice that will sound good if quoted to others, but is hard for them to actually follow, and is unlikely to be the same as what other associates advise.

If different associates offer different advice, then this person with fallen status simply must fail to follow most of that advice. Which then gives all those folks whose advice was not followed an excuse to distance themselves from this failure. And those whose advice was followed, well at least they get the status mark of power – a credible claim that they have influence over others. Either way, the falling status person loses even more status.

Unless of course the advice followed is actually useful. But what are the chances of that?

Added 27Dec: A similar strategy would be useful if your status were to rise, and you wanted to drop associates in order to make room for more higher status associates.

Wyden Puff Piece Errors

In the latest New Yorker, Ryan Lizza writes on “State of Deception: Why won’t the President rein in the intelligence community?” Which would be an interesting topic. Alas Lizza says little about it. Instead he summarizes the history of NSA spying on US citizens, supported via misleading statements and tortured legal interpretations, and talks the most about one Senator Ron Wyden’s heroic fight against the NSA.

Even though Wyden hasn’t actually succeeded at much. Lizza tells us that Wyden attached sunset provisions to the Patriot Act (which he supported), and asked the question at a Senate hearing where the NSA head’s answer was later shown to be misleading. Lizza speculates that Wyden’s many secret memos “repeatedly challenging the NSA’s contention that [a particular] program was effective” caused the NSA to drop that program. Oh and Wyden voted against some bills that passed, introduced bills that didn’t pass, and argued with Obama.

Here is the concrete Wyden accomplishment for which Lizza gives the most detail:

Three months later, the Defense Department started a new program with the Orwellian name Total Information Awareness. T.I.A. was based inside the Pentagon’s Information Awareness Office, which was headed by Admiral John Poindexter. In the nineteen-eighties, Poindexter had been convicted, and then acquitted, of perjury for his role in the Iran-Contra scandal. He wanted to create a system that could mine a seemingly infinite number of government and private-sector databases in order to detect suspicious activity and preëmpt attacks. The T.I.A. system was intended to collect information about the faces, fingerprints, irises, and even the gait of suspicious people. In 2002 and 2003, Wyden attacked the program as a major affront to privacy rights and urged that it be shut down.

In the summer of 2003, while Congress debated a crucial vote on the future of the plan, Wyden instructed an intern to sift through the Pentagon’s documents about T.I.A. The intern discovered that one of the program’s ideas was to create a futures market in which anonymous users could place bets on events such as assassinations and terrorist attacks, and get paid on the basis of whether the events occurred. Wyden called Byron Dorgan, a Democratic senator from North Dakota, who was also working to kill the program. “Byron, we’ve got what we need to win this,” he told him. “You and I should make this public.” Twenty-four hours after they exposed the futures-market idea at a press conference, Total Information Awareness was dead. Poindexter soon resigned.

It was Wyden’s first real victory on the Intelligence Committee. (more)

That “futures market” program mentioned was called the Policy Analysis Market (PAM). As I was a chief architect, I happen to know that this discussion is quite misleading:

  1. TIA was a DARPA research project to develop methods for integrating masses of info; it wasn’t an actual program to handle such info masses.
  2. I’ve been told by several sources that TIA research didn’t stop, it just moved elsewhere. PAM, in contrast, did stop.
  3. PAM was not part of TIA; the only relation is that both were among the score of research programs under Poindexter in the DARPA management hierarchy.
  4. Though Wyden called it “Terrorism Futures,” PAM was mainly about forecasting geopolitical instability in the MidEast. The basis for the claim that it was about terrorism was a single concept-sample screen, on a background page of the website, which included a small miscellaneous section listing the events “Arafat assassinated” and “North Korea missile strike.”

All those errors in just two paragraphs of a 12,500 word article. Makes me wonder how many more errors are in the rest.

It is hard to believe that Lizza’s article didn’t get a lot of input from Wyden. So Wyden is likely responsible for most of these errors. Thus to fight the NSA’s spying supported by lying, Wyden eagerly lied about an unrelated research program, in order to kill a research program with a symbolic tangential relation to NSA spying. Which wasn’t actually killed. Seems a bit underwhelming as a reason to make Wyden the main actor in a story on NSA spying. I see better candidates.

Graeme Wood on Futarchy

At the end of his article on the deaths of Intrade and its founder John Delaney, Graeme Wood considers futarchy:

It’s perhaps no great surprise that we haven’t embraced Hanson’s “futarchy.” Our current political system resists dramatic change, and has resisted it for 237 years. More traditional modes of prediction have proved astonishingly bad, yet they continue to run our economic and political worlds, often straight into the ground. Bubbles do occur, and we can all point to examples of markets getting blindsided. But if prediction markets are on balance more accurate and unbiased, they should still be an attractive policy tool, rather than a discarded idea tainted with the odor of unseemliness. As Hanson asks, “Who wouldn’t want a more accurate source?”

Maybe most people. What motivates us to vote, opine, and prognosticate is often not the desire for efficacy or accuracy in worldly affairs—the things that prediction markets deliver—but instead the desire to send signals to each other about who we are. Humans remain intensely tribal. We choose groups to associate with, and we try hard to show everybody which groups we belong to. We don’t join the Tea Party because we have exhaustively studied and rejected monetarism, and we don’t pay extra for organic food because we have made a careful cost-benefit analysis based on research about its relative safety. We do these things because doing so says something that we want to convey to others. Nor does the accuracy of our favorite talking heads matter that much to us. More than we like accuracy, we like listening to talkers on our side, and identifying them as being on our team—the right team.

“We continue to have consistent results and evidence that markets are accurate,” Hanson says. “If the question is, ‘Do these things predict well?,’ we have an answer: They do. But that story has to be put up against the idea that people never really wanted more accurate sources.”

On this theory, the techno-libertarian enthusiasts got the technology right, and the humanity wrong. Whenever John Delaney showed up on CNBC, hawking his Intrade numbers and describing them as the most accurate and impartial around, he was also selling a future that people fundamentally weren’t interested in buying. (more)

I don’t much disagree — I raised these issues with Wood when he interviewed me. As usual, our hopes for idealistic outcomes mostly depend on finding ways to shame people into actually supporting what they pretend to support, by making the difference too obvious to ignore.

More specifically, I hope prediction markets within firms may someday gain a status like cost accounting today. In a world where no one else did cost accounting, proposing that your firm do it would basically suggest that someone was stealing there. Which would look bad. But in a world where everyone else does cost accounting, suggesting that your firm not do it would suggest that you want to steal from it. Which also looks bad.

Similarly, in a world where few other firms use prediction markets, suggesting that your firm use them on your project suggests that your project has an unusual problem in getting people to tell the truth about it via the usual channels. Which looks bad. But in a world where most firms use prediction markets on most projects, suggesting that your project not use prediction markets would suggest you want to hide something. That is, you don’t want a market to predict if your project will make its deadline because you don’t want others to see that it won’t make the deadline. Which would look bad.

Once prediction markets were a standard accepted practice within firms, it would be much easier to convince people to use them in government as well.

The Coalition Politics Hypothesis

Game theory lets us analyze precise models of social situations. While each model leaves out much that is important, the ability to see how an entire set of payoffs, info, and acts work together can give powerful insights into social behavior. But it does matter a lot which games we think apply best to which real situations.

Today the game most often used as a metaphor for general social instincts is the public goods game, where individuals contribute personal efforts to benefit everyone in a group. This is seen as a variation on the prisoner’s dilemma. With this metaphor in mind, people see most social instincts as there to detect and reward contributions, and to punish free-riders. Many social activities that on the surface appear to have other purposes are said to be really about this. Here, “pro-social” is good for the group, while “anti-social” is bad. Institutions or policies that undercut traditional social instincts are suspect.
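To see the prisoner's-dilemma structure of this metaphor concretely, here is a minimal sketch of a linear public goods game. The parameters are hypothetical illustrations (4 players, endowment 10, pot multiplier 1.6): since each unit contributed returns only 1.6/4 = 0.4 to the contributor, free-riding is individually dominant even though the group does best when everyone contributes.

```python
def payoff(contribs, i, endowment=10, multiplier=1.6):
    """Player i's payoff in a linear public goods game:
    keep whatever you don't contribute, plus an equal share
    of the multiplied common pot."""
    pot = multiplier * sum(contribs)
    return endowment - contribs[i] + pot / len(contribs)

everyone = (10, 10, 10, 10)   # all contribute fully
free_ride = (0, 10, 10, 10)   # player 0 defects

# Group optimum: all contributing beats nobody contributing...
# payoff(everyone, 0) == 16.0 vs payoff((0,0,0,0), 0) == 10.0
# ...but defection is individually better: payoff(free_ride, 0) == 22.0
```

Hence the instinct, on this metaphor, to monitor contributions and punish free-riders: without such enforcement the dominant strategy unravels the group benefit.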

While this metaphor does give insight, the game I see as a better metaphor for general social instincts is this:

Divide The Dollar Game … There are three players … 1, 2, 3. The players wish to divide 300 units of money among themselves. Each player can propose a payoff such that no player’s payoff is negative and the sum of all the payoffs does not exceed 300. … Players get 0 unless there is some pair of players {1, 2}, {2, 3}, or {1, 3} who propose the same allocation, in which case they get this allocation. …

It turns out that in any equilibrium of this game, there is always at least one pair of players who would both do strictly better by jointly agreeing to change their strategies together. …

Suppose the negotiated agreements are tentative and non-binding. Thus a player who negotiates in a sequential manner in various coalitions can nullify his earlier agreements and reach a different agreement with a coalition that negotiates later. Here the order in which negotiations are made and nullified will have a bearing on the final outcome. … It is clear that coalitions that get to negotiate later hold the advantage in this scheme. (more)

That is, most social behavior is about shifting coalitions that change how group benefits are divided, and social instincts are mostly about seeing what coalitions to join and how to get others to want you in their coalitions. Such “social” behavior isn’t good for the group as a whole, though it can be good for your coalition. Because coalition politics can be expensive, institutions or policies that undercut it can be good overall.

In this view of social behavior, we expect to see great efforts to infer each person’s threat point – how much they and a coalition would lose if they leave that coalition. We also expect even greater efforts to infer each person’s loyalty – what coalitions they are likely to prefer and help. And we expect great efforts to signal desirable loyalties and threat points. When shifting coalitions are important, we expect lots of efforts to go into seeing and changing the focal points people use to coordinate which new coalitions form, and to seeing who will be pivotal in those changes.

At a meta level, people would also try to infer what other people think about these things. That is, folks will want to know what others think about various loyalties, threat points, and focal points, and in response those others will try to signal their opinions on such things. In other words, people will want to know how well others can track and influence changing fashions on these topics. At a higher meta level, people will want to know what others think that still others think about these things, i.e., they’ll want to know who is seen to be good at tracking fashion. And so on up the meta hierarchy.

When people talk, we expect them to say some things directly and clearly to all, to influence overall focal points. But we expect many other messages to be targeted to particular audiences, like “Let’s dump that guy from our coalition.” When such targeted messages might be overheard, or quoted to others, we expect talking to be indirect, using code words that obscure meanings, or at least give plausible deniability.

A social world dominated by shifting coalitions would spend modest efforts to influence temporary policies, such as how to divide up today’s spoils, and more efforts on rare chances to change longer term policies that more permanently divide spoils. Even more effort would be spent on rare chances to change who is possible as a coalition partner. For our forager ancestors, killing someone, or letting a new person live nearby, could change the whole game. In a firm today, hiring or firing someone can have similar effects.

This view of social behavior as mostly about shifting coalitions raises the obvious question: why doesn’t most social behavior and conversation seem on the surface to be about such things? And the obvious homo hypocritus answer is that we do such things indirectly to avoid admitting that this is what we are doing. Since coalition politics is socially destructive, we have long had social norms to discourage it, such as the usual norms against gossip. So we do these things indirectly, to get plausible deniability.

This can explain why we place such a high premium on spontaneity and apparent randomness in conversation and other leisure behavior. And also why we seem so uninterested in systematic plans to prioritize our efforts in charity and other good causes. And why we drop names so often. When we manage our shifting coalitions, we prefer to stay free to quickly shift our conversations and priorities to adapt to the changing fashions. If you ever wonder why the news, public discourse, and academia seem so uninterested in the topics most everyone would agree are really important, this is plausibly why.
