A few hours ago I heard a talk by Frans De Waal, author of the great classic “Chimpanzee Politics,” on his new book, “The Age of Empathy: Nature’s Lessons for a Kinder Society” (excerpt here). Before and during his talk, De Waal showed a deep understanding of animal empathy and sociality, but he never once mentioned any of the lessons for humans his subtitle promised. I inquired in the Q&A, saying: if humans show about as much empathy to each other as related species do, what more lessons can we learn from nature?
He said the lesson is that it is bad to have societies “like the US, based on social Darwinism”, as revealed by its shameful response to Hurricane Katrina and reluctance to support Obamacare. I pressed: humans have some empathy, even in the US, so how can we tell what the right amount is? A bit later I pressed again: how can we tell who should show empathy for someone in need: their family, neighborhood, city, state, nation, continent, planet, or what? Other than repeating that the US should do more, and that it is nations who are responsible, he had no further comment (though he had ample time).
Alas, I conclude that while De Waal is very smart and feels strongly on this topic, he seems incapable of even the most basic analysis of it. He has a slogan, which identifies him with his side, and that is all he, or his readers, seek. Sorta like folks who sing “Love is all you need.”
One might argue that empathy is good because it promotes cooperation. But a striking experiment in the latest AER shows the dark side of cooperation; better cooperation within teams that fight each other can lead to far more destruction and waste.
In the one-on-one version of the experiment, subjects are paired and each side gets a budget of 1000 tokens, some of which can be spent fighting over a common prize of another 1000 tokens. That prize is distributed in proportion to tokens spent fighting for it. For example, if you spent 300 and your opponent spent 100 tokens fighting, then you’d get 750 and they’d get 250 of the prize tokens. Together with the 700 you didn’t spend fighting, you’d end up with 1450 tokens.
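To make that arithmetic concrete, here is a minimal sketch of the proportional-prize payoff; the function name and structure are mine, not the paper's:

```python
# Minimal sketch of the payoff rule described above: keep whatever you
# don't spend fighting, plus a share of the prize proportional to your
# fight spending. (Assumes at least one token is spent fighting.)
BUDGET = 1000
PRIZE = 1000

def payoff(my_spend, their_spend):
    share = my_spend / (my_spend + their_spend)  # proportional prize split
    return BUDGET - my_spend + PRIZE * share

print(payoff(300, 100))  # 1450.0 -- kept 700, won 750 of the prize
print(payoff(100, 300))  # 1150.0 -- kept 900, won only 250
```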
The one-shot Nash equilibrium here is for each side to spend 250 tokens fighting and walk away with 1250; half of the prize is destroyed in the struggle. The same opponents interact for twenty (or forty) rounds, and if that repetition allowed perfect coordination between the opponents, they’d each contribute 1 token to the fight and walk away with 1499; only 0.2% of the prize would be destroyed.
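The 250-token equilibrium can be checked numerically: against an opponent who fights with 250 tokens, no spending level does better than 250 yourself. A rough sketch (my own check, not the authors' code):

```python
# Best-response check for the one-shot game: fix the opponent at 250 tokens
# and search all integer spending levels from 0 to 1000.
def payoff(my_spend, their_spend, budget=1000, prize=1000):
    if my_spend + their_spend == 0:
        return budget + prize / 2  # assume an even split if nobody fights
    return budget - my_spend + prize * my_spend / (my_spend + their_spend)

best = max(range(1001), key=lambda x: payoff(x, 250))
print(best, payoff(best, 250))  # 250 1250.0 -- half the prize is burned
print(payoff(1, 1))             # 1499.0 -- the near-peaceful outcome
```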
There was also a team-on-team version of the experiment. Four people on each side could contribute to their side’s fighting pot, and a prize of 1000 tokens per person is again distributed in proportion to the relative size of the pots. Sharing a pot creates a free-rider problem: each team member would rather that other members contribute more to the fight. This reduces the one-shot Nash fighting contribution to about 63 tokens per member, so that each walks away with 1437; only an eighth of the prize is destroyed in the struggle.
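The free-rider arithmetic can be checked the same way, by fixing everyone else at the claimed equilibrium and searching for one member's best response. A sketch under my reading of the setup (each member keeps her unspent budget and receives a 1000-token prize in proportion to her team's pot):

```python
# Best-response check for one team member: three teammates each contribute
# 62.5 tokens and the rival team's pot totals 250 tokens.
def member_payoff(own, teammates_total, rival_pot, budget=1000, prize=1000):
    team_pot = own + teammates_total
    return budget - own + prize * team_pot / (team_pot + rival_pot)

candidates = [x / 2 for x in range(2001)]  # 0, 0.5, 1.0, ..., 1000
best = max(candidates, key=lambda c: member_payoff(c, 3 * 62.5, 250))
print(best, member_payoff(best, 3 * 62.5, 250))  # 62.5 1437.5
```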
Finally they tried teams with internal punishment. Each token a team member spent punishing another would destroy three of that person’s tokens. If such punishment allowed teams to coordinate perfectly, we’d be back to one-on-one equilibria.
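The punishment technology is just a one-for-three exchange, which is what gives a team leverage over would-be free riders. A trivial sketch of the accounting (mine, not the paper's):

```python
# One token spent punishing a teammate destroys three of that teammate's
# tokens, so a small punishment budget can erase the gain from free riding.
def after_punishment(punisher_tokens, target_tokens, spent_punishing):
    return (punisher_tokens - spent_punishing,
            target_tokens - 3 * spent_punishing)

print(after_punishment(1000, 1000, 20))  # (980, 940)
```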
The actual experimental results are summarized in this figure:
In the one-on-one version, subjects were far more eager to fight, relative to one-shot Nash predictions, on average destroying all of the prize. They fought less as time passed, but even after forty rounds fought far more than Nash suggests. Free-riding did reduce team efforts, though only down to the Nash level for individuals. And punishment enabled teams to coordinate to destroy more than the whole prize, an effect that doesn’t seem to diminish with time.
So, relative to what simple uncoordinated self-interest would predict, humans are far more eager to fight each other. And while punishment does allow teams to coordinate internally, teams completely fail to coordinate with each other; instead they coordinate to fight so hard that they destroy more than what they fight over. And this is all with the usual cold clinical experimental framing:
Group contests with punishment opportunities can be extremely destructive. Contests between groups that rely solely on their members’ voluntary contributions to the collective effort are already characterized by investments in fighting far above equilibrium, but it is the addition of punishment possibilities that drives contest expenditure levels at the end of the experiments to about six times the equilibrium levels. … We find these outcomes in the abstract, anonymous environment of a laboratory experiment, in absence of any ethnic, religious or class division between the groups. Emotional forces related to rivalry between conflict parties can be conjectured to be much more intense in field environments involving parties that may have been in conflict for a long time.
This, this, is human nature. Be thankful coordination is hard, so rival groups often fail to coordinate internally.
Striking study. It has a ring of Koestler to it re: the destructiveness of group allegiance.
I could see how social ostracism can lead to suboptimal results (herd behavior) regardless of any incentive issues (the prize), simply because individuals' own judgments are suppressed in favor of a common judgment. And that goes against the whole Polanyi / Hayek idea of the distribution of information in society, especially the concept that the sum of distributed local knowledge is greater than centralized knowledge and its plans.
It appears that in this study the groups that could not punish did much better than the groups that could, and in absolute terms they did better than the individuals too (though not relative to the equilibrium). That is a powerful argument for the wisdom of crowds and against forced collective organization.
In addition: I fail to see why the feeling of empathy should have much to do with the makeup or organization of a society to begin with. Empathy is a personal feeling, usually linked to personal proximity and displays of emotion. We empathize with our friends, for instance, even when they are objectively in the wrong in some societal situation. So empathy cannot serve as a basis for a general rule in society; people inevitably empathize with different target individuals or groups.
Whether the mechanism is mirror neurons or something else, empathy is personal and usually local in time and space. In large and complex societies it is hard to see how empathy alone could conceivably generate a form of complex organization, even if the targeting problem didn't exist. True, empathy fuels charity work, but that is a very small subset of social organization. I also don't see at all why top-down coordination mandated by collectively decided global empathetic goals should yield higher-utility outcomes than piecemeal individual (market) coordination. In fact, this is precisely a problem for charities as well: they are often not efficient.
Empathy as a driver of economic exchange is a deeply flawed concept and the polar opposite of Adam Smith's view, for many reasons that have been studied extensively since.
Economists have well-developed tools for concluding that a number is too high or too low without ever needing to say exactly what the value of that number is. For example, we expect there to be too much pollution when polluters don't have to pay for the harm they impose on distant others. We can know this without knowing just how much pollution there is. My point was that you'd need some sort of argument like that to conclude that we show too little empathy for each other in the US.
I'm happy to grant that empathy is ancient and common today, and most economists admit this as well. Incentives remain relevant, however, even when empathy exists.