
If deal enforcement were cheap, they could flip a coin.


Even cultures and religions you would think simply couldn't fall into this conflict trap - whose every scripture and moral principle is *against* it - still manage to do it.

Let's take the example of Buddhism; I can't think of a more peaceful, pacifist religion (except Jainism), and yet Buddhists still regularly managed to produce warrior-monks, strange things like the Tantric forms (not talking about the sexual ones), suicide bombers*, kamikaze**, and so on.

Even primitive cultures aren't exempt. Think of the murder rates among the !Kung, or the more famous homicides of the Yanomamö. Conflict, and especially violent conflict, certainly seems like a human universal...

* I refer here to Sri Lanka; with a 70% Buddhist population, I'm fairly confident that many of the Tamil Tigers' (the inventors) suicide bombers were Buddhist.

** One could argue that the Japanese kamikaze weren't 'really' Buddhist, that their Buddhism was pro forma and they were really more Shinto or atheistic.


"students perk up when academic topics are posed as conflicts ... But while I'd like to be a POPULAR teacher, I'd rather be HONEST,"

The inescapable irony: what I intuit is a primate aesthetic that pays attention to binary conflicts, and thus frames things as conflicts in the attentional marketplace. Although your post is, to a degree, fighting this fire with fire.


Right, I should have known that. :) Anyway, I've created a new post on LessWrong to continue the discussion, since it's getting off-topic for this post.


Wei, people might well choose to be irrational. This is not my preference, but that hardly makes it "abhorrent."


Robin, in that paper you wrote:

For example, if you learned that your strong conviction that fleas sing was the result of an experiment, which physically adjusted people's brains to give them odd beliefs, you might well think it irrational to retain that belief (Talbott, 1990).

Suppose in the future, self-modification technologies allow everyone to modify their beliefs, and people do so in order to gain strategic advantage (or to keep up with their neighbors), and they also modify themselves to not think it irrational to retain such modified beliefs (otherwise they would have wasted their money). Would such a future be abhorrent to you? If so, do you think it can be avoided?


Wei, I have in mind this analysis. Once we integrate our knowledge about the origins of our beliefs into such a framework, we can't still embrace beliefs that differ for this reason.


Robin, my understanding is that if you take any consistent set of beliefs and observations, you can work backwards and find a prior that rationally gives rise to that set of beliefs under those observations. Given that human beings have a tendency to find and discard inconsistent beliefs, there should have been an evolutionary pressure to have consistent beliefs that give good strategic impressions, and the only way to do that is by having certain priors.
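
To make that concrete, here is a toy sketch of the reverse-engineering step (made-up numbers, assuming a finite hypothesis space and nonzero likelihoods):

```python
# Working backwards from beliefs to a prior: given a likelihood P(O|H) and
# any target posterior Q(H), the prior P(H) proportional to Q(H) / P(O|H)
# reproduces Q exactly under Bayes' rule.

likelihood = {"H1": 0.8, "H2": 0.2}        # P(O | H), taken as given
target_posterior = {"H1": 0.3, "H2": 0.7}  # the beliefs we want to rationalize

# Reverse-engineer the prior.
unnorm_prior = {h: target_posterior[h] / likelihood[h] for h in likelihood}
z = sum(unnorm_prior.values())
prior = {h: p / z for h, p in unnorm_prior.items()}

# A forward Bayesian update with that prior recovers the target beliefs.
unnorm_post = {h: prior[h] * likelihood[h] for h in prior}
z2 = sum(unnorm_post.values())
posterior = {h: p / z2 for h, p in unnorm_post.items()}

print(prior)      # the prior that "rationalizes" the beliefs
print(posterior)  # equals target_posterior (up to floating-point error)
```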

I do not dispute that we also have beliefs that give good strategic impressions and are inconsistent with our other beliefs, and those can certainly be overcome by more rationality. But the better we get at detecting and fixing inconsistent beliefs, the more evolutionary pressure there will be for having consistent strategic beliefs. What can counteract that?

BTW, Eliezer's idea of achieving cooperation by showing source code, if it works, will probably make this problem even worse. "Leaks" will become more common and the importance of strategic beliefs (and values) will increase. The ability to self modify in the future will also make it easier to have consistent strategic beliefs, or to create inconsistent ones that can't be discarded.


Wei, yes, we probably evolved to have beliefs that give good strategic impressions, assuming they are often leaked. But I don't think this is well described as having evolved to have certain priors, which are not just any old beliefs. Once we know about this origin of our beliefs, we should not rationally retain them, so rationality can overcome disagreements due to this effect.


Robin: in my scenario, it is definitely possible to enforce a deal. The thief is a very good shot, and if the victim tried to run away the thief would have a very good (>95%) chance of killing him. More importantly for real situations, even a 95% chance of a deal being enforced can be too low if one side has very little to lose from a conflict (<5% of his expected gain). How well a deal must be enforced to be supported by both sides depends a lot on the cost of conflict.
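
To put rough numbers on that (a toy calculation with made-up figures, assuming for simplicity that the strong side gets nothing if the deal falls through):

```python
# Toy check of the enforcement-threshold point above.

gain_from_conflict = 100.0   # strong side's expected gain from simply fighting
loss_from_conflict = 4.0     # his expected downside: less than 5% of that gain
conflict_value = gain_from_conflict - loss_from_conflict   # 96.0

q = 0.95                     # probability the deal is actually enforced

# Smallest deal payout that beats conflict for the strong side:
min_deal_payout = conflict_value / q
print(min_deal_payout)       # ~101.05 -> more than the whole prize at stake,
                             # so no enforceable deal leaves anything for the other side
```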


It further occurs to me that this view of human beings as leaky agents of our genes can also help explain the "agreeing to disagree" phenomenon. Because we tend to leak our private beliefs in addition to our private preferences, our genes should have constructed us to have different private beliefs than if we weren't leaky, for example by giving us priors that favor beliefs that they "want" us to have, taking into consideration the likelihood that those beliefs will be leaked. Each person will inherit a prior that differs from others', and thus disagreements can be explained by these differing priors.

This kind of disagreement can't be solved by a commitment to honesty and rationality, because the disagreeing parties honestly have different beliefs, and both are rational given their priors.

One way out of these dual binds (some conflicts are Pareto-optimal, and some disagreements are rational) is to commit instead to objective notions of truth and morality, ones that are strong enough to say that some of the ultimate values and some of the priors we have now are objectively wrong. But the trend in philosophy seems to be to move away from such objective notions. For example, in Robin's "Efficient Economist's Pledge", he explicitly commits to take people's values as given and disavows preaching on what they should want.


"All, I agree and said explicitly that there can be situations where the better-for-all deals can't be created or enforced. But do you really think the urges-to-take-sides I discussed in my post are of that sort?"

No, I think your specific examples may be better explained by an ideal for war, which you already hypothesized in your post:

It seems that one of humanity's strongest ideals is actually war, i.e., uncompromising conflict.

Game theoretic considerations suggest that such an ideal should exist. And if humanity really does have an ideal for war, in other words, if war is an ultimate value for us, not just an instrumental one, then some of the conflicts that you see as wasteful are in fact the better-for-all deals that you seek. And it's not true that "there is some deal that beats each conflict for each party."


Robin, I think one problem arises here from the absolute nature of the statement. It should read:

"*In many situations,* every side can expect to get more of what it wants from compromise deals than from all out conflict."

That is undeniably true--just ask any divorce lawyer.

They'll also tell you that humans quite frequently treat win-win situations--even stunningly obvious ones--as if they were zero-sum. (My explanation, expressed in brief: foolish pride.)

Some experiments in the 50s and 60s demonstrated this in spades:

http://www.asymptosis.com/h...


All, I agree and said explicitly that there can be situations where the better-for-all deals can't be created or enforced. But do you really think the urges-to-take-sides I discussed in my post are of that sort?


Robin, aren't deals in most scenarios where one side has a large advantage in conflict similarly too hard to enforce? I am having trouble squaring the two sentences of the update (or the two in your comment at 8:45). You seem to have granted that one cannot really have a deal when one side can take whatever it wants, then said that a deal is better anyway.

There are also many real world scenarios in which someone wants someone else (or many someones) dead, and may place significant value in taking part in the process. I presume this is outside the range you wish to cover.


After writing the above, I realized that the descriptive version of "prefer peace" may not be true either. It may be that our genes "prefer" peace, but they've programmed us to prefer war.

Suppose that in the "double auction" example I linked to, the buyer and seller don't bid personally, but must program agents with utility functions and let those agents bid for them. But before the bidding, there's an additional round in which one agent reveals its utility function to the other. In this case, the principals should program the agents with utility functions different from their own. To see this, suppose the seller's agent is programmed with U(p) = p - c if a deal occurs, and this is revealed to the buyer's agent; then the buyer's agent will bid c + .01 and the seller's agent will bid c. If the seller wants to make more than a penny's profit, it has to program its agent with a higher c than its actual cost.
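
Here's a small sketch of that bidding round (illustrative numbers only; the function and variable names are mine, not from the original example):

```python
# Toy version of the revealed-utility bidding round described above.

def buyer_best_bid(revealed_cost: float) -> float:
    """If the seller's agent is known to accept any price above its
    programmed cost, the buyer's agent bids just a penny over it."""
    return revealed_cost + 0.01

true_cost = 10.00                       # seller's actual cost c
honest_agent_cost = true_cost           # agent programmed with U(p) = p - c
inflated_agent_cost = true_cost + 2.00  # agent programmed with an inflated "cost"

for agent_cost in (honest_agent_cost, inflated_agent_cost):
    price = buyer_best_bid(agent_cost)  # buyer's agent sees the revealed utility
    profit = price - true_cost          # seller's real profit
    print(f"programmed cost {agent_cost:.2f} -> price {price:.2f}, "
          f"seller profit {profit:.2f}")

# Honest programming nets the seller a penny; the inflated cost nets 2.01,
# as long as the buyer still values the good above the inflated figure.
```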

Similarly, human beings tend to leak information about their private preferences, and therefore our genes should have constructed us with higher real preferences for conflict than if we could hide our preferences perfectly.
