Prefer Peace

As fiction authors know, compelling stories need conflict; readers love to root for good guys against bad guys. As college professors know, students perk up when academic topics are posed as conflicts. Sophomores love to hear each subject framed as a conflict between several possible isms, especially a long and bitter one. To them, intellectual maturity consists largely of looking over a long menu and ordering one from column A, one from column B, and so on. But while I'd like to be a popular teacher, I'd rather be honest, and most subjects are just not well described as a conflict of isms.

When asked to evaluate a proposed economic policy, most students identify some winners and losers, and then favor or oppose the policy based on which group they like best.  It takes a long time for students to learn to think in terms of economic efficiency, weighing the costs and benefits for all effected parties, and even then students usually find an even-handed approach much less inspiring.  Some econ profs engage students by inviting them to join the few knowing insiders against the ignorant multitudes outside, but even that rings wrong to me.

Yesterday I discussed the tension between the ideals we often verbalize and the goals our usual choices seem designed to achieve. I tried to argue for compromise, for seeking "variations on common ideals which one can more easily admit serve ordinary non-ideal ends." But most commenters did not want compromise; they instead wanted to take sides and seek better ways for their side to win the war. Generation after generation, some of the old tell the young to seek internal peace; no internal side has the strength to win a clean victory, so all out war risks all out destruction. But the young will not hear.

It seems that one of humanity's strongest ideals is actually war, i.e., uncompromising conflict. In our culture we are supposed to oppose ordinary bloody war, preferring peace there when possible. But we do not generalize this lesson much to other sorts of conflicts. We celebrate those who take sides and win far more than we do peacemakers and compromisers. But the principle is the same; every side can expect to get more of what it wants from compromise deals than from all out conflict.

Added: Bryan Caplan asks:

What makes Robin think that "every side can expect to get more" from compromise than conflict?  Doesn't anyone have a comparative advantage in conflict?  And all it takes to get a conflict is one willing combatant, no?

Deals are not always enforceable, admitting interest in a deal might send the wrong signal, and one may need to threaten conflict to get the best deal.  Even so, there is some deal that beats each conflict for each party.

  • Pedant

    weighing the costs and benefits for all effected parties

    Is this a veiled reference to abortion?

  • http://www.baseballmusings.com David Pinto

    Winning does get the girl.

  • http://profile.typepad.com/iph1954 Mike Treder

    Good piece, Robin. You’re obviously right that humans have a built-in bias to look at complicated situations and reduce them to simple binary choices. It wouldn’t be hard to develop a theory of evolutionary psychology that supports your thesis. And I don’t think a comparatively few centuries of Enlightenment will quickly overcome our hundred millennia of evolutionary development.

    So, what’s the solution? We’ve come a long way already through the spread of freedom, equality, education, and the benefit of a fossil-fueled prosperity. Yet, as you point out, we’re still inclined to look for right sides and wrong sides, good sides and bad sides, ready to choose up and fight.

    As a transhumanist, I wonder if the availability — and, perhaps, popularity — of enhancement therapies to increase our intelligence, moderate our psychology, and maximize our wisdom will someday open a door into a new way of thinking and living without the reflex need for conflict.

  • Heath

    Generation after generation, the old tell the young to seek internal peace; no internal side has the strength to win a clean victory, so all out war risks all out destruction. But the young will not hear.

    Great piece, but you’re dead wrong on this part. The problem is that the old don’t do this.

    “I was raised to _____.”

    “My father taught me to ____.”

    Fill in the blanks with whatever pops into your head and 9 times out of 10 it’s some really arrogant nonsense.

  • Jef Allbright

    s/the old/the wise/

  • http://www.nancybuttons.com Nancy Lebovitz

    Are you sure that it’s all human cultures equally?

    • gwern

      Even cultures and religions you would think simply couldn’t fall into this conflict trap – whose every scripture and moral principle is *against* it – still manage to do it.

      Let’s take the example of Buddhism; I can’t think of a more peaceful, pacifist religion (except Jainism), and yet Buddhists still regularly managed to become warrior-monks, come up with strange things like the Tantric forms (not talking about the sexual ones), suicide bombers*, kamikaze**, and so on.

      Even the primitive cultures aren’t exempt. Think of the murder rates among the !Kung, or the more famous homicides of the Yanomamö. Conflict and especially violent conflict certainly seem like human universals…

      * I refer here to Sri Lanka; with a 70% Buddhist population, I’m fairly confident that many of the Tamil Tigers’ (the inventors) suicide bombers were Buddhist.
      ** One could argue that the Japanese kamikaze weren’t ‘really’ Buddhist, that their Buddhism was pro forma and they were really more Shinto or atheistic.

  • Stephen

    On behalf of young people, thank you Heath.

    That wisdom comes with age is the most unsupportable truism…
    It should be “the illusion of wisdom that comes with calcification into one’s preferred folk-theories comes with age.” Greater wisdom probably correlates with greater age, but I think a wise person is an exceptional case at any age.

    My 2 cents: I think every grade-schooler should have to learn the implications of the iterated prisoner’s dilemma and the (objectively) winning solution of “tit-for-tat.” Out of this solution you get the principles of generosity (begin with compromise), toughness (respond to a defection with a defection on the next iteration), forgiveness (should the other side compromise, compromise), and clarity (do this consistently). (A minimal sketch of the strategy follows below.)

    We always eventually realize that compromise is better. The trick is in prioritizing compromise over our penchant for leading off with a defect strategy (and of course we never construe it as defection: it’s always us vs. them). It is so difficult to get back to compromise after enough iterations of betrayal. It isn’t important whether children inherit this in-group bias and shortsighted ill-will or whether it’s part of their nature: the important thing is that they be educated out of it. I suggest making game theory a required subject in high school and college.
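
    A minimal sketch in Python of the tit-for-tat idea described above, using the textbook prisoner's dilemma payoffs (T=5, R=3, P=1, S=0) and an always-defect opponent purely for illustration; none of these specifics come from the comment.

```python
# A minimal illustration of tit-for-tat in an iterated prisoner's dilemma.
# Payoff values are the textbook T=5, R=3, P=1, S=0 (illustrative only).

PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_history, their_history):
    """Cooperate first; afterwards mirror the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one loss, then punishment
```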

  • Jef Allbright

    @Stephen: Your comment here has my vote for the most powerful potentially productive suggestion on this blog this year. Do you Facebook??

  • http://timtyler.org/ Tim Tyler

    Re: We celebrate those who take sides and win far more than we do peacemakers and compromisers. But the principle is the same; every side can expect to get more of what it wants from compromise deals than from all out conflict.

    It makes me think of capitalism – proponents advocate conflict over cooperation – despite all the resulting waste and failed enterprises.

  • Jef Allbright

    Few see:
    * increasingly effective competition supported by increasingly effective cooperation
    * increasingly effective cooperation driven by increasingly effective competition

    Fewer still identify (align their interests) with the evolving framework rather than its constituent drivers of “cooperate/defect.”

    Yes, we still negotiate with terrorists, and we still puzzle over the iterated, multidimensional, Prisoners’ Dilemma.

  • http://profile.typepad.com/6p011169016158970c TychoCelchuuu

    I feel like we go for conflict over compromise and cooperation because it seems like an easy route to efficient decision-making. Sitting in the middle and evaluating each side fairly is a lot more work than having one person take one view and the other take the opposite view and then watching them argue. It’s like the philosophy behind the adversarial justice system: the best facts come out not when you have a judge go out and look for everything, but when you let both sides of the conflict present what they feel is the best evidence.

    Tim Tyler’s comment about capitalism is sort of the same thing; if we have one person whose job it is to find the “best” solution, they might muddle through the middle and not try as hard to find new, innovative ways of doing things that two adversaries would. Of course, at the end you need some way to choose between the two extremes you get, but this is a problem that comes after the information-gathering process, which is when conflict seems most useful.

  • http://retiredurologist.com retired urologist

    Re: the goal of compromise

    Richard Dawkins: “I think it’s important to realize that when two opposite points of view are expressed with equal intensity, the truth does not necessarily lie exactly halfway between them. It is possible for one side to be simply wrong.”

  • Stephen

    @Jef: Thank you. Your comment is possibly the greatest compliment a stranger has given me.

    @Tycho: I think you’re absolutely right. But the first compromise occurred when both sides agreed to let their ideas do battle based on the weight of the evidence behind them. In this situation, the destruction of the flawed/less accurate idea is a creative act.

  • Jef Allbright

    @Stephen: Yah, but the compliment wasn’t so much for you (who I don’t yet know), but for your valuable message. :-)
    Looking forward to more…

  • http://macroethics.blogspot.com nazgulnarsil

    I have serious doubts that humans can operate efficiently without competition.

  • http://meteuphoric.blogspot.com/ Katja Grace

    It may be good to compromise, but it’s good to have allies who do not. So it makes sense for humans to shout about staying true to their causes while they privately compromise like crazy. Same as not trading off other ‘sacred’ things, e.g. not liking a value being placed on human lives, body parts, or care, hating money for sex, calling for money for health care indefinitely if it might save one baby.

    So maybe the commenters you mention seem as if they only see one side because this is a discussion, not a conflict that is actually hurting them much, and the one side they ‘see’ is the one they feel compelled to show/have strong allegiance to. Or maybe it’s because the person you talk to here, and the one that can think about deals, is more of the far self, so is partisan.

    Also I agree with Heath that the old do not say that. The closest thing they say is a list of either justifications for believing that they can’t possibly influence anything anyway, or justifications for believing that feeding the birds in their garden is just as important as whatever they previously cared about. They may be more at peace with themselves, but only because they’ve had longer to find the most comforting rationalizations.

  • Grant

    I do agree with Stephen about the profundity of the iterated prisoner’s dilemma. It’s probably the single most enlightening concept I’ve found in sociology. I don’t agree that humans necessarily play this game poorly; I don’t think that is clear.

    I think the second most enlightening concept I’ve learned is that the cooperation needed to solve prisoner’s dilemmas is costly. This implies that mutual defection can be the efficient strategy, even when it results in tragic consequences.

  • Percy

    Let me play the devil’s advocate:

    “We celebrate those who take sides and win far more than we do peacemakers and compromisers. But the principle is the same; every side can expect to get more of what it wants from compromise deals than from all out conflict.”

    On what basis do you extend the results of Prisoner’s Dilemma to every scenario of human interaction, including the interactions of human groups?

    Here are some differences between Prisoner’s dilemma and the situation of nations at war:

    1. PD does not contain an existential threat
    2. In PD, outcomes of choices remain static
    3. The outcomes of PD are arbitrarily chosen and only reflect certain real life scenarios – it is possible for example to posit situations where cooperation is disadvantageous.

    Then there are the problems of scale – for example, massively complex and massively multi-dimensional PDs running simultaneously. We don’t know the rules for the interaction of all these complex and differently-rigged PDs. (Or do we? Someone give me a link please if you have one.)

    This is further complicated by the epistemological problems resulting from this complexity – the fact that humans can’t know or foresee the results of multiple decisions in multiple many-tiered many-player PDs. Even with the benefit of full historical documentation and hindsight, we remain at the mercy of what-if scenarios: thus much more so for those who actually experience this in real time.

    Wartime psychology also tends to see a shift from rational thought modules to pre-rational ones, perhaps because the sacrifices required in war and the sustained effort necessary are impossible for a human being to justify to him- or herself on the basis of rational considerations of one’s own interests. In these scenarios, pre-rational bonds (family, ethnic group, nation) are brought to the fore, and the pre-rational notion of ‘loyalty’ becomes a primary driver. I have called it ‘pre-rational’ because it seems to be rooted in emotional/non-neocortex thought modules, or at least to be experienced as an instinctive drive; loyalty in human conflict may in fact represent rational decision making on the basis of Hamilton’s Rule.
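
    For context on the reference above, a minimal sketch of Hamilton’s Rule with made-up numbers (the rule itself is standard; the figures are not from the comment): a self-sacrificing act is favored when r · b > c, with r the relatedness, b the benefit to recipients, and c the cost to the actor.

```python
# Hamilton's Rule: a self-sacrificing act is favored by selection when
# r * b > c (r = relatedness, b = benefit to recipients, c = cost to actor).
# The numbers below are illustrative placeholders, not empirical estimates.

def hamilton_favors(r, b, c):
    return r * b > c

# Dying (c = 1 "life unit") to save three siblings (r = 0.5, b = 3 lives):
print(hamilton_favors(r=0.5, b=3.0, c=1.0))  # True: 1.5 > 1
# The same sacrifice for three unrelated strangers (r ~ 0):
print(hamilton_favors(r=0.0, b=3.0, c=1.0))  # False
```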

    Rather than making a series of decisions about one’s own continued existence, one considers oneself an articulation of a larger group – such as, in war, the nation which gave one birth – and subordinates individual interest to the interest of the group. If a group contains all the elements perceived to be central to one’s essential identity – whether memes or genes – in higher concentrations relative to other groups/entities, one sees the success of the group as bringing a benefit, any harm inflicted on the group as being a detriment, to one’s larger interests regardless of individual survival. These are referred to by Salter as Ethnic Genetic Interests (EGI).

    This is predicated on the notion that one was generated by such a group, and that the group could therefore generate similar people throughout futurity in the absence of the individual who sacrifices him/herself for its continued existence, whereas without the group the individual would not be able to reproduce the group. The individual, thus subordinate in an evolutionary or life-historical sense, is not obliged to defer in every scenario to his/her own interest.

    Given the above (and of course there may be more objections to the quoted assertion), I don’t see the basis for the claim that all human conflict or war is essentially an error and a failure to consider one’s interests. As of right now, I’d like to see more proof or support for this contention.

  • Doug S.

    What’s the evolutionarily stable strategy for playing Chicken? Interactions that are Chicken-like seem to be fairly common: two groups going to war can easily be seen as having both chosen to crash their cars into each other rather than swerve away.
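
    One hedged way to make the question concrete: if Chicken is modeled as the standard hawk-dove game with a prize V and a crash cost C (V < C), the classic result is a mixed evolutionarily stable strategy that plays “hawk” with probability V/C. The payoff numbers below are illustrative assumptions, not anything from the comment.

```python
# Chicken modeled as the hawk-dove game: prize V, crash cost C, with V < C.
# Row player's payoffs: hawk vs hawk = (V - C) / 2, hawk vs dove = V,
# dove vs hawk = 0, dove vs dove = V / 2.
# Classic result: the mixed ESS plays "hawk" with probability V / C.

V, C = 2.0, 10.0  # illustrative numbers only

def payoff(me, other):
    if me == "hawk" and other == "hawk":
        return (V - C) / 2
    if me == "hawk" and other == "dove":
        return V
    if me == "dove" and other == "hawk":
        return 0.0
    return V / 2

p_hawk = V / C  # ESS frequency of playing hawk

# At the ESS, hawk and dove earn equal expected payoffs against the population,
# so neither pure strategy can invade:
ev_hawk = p_hawk * payoff("hawk", "hawk") + (1 - p_hawk) * payoff("hawk", "dove")
ev_dove = p_hawk * payoff("dove", "hawk") + (1 - p_hawk) * payoff("dove", "dove")
print(ev_hawk, ev_dove)  # both 0.8 for V = 2, C = 10
```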

  • Grant

    Percy,

    I think the phrase “cooperation is costly” encompasses much of what you’re saying. We can expect more evolved minds to cooperate more cheaply (and thus more frequently), but the evolutionary process takes time. In the meantime there can be efficient wars.

    However, I think we find ourselves in a world where people often defect when they shouldn’t (and vice versa). Economists can advise against this behavior. They can also advise on ways to make cooperation cheaper.

    Still, much of our great society is dependent on mutual cooperation on a huge scale. We have evolved mechanisms (in-group and out-group signaling and screening, for example) to allow this cooperation cheaply. These mechanisms (heuristics?) are ancient and flawed, but I’m skeptical we really know how to improve upon them.

    nazgulnarsil, competition can be cooperation. Businesses competing against each other are cooperating with society; those colluding to try to set prices and wages are defecting against it. Under most (non-lifeboat) situations I think humans can operate efficiently while cooperating with a higher ideal (e.g. “don’t kill each other”).

  • http://profile.typepad.com/robinhanson Robin Hanson

    All, yes old folks often, perhaps even usually, say many stupid things. But some do say wise things.

    nazgul and others, preferring to take sides isn’t the same as competing.

    Katja, yes we may want to signal reluctance to compromise.

    Percy, I didn’t mention the prisoner’s dilemma; cooperation is meaningful far more broadly.

    Pedant, not even close.

    Mike, human enhancement is another topic where folks are eager to pick sides.

    Tycho, side-taking seems much more than a trick to get others to make arguments.

  • Stephen

    The prisoner’s dilemma is way too well-defined to apply with any precision to most real-world conflicts. I only suggest that learning about it will promote cooperation and an understanding of playing for the long-term.

    The better reason to cooperate is because it is extremely unlikely that any single existing side has already pegged the most efficient solution.

  • http://www.bcaplan.com Bryan

    Robin, when you say, “Even so, there is some deal that beats each conflict for each party,” you’re just repeating yourself. What makes you so sure about the existence of such deals? Is it just the tautology that the winner of the conflict would have preferred, “Give me everything I’m going to take from you without struggling against me”?

  • Eric Yu

    Bryan: Yes, that solution usually works out better for both sides.

    Deals are not always enforceable, admitting interest in a deal might send the wrong signal, and one may need to threaten conflict to get the best deal. Even so, there is some deal that beats each conflict for each party.

    It also takes some resources to make a deal–if the cost of conflict for one side is unusually small (when they have an overwhelming advantage) or the cost of compromise is unusually large, then they will prefer conflict even if deals are enforceable and signaling is ignored.

    Consider the case of a thief with a gun. The police have very little ability to detect criminals, so the thief will get in trouble only if there is an eyewitness. An unarmed person is standing in a park with no one else in sight, and the thief knows they have $5,000 in cash. The person also knows that the thief is armed, so neither side is under- or overestimating their expected gain (assuming both agents are rational). The thief is a very good shot, and his distance from the victim virtually guarantees that he will be able to kill him in one shot. The victim, however, is too far away to reach the thief before getting killed. The thief could stop the person from escaping and then persuade the person to hand over all his money (compromise), risking getting caught. Alternatively, he could shoot and kill the person immediately (conflict), take his money, and most likely get away with the crime. Of course, there is always the option of leaving the person alone, but since there aren’t any detectives, killing the person and taking his money has a positive expected gain.

    This scenario is realistic, even though it doesn’t happen very often. Clearly, a deal is possible (most unarmed people would surrender to people with guns and give up all their money). However, for the thief, no compromise is better than conflict because any compromise would take time, increasing his chances of being caught. Therefore, Robin’s statement is false (but it is true if making a deal takes no resources).
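
    A rough sketch of the comparison in the scenario above, with made-up probabilities and penalties (none of these numbers appear in the comment); it only illustrates how a higher capture risk during a slower “compromise” can make immediate “conflict” the higher-expected-value option for the thief.

```python
# Illustrative expected-value comparison for the thief scenario above.
# Every number is made up; the point is only that a higher chance of being
# caught during a slower "compromise" can make immediate "conflict" pay more.

loot = 5000          # cash the victim is carrying
penalty = 100000     # thief's disvalue of getting caught

p_caught_conflict = 0.01    # shoot immediately and leave fast
p_caught_compromise = 0.10  # a hold-up takes time; more chance of a witness

ev_conflict = loot - p_caught_conflict * penalty      # 5000 - 1000 = 4000
ev_compromise = loot - p_caught_compromise * penalty  # 5000 - 10000 = -5000

print(ev_conflict, ev_compromise)  # conflict "wins" for the thief here
```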

  • http://profile.typepad.com/robinhanson Robin Hanson

    Bryan, yes, conflicts waste resources, so all sides can be better off without the waste of the struggle.

    Eric, your scenario is one where the ideal deal, you can have my money if you don’t kill me, is too hard to enforce.

  • http://timtyler.org/ Tim Tyler

    It can pay to fight – when your life is on the line. Consider the case of two unrelated people, alone on an island, with only enough resources for one to survive. The problem is not the cost of making a deal.

    • Carl Shulman

      If deal enforcement was cheap they could flip a coin.

  • http://williambswift.blogspot.com/ billswift

    Efficiency is nice, but more important is **effectiveness**, using the means that best meet your ends, whether or not they are the most efficient.

  • http://profile.typepad.com/6p010537043dce970c Wei Dai

    The theory of games with incomplete information explains why mutually beneficial deals sometimes don’t occur. When one side has private information about its costs and benefits for a compromise (compared to continued conflict), it will act as if its costs are higher and benefits lower. This way it gets a better deal if a deal does occur, but also means that sometimes deals don’t occur when both sides could benefit.

    There’s a nice example of this that I still remember from my game theory class, and I dug it up at http://books.google.com/books?id=pFPHKwXro3QC&pg=PA220.

    Isn’t saying “prefer peace” the same thing as telling the seller in this game “bid your true cost” or telling the buyer “bid your true valuation”? In that case it seems futile. Or is “prefer peace” supposed to be descriptive rather than prescriptive? In other words, is the point that we all actually prefer peace, even if many act and speak as if they prefer the opposite?

  • http://profile.typepad.com/6p010537043dce970c Wei Dai

    After writing the above, I realized that the descriptive version of “prefer peace” may not be true either. It may be that our genes “prefer” peace, but they’ve programmed us to prefer war.

    Suppose in the “double auction” example I linked to, the buyer and seller don’t bid personally, but must program agents with utility functions and let those agents bid for them. But before the bidding, there’s an additional round where one agent will reveal its utility function to the other. In this case, the principals should program the agents with utility functions different from their own. To see this, suppose the seller’s agent is programmed with U(p) = p-c if deal occurs, and this is revealed to the buyer’s agent, then the buyer’s agent will bid c+.01 and the seller’s agent will bid c. If the seller wants to make more than a penny’s profit, it has to program its agent with a higher c than the actual cost.
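
    A minimal sketch of the distortion described above, assuming the “reveal then bid” setup from the comment plus an illustrative buyer valuation and a simple best response for the buyer’s agent; these specifics are assumptions for illustration, not part of the original example.

```python
# Sketch of the incentive described above: if the seller's agent's utility
# (hence its reservation cost) will be revealed before bidding, the seller
# does better programming the agent with an inflated cost. The buyer's
# valuation and best response here are illustrative assumptions.

true_cost = 10.0    # the seller's actual cost
buyer_value = 20.0  # the buyer's true valuation (assumed)

def seller_profit(programmed_cost):
    # The buyer's agent, seeing the revealed utility U(p) = p - programmed_cost,
    # bids just above that reservation price; trade occurs only if the buyer
    # still gains at that price.
    price = programmed_cost + 0.01
    if price > buyer_value:
        return 0.0               # overreach: no deal
    return price - true_cost     # the seller's real profit

print(seller_profit(10.0))  # ~0.01: honest programming earns a penny
print(seller_profit(18.0))  # ~8.01: an inflated cost earns much more
print(seller_profit(25.0))  # 0.0: too greedy and the deal falls through
```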

    Similarly, human beings tend to leak information about their private preferences, and therefore our genes should have constructed us with higher real preferences for conflict than if we could hide our preferences perfectly.

  • http://zbooks.blogspot.com Zubon

    Robin, aren’t most scenarios where one side has a large advantage in conflict similarly too hard to enforce? I am having trouble squaring the two sentences of the update (or the two in your comment at 8:45). You seem to have granted that one cannot really have a deal when one side can take whatever it wants, then said that a deal is better anyway.

    There are also many real world scenarios in which someone wants someone else (or many someones) dead, and may place significant value in taking part in the process. I presume this is outside the range you wish to cover.

  • http://profile.typepad.com/robinhanson Robin Hanson

    All, I agree and said explicitly that there can be situations where the better-for-all deals can’t be created or enforced. But do you really think the urges-to-take-sides I discussed in my post are of that sort?

  • http://asymptosis.com Steve Roth

    Robin, I think one problem arises here from the absolute nature of the statement. It should read:

    “*In many situations,* every side can expect to get more of what it wants from compromise deals than from all out conflict.”

    That is undeniably true–just ask any divorce lawyer.

    They’ll also tell you that humans quite frequently treat win-win situations–even stunningly obvious ones–as if they were zero-sum. (My explanation, expressed in brief: foolish pride.)

    Some experiments in the 50s and 60s demonstrated this in spades:

    http://www.asymptosis.com/humans-are-pathologically-nuts-proof-positive.html

  • http://profile.typepad.com/6p010537043dce970c Wei Dai

    All, I agree and said explicitly that there can be situations where the better-for-all deals can’t be created or enforced. But do you really think the urges-to-take-sides I discussed in my post are of that sort?

    No, I think your specific examples may be better explained by an ideal for war, which you already hypothesized in your post:

    It seems that one of humanity’s strongest ideals is actually war, i.e., uncompromising conflict.

    Game theoretic considerations suggest that such an ideal should exist. And if humanity really does have an ideal for war, in other words, if war is an ultimate value for us, not just an instrumental one, then some of the conflicts that you see as wasteful are in fact the better-for-all deals that you seek. And it’s not true that “there is some deal that beats each conflict for each party.”

  • http://profile.typepad.com/6p010537043dce970c Wei Dai

    It further occurs to me that this view of human beings as leaky agents of our genes can also help explain the “agreeing to disagree” phenomenon. Because we tend to leak our private beliefs in addition to our private preferences, our genes should have constructed us to have different private beliefs than if we weren’t leaky, for example by giving us priors that favor beliefs that they “want” us to have, taking into consideration the likelihood that the beliefs will be leaked. Each person will inherit a prior that differs from others, and thus disagreements can be explained by these differing priors.

    This kind of disagreement can’t be solved by a commitment to honesty and rationality, because the disagreeing parties honestly have different beliefs, and both are rational given their priors.

    One way out of these dual binds (some conflicts are Pareto-optimal, and some disagreements are rational) is to commit instead to objective notions of truth and morality, ones that are strong enough to say that some of the ultimate values and some of the priors we have now are objectively wrong. But the trend in philosophy seems to be to move away from such objective notions. For example, in Robin’s “Efficient Economist’s Pledge”, he explicitly commits to take people’s values as given and disavows preaching on what they should want.

  • Eric Yu

    Robin: in my scenario, it is definitely possible to enforce a deal. The thief is a very good shot, and if the victim tried to run away the thief would have a very good (>95%) chance of killing him. More importantly for real situations, even a 95% chance of a deal being enforced can be too low if one side has very little to lose from a conflict (<5% of his expected gain). How well a deal must be enforced to be supported by both sides depends a lot on the cost of conflict.

  • http://profile.typepad.com/robinhanson Robin Hanson

    Wei, yes we probably evolved to have beliefs that give good strategic impressions, assuming they are often leaked. But I don’t think this is well described as having evolved to have certain priors, which are not just any old beliefs. Once we know about this source of the origins of our beliefs, we should not rationally retain them, so rationality can overcome disagreements due to this effect.

  • http://profile.typepad.com/6p010537043dce970c Wei Dai

    Robin, my understanding is that if you take any consistent set of beliefs and observations, you can work backwards and find a prior that rationally gives rise to that set of beliefs under those observations. Given that human beings have a tendency to find and discard inconsistent beliefs, there should have been an evolutionary pressure to have consistent beliefs that give good strategic impressions, and the only way to do that is by having certain priors.

    I do not dispute that we also have beliefs that give good strategic impressions and are inconsistent with our other beliefs, and those can certainly be overcome by more rationality. But the better we get at detecting and fixing inconsistent beliefs, the more evolutionary pressure there will be for having consistent strategic beliefs. What can counteract that?

    BTW, Eliezer’s idea of achieving cooperation by showing source code, if it works, will probably make this problem even worse. “Leaks” will become more common and the importance of strategic beliefs (and values) will increase. The ability to self modify in the future will also make it easier to have consistent strategic beliefs, or to create inconsistent ones that can’t be discarded.

  • http://profile.typepad.com/robinhanson Robin Hanson

    Wei, I have in mind this analysis. Once we integrate our knowledge about the origins of our beliefs into such a framework, we can’t still embrace beliefs that differ for this reason.

  • http://profile.typepad.com/6p010537043dce970c Wei Dai

    Robin, in that paper you wrote:

    For example, if you learned that your strong conviction that fleas sing was the result of an experiment, which physically adjusted people’s brains to give them odd beliefs, you might well think it irrational to retain that belief (Talbott, 1990).

    Suppose in the future, self-modification technologies allow everyone to modify their beliefs, and people do so in order to gain strategic advantage (or to keep up with their neighbors), and they also modify themselves to not think it irrational to retain such modified beliefs (otherwise they would have wasted their money). Would such a future be abhorrent to you? If so, do you think it can be avoided?

  • http://profile.typepad.com/robinhanson Robin Hanson

    Wei, people might well choose to be irrational. This is not my preference, but that hardly makes it “abhorrent.”

  • http://profile.typepad.com/6p010537043dce970c Wei Dai

    Right, I should have known that. :) Anyway, I’ve created a new post on LessWrong to continue the discussion, since it’s getting off-topic for this post.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    “students perk up when academic topics are posed as conflicts … But while I’d like to be a POPULAR teacher, I’d rather be HONEST,”

    The inescapable irony of what I intuit is our primate aesthetic to pay attention to (and thus frame things as in the attentional marketplace) binary conflicts. Although your post is to a degree managing this fire with fire.
