Overcoming Disagreement

In an ideal world, disagreements would not exist.

It’s a provocative statement, but hopefully readers of this blog will have been exposed to enough of the reasoning to accept it provisionally. Even Eliezer’s recent explanations of his various disagreements largely come down to making cases for why his disputants should agree with him, not for why they should all continue to disagree.

When two people disagree, and they come together to try to reach agreement, they have much to gain. First, some of their disagreement may be based on different information. By explaining the basis for their beliefs and sharing their data, each can improve the quality of his own estimates. Second, they probably have different biases affecting their reasoning. Discussion will illuminate those biases and help to cancel them out. Third, not being perfect Bayesians, they are computationally limited, and one or the other is likely to have superior reasoning, algorithms and heuristics. They can both aim to incorporate the results from the best quality reasoning available to the two of them.
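The first of these benefits, pooling information, has a simple quantitative analogue. As a toy sketch (the function and numbers below are my own illustration, not anything from the post): if two people hold independent, noisy estimates of the same quantity, combining them by inverse-variance weighting, the Bayesian rule for Gaussian beliefs, yields an estimate more precise than either held alone.

```python
def pool_estimates(mean_a, var_a, mean_b, var_b):
    """Combine two independent Gaussian estimates of the same quantity."""
    w_a = 1.0 / var_a          # precision (inverse variance) of person A
    w_b = 1.0 / var_b          # precision of person B
    pooled_mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    pooled_var = 1.0 / (w_a + w_b)  # always smaller than either input variance
    return pooled_mean, pooled_var

# A is unsure (variance 4.0); B is more confident (variance 1.0).
mean, var = pool_estimates(10.0, 4.0, 14.0, 1.0)
# The pooled mean (13.2) sits closer to the more confident party,
# and the pooled variance (0.8) beats both inputs.
```

The point of the sketch is that neither party simply capitulates: the agreed-upon answer weights each side by the quality of its information, which is what an honest attempt to reach agreement is aiming at.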

Given these advantages, why then are people seemingly so reluctant to reach agreement? I think it can be easily explained in terms of human social status. All too often we view disagreement and debate as a contest, with a winner and a loser. The one who convinces the other of the correctness of his position wins. If they both reach an intermediate conclusion, this is seen as a tie, perhaps with an edge to one side or the other.

This viewpoint makes pragmatic sense in human society. Someone who is frequently right and convinces others of that fact is likely to be correct on other issues. He will be more trusted and respected as a source of advice and wisdom. Someone who often turns out to be wrong on disagreements is not going to be very well trusted in general. So people who win disagreements gain status, respect, power and influence, all of which lead to improved quality of life.

This effect makes people very reluctant to admit that they are wrong and to adopt the other person’s side in a disagreement. In fact, we often see a sort of bargaining process, in which adopting certain aspects of one’s opponent’s views leads to demands that the other side adopt something of one’s own views in return. Agreement is seen as very much a process of compromise and negotiation, rather than an objective search for truth.

Of course, OB participants are far above such mundane considerations as power and influence. We are clear minded seekers after truth, right? Right? Okay, maybe we do retain some vestiges of these natural human instincts. What can we do to overcome them and approach disagreement from a more Bayesian perspective?

One idea is to practice overcoming disagreement first on issues that are relatively easy. As with other areas of life, we do best by starting with easy problems before moving on to harder ones. Overcoming disagreement on matters where you have no emotional stake, no firm commitments, should be feasible. Yet often we do find ourselves disagreeing with others on quite trivial matters. These are good topics for practice.

It will be important that your practice partner is aware of the basic principles of rationality and the implications for how honest and respectful people interact when they disagree. You both should understand that “agreeing to disagree” is a sign of mutual disrespect and contempt. Rationality imposes a strong imperative to reach agreement and this must drive your interaction.

Moving beyond trivial matters, there are other strategies we can employ to make it easier to reach agreement. One is to avoid prematurely staking out a strong public position. Once you are committed publicly to a view, and your disagreement partner is likewise committed to an opposing position, it will be hard to avoid the winner/loser paradigm.

I tend to believe that on most issues where disputes are common, for most people, the evidence is really quite ambiguous as to which view is correct (as suggested by the mere fact that different people have reached different conclusions). The best position to take is a weak one, to hold views provisionally and to be open to persuasion. Adopting this as one’s public stance can actually improve status in many circumstances, since we all claim to admire those who have open minds. This kind of positioning can reduce the loss of status from changing your mind in a dispute, making it easier to reach agreement and gain the benefits of improved accuracy.

The final strategy I will suggest is the hardest, which is to renounce the social game and accept the possible lowering of status in disagreement, achieving a zen-like equanimity in the face of social disaster. This will not be easy but frankly, I suspect that many OB readers are already somewhat alienated from popular human social mores. Taking another step and consciously accepting the loss of status from being shown up as an intellectual inferior should be a reachable goal for many of us.

To move towards this ideal, consider taking actions that may accustom you to losses of status similar to those you would experience from changing your mind in a disagreement. These might include making frequent, falsifiable predictions, many of which will inevitably turn out to be wrong; commenting on issues even where you are not too knowledgeable; sharing your speculations and thoughts even when you expect that they will lead to criticism. Air the dirty laundry of your mind, expose your ideas with all their unpolished flaws. In a world where most people build up a false front and do their best to hide their weaknesses, these honest actions can paradoxically make you seem mentally inferior. Such exercises can hopefully prepare you emotionally for being able to honestly report your changes of mind in disagreements.

Now it might be argued that this strategy could backfire, by hurting your reputation so that in the disagreement, your views will not be given appropriate weight. Indeed, this approach does depend on both parties being able to rationally evaluate and weigh their respective strengths, insights and quality of information. Ideally, then, both participants in the disagreement will be practiced at status reduction exercises, preparing them both to achieve maximum gains from overcoming their disagreement.

When this condition is not met, overcoming disagreement may not be possible in practice. Still, a practitioner of these measures will be better positioned to improve his accuracy as a result of the attempt at agreement, since he will be less bound to his previous position. He can still hope to gain many of the benefits from overcoming disagreement, making his efforts worthwhile in the end.

  • Unknown

    Another idea is to take what someone says and say to oneself “how is that right?” instead of “how is that wrong?” Focusing on the latter question may be one of the main causes of persistent disagreement.

  • Vladimir Slepnev

    Renouncing the social game and reaching zen… To everybody who likes the sound of that phrase, I suggest the following exercise: defend racists and pedophiles in online discussions, using your real name. If you’re afraid to do it, well you’ve learned something about yourself.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Also good for practice sessions is if you can disagree about questions whose true answer you can discover in a near-term timeframe, either because they are near-term future events, or because the true answer can be looked up and neither of you has done so yet.

  • Joseph Knecht

    This is a great post. I hope you’ll add it to your “Favorite Posts” list, Robin, and perhaps incorporate the idea into the “Welcome” page as a guideline for participation on OB.

  • Joseph Knecht

    AAHH! Hal, not Robin, is the author. I serve as example 1 of a status-lowering mistake, albeit unintentional.

  • Lord

One shouldn’t expect to always agree. Evolution is a perfect example. Does debating it after more than a century have any point?

  • spindizzy

    I think that playing chess has helped me develop some emotional maturity. It’s a work in progress, but these days I feel I am less sore about losing, still pleased about winning, and accepting of the fact that other people will progress faster.

  • Julian Morrison

    Vladimir Slepnev: you sound like you are “doubting, but not allowed to successfully doubt”. Or worse, just playing devil’s advocate with debating-points you personally consider trumped, but you think will slip past your less sophisticated audience. To fairly take a side, you have to use arguments strong enough to convince or at least thoroughly shake your certainty – you have to be vulnerable to your own weapons.

    I don’t think I could do that for racism, but I think pedophilia could be defended pretty thoroughly and a rationalist would have to say at minimum, “everything I once assumed is so contaminated, and I personally am so full of bias, I need more data before picking a side”.

  • Alan

Yours is a thoughtful post, like so many others. The following remarks are intended in no way to question the intellectual work that went into it. I hope my comments aren’t construed as going off in some irrelevant directions. If they are, please disregard them. To frame the remarks, let me recall a saying attributed to Chuang-Tzu, namely, “men honor what lies within the sphere of their knowledge, but do not realize how dependent they are on what lies beyond it.”

    1. Ideal (platonic?) worlds in which there is no disagreement have been subject to experiment. Witness some of the defining events of 20th century world history in pursuit of removing disagreements. Is reduction in disagreement per se a worthy common objective? Are we speaking of the world of ideas or the social and physical world, or some combination?

2. Is the main point of this post to question how best to reach a synthetic agreement following more or less classical dialectical argumentation between two sides?

3. I read the assertion that disagreement can be easily explained in terms of human social status as attempting to explain too much with one variable. Status is always dependent upon time and context. For example, penning philosophical treatises in a cafe on the Left Bank could be viewed either as the lofty pursuit of truth by an alpha, or as the effete scribblings of one the opportunity cost of whose time is negligible.

    4. With respect, I have to question your notion of what a zen-like state is. (I’m no expert, so critique away!) But I think of Zen as being inextricably freighted with Bushido culture, where, to oversimplify, a samurai empties his mind, all the better to respond automatically. In the west I think it has developed a different complexion as being peace-loving, but that’s another topic.

    5. If humans evolved for the principal “purpose” of passing on their genetic endowment to successive generations–rather than discovering truth–then why not acknowledge that we are hard-wired to strive to become alphas, to acquire status and its benefits? Training our superegos to check our nuclei accumbens can be hard work. Is it worth the price we pay?

    6. Would it be giving up to say perhaps the best we can hope to do is to continually be cognizant of our biases and fallacies, and attempt to work around them via heuristics and reason?

7. Is it possible to behave in a conciliatory manner without being accommodating?

    8. What if I want to win an argument, and the other side is advocating something that is morally objectionable? Why should I ever want to overcome the disagreement?

    Thanks for reading. Best regards.

  • Tiiba

@Joseph Knecht: The post, to be fair, was typical for Robin. I also thought it was him at first. I’m more accustomed to Hal blaming everything that goes wrong in his life on “human error”.

  • gwern

    Alan: I don’t really see how Zen is inextricably bound up with the Bushido. Ch’an Buddhism was around many centuries in both China and Japan before Bushido was ever developed or codified.

  • http://hanson.gmu.edu Robin Hanson

Yes, practice disagreeing on small issues that will soon be resolved, so you notice how those disagreements differ from others. Avoid taking strong positions when the evidence is weak or mixed. And beware the social status of winning arguments. I wish we could create a community where social status goes to those who argue reasonably, rather than dominantly.

  • Phil

    In my eyes, admitting you’re wrong INCREASES your status and reliability. It identifies you as someone who is willing to remain objective, someone with whom it is more profitable to discuss issues, and someone who, when he insists that he is right, is much, much more credible.

    There are some people, who, in lawyerly fashion, will not yield the smallest point, no matter how obvious. Those people thus identify themselves as not worth anyone’s time in any debate.

  • http://apperceptual.wordpress.com/ Peter Turney

    It seems that there are (at least) two types of disagreement: disagreement over facts (“That pole is one meter tall.” versus “No, it’s two meters tall.”) and disagreement over values (“All human life is valuable and must be protected.” versus “Some people do not deserve to live.”). When you say, “In an ideal world, disagreements would not exist,” I assume you only mean disagreement over facts. For example, ideally, we would all agree on whether the many worlds interpretation of quantum mechanics is correct. I assume you do not believe that we will all agree on values, even in an ideal world. For example, we may both agree that gold is good, but we may not agree on how much gold I should have versus how much gold you should have.

    However, the distinction between facts and values is questionable. Once you see that every “fact” may be laden with “value”, and you admit that agreement on values is often difficult or impossible, then you see that agreement on facts can be difficult or impossible, due to their inherent implications for values. The claim that, “In an ideal world, (factual) disagreements would not exist,” then becomes very problematic. Maybe it’s true, depending on what you mean by “ideal”, but it is certainly not obvious that it is true.

  • http://theviewfromhell.blogspot.com Sister Y

Hal, you’re awesome. I think you can get more general than that to account for almost all cases that aren’t explained by information differences. The disagreements that, in a perfect world, wouldn’t exist are descriptive disagreements, subject to logical or empirical verification; ego often contaminates the ability to come to the correct answer, but, more generally, descriptive disagreements get contaminated by normative and aesthetic disagreements. “You can’t be right because it is socially important that I am right” is one of these, but you could generalize that to “that can’t be right because it seems to conflict with an important aesthetic/moral value of mine.” Someone might reject many-worlds because he staked out a contrary position, but he might just as likely reject it because it gives him the willies (aesthetic disagreement infecting judgment). Someone who disagrees that there are gender differences in cognition might be ego-motivated, or might be letting some other normative disagreement (“it’s important that there not be gender differences in cognition,” or “the implications of there being gender differences in cognition are horrifying”) contaminate his thinking.

    I think you’re right that people who become aware of the involvement of ego in disputes will be more likely to agree. In general, disagreements seem to get cleared up by unpacking the normative and aesthetic layers underneath them.

    One question, along Phil’s lines: have you ever had someone come around to your way of thinking (change his mind) and thereby lost respect for him?

  • steven

    You are all invited to play the Aumann Game… though maybe the lack of words makes it unrealistic as practice.

  • http://shagbark.livejournal.com Phil Goetz

Hal – I love the phrase, “‘agreeing to disagree’ is a sign of mutual disrespect and contempt.”

    Alan – Great point: Just as you sometimes need to add noise to an optimization/search problem to find a better local maximum, an irrationally high amount of disagreement may be necessary for progress.

    3 observations:

    1. As I mentioned yesterday, many of my disagreements with people are because of values/beliefs they hold which are pre-rational. By that I mean they form part of the foundation a person must have before reasoning, it does not appear to make sense to reason about them, and hence our disagreement can’t be reasoned away. For instance:
    a) Someone’s post yesterday, putting forth the value that x amount of harm to 1 person outweighs x amount of good to 1,000 people, or x/1000 amount of good to 1,000,000 people.
b) A theologian I am arguing with who believes that meaning (both in terms of purpose, and in terms of semantics for language – which, curiously, he does not distinguish from each other) for a system must be defined in reference to things outside that system, and therefore it is impossible for life, or even logical propositions, to have meaning without a God outside the system to define basic values and meanings.

2. I looked up the discussion of copyright and software-for-profit that someone referenced earlier this week in the disclaimer thread. It reminded me how, in the present day, there are a large number of very smart people who hold fervently the idea that copyright is both socially inefficient and morally wrong. This idea (of theirs) is so indefensible that whatever is going on, it clearly isn’t reasoning of any kind. And yet the people clinging to this indefensible position typically come from (wild guess) the top 1% of the general populace in terms of reasoning ability. And clinging to this idea does not have anything to do with status.

    3. “renounce the social game and accept the possible lowering of status in disagreement, achieving a zen-like equanimity in the face of social disaster.” This is a recipe for social disaster. It would reduce the effectiveness of thinking people, not improve it.

  • Joseph Knecht

    @Phil

    Could you spell out your reasons for thinking that “achieving zen-like equanimity in the face of social disaster” would reduce the effectiveness of thinking people? I can understand how you might believe that if you interpreted “zen-like equanimity” as something like “not caring about social matters”, but that isn’t what I understand by equanimity.

    What I took Hal to be saying was that we should strive to consider arguments as we would if we did not have an ego interest in our particular position being the correct one, which we can more easily and effectively do by accepting the social embarrassment of being publicly wrong as a necessary part of the learning process.

  • Alan

    Gwern wrote, “I don’t really see how Zen is inextricably bound up with the Bushido. Ch’an Buddhism was around many centuries in both China and Japan before Bushido was ever developed or codified.”

I hope this is not too much of a digression. OK, Zen is not inextricably linked to Bushido in the Western social context; it represents syncretic developments from the Japanese form. But Zen is not pacifist or necessarily filled with equanimity either, as commonly conceived.

To support a different perspective, may I direct your attention to the book, “Zen at War,” by Zen master Brian Victoria. Gwern, I appreciate your understanding of the etymology of Zen, coming from Ch’an (and even further back from dhyana). Since you are knowledgeable about these matters, I presume you are aware that the Rinzai and Soto sects, for example, do not always have nice things to say about each other. In-group, out-group, same sociology story. I just don’t want the impression to be conveyed that Zen is shorthand for zoning out on equanimity. Cheers.

  • http://entitledtoanopinion.wordpress.com TGGP

    Peter Turney, your link was not convincing. To me there is a clear distinction between facts and values based on falsifiability (or predictive power).

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    “Of course, OB participants are far above such mundane considerations as power and influence.”

    I plan to discuss the interesting topics in your post a lot more, but this sentence seemed funny to me in part because you chose to drop “status” from the list of “mundane considerations” that OB participants are “far above”.

One other quick idea. Disagreements don’t just provide benefits to the winner; they seem to me to provide potential benefits to the loser too, because both may benefit from representational privilege. The group may pay more attention to both arguers than to two other members of the group who are agreeing with each other.

  • http://apperceptual.wordpress.com/ Peter Turney

    To me there is a clear distinction between facts and values based on falsifiability (or predictive power).

    Let’s take a specific example. There is much disagreement about what it is that is measured by IQ testing. Is IQ purely descriptive or is it predictive (i.e., suitable for causal inference)? Here is a case where value is so wrapped up with fact that it is extremely difficult to separate the two. Your proposal to use falsifiability (predictive power, causal inferential power) to distinguish fact from value does not resolve the issue.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    “Could you spell out your reasons for thinking that “achieving zen-like equanimity in the face of social disaster” would reduce the effectiveness of thinking people? I can understand how you might believe that if you interpreted “zen-like equanimity” as something like “not caring about social matters”, but that isn’t what I understand by equanimity.

    What I took Hal to be saying was that we should strive to consider arguments as we would if we did not have an ego interest in our particular position being the correct one, which we can more easily and effectively do by accepting the social embarrassment of being publicly wrong as a necessary part of the learning process.”

The OB commenting community has many people in it weirdly unable to separate internal consideration of arguments from public performance of consideration of arguments. For example, publicly I’m religious, accept death as a natural part of life, and think lots of things in this world are more important than what I can best discern will maximize my persistence odds. Internally I aspire to understand reality as accurately as possible to the degree it will maximize my persistence odds. I’m both vested in “avoiding social embarrassment of publicly being wrong” and in “consider[ing] arguments as [I] would if [I] did not have an ego interest in [my] particular [public] position being the correct one” so that I can have as “optimized a learning curve” as is in my persistence-maximizing interest.

  • http://brokensymmetry.typepad.com Michael F. Martin

    I like this post quite a bit. You’ve hit what I consider to be some of the most important points.

    An aspect of this that I’ve been exploring is how we can design institutions that encourage people to do the kinds of things that you’re saying would help to overcome disagreements.

    I think the First Amendment is a prime candidate because it guarantees people a free-pass from gov’t violence in retaliation for the expression of ideology.

Another fascinating example comes from an Egyptian prison, in which Islamist clerics responsible for theorizing jihad were able to debate jihad long enough (without being able to kill one another) to reach the conclusion that violence was, in fact, not a great way to accomplish Islamist goals.

    http://www.newyorker.com/reporting/2008/06/02/080602fa_fact_wright

  • Z. M. Davis

“The OB commenting community has many people in it weirdly unable to separate internal consideration of arguments from public performance of consideration of arguments. […]”

    “Weirdly”? It’s not that we’re unable to separate profession and belief, it’s just that we would rather not, because we’re not monomaniacal solipsists bent on living forever at any and all costs.

  • Joseph Knecht

    HA: are you criticizing that I made my internal consideration public in this case, or are you saying that an inability to separate (internal) thinking from public display of thinking somehow explains how renouncing social games would make thinking people less effective thinkers (which was Phil’s point)?

    If the former, by making my internal thoughts public, I make it easier for Phil to correct any misunderstanding on my part or to state how his understanding differs (and thus why he thinks the consequences of such a renunciation would be negative). That is why I made them public (and these considerations, too).

I’m both vested in “avoiding social embarrassment of publicly being wrong” and in “consider[ing] arguments as [I] would if [I] did not have an ego interest in [my] particular [public] position being the correct one” so that I can have as “optimized a learning curve” as is in my persistence-maximizing interest.

    The question under consideration was whether caring less about the social embarrassment of publicly being wrong would reduce the effectiveness of thinking people, as Phil stated. How does what you wrote relate to that?

  • http://shagbark.livejournal.com Phil Goetz

    Joseph: I took him to mean that we should give social+political considerations a weight of zero. People who took this advice would be marginalized. A political party that took this approach would lose every election. An author who took this approach for his first book would not be read.

  • http://www.hopeanon.typepapd.com Hopefully Anonymous

    Joseph,
    “The question under consideration was whether caring less about the social embarrassment of publicly being wrong would reduce the effectiveness of thinking people, as Phil stated. How does what you wrote relate to that?”
    Because I think
    1. Thinking people are rational to care about the social embarrassment of publicly being wrong, due to what could be called an embarrasser’s veto (like the heckler’s veto).
    2. We can get many (and perhaps all) of the benefits of thinking people putting out ideas that could be socially embarrassing if proved wrong, by encouraging them to publicize those ideas anonymously. An archetypal example of this might be the Federalist Papers. The icing is that there can be mechanisms for the anonymous thinker to then claim credit as the originator of those ideas if they’re likely to add to the thinker’s status rather than detract from it.

    These points are trivial. So it’s puzzling to me why many OB contributors act like belief in and publicization of an idea isn’t separable from public performance of one’s beliefs.

  • Joseph Knecht

    Phil: thanks, I understand your point now in terms of how other people would treat those who shunned social games. I was thinking that you meant such people would be less effective thinkers rather than that they would be treated differently by other people and thus have less influence.

    Having said that, I’m not so sure that the response would be so universally negative. It would be the natural and immediate reaction for many people, but there would certainly be widespread discussion of the issue if more than a few undertook the practice systematically. I think that upon reflection many would see the sense behind abandoning some of the more pernicious games we play.

  • Unknown

    “In fact, we often see a sort of bargaining process, in which adopting certain aspects of one’s opponent’s views leads to demands that the other side adopt something of one’s own views in return.”

    This may be more reasonable than Hal seems to suggest. Generally in a disagreement it is unlikely that one person is totally wrong and the other totally right. This is why the suggestion to consider what is right about the other person’s views is so often fruitful. So if one person adopts certain aspects of his opponent’s view, but his opponent refuses to adopt anything of the first person’s view, in most cases (not all) the person who refuses to change his views is the less reasonable one.

    However, it would be totally unreasonable to say, “I will accept some of what you say if you accept some of what I say,” leaving the first move for the other. The reason for this is that in this case there is nothing left but a bargaining process, as Hal called it, without any search for truth. For if you already suspect that there is some truth in your opponent’s view, then you should adopt it immediately, without waiting for him to respond.

  • http://www.scheule.blogspot.com Scott Scheule

    Even Eliezer’s recent explanations of his various disagreements largely come down to making cases for why his disputants should agree with him, not for why they should all continue to disagree.

    Hahaha. That’s different from any other person arguing how?

  • http://zbooks.blogspot.com Zubon

    “[A]greeing to disagree” is a sign of mutual disrespect and contempt.

    In universes where Overcoming Bias t-shirts and coffee mugs exist, this quote is one of the top sellers.

  • Ben Jones

    I suspect that many OB readers are already somewhat alienated from popular human social mores.

    It’s so true. *Sob*

    Alan, there’s a huge amount of maths here that defines terms like ‘disagreement’ and ‘evidence’ in a very fixed way, which addresses your first few points.

    a samurai empties his mind, all the better to respond automatically.

    Not sure how this is being framed as a bad thing. Those automatic responses are probably as close to a bias-free answer as you can get from a human. Remember that precious half-second before your mind is made up.

    For point 5, it may well be true that alpha-arrogance is an evolutionarily useful response. But you don’t come to this site for that, you come to find out how to get closer to rationality. If it’s the pursuit of truth you’re after, then yes, it is worth it. Point 6 is a very good one.

Point 8: put aside that immediate moral repugnance and ask ‘are they right?’ If so, update. If not, convince them that they’re wrong.

  • Abigail

After reading a comment on a subsequent post, I ask: why should you care what anyone else thinks? Why should you need to convince them?

    Perhaps you are a candidate for the Nice party, which seeks to win power in order to enslave the whole population apart from a small clique. Then you seek to convince people to vote for you, out of personal interest.

    Perhaps you are part of a small tribe, just managing to support itself, with a strong moral sense that no-one should be allowed to starve. You try to persuade someone that his farming method could be improved, for your own good.

However, it is in my interest to be as close to right about things as I can possibly be. This involves assessing others’ views, and changing my own when appropriate. If X thinks he will gain kudos by overcoming in argument someone who actually knows better than him, I believe that his loss when he finally comes up against reality will be greater than any loss of kudos from admitting earlier that he was wrong.

    Why should Dawkins care if someone believes that the world was created in six days six thousand years ago, if that belief makes the believer happy?

    I do agree to disagree about lots of things, especially where my potential loss from being wrong is low. It may be better to live with a false but unimportant belief about X, than to spend the energy necessary to find what belief about X is perfectly right.

  • dagon

    I like the post, but I’m having a hard time coming up with good test cases. I do, in fact, change my mind often, and I try to assign probability distributions to my beliefs rather than binary ones. This has served me well, but I still have disagreements, especially with those who present their beliefs otherwise.

    The majority of disagreements I have which result in the mutual disrespect of agreeing to disagree seem to be on topics where additional evidence is hard to come by, and suspicion of bias (likely true!) in both participants degrades communication.

    What disagreements have you successfully practiced on?

  • http://sti.pooq.com Stirling Westrup

    Strangely enough, since most humans refuse to easily concede defeat in an intellectual disagreement, doing so yourself seldom works. I’m always willing to admit when I’m wrong, but I’ve discovered the following failure modes when I do:

    1) People assume you are insincere. They don’t believe you think they are right, and get upset when you tell them you concede. They insist on continuing to try to ‘convince’ you until they see signals that they believe in.

    2) When you tell someone that they’ve just made a telling point and that you’re going to have to sit back and rethink your entire argument, they won’t let you. In fact, they most often keep reiterating bits of information that you’ve already covered and that had no effect on your position (either because you agreed with them, or because you could provide cogent counterarguments). I’ve had to actually RUN AWAY from people who would have won the argument by simply shutting up, but who seemed constitutionally incapable of doing so, to the extent that leaving was the only way to end the ‘debate’.

  • Overcoming Laziness

    “In an ideal world, disagreements would not exist.”

    Really? How do you know this? Isn’t this begging the question?

  • http://geniusnz.blogspot.com GNZ

    Stirling,
    You must know some very difficult people!
    I generally find people magnanimous in victory.
    One effect I have noticed that might be related is that when you want to concede, you generally don’t want to concede everything that was discussed. If you highlight that, it draws attention to the remaining issue. The other side may interpret it as you saying that was what “really mattered” in the first place, and be too committed to winning to be able to let it go.

    BTW great post Hal.

  • http://www.daviddfriedman.com David Friedman

    It’s an interesting argument.

    One possible solution is to do your disagreeing with people you are not in social competition with. That includes people who are dead but whose arguments survive in their books, people whose social status is much higher than yours already, people you are interacting with only via arguments, whom you will never meet and be in competition with, … . One could even set up an internet forum where all posters were anonymous, with some mechanism for matching up those with differing beliefs on a variety of subjects.

  • http://hamstermotor.motime.com pookleblinky

    David Friedman’s idea can be generalized according to Rawls’ idea of isonomy through ignorance. Just as Rawls asked what legal structure would be most fair to adopt in the absence of knowledge of where one’s position in society would be, so we can ask what argumentative structure would be optimal given zero knowledge of how it will be received and by whom.

    If you did not know whom a given argument came from, or who will see your counterargument, what is the best strategy for expressing your position?

    This constraint alone would mitigate the felt urge to engage in ad hominems, polite evasions, and most redundant information-reducing mechanisms so common in arguments over Big Questions.

    I think consistent adherence would lead us all to act like Feynman: clear, concise, and without any qualms at all about refuting an idiotic statement no matter where it comes from.

  • J Thomas

    Is this a secret code? Somebody looks on OB at this thread and sees the message, and depending on just how it’s phrased and who the author is they know which secret orders to follow?
