Disagreement Debate Status?

This blog has had many posts on disagreement, especially early on.  For example, I’ve posted on the basic idea that we can’t foresee to disagree, that we should have common priors and not accept genetic influences, and that this all should apply to logical truths and values.  I discussed specific math models, majoritarianism and meta-majoritarianism, how to share info without disagreeing, and two examples of when to agree and one of when to disagree.   I also give frequent talks on the subject. 
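
As a toy illustration of the info-sharing point (far simpler than the models linked above, and with made-up numbers): two Bayesian agents who start from the same prior over a coin's bias, each see private flips, and then exchange their posteriors, end up agreeing exactly.

```python
import random

# Toy sketch only: two Bayesian agents with a common Beta(1, 1) prior over a
# coin's unknown bias each observe private flips.  Exchanging their posterior
# parameters reveals the sufficient statistics, so both end up conditioning on
# the pooled evidence, and their estimates coincide exactly.

random.seed(0)
TRUE_BIAS = 0.7

def observe(n):
    """Privately flip the coin n times; return (heads, tails)."""
    heads = sum(random.random() < TRUE_BIAS for _ in range(n))
    return heads, n - heads

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

prior = (1, 1)  # common prior: uniform over the coin's bias

a_heads, a_tails = observe(20)
b_heads, b_tails = observe(20)
agent_a = (prior[0] + a_heads, prior[1] + a_tails)
agent_b = (prior[0] + b_heads, prior[1] + b_tails)
print("before sharing:", posterior_mean(*agent_a), posterior_mean(*agent_b))

pooled = (prior[0] + a_heads + b_heads, prior[1] + a_tails + b_tails)
print("after sharing: ", posterior_mean(*pooled), posterior_mean(*pooled))
```

Real humans, of course, fail nearly every assumption in this sketch, which is part of what the debate below is about.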

So what do folks think is the status of the debate on the rationality of disagreement?  That is, how reluctant do you think people should typically be to knowingly disagree with one another, and if the arguments I’ve outlined  seem to have some potential to influence this reluctance, what more is needed to see if they can fulfill this potential?

  • Tim Tyler

    I think that many disagreements arise because people are not out to “believe the truth” in the first place. Rather they often believe things that help promote their own interests.

    Females tend to think the male-female brain size difference doesn’t affect intelligence. Nurses believe nurses should be paid more. Homeopaths believe that they really can help people. Moslems believe the Al-Aqsa Mosque is the site of prophet Muhammad’s ascent into Heaven, whereas Jews think that is nonsense, and represents a false claim on the land of the Temple of Jerusalem – and so on.

    Such positions seem reasonably rational – assuming that your goal is to further your own interests, rather than to uncover the truth.

  • Carl Shulman

    “So what do folks think is the status of the debate on the rationality of disagreement?”

    The theory looks sound given the assumptions, and we can safely conclude that transparently honest Bayesian wannabes are extremely rare to nonexistent. The challenge is deciding how much to disagree when you know that:

    1. We, and our interlocutors, are working with pervasively biased brain architecture that prevents effective adjustment of quantitative estimates.
    2. The overwhelming majority of humans admit to having preferences over their beliefs, and explicitly or implicitly allow self-deception in accordance with those preferences.
    3. The statements of experts are not strongly aimed at honest communication, and there are large gaps between them and anonymous surveys, in part because of large-scale active distributed efforts at deception.
    4. The beliefs of the population and of experts are not a collection of independent noisy shots at the truth, but rather reflect correlated large-scale social processes.

    “That is, how reluctant do you think people should typically be to knowingly disagree with one another, and if the arguments I’ve outlined seem to have some potential to influence this reluctance, what more is needed to see if they can fulfill this potential?”

    I think we should be very reluctant to disagree with those we estimate as our epistemic peers or superiors, but it is difficult to reliably identify them, given overconfidence and other biases. The development of better tools to sort interlocutors into epistemic classes would seem to make the theory more easily applicable to human situations: lie detectors, prediction market track records, psychological profiles tested against accuracy in large-scale studies, standardized tests of subject-matter knowledge with results made public, and so on.
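
    A toy sketch of what one such tool might look like (an illustration only; the names and track records below are hypothetical): score each person's past probabilistic predictions with a Brier score, and treat lower scorers as better epistemic bets.

    ```python
    # Hypothetical track records, purely for illustration: rank people by the
    # accuracy of their past probabilistic predictions using a Brier score
    # (mean squared error between stated probabilities and outcomes; 0 is perfect).

    def brier_score(predictions):
        """Average of (stated probability - actual outcome)^2 over a track record."""
        return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

    track_records = {
        "alice": [(0.9, 1), (0.2, 0), (0.7, 1), (0.1, 0)],
        "bob":   [(0.6, 0), (0.5, 1), (0.9, 0), (0.4, 1)],
    }

    for name, record in sorted(track_records.items(), key=lambda kv: brier_score(kv[1])):
        print(f"{name}: Brier score {brier_score(record):.3f}")
    ```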

  • http://hanson.gmu.edu Robin Hanson

    Tim, yes of course, but the question is when you can have reasons to think you care about truth more than others, relative to your interests.

    Carl, yes it would be nice to get stronger clues about who is how rational in each situation, but even so we will almost always be in situations where all we have is relatively weak clues, so we must learn to deal with that situation.

  • http://drchip.wordpress.com/ retired urologist

    When I accepted the invitation to answer the questions in the Disagreement Case Study post, and posted the answers here, I discovered that it is far easier for a world-class intellectual to discuss disagreement rationally when the disagreement does not involve him personally. Via intellectual attribution bias, smart people are about nine times more likely to attribute their own position on a given subject to rational reasons than the positions of others, which they will attribute to emotional reasons, even when those positions are the same as their own (Michael Shermer).

  • Tim Tyler

    The question is when you can have reasons to think you care about truth more than others, relative to your interests.

    If you have reason to believe that you are a zealous truth-seeker and truth-propagator, then you will most likely recognise that the majority of other agents are not genuine truth-seekers – so you will probably wind up disagreeing with them a fair bit.

    What about the idea that everyone thinks they are a truth-seeker – but that most people are kidding themselves? I’m not convinced of the accuracy of that. Certainly some people exhibit more external signs and symptoms of caring about the truth than others. I think to some extent people have an idea of how much they value the truth – just as they know how willing they are to tell lies.

  • http://goodmorningeconomics.wordpress.com jsalvati

    OK, you say we should resist genetic influence, but don’t our priors come largely from our evolution? For example, we have Occam’s-razor-like priors because Occam’s razor has worked in the past (we are here, after all), because evolution has built us on what has worked in the past. Perhaps this idea needs some refinement; perhaps it is the influence of genetic variance that we should resist? I don’t think that’s quite right either.

  • Henry V

    Are we talking about disagreement about purely factual matters only?

    IMHO, most disagreement does *not* revolve around purely factual matters:

    “Government should provide public education.”
    “No. Government should subsidize private education.”

    “Income inequality is too high and tax policy should be used to correct it.”
    “No. Individual liberty should be valued more highly than income inequality.”

    “It’s time to clean the house.”
    “No, it’s not.”

    I’d wager that the last example causes more disagreement than the other two.

  • Z. M. Davis

    My current understanding is that while Bayesians and computationally-limited “Bayesian-wannabes” would agree, it does not therefore follow that we can become more Bayesian simply by agreeing. It may be useful to keep in mind that agreement should emerge if you interpret other people’s opinions as evidence, but if it doesn’t, you can’t just force it. IIRC, this is Eliezer’s opinion.

    I guess I should disclaim that it would take me more study to truly understand the math in the relevant papers.

  • mas

    Several other people I know and I have been convinced by your arguments. I now do my best to find the sources of disagreement between myself and everyone else and to root out their causes. On several occasions this has resulted in my accepting arguments I had previously thought were absurd. I have found, though, that putting the idea into practice usually involves a great deal of back-and-forth argumentation (to correct for differences in priors and to understand the other party’s reasoning). Many individuals I have come across interpret my behavior as an attempt to signal that I am smarter, cleverer, or make better arguments than other people. Unfortunately, this means that I am not taken very seriously. Or, if people do accept that I honestly am seeking truth, they will often not regard the truth-seeking process as worth their time (possibly because they assume they have the right perspective and I have the wrong one – which is often quite probable).

    Amongst the small group of people I know who do accept the arguments, however, I have managed to resolve most of the disagreements for which we have common sets of information.

    One thing that does bother me is that I find it very difficult to argue about the value of truth-seeking with someone who does not place a high value on truth-seeking. It’s quite possible that they have good arguments which they won’t present but which are perfectly valid.

  • Tim Tyler

    In biology, organisms may be expected to be interested in making copies of their genes. Seeking the truth might happen as a by-product of that – but we shouldn’t expect to find many organisms prioritising it that highly.

  • http://occludedsun.wordpress.com Caledonian

    Tim, yes of course, but the question is when you can have reasons to think you care about truth more than others, relative to your interests.

    There’s part of the problem, right there — caring about the truth is an absolute, not a relative.

    Very few people value possessing and stating the truth more than maintaining political power or being able to think well of themselves. The question we must ask is not “whether we care about truth more than the next guy”, but whether we care about truth as anything more than an instrument, a way of getting the things we really care about.

  • Dagon

    “So what do folks think is the status of the debate on the rationality of disagreement?”

    What debate? It’s clear that rational agents with common priors (or in many cases with similar reasonable priors) will not disagree. Therefore, any disagreements are due to non-rational agents or unreasonable priors.

    Which describes humans pretty well.

  • ao

    Robin,

    I would like to see some bloggingheads.tv debates on disagreement between you and someone you disagree with very much.

  • Matt

    Caledonian,

    Can you think of any cases where NOT being a truth-seeker is preferred (would better get us what we want)? If you can, how truth-seeking should we be?

  • Tim Tyler

    It’s clear that rational agents with common priors (or in many cases with similar reasonable priors) will not disagree.

    No, it isn’t. The agents must also have truth-seeking as their top priority. If they have other goals, they can easily find themselves with irreconcilable differences.

  • jls

    What I’m going to say is perhaps a little off-topic, since I’m not going to address the whole issue of disagreement, but rather the issue of majoritarianism. Perhaps I have misunderstood this idea, or not, but in any case I have many doubts about its usefulness. It seems to me that “agreeing with the majority unless you have a powerful reason to disagree” is an idea which makes sense, but we won’t have many chances of putting it into practice since the exception, having a powerful reason to disagree, is something that will happen almost always on most important topics. I have read several posts in this blog that have convinced me that the majority is biased (on economics, policy…), and that most people actually have the same biases, so they don’t average out but rather add up.
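
    To make the “biases add up” point concrete, here is a toy simulation (illustrative numbers only): independent errors wash out when you average many estimates, but a bias shared by every estimator survives the averaging untouched.

    ```python
    import random

    # Toy simulation: averaging many independent estimates recovers the truth,
    # but a bias shared by all estimators is not reduced by averaging at all.

    random.seed(1)
    TRUE_VALUE = 100.0
    N = 10_000

    independent = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

    SHARED_BIAS = 15.0  # every estimator leans the same way
    correlated = [TRUE_VALUE + SHARED_BIAS + random.gauss(0, 20) for _ in range(N)]

    print(sum(independent) / N)  # close to 100: independent noise averages out
    print(sum(correlated) / N)   # close to 115: the shared bias remains
    ```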

    I don’t see how meta-majoritarianism would fix this; it just seems to me that someone said (I don’t know who came up with the idea of meta-majoritarianism) “oh well, majoritarianism won’t work, so let’s try and fix it by making it ‘meta'”. But we are as biased when we judge other people’s abilities to judge issues as we are when we judge the issues directly.

    The thing is, a perfectly unbiased rationalist doesn’t need majoritarianism, but I haven’t found any, so the idea might still be useful, if someone finds out how to fix it. I have an idea, which might not be a good one, but what the heck, I’ll just tell you about it. I’d call it majoritarianism mixed with “overcoming bias for the people”. That is, if more people made the effort some of us make to overcome bias, even to a lesser extent, the opinion of the majority would be worth considering. It would require rationalist education, or rationalist proselytism (whatever you wish to call it), instead of what seems to me an elite of rationalist truth-seekers. I’m not saying everyone has to be an earnest truth-seeker, just that if people are aware of the more common biases, other biases might well cancel out instead of adding up.

    Of course I can think of some risks. The main one is something Eliezer pointed out a while ago: that some rationalism might in fact be worse than no rationalism at all. I can imagine what a population of “clever arguers” would look like. In the best case it would be irritating; in the worst, a disaster. Also, I recall Eliezer discussing whether bias can make people happy, and if I recall correctly his main conclusion was: oh well, you really have no choice once you have started seeking truth, since you can’t be selectively rational. Well, if my idea of rationalist proselytism were to be seriously considered, the issue would need a second thought. Maybe we don’t have the option of being selectively rational, but we can let other people be as irrational as they like, if that makes them happy.

  • jls

    Forgot to say: sorry for the long post.

  • Tim Tyler

    I give courtship as an example of when it can pay to believe untruths here:

    It is generally in a man’s genetic interests to maximise his number of descendants by maximising the number of his immediate offspring – by techniques such as impregnating as many females as possible, and skimping on parental care of offspring.

    However, this is not something prospective mates are particularly keen to hear from males. Instead females prize traits such as fidelity. They generally prefer monogamous relationships, which allow the most scope for males offering parental care.

    Consequently males interested in pursuing this sort of strategy (which evolutionary theory suggests are most males) are put into a position where they have to deceive their prospective mates about their intentions.

    Hamilton suggests that they may do this by employing double-think – actually believing themselves to be whatever the females desire them to be – while not necessarily acting according to those beliefs.

    It often pays to believe untruths when others in the rest of society act favourably towards believers – because lying convincingly is difficult for humans.

  • http://profile.typekey.com/michaeljameswebster/ michael webster

    It is my understanding from following some of the threads that the formal model which gives currency to Robin’s views is Aumann’s Formal Theory of Common Knowledge, which I understand Robin has expanded on.

    Before entering this debate, I should like to know the ground rules about formal models accepted by this community.

    Is there a general acceptance that a formal model of reasoning is:

    a) truth-functional;
    b) a translation of the natural language into a formal language;
    c) one which preserves a set of inferences, or an inference, in the natural language?

  • rcriii

    Matt: Can you think of any cases where NOT being a truth-seeker is preferred (would better get us what we want)? If you can, how truth-seeking should we be?

    What if seeking the truth (or resolving a disagreement) is more costly than living with the disagreement/error? In that case we should focus our truth seeking on cases where:

    1) An error would be costly and/or truth-seeking is cheap
    2) We can significantly affect the outcome.
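
    As a rough back-of-the-envelope version of that rule (made-up numbers, and the helper function is hypothetical):

    ```python
    # Weigh the expected loss from living with a possible error against the cost
    # of resolving it, discounted by how much the answer could change what we do.

    def worth_investigating(p_wrong, cost_of_error, p_affects_outcome, cost_of_checking):
        """True if the expected loss avoided exceeds the cost of truth-seeking."""
        expected_loss_avoided = p_wrong * cost_of_error * p_affects_outcome
        return expected_loss_avoided > cost_of_checking

    print(worth_investigating(0.3, 10_000, 0.8, 200))  # True: costly, actionable error
    print(worth_investigating(0.3, 50, 0.1, 200))      # False: cheap error we can barely affect
    ```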

  • Stuart Armstrong

    Robin, what do you do when confronted with people and arguments you disagree with?

    Trying to “never agree to disagree” is pointless in a world where disagreements are so entrenched; the correct question is how do we behave in practice, while knowing that in theory we should “never agree to disagree”.

    I’ve boiled down most of these posts on disagreement to a greater humility towards my certainties, and a few rules of thumb to keep in mind during arguments. Should I expect to get more out of them, in practice?

  • Tim Tyler

    A few quotes from Darwin’s Cathedral – by David Sloan Wilson:

    Rationality is not the gold standard against which all other forms of thought are to be judged. Adaptation is the gold standard against which rationality must be judged, along with all other forms of thought. […]

    If there is a trade-off between the two forms of realism, such that our beliefs can become more adaptive only by becoming factually less true, then factual realism will be the loser every time. […]

    Factual realists detached from practical reality were not among our ancestors. It is the person who elevates factual truth above practical truth who must be accused of mental weakness from an evolutionary perspective. […]

    It is only when a pair of factual truth seekers meet that they can’t disagree for long – and we can’t expect to find many such seekers left in this modern era.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    I’ve written a number of posts on disagreement myself, but I think that most reasonable parties who’ve been keeping track of the debate, at this point, should confess the following:

    1) Ideal rational agents with common priors should never have common knowledge of disagreement.

    2) In the real world, two sane rationalists with common knowledge of each other’s sanity should not have common knowledge of disagreement. (“Sane” here is a variable that ranges over different definitions of sanity, but it excludes e.g. priors too crazy to reflect on their own causal origins.)

    3) The fact that humans persistently have common knowledge of disagreements indicates that something is very wrong.

    4) (3) shows that humans systematically overestimate their own meta-rationality, that is, their ability to judge whether others are more or less rational than themselves.

    5) …and that, in a lot of cases, Disagreements Aren’t About Belief.

    It’s where we start talking about practical remedies for this dreadful, dreadful situation that I think we begin entering into the area of – ahem – reasonable disagreement. I don’t think the debate has settled the question of what to do when you find yourself disagreeing.

  • Tim Tyler

    Ideal rational agents with common priors should never have common knowledge of disagreement.

    As I pointed out further up the page, such agents must also have truth-seeking as their top priority for this to hold. If they have other goals, they can easily find themselves with irreconcilable differences.

    You can surely be rational and not have truth seeking as your primary goal. Rationality and goals are totally orthogonal things – at least in my book. Does the repeated occurrence of this curious idea mean that people are mixing these concepts together?

    The fact that humans persistently have common knowledge of disagreements indicates that something is very wrong.

    It indicates that humans do not have truth-seeking as their primary goal. Of course, evolutionary theory suggests that agents with truth-seeking as their primary goal can be expected to be rare – so this hardly seems like news to me.