Disagreement Is Near-Far Bias

Back in November I read this Science review by Nira Liberman and Yaacov Trope on their awkwardly named "Construal level theory", and wrote a post I estimated "to be the most dense with useful info on identifying our biases I've ever written":

[NEAR] All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits. 

[FAR] Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts; central global symbolic concerns; confident predictions; polarized evaluations; socially distant people with stable traits.

Since then I've become even more impressed with it, as it explains most biases I know and care about, including muddled thinking about economics and the future.  For example, Ross's famous "fundamental attribution error" is a trivial application. 

The key idea is that when we consider the same thing from near versus far, different features become salient, leading our minds to different conclusions.  This is now my best account of disagreement.  We disagree because we explain our own conclusions via detailed context (e.g., arguments, analysis, and evidence), and others' conclusions via coarse stable traits (e.g., demographics, interests, biases).  While we know abstractly that we also have stable relevant traits, and they have detailed context, we simply assume we have taken that into account, when we have in fact done no such thing. 

For example, imagine I am well-educated and you are not, and I argue for the value of education and you argue against it.  I find it easy to dismiss your view as denigrating something you do not have, but I do not think it plausible I am mainly just celebrating something I do have.  I can see all these detailed reasons for my belief, and I cannot easily see and appreciate your detailed reasons. 

And this is the key error: our minds often assure us that they have taken certain factors into account when they have done no such thing.  I tell myself that of course I realize that I might be biased by my interests; I'm not that stupid.  So I must have already taken that possible bias into account, and so my conclusion must be valid even after correcting for that bias.  But in fact I haven't corrected for it much at all; I've just assumed that I did so.

  • Unknown

    Robin, this is excellent.

  • Julian Morrison

    This sounds like it could be an important part of explaining what Anders Sandberg was wondering about, “silliness”, whereby some particular sorts of valid ideas are reflexively dismissed with a completely closed mind.

    Per this, silliness could be defined as “a Near response to a Far stimulus”. It’s OK for Far things to stay Far, and it’s OK for them to organically drift into Near and receive a Near response, like spending money towards targeted projects. But to spend targeted money on a Far problem is “silly”. (Spending untargeted money, a Far response, is “sensible”.)

    Most transhumanist projects, from anti-aging to AI, are classified Far by the general public.

  • frelkins

    @Robin

    So I must have already taken that possible bias into account, and so my conclusion must be valid even after correcting for that bias. But in fact I haven’t corrected for it much at all; I’ve just assumed that I did so.

    Ok Robin, so what to do? To take your example, someone arguing for education gives you 90% that it’s useful, while the one arguing against it gives you 10% that it’s useful. So you exchange information and both revise your estimates: now the pro-educator thinks it’s 60% useful and the anti-educator thinks it’s 40% useful.

    Haven’t you then both done the right thing? Aren’t you on the path then to agreement? (This example is given knowing that most folk here at OB understand that education as currently practiced is mostly credential signaling, etc.) What more would you have people do?
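
    A minimal sketch of the arithmetic in the comment above, assuming a simple linear opinion pool; the 0.375 weight each side puts on the other's estimate is only implied by the 90/10 and 60/40 figures, not stated in the thread:

    ```python
    # Linear opinion pooling: keep weight (1 - w) on your own estimate and put
    # weight w on the other side's. The weight 0.375 below is inferred from the
    # comment's numbers (90% -> 60%, 10% -> 40%), not given explicitly.
    def pooled(own: float, other: float, w: float) -> float:
        """Revised probability that education is useful, after one exchange."""
        return (1 - w) * own + w * other

    pro, anti = 0.90, 0.10
    print(pooled(pro, anti, 0.375))   # 0.60 -- the revised pro-education estimate
    print(pooled(anti, pro, 0.375))   # 0.40 -- the revised anti-education estimate
    # Only at w = 0.5 do both sides land on the same number (0.50), i.e. agreement.
    print(pooled(pro, anti, 0.5), pooled(anti, pro, 0.5))
    ```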

  • http://drchip.wordpress.com/ retired urologist

    In spite of the Nobel awarded to Kahneman and Tversky regarding the ill design of the human brain for making rational decisions, Alex Pouget at the University of Rochester has recently presented evidence that the brain is hard-wired to make optimal decisions, given the info available, but only when it does so unconsciously. Pouget says, “Kahneman’s approach was to tell a subject that there was a certain percent chance that one of two choices in a test was “right.” This meant a person had to consciously compute the percentages to get a right answer—something few people could do accurately”… “Once we started looking at the decisions our brains make without our knowledge, we found that they almost always reach the right decision, given the information they had to work with.” The article is here, and the Science Daily review is here.

    According to this line of thought, it would seem the unconscious biases that you discuss are more likely to be correct than the conclusions drawn from active “rational” analysis.

    Disclaimer: I know nothing about Dr. Pouget or the validity of his research.

  • http://zbooks.blogspot.com Zubon

    Most transhumanist projects, from anti-aging to AI, are classified Far by the general public.

    So the practical effect of Ray Kurzweil’s efforts is to get more people thinking of them as (The Singularity is) Near, while the Long Now Foundation is inadvertently helping people think of them as Far (10,000 year clock, 100 year bets)?

  • Johnicholas

    In contrast to Zubon, I see the Long Now Foundation as helping people think of thousand-year projects as Near.

    If people reflexively associate “now” with “the current 10kyr interval”, then we’ve brought a lot of issues from “far-type thinking” into “near-type thinking”.

    If I understand correctly, this blog says that if you feel comfortable with your decision or estimate, you haven’t actually applied the bias-correcting factors. When you look at the bias-corrected decision or estimate, you should feel your intuition pulling in a particular direction, the opposite of the (process-mandated) correction.

  • http://profile.typekey.com/huono_ekonomi/ Mikko

    Robin: Very interesting.

    Btw, a new paper on heuristics and biases of doctors is out: How Psychiatrists Think.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    It doesn’t seem that this needs to be postulated as a distance-specific systematic bias. The starting point of disagreement is a mismatch of beliefs, which can come from ignorance on either or both sides. Resolving the disagreement then requires a convincing method of communicating the truth, which in some cases just isn’t there, in particular when discussing the future, or when both sides are clueless.

    Beliefs at the current stage are driven either by intuition, or by known algorithms of intelligence, such as mathematics and experimental science. Intuition is faulty outside immediate perception, but in many domains it’s still the best means of rationality we have. Sometimes it becomes possible to augment it with known algorithms of intelligence, to verify the guesses it provides, to guide it in the right direction. Sometimes we are left to ourselves. Without experimental science we were as clueless about the detailed workings of the world as we now are about many facts about the future. Each nontrivial question about the future for all practical purposes requires its very own scientific field, to discover truths that our hapless intuition cannot. And we can’t even use experiments or more or less “direct” observations, our strongest somewhat-understood algorithms of intelligence. Some questions just can’t be reliably answered without proper tools, and no cheap trick will do.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Julian, silliness must fit in here somewhere, but it surely isn’t as simple as saying that it is silly to think of typically far things in a near way.

    frelkins, the issue is when can you have reasonable evidence that you are less wrong than average and your disagreeing partner is more wrong than average.

    retired, the whole point is that given two differing estimates of the same thing, at least one must be biased. They can’t both be your best estimate. Sure it would be best to look at something both ways and then average the two, but it appears we almost never do that.

  • http://drchip.wordpress.com/ retired urologist

    Robin Hanson: I don’t see how your response addresses Pouget’s findings, again quoted: “Once we started looking at the decisions our brains make without our knowledge (my emphasis), we found that they almost always reach the right decision, given the information they had to work with.”

  • http://yudkowsky.net/ Eliezer Yudkowsky

    This explains persistent disagreement on the meta-level – why people don’t take others’ estimates as heavily as their own, and correct for them. The initial disagreement still has to get started on the object level, somewhere. Not objecting, just think it’s worth specifying. And to inject my own pet theories of disagreement, I think it’s worth noting that directly exchangeable information doesn’t have to be trusted as much as intuitions, shifts of focus and emphasis, and choices between two compelling-sounding arguments. So the less communicable the source of a belief is, indeed, the more likely we are to get persistent meta-disagreement when we can see our own reasons but not the other person’s, and see the other’s biases but not our own.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Maybe the situation where you prefer your own belief and don’t like to submit to the modesty argument can be seen as a “non-opaque belief bias”: you not only see the conclusion of your own belief, but also glimpses of how your intuition produced it, whereas from the opponent in a disagreement you only see a conclusion, if a strong argument can’t be made. The pieces of your own intuitive understanding feel right, whereas the dispersed pieces of your opponent’s intuitive understanding that you see in discussions seem wrong or irrelevant. And so you prefer your own conclusion, even though you may have the same chance of being right.

    But there are also good reasons for not accepting the modesty argument at face value. You are more in control of your own intuition, since you have the whole piece of knowledge and would be able to update on it, to detect both its strengths and weaknesses from distant facts, whereas (partially) accepting your opponent’s conclusion leaves you little to work with. After you have used the modesty argument and included your opponent’s conclusion in your considerations, any new data can invalidate its strength, so you’d need to check back on your opponent’s updated conclusion as well, instead of sticking with that old inflexible conclusion. It’s like correcting your model of the weather by averaging its output with another model’s output on a given day without seeing the model itself: all you can get from that is a correction for that day, and maybe a tiny bit of an update for the model as a whole, much weaker than what you get even from raw data.

    This point diminishes as disagreement becomes about conclusions that have important well-understood theories conditional on them, as you get the whole theory in that case, not just an isolated hypothesis.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Johnicholas, there should certainly be an uncomfortable period when you adjust to accounting for your bias, though it is not obvious you must always remain uncomfortable with such adjustment.

    Vladimir and Eliezer, yes disagreements typically start with diverging analyses, evidence, etc. I’m talking about why they persist after becoming mutually known.

    Eliezer, yes direct exchange of info detail can reduce info detail as a cause of disagreement, but yes many belief sources/causes cannot be easily exchanged.

    Vladimir, I’m not following you. Yes you have more meta-data on your reasoning. So does your disputant. How can that justify you both sticking to differing views?

  • Grant

    retired urologist,

    I’ve always suspected people might be more rational than many researchers indicate. Evolution doesn’t select for rational conscious problem-solving or survey-taking, after all; it selects for rational action (which may include cognitive biases in our conscious minds). However, Pouget’s work doesn’t seem like it makes any claims about conscious decision-making, which is important. Some decisions require conscious reasoning, unlike following dots across a screen or stopping at stop signs.

  • http://profile.typekey.com/halfinney/ Hal Finney

    The sticking point is still this assumption that you have taken salient factors into account, when you have not. I don’t see how this comes about. Why should you assume that you took into account your more detailed access to your own arguments, if you didn’t do so? Does the near-far bias explain why you thought you took something into account, when you didn’t?

    I see another place the near-far bias helps to explain disagreement, especially in the context of the Aumann results. Your own mind and your own argument is near, and is seen in detailed and concrete terms. The other mind and other argument is far, and is seen in general and abstract terms. This asymmetry makes it seem like it is easier for you to understand the other person’s perspective than it is for the other person to understand what you are saying. Given that you have an over-simplified caricature of the other mind, for you to understand the other person is for a complex mind to understand a simple one, which is easy. But in the other direction, a simple mind has to understand a complex one, which seems impossible. So there is an inherent bias towards believing that the other person just doesn’t understand what you are saying, while you fully grasp his arguments. I think I saw some expressions of this perspective in the disagreement discussion you had with Eliezer.

    This bias, as with most, would be subconscious. The fact that we can’t overcome it easily, even knowing about it, suggests that the conclusions of our reasoning processes are controlled by unconscious factors more than conscious ones.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Robin:

    Vladimir, I’m not following you. Yes you have more meta-data on your reasoning. So does your disputant. How can that justify you both sticking to differing views?

    It doesn’t; that’s why I wrote only about acceptance “at face value”. The point is that even when you do accept your opponent’s conclusion, it only makes you marginally stronger, and only temporarily. If neither of you can produce an explicit convincing argument, the question is far from being resolved, and so you should expect to need to update your position when new data comes to your attention, but you can’t update your opponent’s static conclusion. The acceptance of your opponent’s position quickly expires if you stop communication, and so your opponent’s position is worth less to you than your own, even when you know that it’s equally likely to be right.

    Your opponent’s position is a teacher’s password to you, even if you know the words are from a well-vetted textbook, almost certainly stating correct facts.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Note that this model also justifies the practice of holding a persistent disagreement: you continue to communicate, and update each other’s intuitive models and not just declarative conclusions. The disagreement is resolved when one person understands (either explicitly or on an intuitive level) the other side’s intuitive model, at which point both models are integrated, and if that person is not wrong about the success of communication (which he can be), he is now in a superior position to judge the other side’s correctness, and so he can either declare himself correct or switch sides (modulo learned lessons). In practice, it may be impossible or insanely expensive to truly understand the other side’s model (e.g. you may need to become an expert in another discipline), and so the disagreement holds.

  • http://retiredurologist.com retired urologist

    I thought the “Near-Far Bias” concept of disagreement had a familiar ring. It is similar to Michael Shermer’s description of “intellectual attribution bias” as described in his book Why People Believe Weird Things, specifically the chapter entitled “Why Smart People Believe Weird Things.” In it, he states, “smart people are about nine times more likely to attribute their own position on a given subject to rational reasons than they are other people’s position, which they will attribute to emotional reasons, even if that position is the same as theirs.” He also concludes, “smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.” In other words, the smarter you are, the less likely you are to recognize your own bias. In this light, if one were truly successful at “Overcoming Bias”, it would signal low g. So far, by my observation, there seems little risk of that occurring.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Hal, yes your thinking they are easy to understand does embolden you more to disagree.

    retired, yes that is an example.

    Vladimir, I agree that it is better to understand why a position is good, but that doesn’t excuse you from taking the best position you know of, whether you understand why or not.

  • http://retiredurologist.com retired urologist

    @Robin Hanson: retired, yes that is an example

    I think you may have it backwards. Intellectual attribution bias is not an “example” of your near-far concept; “near-far” is an example of IAB. Interestingly, IAB makes this less apparent to you. And since you are very smart, no one is likely to convince you otherwise, just as Shermer observes.

    For example, imagine I am well-educated and you are not, and I argue for the value of education and you argue against it. I find it easy to dismiss your view as denigrating something you do not have, but I do not think it plausible I am mainly just celebrating something I do have. I can see all these detailed reasons for my belief, and I cannot easily see and appreciate your detailed reasons.

    This would be a perfect scenario to explain what intellectual attribution bias means. The fact that someone else noticed this reason for disagreement long ago does not diminish the importance of your adaptation; it’s simply not the “Hanson Theory”.

  • vanveen

    retired urologist,

    The near-far bias is Robin’s nickname for construal level theory, which is a general theory of how the mind operates over time or within frames. Essentially, thinking of psychologically near or distant concepts increases the probability of other psychologically near or distant concepts influencing future thoughts, which will lead unavoidably to various types of bias. The attribution bias you’ve cited is an example of one such type.

  • http://retiredurologist.com retired urologist

    @vanveen re: construal level theory

    Thank you for the clarification. I suppose I should have been sharp enough to see the association. My major in college was psychology, so I am painfully aware that it is not a science (nor is social science or economics). How does one justify placing such importance on this unscientific construal approach to human behavior, while at the same time eschewing similar conduct by the medical profession when it uses perceived patterns of human response (to disease, treatments, etc.), not necessarily reflecting true scientific thought, in its interaction with patients?

  • androit

    Retired: I think Hanson’s beef with medicine is grounded in science: the low (or nonexistent) marginal utility of the last 50% spent, as established statistically.

    What I’ve always been curious about is why single out that sector of the economy? What sector couldn’t be pruned back 30-50%? At the individual level, what goods or services?

    Manufacturing, defense, finance, housing? Agriculture, maybe. (Hell, probably not.)

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Hal, hopefully today’s post better explains “why you thought you took something into account, when you didn’t.”

    androit, if you can show me data on no-harm 30-50% cutbacks in other sectors, I’m all ears.

  • androit

    I will do a little thinking and research and try to write up a potential post for submission (or to clutter up the open thread).
