I will do a little thinking and research and try to write up a potential post for submission (or to clutter up the open thread).

Hal, hopefully today's post better explains "why you thought you took something into account, when you didn't."

androit, if you can show me data on no-harm 30-50% cutbacks in other sectors, I'm all ears.

Retired: I think Hanson's beef with medicine is grounded in science: the low (or nonexistent) marginal utility of the last 50% spent, as established statistically.

What I've always been curious about is why single out that sector of the economy? What sector couldn't be pruned back 30-50%? At the individual level, what goods or services?

Manufacturing, defense, finance, housing? Agriculture, maybe. (Hell, probably not.)

@vanveen re: construal level theory

Thank you for the clarification. I suppose I should have been sharp enough to see the association. My major in college was psychology, so I am painfully aware that it is not a science (nor is social science or economics). How does one justify placing such importance on this unscientific construal approach to human behavior, while at the same time eschewing similar conduct by the medical profession when it uses perceived patterns of human response (to disease, treatments, etc.), not necessarily reflecting true scientific thought, in its interaction with patients?

retired urologist,

The near-far bias is Robin's nickname for construal level theory, which is a general theory of how the mind operates over time or within frames. Essentially, thinking of psychologically near or distant concepts increases the probability of other psychologically near or distant concepts influencing future thoughts, which will lead unavoidably to various types of bias. The attribution bias you've cited is an example of one such type.

@Robin Hanson: retired, yes that is an example

I think you may have it backwards. Intellectual attribution bias is not an "example" of your near-far concept; "near-far" is an example of IAB. Interestingly, IAB makes this less apparent to you. And since you are very smart, no one is likely to convince you otherwise, just as Shermer observes.

For example, imagine I am well-educated and you are not, and I argue for the value of education and you argue against it. I find it easy to dismiss your view as denigrating something you do not have, but I do not think it plausible I am mainly just celebrating something I do have. I can see all these detailed reasons for my belief, and I cannot easily see and appreciate your detailed reasons.

This would be a perfect scenario to explain what intellectual attribution bias means. The fact that someone else noticed this reason for disagreement long ago does not diminish the importance of your adaptation; it's simply not the "Hanson Theory".

Hal, yes your thinking they are easy to understand does embolden you more to disagree.

retired, yes that is an example.

Vladimir, I agree that it is better to understand why a position is good, but that doesn't excuse you from taking the best position you know of, whether you understand why or not.

I thought the "Near-Far Bias" concept of disagreement had a familiar ring. It is simiiar to Michael Shermer's description of "intellectual attribution bias" as described in his book Why People Believe Weird Things, specifically the chapter entitled "Why Smart People Believe Weird Things. In it, he states, "smart people are about nine times more likely to attribute their own position on a given subject to rational reasons than they are other people’s position, which they will attribute to emotional reasons, even if that position is the same as theirs." He also concludes, “smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons”. In other words, the smarter you are, the less likely you are to recognize your own bias. In this light, if one were truly successful at "Overcoming Bias", it would signal low g. So far, by my observation, there seems little risk of that occurring.

Note that this model also justifies a practice of holding a persistent disagreement: you continue to communicate, updating each other's intuitive models and not just declarative conclusions. The disagreement is resolved when one person understands (either explicitly or on an intuitive level) the other side's intuitive model, at which point both models are integrated, and if that person is not wrong about the success of communication (which he can be), he is now in a superior position to judge the other side's correctness, so he can either declare himself correct or switch sides (modulo learned lessons). In practice, it may be impossible or insanely expensive to truly understand the other side's model (e.g. you may need to become an expert in another discipline), and so the disagreement holds.

Robin:

"Vladimir, I'm not following you. Yes you have more meta-data on your reasoning. So does your disputant. How can that justify you both sticking to differing views?"

It doesn't; that's why I wrote only about acceptance "at face value". The point is that even when you do accept your opponent's conclusion, it only makes you marginally stronger, and only temporarily. If neither of you can produce an explicit convincing argument, the question is far from resolved, so you should expect to need to update your position when new data comes to your attention, but you cannot update your opponent's static conclusion. Your acceptance of your opponent's position quickly expires if you stop communicating, and so your opponent's position is worth less to you than your own, even when you know it is equally likely to be right.

Your opponent's position is a teacher's password to you, even if you know the words are from a well-vetted textbook, almost certainly stating correct facts.

The sticking point is still this assumption that you have taken salient factors into account, when you have not. I don't see how this comes about. Why should you assume that you took into account your more detailed access to your own arguments, if you didn't do so? Does the near-far bias explain why you thought you took something into account, when you didn't?

I see another place the near-far bias helps to explain disagreement, especially in the context of the Aumann results. Your own mind and your own argument are near, and are seen in detailed and concrete terms. The other mind and other argument are far, and are seen in general and abstract terms. This asymmetry makes it seem like it is easier for you to understand the other person's perspective than it is for the other person to understand what you are saying. Given that you have an over-simplified caricature of the other mind, for you to understand the other person is for a complex mind to understand a simple one, which is easy. But in the other direction, a simple mind has to understand a complex one, which seems impossible. So there is an inherent bias towards believing that the other person just doesn't understand what you are saying, while you fully grasp his arguments. I think I saw some expressions of this perspective in the disagreement discussion you had with Eliezer.

This bias, as with most, would be subconscious. The fact that we can't overcome it easily, even knowing about it, suggests that the conclusions of our reasoning processes are controlled by unconscious factors more than conscious ones.

retired urologist,

I've always suspected people might be more rational than many researchers indicate. Evolution doesn't select for rational conscious problem-solving or survey-taking, after all; it selects for rational action (which may include cognitive biases in our conscious minds). However, Pouget's work doesn't seem to make any claims about conscious decision-making, which is important. Some decisions require conscious reasoning, unlike following dots across a screen or stopping at stop signs.

Johnicholas, there should certainly be an uncomfortable period when you adjust to accounting for your bias, though it is not obvious you must always remain uncomfortable with such adjustment.

Vladimir and Eliezer, yes disagreements typically start with diverging analyses, evidence, etc. I'm talking about why they persist after becoming mutually known.

Eliezer, yes direct exchange of info detail can reduce info detail as a cause of disagreement, but yes many belief sources/causes cannot be easily exchanged.

Vladimir, I'm not following you. Yes you have more meta-data on your reasoning. So does your disputant. How can that justify you both sticking to differing views?

Maybe the situation where you prefer your own belief and don't like to submit to the modesty argument can be seen as a "non-opaque belief bias": you not only see the conclusion of your own belief, but also glimpses of how your intuition produced it, whereas from your opponent in a disagreement you only see a conclusion, if a strong argument can't be made. The pieces of your own intuitive understanding feel right, whereas the dispersed pieces of your opponent's intuitive understanding that you see in discussions seem wrong or irrelevant. And so you prefer your own conclusion, even though you may have the same chance of being right.

But there are also good reasons for not accepting the modesty argument at face value. You are more in control of your own intuition, since you have the whole piece of knowledge and can update on it, detecting both its strengths and weaknesses from distant facts, whereas (partially) accepting your opponent's conclusion leaves you little to work with. After you have used the modesty argument and taken your opponent's conclusion into consideration, any new data can invalidate its strength, so you would need to check back on your opponent's updated conclusion as well, instead of sticking with that old inflexible conclusion. It's like correcting your weather model by averaging its output with another model's output on a given day, without seeing that model itself: all you can get from that is a correction for that day, and maybe a tiny bit of update for the model as a whole, much weaker than what you get even from raw data (a small sketch below makes this concrete).

This point diminishes as disagreement becomes about conclusions that have important well-understood theories conditional on them, as you get the whole theory in that case, not just an isolated hypothesis.
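To make the weather analogy above concrete, here is a minimal Python sketch. Everything in it (the toy running-mean model, the temperatures, the opponent's forecast of 26.0) is a hypothetical illustration rather than anything from the discussion: your own model can be re-fit as each day's observation arrives, while the opponent's shared conclusion is a single frozen number whose corrective value fades.

```python
# Minimal illustrative sketch (hypothetical numbers and model, not from the
# comment): your own model keeps updating on raw observations, while the
# opponent's conclusion is a single static forecast shared once.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "true" daily temperatures with a slow seasonal drift.
true_temps = 20 + 5 * np.sin(np.arange(30) / 5) + rng.normal(0, 1, 30)

class OwnModel:
    """Your own model: a running mean you can re-fit as data arrives."""
    def __init__(self):
        self.history = []

    def update(self, observation):
        self.history.append(observation)

    def predict(self):
        return float(np.mean(self.history)) if self.history else 20.0

own = OwnModel()
opponent_forecast = 26.0  # opponent's conclusion: shared once, never updated

for day, temp in enumerate(true_temps):
    own.update(temp)
    blended = 0.5 * (own.predict() + opponent_forecast)
    # Averaging with the static forecast only corrects you near the day it
    # was made; it cannot absorb new data, so its value "expires" while your
    # own model keeps improving.
    if day in (0, 10, 29):
        print(f"day {day}: own={own.predict():.1f}, blended={blended:.1f}, actual={temp:.1f}")
```

The only way to refresh the blended estimate is to go back to the opponent for an updated conclusion; the raw data, by contrast, keeps improving the model you actually hold.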

This explains persistent disagreement on the meta-level: why people don't weigh others' estimates as heavily as their own and correct toward them. The initial disagreement still has to get started on the object level, somewhere. Not objecting, just think it's worth specifying. And to inject my own pet theories of disagreement, I think it's worth noting that directly exchangeable information doesn't have to be trusted as much as intuitions, shifts of focus and emphasis, and choices between two compelling-sounding arguments do. So the less communicable the source of a belief is, indeed, the more likely we are to get persistent meta-disagreement when we can see our own reasons but not the other person's, and see the other's biases but not our own.

Robin Hanson: I don't see how your response addresses Pouget's findings, again quoted: "Once we started looking at the decisions our brains make without our knowledge (my emphasis), we found that they almost always reach the right decision, given the information they had to work with."
