17 Comments

Nick - you briefly mention organizational communications. I think the topic of organizational bias is fascinating, but very different from individual bias, and requires its own taxonomy. The most common type of organizational bias I see is basically failed mechanism design: when the natural outcome of a system is different from the intended one because the mapping from mechanism to outcome is really hard to make. Or sometimes, when the mechanism wasn't even designed consciously, but just evolved, and doesn't necessarily serve organizational goals. Well, that and misrepresentation, of course :).

Anyway, I definitely agree on the need for a taxonomy.

I teach a course in Critical Thinking and am working on a text of my own, so this is something that has taken up a lot of my attention for a while. I think Bostrom is dead on about the need. I've actually worked on a taxonomy, but interestingly, whereas Bostrom's appears to be divided as to cause, I've been thinking to type them functionally--that is, by what the biases typically do in reasoning and where they typically occur. I think the causal story is very interesting, and one that deserves a great deal of thought as well, but inasmuch as rationality is a normative concern, the causal sources of biases are a different part of the story--sort of 'how things can go wrong'.

Psychologists, as Bostrom and everyone here well knows, have whole libraries devoted to descrying the various causal mechanisms, most of which have been brought up here. I agree that these are treated in a much too wholesale and unorganized fashion.

One thing that I've been thinking about is the place of what are commonly called fallacies (mostly of the informal sort, but also lots of the common formal ones--Wason stuff, etc.) within the taxonomy of biases. I think fallacies are best viewed as a type of bias, and not the other way around. I'd be interested to hear others' comments on that.

There is a relevant Psychological Bulletin article by Hal Arkes (you might know him from reading about sunk costs) that you all might find interesting: "Costs and Benefits of Judgment Errors: Implications for Debiasing," Psychological Bulletin, 1991, vol. 110, no. 3.

He provides a threefold taxonomy based on strategy errors (these are remedied by thinking more), association-based errors (these are exacerbated by thinking more, e.g., anchoring), and psychophysical errors (these are mostly errors that affect preferences, like loss aversion, and it's unclear what "debiasing" would mean for them).

As the saying goes, it's an oldie but a goodie...

I think having a classification framework for biases -- a "taxonomy" -- is a great idea. But I don't like the Type 1, 2, etc. flavor. It could be confused with type I and II errors in statistics; moreover, it is non-descriptive.

Jeff

This piece is excellent and much appreciated.

With regard to Robin's aboriginal comment (the only one I've read thus far as I wade into the comment section), I assume that he's asking a methodological question rather than a moral one in rhetorically asking what benefit there is to the study of bias. If I understand him correctly, he's wondering whether the STUDY of bias will lend itself to our employing biased logic less often. In answering his own question, however (and my apologies, Robin, for the third-person usage, but it's too late now ;-), Robin appears to consider acceptable the application of one bias (such as "shaming" the idea of having a particular bias) in routing another - a thing which appears to me akin to chasing one's tail (predicated on the assumption that 'shaming', or otherwise making unfashionable, a thought or belief beyond those that are communally accepted is likely to wreak some violence on the discovery of additional truths).

As for "social biases of type 1", it would appear to me that (at least according to the atheistic views of many contributors here) we can include among them the triad of beliefs in "meaning", "morality" and some sort of inherent "value" in human life.

Can we not?

mnuez
www.mnuez.blogspot.com

Robin, maybe a better example of a general social bias would be the bias against your conclusions regarding disagreement. Disagreement itself would seem to be largely rooted in individual biases, perhaps mainly in the overconfidence bias. But there would be a social motive to say that your general conclusions are incorrect, somewhat like this: after persistently disagreeing with someone, we make an agreement -- I'll admit you're being reasonable even though you disagree with me, if you'll admit that I'm being reasonable even though I disagree with you. There is a collective social benefit from this because in this way we avoid the pain of having to confront our irrationality.

Of course there might be a way to overcome this bias anyway, namely by pointing out there would be even greater social benefits to putting up with the pain and so beginning to overcome our irrationality.

Eliezer: "...the particular bias of a group of futurists who believe that China will achieve ascendancy over the US on May 3, 2028 using a hang glider and a spool of orange thread". I would think that would be a Type III bias if it is a bias at all, rather than just error. Type II biases were meant to originate in some way from the architecture or "hardware" or operating systems of our brains, rather than from some particular cluster of misguided beliefs.

Yes, I'm hoping that the taxonomy can be either improved or at least more clearly articulated and explained.

Nicholas, Type III biases might not "really" be biases - in fact, I think I tentatively proposed a definition in one of the early posts on this blog that would have made that kind of error non-bias. However, many of the posts on this blog address Type III error... This might be a case where there is no truth to be discovered, just a convention to be stipulated. Even on the above taxonomy, though, Type III bias clearly fades into simple error as we consider not general worldviews and broad assumptions but specific pieces of information. Somebody who assigns 80% credence to Sweden being a member of NATO is not very accurate, but we would not say he suffers from a Sweden-is-in-NATO overconfidence bias.

I was a bit worried about whether Type-III biases --'arise from our avoidable ignorance of facts or lack of insights, the possession of which would have improved our epistemic accuracy across a broad domain.'-- are really biases.

Certainly there can be truths of this kind, but not knowing them seems more like a misfortune rather than a bias, which, in this context, I would want to be a kind of systematic epistemic irrationality, and I would also take to imply some fault in the believer. Someone subject to your type III bias might be impeccably assessing the evidence they have. Perhaps you mean to put the weight on the avoidable ignorance, but again, unless there is some systematic epistemic irrationality in that ignorance I don't see why it must be a bias.

Perhaps what you have in mind is an external sense of 'bias': a sense in which the believer is not at fault but is just unfortunately placed, and as a consequence of that misfortune will form beliefs that are systematically skewed in a certain respect.

To join this up with another thought, overcoming bias is one part of epistemic wisdom, but what you are speaking of as absent in the type III bias lies in the positive domain of epistemic wisdom, namely: truths which when believed have a distinctive kind of epistemic value due to their helping us get systematically right beliefs in some area.

Nick Bostrom, I think "type 1-3" is a misleading naming system. As you indicate, type 3 is quite compatible with either type 1 or 2, while types 1 and 2 are largely incompatible with each other. Maybe it's a good idea in an initial naming system to use numbers rather than words (to avoid founder effects), but adjectives, say, encourage layering.

Anyhow, my advice is to be even more explicit about what you consider orthogonal axes and what you consider incompatible traits.

Certainly psychopathologies can exacerbate biases like Bostrom says, but I tend to believe that some psychopathologies ameliorate biases often enough that if I had a few million dollars with which to form a less-biased dream team I would concentrate my search among exactly those populations.

For example, depressed individuals are much less vulnerable to the overconfidence that infects most psychologically healthy individuals.

Perfectionism is often pathologized (under labels like obsessive-compulsive and anal-retentive), and yet perhaps it is not a coincidence that two of civilization's three most fertile natural scientists were perfectionists, namely Newton and Darwin, with Galileo being the third of the three. I do not have enough biographical data to judge whether Galileo was a perfectionist, though the frequency of perfectionism in the Italian population seems quite low. One possible mechanism by which perfectionism yields less-biased beliefs is that perhaps perfectionists spend more mental effort doubting their own rightness. Or perhaps it is that human pleasures cause less Hebbian learning in perfectionists than they do in nonperfectionists, with the result that perfectionists are relatively immune to the biasing effect of the thousand shards.

Narcissism is not only pathologized but is considered by many authors to be the worst psychopathology there is, yet among leaders in Silicon Valley and Hollywood, narcissists are vastly overrepresented compared to their frequency in the general population (and at least one author says the same observation holds in many academic departments). I have a theory on this one too, which is that narcissists are not heeding taboos.

In the interest of brevity I will stop here and invite those interested in more to contact me (click my name).

"Type II would be the former"

Even within Type II you'd have to distinguish the general conjunction fallacy and representativeness heuristic, a cognitive bias; from a blog post dealing with, say, the particular bias of a group of futurists who believe that China will achieve ascendancy over the US on May 3, 2028 using a hang glider and a spool of orange thread.

Robin and Unknown, there is also the possibility of two different biases partially canceling out, so that by getting rid of only one of them one makes things worse on balance. (Some might speculate that this is the case with regard to racism: there may be racist biases that cause bad decisions when there are perceived racial differences, so we socially cultivate another bias which makes us blind to racial differences?)

Eliezer, I suppose one could distinguish cognitive from cultural or memetic biases; but I don't think the non-cultural (non-memetic) biases need be human-universal. Personality variation (and in extreme cases, psychopathology) might create biases which are not specific to particular cultures or memetic lineages yet are not human-universal. But one might try to distinguish between biases that result from either universal human cognitive architecture or non-doxastic personality variation, on the one hand, and, on the other, biases resulting from having been exposed to particular cultures and belief systems. Type II would be the former, Type III the latter, and Type I could be either alone or both in combination.

@Eliezer Yudkowsky:

Moral bias, perhaps?

Unknown, a bias might benefit a particular group, but not a larger group that group is part of. If so, we could still argue that overcoming bias will benefit this larger group, and so try to get this larger group to coordinate to expose and shame bias.

As Unknown's comment highlights, somewhere in this typology should be the distinction between fundamental, human-universal, cognitive biases; and self-serving beliefs or heuristic failures specific to particular cultures or memetic lineages, to which the fundamental biases give rise. "Cognitive bias" seems like a good term for the former, but I can't think up any good term for the latter.

One example of a social bias of type I might be the bias against the recognition of group differences, for example, possible racial differences in intelligence.
