Towards a typology of bias

It seems to me that we have reached a stage in our discussions on this blog, and in the field of bias studies more generally, where it would be useful to begin to develop a more systematic typology.  There are so many different alleged biases that without some unifying framework it is easy to get lost in the details.  Finding the right categories would also help us theorize better about bias.

To this end, let me tentatively propose a classification scheme, organized around the sources of bias:

Type-I biases arise from the fact that our beliefs sometimes serve functions – such as social signaling – that can conflict with their navigational (truth-tracking) function.  For example, our tendency to overestimate our own positive attributes may be an example of a Type I bias.

Type-II biases arise from the shortcomings and flaws of our minds.  We are subject to various kinds of processing constraints, and even aside from these hard limitations, we were not very successfully optimized for abstract rationality, even in contexts where no adaptive function interferes with the navigational function of our beliefs.  Type II biases can result from fast-and-frugal heuristics that compromise accuracy for speed and ease of use, or from various idiosyncratic features of our brains and psyches.  We can distinguish subtype-II(a) biases deriving from shortcomings general to the human psyche (availability bias?), and subtype-II(b) biases deriving from shortcomings specific to some individual or group (beliefs about being in danger among the paranoid?).

Type-III biases arise from our avoidable ignorance of facts or lack of insights, the possession of which would have improved our epistemic accuracy across a broad domain.  (Many of Eliezer’s recent postings appear to aim to overcome Type III bias, for example by explaining important facts about evolution, which would help us form more accurate beliefs about many specific issues that are illuminated by evolutionary biology.)  We distinguish subtype-III(a), resulting from lack of (procedural) insights about methodology, logic, or reasoning principles (e.g. anthropic bias), and subtype-III(b), resulting from lack of (substantive) knowledge about theoretical or concrete facts (e.g. errors resulting from ignorance about the basic findings of evolutionary psychology).

The distinctions between these different types are not always clear-cut.  For example, biases of Type I and II may often be overcome by the right kind of information: does this not mean that they are really Type III biases?  But I think in many cases we can reasonably judge what the principal source of the error is: just as when somebody comes down with pneumonia we can point to bacterial infection as the principal cause, not their failure to take a prophylactic dose of antibiotics, even though antibiotics would have prevented the disease and may now be the way to overcome it.

Type III bias fades into simple error from unsystematic ignorance.  I think the most paradigmatic kind of bias is Type I bias.

If we consider statements in addition to beliefs, we can add a fourth type of bias: misrepresentation.  This type of bias occurs when an individual or organization makes statements that systematically misrepresent its real beliefs.

We can further expand the concept so that it can be applied to objects that are neither beliefs nor statements.  We could say that a body of data is biased, for example, if the most straightforward interpretation of it gives a systematically misleading picture of reality.  Similarly, a scientific instrument could be biased if it tends to deliver biased data.

  • OK, relative to this typology, we can ask why we should try harder to overcome bias. That is, what suggests that there are substantial gains from thinking about biases in general, rather than just focusing on specific topics? In this typology, type I seems the best candidate to provide such a reason. Once we realize we together fail to achieve as much truth as we could because we are individually pursuing other conflicting goals, that suggests the possibility of a gain from coordination. We can coordinate to expose and shame such biases.

    Of course this raises the further question of whether there are social biases of type I, whereby we together achieve other goals by sacrificing truth. Coordination is not a good reason to overcome this sort of bias.

  • Unknown

    One example of a social bias of type I might be the bias against the recognition of group differences, for example, possible racial differences in intelligence.

  • As Unknown’s comment highlights, somewhere in this typology should be the distinction between fundamental, human-universal, cognitive biases; and self-serving beliefs or heuristic failures specific to particular cultures or memetic lineages, to which the fundamental biases give rise. “Cognitive bias” seems like a good term for the former, but I can’t think up any good term for the latter.

  • Unknown, a bias might benefit a particular group, but not a larger group that group is part of. If so, we could still argue that overcoming bias will benefit this larger group, and so try to get this larger group to coordinate to expose and shame bias.

  • @Eliezer Yudkowsky:

    Moral bias, perhaps?

  • Robin and Unknown, there is also the possibility of two different biases partially canceling out, so that by getting rid of only one of them one makes things worse on balance. (Some might speculate that this is the case with regard to racism: there may be racist biases that cause bad decisions when there are perceived racial differences, so we socially cultivate another bias which makes us blind to racial differences?)

    Eliezer, I suppose one could distinguish cognitive from cultural or memetic biases; but I don’t think the non-cultural (non-memetic) biases need be human-universal. Personality variation (and in extreme cases, psychopathology) might create biases which are not specific to particular cultures or memetic lineages yet are not human-universal. But one might try to distinguish between biases that result from either universal human cognitive architecture or non-doxastic personality variation, on the one hand, and biases resulting from having been exposed to particular cultures and belief systems. Type II would be the former, Type III the latter, and Type I could be either alone or both in combination.

  • Type II would be the former

    Even within Type II you’d have to distinguish the general conjunction fallacy and representativeness heuristic, a cognitive bias; from a blog post dealing with, say, the particular bias of a group of futurists who believe that China will achieve ascendancy over the US on May 3, 2028 using a hang glider and a spool of orange thread.

  • Certainly psychopathologies can exacerbate biases like Bostrom says, but I tend to believe that some psychopathologies ameliorate biases often enough that if I had a few million dollars with which to form a less-biased dream team I would concentrate my search among exactly those populations.

    For example, depressed individuals are much less vulnerable to the overconfidence that infects most psychologically healthy individuals.

    Perfectionism is often pathologized (under labels like obsessive-compulsive and anal-retentive), and yet perhaps it is not a coincidence that two of civilization’s three most fertile natural scientists were perfectionists, namely Newton and Darwin, with Galileo being the third of the three. I do not have enough biographical data to judge whether Galileo was a perfectionist, though the frequency of perfectionism in the Italian population seems quite low. One possible mechanism by which perfectionism yields less-biased beliefs is that perhaps perfectionists spend more mental effort doubting their own rightness. Or perhaps it is that human pleasures cause less Hebbian learning in perfectionists than they do in nonperfectionists, with the result that perfectionists are relatively immune to the biasing effect of the thousand shards.

    Narcissism is not only pathologized but is considered by many authors to be the worst psychopathology there is, yet among leaders in Silicon Valley and Hollywood, narcissists are vastly overrepresented compared to their frequency in the general population (and at least one author says the same observation holds in many academic departments). I have a theory on this one too, which is that narcissists are not heeding taboos.

    In the interest of brevity I will stop here and invite those interested in more to contact me (click my name).

  • Douglas Knight

    Nick Bostrom,
    I think “type 1-3” is a misleading naming system. As you indicate, type 3 is quite compatible with either type 1 or 2, while type 1 and 2 are largely incompatible with each other. Maybe it’s a good idea in an initial naming system to use numbers rather than words (to avoid founder effects), but adjectives, say, encourage layering.
    Anyhow, my advice is to be even more explicit about what you consider orthogonal axes and what you consider incompatible traits.

  • I was a bit worried about whether Type-III biases –‘arise from our avoidable ignorance of facts or lack of insights, the possession of which would have improved our epistemic accuracy across a broad domain.’– are really biases.

    Certainly there can be truths of this kind, but not knowing them seems more like a misfortune rather than a bias, which, in this context, I would want to be a kind of systematic epistemic irrationality, and I would also take to imply some fault in the believer. Someone subject to your type III bias might be impeccably assessing the evidence they have. Perhaps you mean to put the weight on the avoidable ignorance, but again, unless there is some systematic epistemic irrationality in that ignorance I don’t see why it must be a bias.

    Perhaps what you have in mind is an external sense of ‘bias’: a sense in which the believer is not at fault but is just unfortunately placed, and as a consequence of that misfortune will form beliefs that are systematically skewed in a certain respect.

    To join this up with another thought, overcoming bias is one part of epistemic wisdom, but what you are speaking of as absent in the type III bias lies in the positive domain of epistemic wisdom, namely: truths which when believed have a distinctive kind of epistemic value due to their helping us get systematically right beliefs in some area.

  • Eliezer: “…the particular bias of a group of futurists who believe that China will achieve ascendancy over the US on May 3, 2028 using a hang glider and a spool of orange thread”. I would think that would be a Type III bias if it is a bias at all, rather than just error. Type II biases were meant to originate in some way from the architecture or “hardware” or operating systems of our brains, rather than from some particular cluster of misguided beliefs.

    Yes, I’m hoping that the taxonomy can be either improved or at least more clearly articulated and explained.

    Nicholas, Type III biases might not “really” be biases – in fact, I think I tentatively proposed a definition in one of the early posts on this blog that would have made that kind of error non-bias. However, many of the posts on this blog address type III error… This might be a case where there is no truth to be discovered, just a convention to be stipulated. Even on the above taxonomy, though, Type III bias clearly fades into simple error as we consider not general worldviews and broad assumptions but specific pieces of information. Somebody who assigns 80% credence to Sweden being a member of NATO is not very accurate, but we would not say he suffers from a Sweden-is-in-NATO-overconfidence-bias.

  • Unknown

    Robin, maybe a better example of a general social bias would be the bias against your conclusions regarding disagreement. Disagreement itself would seem to be largely rooted in individual biases, perhaps mainly in the overconfidence bias. But there would be a social motive to say that your general conclusions are incorrect, somewhat like this. After persistently disagreeing with someone, we make an agreement: I will admit you’re being reasonable even though you disagree with me, if you admit that I’m being reasonable even though I disagree with you. There is a collective social benefit from this because in this way we avoid the pain of having to confront our irrationality.

    Of course there might be a way to overcome this bias anyway, namely by pointing out there would be even greater social benefits to putting up with the pain and so beginning to overcome our irrationality.

  • This piece is excellent and much appreciated.

    With regard to Robin’s aboriginal comment (the only one I’ve read thus far as I wade into the comment section) I assume that he’s asking a methodological question rather than a moral one in rhetorically asking what benefit there is to the study of bias. If I understand him correctly, he’s wondering whether the STUDY of bias will lend itself to our employing biased logic less often. In answering his own question however (and my apologies Robin for the third-person usage, but it’s too late now 😉), Robin appears to consider as acceptable the application of one bias (such as “shaming” the idea of having a particular bias) in routing another – a thing which appears to me to be akin to chasing one’s tail (predicated on the assumption that ‘shaming’, or otherwise making unfashionable, a thought or belief beyond those that are communally accepted is likely to wreak some violence on the discovery of additional truths).

    As for “social biases of type 1”, it would appear to me that (at least according to the atheistic views of many contributors here) we can include among them the triad beliefs in “meaning”, “morality” and some sort of inherent “value” in human life.

    Can we not?


  • I think having a classification framework for biases — a “taxonomy” — is a great idea. But I don’t like the Type 1, 2, etc. flavor. It could be confused with type I and II errors in statistics; moreover, it is non-descriptive.


  • a decision science graduate student

    There is a relevant psych bulletin article by Hal Arkes (you might know him from reading about sunk costs) that you all might find interesting.
    “Costs and Benefits of Judgment Errors: Implications for Debiasing” Psychological Bulletin, 1991, vol 110, number 3.

    He provides a threefold taxonomy based on strategy errors (these are remedied by thinking more), association-based errors (these are exacerbated by thinking more, e.g. anchoring), and psychophysical errors (these are mostly errors that affect preferences, like loss aversion, and it’s unclear what “debiasing” means for them).

    As the saying goes, it’s an oldie but a goodie…

  • Stephan Johnson

    I teach a course in Critical Thinking and am working on a text of my own, so this is something that has taken up a lot of my attention for a while. I think Bostrom is dead on about the need. I’ve actually worked on a taxonomy but interestingly, whereas Bostrom’s appears to be divided as to cause, I’ve been thinking to type them functionally – that is, by what the biases typically do in reasoning and where they typically occur. I think the causal story is very interesting, and one that deserves a great deal of thought as well, but inasmuch as rationality is a normative concern, the causal sources of biases are a different part of the story – sort of ‘how things can go wrong’. Psychologists, as Bostrom and everyone here well knows, have whole libraries devoted to descrying the various causal mechanisms, most of which have been brought up here. I agree that these are done in a much too wholesale and unorganized fashion. One thing that I’ve been thinking about is the place of what are commonly called fallacies (mostly of the informal sort, but also lots of the common formal ones – Wason stuff, etc.) within the taxonomy of biases. I think fallacies are best viewed as a type of bias, and not the other way around. I’d be interested to hear others’ comments on that.

  • Patri Friedman

    Nick – you briefly mention organizational communications. I think the topic of organizational bias is fascinating, but very different from individual bias, and requires its own taxonomy. The most common type of organizational bias I see is basically failed mechanism design: when the natural outcome of a system is different from the intended one because the mapping from mechanism to outcome is really hard to make. Or sometimes, when the mechanism wasn’t even designed consciously, but just evolved, and doesn’t necessarily serve organizational goals. Well, that and misrepresentation, of course :).

    Anyway, I definitely agree on the need for a taxonomy.