It seems to me that we have reached a stage in our discussions on this blog, and in the field of bias studies more generally, where it would be useful to begin to develop a more systematic typology. There are so many different alleged biases that without some unifying framework it is easy to get lost in the details. Finding the right categories would also help us theorize better about bias.
To this end, let me tentatively propose a classification scheme, organized around the sources of bias:
Type I biases arise from the fact that our beliefs sometimes serve functions – such as social signaling – that can conflict with their navigational (truth-tracking) function. Our tendency to overestimate our own positive attributes may be an example of a Type I bias.
Type II biases arise from the shortcomings and flaws of our minds. We are subject to various kinds of processing constraints, and even aside from these hard limitations we were not very successfully optimized for abstract rationality, even in contexts where no adaptive function interferes with the navigational function of our beliefs. Type II biases can result from fast-and-frugal heuristics that sacrifice accuracy for speed and ease of use, or from various idiosyncratic features of our brains and psyches. We can distinguish subtype II(a) biases, deriving from shortcomings general to the human psyche (availability bias?), and subtype II(b) biases, deriving from shortcomings specific to some individual or group (beliefs about being in danger among the paranoid?).
Type III biases arise from our avoidable ignorance of facts or lack of insights whose possession would have improved our epistemic accuracy across a broad domain. (Many of Eliezer's recent postings appear to aim at overcoming Type III bias, for example by explaining important facts about evolution, which help us form more accurate beliefs about the many specific issues that evolutionary biology illuminates.) We can distinguish subtype III(a) biases, resulting from lack of (procedural) insight into methodology, logic, or reasoning principles (e.g. anthropic bias), and subtype III(b) biases, resulting from lack of (substantive) knowledge about theoretical or concrete facts (e.g. errors resulting from ignorance of the basic findings of evolutionary psychology).
The distinctions between these types are not always clear-cut. For example, biases of Types I and II may often be overcome by the right kind of information: does this not mean that they are really Type III biases? But I think in many cases we can reasonably judge what the principal source of the error is, just as when somebody comes down with pneumonia we can point to bacterial infection as the principal cause rather than to their failure to take a prophylactic dose of antibiotics, even though antibiotics would have prevented the disease and may now be the way to overcome it.
Type III bias shades into simple error arising from unsystematic ignorance. I think the most paradigmatic kind of bias is Type I bias.
If we consider statements in addition to beliefs, we can add a fourth type of bias: misrepresentation. This type of bias occurs when an individual or organization makes statements that systematically misrepresent its real beliefs.
We can further expand the concept so that it can be applied to objects that are neither beliefs nor statements. We could say that a body of data is biased, for example, if the most straightforward interpretation of it gives a systematically misleading picture of reality. Similarly, a scientific instrument could be biased if it tends to deliver biased data.
Nick - you briefly mention organizational communications. I think the topic of organizational bias is fascinating, but it is very different from individual bias and requires its own taxonomy. The most common type of organizational bias I see is basically failed mechanism design: the natural outcome of a system differs from the intended one because the mapping from mechanism to outcome is genuinely hard to get right. Or sometimes the mechanism wasn't consciously designed at all, but simply evolved, and doesn't necessarily serve organizational goals. Well, that and misrepresentation, of course :).
Anyway, I definitely agree on the need for a taxonomy.
I teach a course in Critical Thinking and am working on a text of my own, so this is something that has taken up a lot of my attention for a while. I think Bostrom is dead on about the need. I've actually worked on a taxonomy myself, but interestingly, whereas Bostrom's appears to be divided according to cause, I've been inclined to type biases functionally--that is, by what they typically do in reasoning and where they typically occur. The causal story is very interesting and deserves a great deal of thought as well, but inasmuch as rationality is a normative concern, the causal sources of biases are a different part of the story--sort of 'how things can go wrong'. Psychologists, as Bostrom and everyone here well knows, have whole libraries devoted to describing the various causal mechanisms, most of which have been brought up here. I agree that this is done in much too wholesale and unorganized a fashion.

One thing that I've been thinking about is the place of what are commonly called fallacies (mostly of the informal sort, but also many of the common formal ones--Wason stuff, etc.) within the taxonomy of biases. I think fallacies are best viewed as a type of bias, and not the other way around. I'd be interested to hear others' comments on that.