What exactly is bias?

Bias is something bad from an epistemological point of view, but what exactly is it and how is it distinct from other kinds of epistemological error?

Let’s start with one remark Robin made in passing: "Bias is just avoidable systematic error." One big question here is what makes a systematic error avoidable. For example, suppose somebody has never heard about overconfidence bias. When such a person (erroneously) rates herself above average on most dimensions without strong specific evidence, is she making an avoidable error?

It seems avoidable only in the sense that there is information she lacks, but could obtain, that would enable her to correct her beliefs about her own abilities. But in this sense we would be biased whenever we lack some piece of information that bears on some broad domain. Socrates would be biased merely by being ignorant of evolutionary theory, neuroscience, physics, etc. This seems too broad.

Conversely, if she is systematically overestimating her own abilities, it seems she is biased even if these errors are unavoidable. Suppose she does learn about overconfidence bias, but for some deep psychological reasons simply cannot shake off the illusion. The error is systematic and unavoidable, yet I’d say it is a bias.

Here is an alternative explication that I’d like to put forward tentatively for discussion: A bias is a non-rational factor that systematically pushes one’s beliefs in some domain in one direction.

If we go with this, then to show somebody is biased it would be neither sufficient nor necessary to show that she is systematically in error in some domain (although showing that would often be inconclusive evidence in favor of the hypothesis that she is biased).

It is not sufficient, because it would also have to be shown that the systematic error results from the influence of some non-rational factor (as opposed, for instance, to some high-level assumption that rationally seems plausible to the subject, based on her evidence, but happens to be wrong).

It is not necessary, because somebody could be subject to the influence of a non-rational factor that systematically pushes her beliefs in some domain in a direction which fortuitously makes her beliefs more accurate. Somebody might think highly of her own abilities as a result of a psychological mechanism that has evolved for signalling purposes, yet in some cases this might happen to result in correct beliefs (for somebody who happens to be above average on most dimensions) even though she has no evidence for these beliefs.

  • http://profile.typekey.com/ericschliesser/ Eric Schliesser

    Robin and Nick emphasize the systematicity of error due to bias. But in the right theoretical (and evidential) context, systematic errors (while tenacious) are far easier to spot (and correct) than errors that (may) cancel each other out. If bias is genuinely systematic (as in, say, loaded dice), then with the right policing and/or competitive mechanism it is fairly innocent.
    A studiously-maintained stance of neutrality or disinterestedness (cloaked in the language of ‘rationality’) that manages to avoid error can still be genuinely biased if it does not let certain search-domains or options become salient for discussion, etc. That is, a point of social science is not only to give us error-free truth, but (say) to help us think through trade-offs between different options, even to expand our imaginations of what is possible. If the space of potential options is limited (say, because ‘no good data’ is available, or, worse, the researcher honestly never even considers a possible option in the regression, or the equations must produce one outcome), no error has been caused, but bias is nevertheless present.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Nick, I will soon make a separate post on this topic, so let me just respond here to your proposal. Usually we figure out the meanings of words intuitively, and don’t bother with definitions. But sometimes we have an unusual desire to be clear about what we mean, and then we may resort to definitions. Because of this, the words we try to define tend to be relatively unclear, and we prefer to define those words in terms of other words that are relatively clear. My concern about your proposal to define a bias as a “non-rational factor of belief” is that “rational” is if anything even less clear than “bias.”

    Eric, yes indeed, it is possible that a stance of disinterestedness may induce bias by making important options seem less salient.

  • Paul Gowder

    To what extent is your notion of non-rationality socially contingent or contingent on resistance to knowledge? For example, if someone systematically misevaluates probabilities, we tend to say they’re biased (for example if they systematically apply the gambler’s fallacy). But that might be simply due to their failure to understand independence. We wouldn’t have called Socrates non-rational for not understanding independence for the same reason we wouldn’t have called him non-rational for not understanding evolution. Similarly, how about someone who systematically underweights opportunity cost, having never taken an economics class?

    It seems that in order to call someone biased on grounds of non-rationality when they don’t understand independence, or other techniques of reasoning, we either have to presuppose some baseline level of ordinary knowledge about reasoning technique, or only call someone biased when they’ve been exposed to a technique but fail to adopt it.
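
    To make the independence point concrete, here is a minimal simulation (the numbers are arbitrary): with a fair coin, the frequency of heads immediately after a run of heads is no different from the overall frequency, which is exactly what the gambler’s fallacy denies.

    ```python
    import random

    # Fair-coin sketch: is heads any less likely after a run of heads?
    random.seed(0)
    flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

    run = 5
    after_run = [flips[i] for i in range(run, len(flips))
                 if all(flips[i - run:i])]  # flips that follow 5 heads in a row

    print(f"P(heads) overall:               {sum(flips) / len(flips):.3f}")
    print(f"P(heads | preceded by 5 heads): {sum(after_run) / len(after_run):.3f}")
    # Both frequencies come out near 0.5: the streak carries no information
    # about the next flip, which is what independence amounts to here.
    ```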

  • http://profile.typekey.com/ericschliesser/ Eric Schliesser

    Nick, I am with Robin (and Paul): to appeal to a notion of rationality to explain or define bias introduces unnecessary complexity and controversy. Moreover, surely there are cases where intentional bias could be the rational thing to do? (Think of acts of deception.)

  • http://profile.typekey.com/halfinney/ Hal Finney

    In response to Paul, I’d suggest that if we think of bias as something we want to overcome, it makes sense to include even errors we don’t (yet) know about when we speak of bias. Part of overcoming bias means seeking out information about presently-unknown sources of error.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Eric, I agree that bias can manifest as neglect of relevant considerations or as failure to even form a belief about some topic. To improve the explication, we might extend it to deal with such cases, but for the sake of simplicity we might start with a Bayesian model of the mind in which precise probabilities are assigned to all relevant propositions.

    Paul, on my explication the two examples you give could result from bias or not. If somebody succumbs to the gambler’s fallacy because of wishful thinking, then it would be bias. If they commit the fallacy because they apply some general heuristic the purpose of which is to get at the truth, and it just so happens that this heuristic fails in the context of gambling devices, then their error would not show bias; it would just be a systematic mistake. Similarly for opportunity cost.

    Eric, yes there are cases where intentionally making oneself biased is the rational thing to do. I don’t see how this is a problem for my explication. It can be rational to place oneself under the influence of non-rational factors.

    Robin, by “non-rational factor” I mean something like: something shaping our beliefs whose function is not to maximize epistemic accuracy. The idea is that our minds contain many adaptations, some of whose functions are to skew our beliefs in order to achieve certain non-epistemic goals, such as appearing convincing or appealing to others. Perhaps we can also consciously bias ourselves if in our believing we aim at things other than the truth. Moreover, we talk of bias not only in individual minds but also in the opinions of groups. Such opinions can be influenced by various biasing factors. For example, if everybody holding a certain view were killed by a capricious despot, then the average opinion held by the surviving population would be biased. More subtly, social bias can result from funding arrangements, the media, the file-drawer effect, etc.
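
    A toy simulation (with arbitrary numbers) makes the despot example concrete: nobody who survives revises a single belief, yet the group’s average credence shifts, because the selecting factor has nothing to do with epistemic accuracy.

    ```python
    import random
    from statistics import mean

    # Illustrative numbers only: credences in some proposition p across a population.
    random.seed(1)
    population = [random.gauss(0.5, 0.15) for _ in range(100_000)]

    # A capricious despot removes everyone whose credence in p exceeds 0.6.
    survivors = [c for c in population if c <= 0.6]

    print(f"mean credence before the purge: {mean(population):.3f}")  # about 0.50
    print(f"mean credence after the purge:  {mean(survivors):.3f}")   # noticeably lower
    ```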

    I think bias (in at least one sense, the one I’m trying to explicate) is a theoretical concept. It will be understood better as we form better theories about non-rational factors influencing opinion. There is only so much clarity we can get in the concept a priori. To get more clarity, we need to build theories drawing on empirical findings.

  • Paul Gowder

    Nick, it seems like “something shaping our beliefs whose function is not to maximize epistemic accuracy” is a pretty good definition on its own. Why should a bias involve pushing beliefs in some specific direction along some dimension? (Suppose there were evidence that I chose beliefs at random after a period of thinking, in order to display my decisiveness. Even though those beliefs wouldn’t necessarily be skewed in any particular “direction,” wouldn’t that still be a bias?)

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Nick, Paul has a good point; it seems more straightforward to just define bias as the belief changes due to non-epistemic-accuracy belief functions. This is closer to the proposal I just posted.

  • Guy Kahane

    I agree with Robin that often there is no need for precise definitions, but it’s only rarely that there is no need for clarity. But it’s better to aim at clarity at the level of specific claims made rather than at that of the overarching theme under which these claims are made, as I think it’s almost always more fruitful to work with a broad, open-ended and intuitive sense of the subject.

    As Nick notes, ‘bias’ is best understood as an epistemic notion, referring to unjustified rather than to false belief, and there are all sorts of distinctions in epistemology that it might be useful to draw on to make charges of biases clearer — if only to ensure that different people are really talking about the same thing. (For example, ‘justified’ can be understood in either an internalist or externalist sense.)

    I share Nick’s worry about talk about *avoidable* error. An error might be avoidable by the believer himself, and more importantly, it might be avoidable in a sense where we can hold him responsible for his unjustified beliefs. But there are many areas where we speak about bias where this isn’t true. Or it may be avoidable simply in the sense that it’s correctable. But do we have a clear notion of a form of error in belief such that, even once we have been made aware of it, we can’t appropriately revise our beliefs? I prefer to interpret Robin’s talk of ‘avoidable’ at a second-order level: given that we are aware that many of our beliefs may be biased, we can start a reflective inquiry into the epistemic standing of our beliefs, and in virtue of having started this inquiry, many of our biases become avoidable even in the narrower sense. (In other words, those consciously engaged in the correction of bias are saddled with greater epistemic responsibilities than ordinary believers.)

    Another point I think Nick was raising was about generality. Do we want to think of any widespread false/unjustified belief, or even disposition to such belief, as a bias? The term ‘bias’ does suggest something wider, something with the causal powers to affect a whole sector of beliefs. But this distinction may not matter all that much when a disposition to form a particular unjustified belief causes a believer to form many other false beliefs.

  • http://profile.typekey.com/ericschliesser/ Eric Schliesser

    How about: bias is something that can shape our beliefs whose function is not to maximize epistemic accuracy. This is close to Nick’s working definition, but allows bias to be a potential and indirect cause of error.
    However, a concern I have with this is an old Kuhnian insight: sometimes, to increase accuracy one narrows the domain of inquiry. But we are unlikely to think of this as a form of bias, even if (as I noted in an earlier posting) it can be in some circumstances. So I propose the following: bias is something that can shape our beliefs whose function is not to maximize epistemic accuracy and generality.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Paul, Eric,

    Bias = “something shaping our beliefs whose function is not to maximize epistemic accuracy”?

    This seems a little too broad. Time-constraints and lack of interest will in some sense shape our beliefs – in the sense that we would believe different things if these factors were not present. The function of these factors is not to maximize epistemic accuracy. Yet I don’t think one is biased merely because one does not spend all one’s time on epistemic pursuits.

    For the influence of such a factor to be a bias, I think it would have to have some “directional” impact on our beliefs. I now put the word “directional” in quotation marks because I’m not sure it’s exactly the right term. A factor that tended to make our beliefs unjustifiably confident (to make us appear competent) could be a bias even if it was indifferent to the propositional content of the belief, as Paul points out.

    How about this:

    A bias = a factor that systematically shapes our beliefs whose function is neither to maximize epistemic accuracy nor to conserve intellectual resources

  • http://profile.typekey.com/nicholasshackel/ Nicholas Shackel

    I don’t think we can adequately characterise bias without saying something about the rationality of belief in terms of believing in accordance with possessed evidence and in the light of the cost of acquiring further evidence, something like:

    a rational belief in p is believing p in accordance with possessed theoretical reasons such that the expected cost of being wrong is less than the expected cost of acquiring more evidence.
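
    In symbols, on one simplified reading (the notation is mine): let E be the evidence already possessed, C_wrong the cost of being wrong about p, and C_evid the expected cost of acquiring further evidence. Then believing p is rational only if p accords with E and

    \[
    \bigl(1 - P(p \mid E)\bigr)\, C_{\mathrm{wrong}} \;<\; C_{\mathrm{evid}}.
    \]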

    I have some complications in what I would say about a theoretical reason, but I’ll omit them for now. The relevance of expected cost is not purely the conservation of intellectual resources, but is a full-bloodedly practical element, and for this reason I don’t agree with the other Nick’s suggestion. I also think we need to include bias in causes and effects. So I would go for something like

    bias in belief is either a cause of systematically irrational belief or the possession of systematically irrational beliefs

    (where the systematicity could itself be either theoretical or practical)

  • Guy Kahane

    Nick, this definition still wouldn’t do, due to the element I mentioned in my comments on Robin’s later post. Our interests and values are presumably a factor that can systematically shape our beliefs and whose ‘function’ is neither to maximise epistemic accuracy nor to conserve intellectual resources. Interests and values systematically shape our beliefs because they determine the direction of inquiry, but this need not involve any bias (whereas wishful thinking does). So a further qualification is needed. (Would adding ‘DIRECTLY shapes our beliefs’ help? But ‘directly’ is not the clearest notion.)

    (Another point: bias can be a property of different things. When we ascribe bias to a person, I think we are usually also ascribing to that person a measure of epistemic blame. This is not so when we ascribe bias to sub-systems or capacities, e.g. the perceptual system or one’s capacity to estimate probabilities. Finally, bias can be a property of one’s evidence (‘a biased sample’) even when one isn’t in a position to be aware of this and all of one’s relevant sub-systems/capacities are in perfect order.)

  • Guy Kahane

    Oh, I now see that Nick S’s comment already contains my main point. Just to prevent misunderstanding: my previous comment was aimed at Nick B!

  • http://profile.typekey.com/nicholasshackel/ Nicholas Shackel

    We should probably also distinguish subjective and objective bias. My post was about subjective bias, but objectively biased belief could probably be similarly defined in terms of systematic untruth. You might be unfortunately placed or just not know enough without any reason to know more, and so be in possession of evidence which is systematically misleading.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Nick S, Guy,

    I take your point that conserving intellectual resources is not the only non-rational non-biasing driver. Inserting “directly” as Guy suggests might help.

    The idea was that when diagnosing bias we should set aside shapers of our beliefs when those shapers are things like lack of time, effort, areas of interest, computational limitations, etc. These shapers, one might say, are not trying to shape our beliefs in any particular way. They are, in some sense which I’m not yet sure how to express exactly, neutral vis-a-vis the content of our opinions: their function is not to make us believe p rather than not-p, or vice versa, although they may have indirect effects on what we believe by causing us to spend more time acquiring or evaluating evidence on some topics rather than others. We can use Guy’s term “directly” as a placeholder for this idea, but I think it would be possible to replace this with a more precise explanation.

    Nick S proposes to define bias as systematically irrational belief, where a belief can be irrational if it would have been practically rational for us to have acquired more evidence. (Is this a correct interpretation of what he said?) One worry I have about this is that it might indicate bias too often. Bob is intellectually lazy. It would be practically rational for him to acquire more evidence (and do more thinking) about a wide range of topics. It would then seem on Nick S’s account that most of Bob’s beliefs on these topics are irrational (because his probability assignments would change if he investigated more). Nevertheless, all parts of Bob’s brain, insofar as they work on forming beliefs at all, aim exclusively for the truth; and Bob is perfectly calibrated. His intellectual machine uses all the time and fuel it gets with optimal efficiency to generate accurate beliefs. It seems to me that what is wrong with Bob is not that he is biased.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    I’ve been away for the day and I see you’ve all been busy! 🙂 The last concrete proposal I see here is “a factor that systematically shapes our beliefs whose function is neither to maximize epistemic accuracy nor to conserve intellectual resources.”

    The problem here is clarifying “intellectual resource.” If I think better of my friends so that they will like me more, and they then pass more relevant info to me, is that good resource management or bias?
    It seems that there is an intuition here of legitimate vs. illegitimate costs one may consider.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    My mind keeps coming back to trying to elaborate a counterfactual where my goals put more weight on reducing belief errors and less weight on things like wanting people to like and respect me. Considerations that would remain just as important in this counterfactual seem legitimate, and considerations that would be less important seem illegitimate. So costs of time and money seem legitimate, while costs of people liking me less seem illegitimate. “Biases” seem to me to be closely tied up with these illegitimate considerations.

  • http://profile.typekey.com/nicholasshackel/ Nicholas Shackel

    On Nick B’s interpretation: a belief can also be irrational if it were theoretically rational for us to acquire further evidence. Expected cost includes anything we take as a cost, and so theoretical rationality can impinge on acquiring further evidence in two ways: by lowering the probability of p, thereby increasing the expected cost of being wrong; and by our wanting strongly to know whether p, which can raise the expected cost of being wrong by raising the cost of being wrong.

    Bob’s not biased under my defn. Bob is intellectually lazy, so this increases the expected cost to him of acquiring new evidence. Under my defn: 1. He might not be irrational in belief, because laziness makes the expected cost of evidence acquisition outweigh his expected cost of being wrong. 2. He might be irrational in belief but not biased, because the irrationality is not systematic.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Nick S,
    In my example of Bob, I meant that it would be practically rational for him to do more intellectual work, not that his laziness would raise the cost of intellectual work so much that it would not be practically rational for him to do it. I also meant that he systematically errs on the side of doing too little intellectual work.

    Now, it might be the case that this means that Bob has a bias against doing intellectual work, although in at least one sense of the word I think he would not be biased if his systematic error did not result “directly” from the influence of some factor whose function is not to maximize accuracy of belief.

    More importantly, however, even if Bob had a bias about the rewards of intellectual work, I don’t think that would mean that he was also biased about all other topics which are such that his beliefs about these other topics would change if he did more intellectual work. It would seem that on your explication Bob’s possible bias about the merits of intellectual work would spill over to make him generally biased about almost everything.

    Robin, yes bias does seem closely tied up with “illegitimate considerations” – I think we are playing around with different attempts to develop and clarify this general idea.

  • http://profile.typekey.com/nicholasshackel/ Nicholas Shackel

    I’m not sure if we’re disagreeing about the nature of practical rationality (probably) or laziness, but we are certainly disagreeing about what is systematic in bias.

    I take laziness to be aversion to work, and aversion to something influences what is practically rational for someone.

    General intellectual laziness would be likely to lead to generalised inaccuracy in belief (but cf. Bishop, Michael, “In Praise of Epistemic Irresponsibility: How Lazy and Ignorant Can You Be?”, Synthese, 2000, 122: 179-208, available at http://www.niu.edu/phil/~bishop/Research.shtml), but general inaccuracy of belief is not bias and is not what I mean by systematicity when talking about bias. I mean either irrationally skewed belief about some topic of knowledge or specific kinds of error in the use of specific kinds of evidence.

  • Guy Kahane

    There are various ways in which people can be held culpable for having unjustified or false beliefs, but, whether or not we want to use the term ‘bias’ to cover the whole range, I wouldn’t leave out, as Nick S is suggesting, factors that shape the direction of inquiry, and focus only on those that shape the formation of belief given a body of evidence.

    Intellectual laziness is one example. If someone has heard about this blog but prefers not to read it, because he doesn’t care much about having truer beliefs, then this doesn’t automatically relieve him of responsibility for having biased beliefs.

    An even better example is self-deception. Self-deception is surely a form of bias, but self-deception operates not only by leading a person to believe falsely in the face of contrary evidence, but also (perhaps primarily) by causing him to be ‘lazy’ in gathering certain kinds of information that might force him to correct his beliefs.

    Perhaps you’d want to leave intellectual laziness out because it operates across the board. But intellectual laziness may be focused even when there’s no self-deceptive motivation at work. Perhaps someone just developed the habit of avoiding reading articles about science — perhaps such an article bored him a long time ago, although reading such articles wouldn’t bore him now or even take a great effort. Won’t we say that this is a bias?
