To the barricades! Against … what exactly?

Within us are powerful tendencies to distort our beliefs, tendencies which hinder us from solving many other problems.  My hope is that we can build a community as serious about this problem as these folks are about theirs (minus the blood):

But to be effective we need not only passion but also precision.  Therefore, let us try to define our goal as clearly as possible, while avoiding tangential haggling over detail or verbal gymnastics.  Nick just posted on this issue, and several comments have previously raised it.  So, here goes. 

We have many mental attitudes, and a “belief” is an attitude that estimates a truth.   These truths can include facts about the world around us and our place in it, moral truths, and truths about our or others’ values.  The error of a belief estimate is how much it deviates from its truth.   

If our minds had been built only as error-reduction machines, we would try as best we could to reduce a weighted average of our belief errors, given resource constraints like the information, time, and money available to us.   There would be little point in having a group like ours devoted to reducing error; that would be everyone’s task all the time.
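The error-reduction objective described above can be put in symbols. If we write $b_i$ for our belief estimate on question $i$, $t_i$ for the corresponding truth, $w_i$ for how much that question matters to us, and $R$ for our resource budget, then a pure error-reduction machine would roughly solve (the notation here is my own sketch, not from the post):

```latex
\min_{b} \; \sum_i w_i \,\lvert b_i - t_i \rvert
\quad \text{subject to} \quad
\operatorname{cost}(b) \le R
```

The argument of the post is that our actual objective function also contains non-epistemic terms, such as reputational payoffs, so the beliefs we in fact form trade off error against those other functions.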

Sometimes human minds do seem to function roughly as error-reduction machines.   Overall, however, our minds seem to have been built to create beliefs which also achieve other functions, such as having other people like or respect us.   And the mental and social tendencies we have that pursue these other functions, such as wishful thinking and overconfidence, often come at the cost of belief error.   We are built to not see these distortions in ourselves, though we often see them in others. 

Most give lip service to reducing belief error, but we believe that we want more than most to adjust our mental and social machinery to reduce our error, even if this means we achieve other belief functions less.   Given such a willingness, it makes sense for us to expect that we can in fact reduce our error, and that we can help each other by gathering together.

For example, we seem to have been built with a tendency toward overconfidence in our abilities, because by thinking better of ourselves we induce others to think better of us.   We can correct for this bias relatively cheaply, individually by remembering to be more modest, and socially by creating norms that encourage modesty, if we are willing to pay the price that others won’t think as well of us.  For us, this is a cheaper way to reduce error than studying our abilities in more detail.

The question though is what to call this effort.   We could have called it “overcoming error,” but this would not well connote the reason we think that our effort makes sense.   By calling it “overcoming bias” we better connote the kinds of distortions we feel we have a chance to overcome.  After all, many literary definitions of “bias” use words like “partiality,” “unfair,” “prejudice,” “favoritism,” and “unreasoned,” indicating the kinds of distortions we have in mind. 

Unfortunately, other more technical definitions of “bias” refer instead to words like “distortion,” “systematic,” “error,” “tendency,” and “deviation,” which only indicate a pattern of error.   And so every time we use the word “bias” we seem to invite people to comment that bias is not obviously blameworthy or correctable. 

Words should be our servants, not our masters.  So I propose that in this forum we usually understand the word “bias” to refer roughly to “cheaply avoidable error,” since our topic is how to overcome error.  I say “usually” because of course we are free to clarify that we have another meaning in mind in a particular context.   

In the rare situations where more precision is called for, I suggest that “bias” refer to the sort of belief errors that might be especially and easily avoidable by sacrificing other belief functions, such as having other people like or respect us.  I say “especially” because everyone knows that they could reduce error on most any topic by just devoting more time and effort to that topic.  We have in mind a cheaper approach, at least for those who place less value on other belief functions.

To summarize, I propose we let “bias” usually mean “cheaply avoidable error,” and more precisely “belief error especially avoidable by sacrificing other belief functions,” because our topic here is how to reduce our error if we are especially motivated to do so. 

  • Guy Kahane

    It’s pretty much a tautology to say that, if a belief is unjustified, then there is *epistemic* reason to correct it. And given the nature of belief, when we know that a belief of ours is unjustified we are disposed to correct it — to respond to this epistemic reason. But it’s another question whether we should aim to overcome bias in our beliefs — whether we should actively aim to identify such biases and correct them. This is a question, not about our epistemic reasons but about our practical reasons. In short it’s a question about the value of knowledge, justified belief, and true belief.

    Like the question of definition, this is a question that has been hovering around some of the discussions. It probably deserves a separate post, but it also surfaces in Robin’s post. Some of the mechanisms that shape our beliefs have non-epistemic functions (e.g. to increase others’ respect). But whether we should correct the biases caused by these mechanisms depends, not on their function, but on our rational goals. Since having true beliefs isn’t valuable in itself, it very much depends on whether the beliefs in question are about something independently important; biases about trivial matters aren’t worth correcting. Robin is probably responding to this point when he refers to errors that are *cheaply* avoidable. But I’m not sure this is the right way to put it. If we suffer from biases about some crucially important matters, then surely we should be willing to make a great effort to find out whether this is the case, and if it is, to aim to overcome these biases?

  • Guy, yes, the question is the value of true belief, relative to the values achieved by the other functions of our beliefs. Being here is a signal that we think we care more than most about the value of true belief, at least on many important topics, and so we are more willing than most to work to identify and correct our biases, at least when the cost of that work is relatively cheap. Yes, what matters is cheapness relative to the value of true beliefs on important topics.

  • When we speak of “cheaply” avoidable error, what kind of cost are we talking about here? Computational cost, so that we don’t worry about errors that could only be corrected if we were vastly smarter than we are? Or reputational cost, so that we don’t worry about errors that make us look bad in public?

    Seems to me that we should not consider reputational cost in aiming to overcome bias. Otherwise we would probably just behave normally and not care about this topic at all, since evolution probably aims to optimize us for a balance between good reputation and true beliefs.

  • RH: “belief error especially avoidable by sacrificing other belief functions”

    If we ignore the idea of “cheaply avoidable” and if we take “avoidable by sacrificing other belief functions” to mean “would disappear if the other belief functions ceased to exert an influence,” then this is very close to the explication I proposed, and which we are trying to refine in the other thread.

    One problem with phrasing it as avoiding “belief error,” which I pointed out earlier, is the possibility that somebody might accidentally come to have less erroneous beliefs by being biased. E.g. Brown has a bias against people with red hair. This bias is caused by a mechanism whose function is to make Brown feel superior. This bias leads Brown to think that Red is stupid. The cause of this belief is Brown’s bias against red-haired people (and Red has red hair). In fact, Red is stupid, so Brown’s belief is accurate. Brown’s belief error would increase if his prejudice were eliminated, because he would then become agnostic about Red’s intellectual capacities, having no more information about him than having seen a picture.

  • Hal, yes, the qualification you propose on “cheaply” is just the qualification I was trying to propose.

    Nick, yes, it might be better to refer to a “tendency toward belief error” rather than “belief error,” to avoid the issue of accidentally correct or incorrect beliefs. And yes, I did have in mind effects that would disappear, at least as a tendency, if we cared only about minimizing our belief errors. I do think we are very close, though we have not agreed on an exact wording yet.

  • I see some problems with distinguishing between cheap and non-cheap avoidable error, and defining bias as just the cheap kind: If bias is cheaply avoidable error, then what do we call expensive avoidable error? Why is it OK to have avoidable error that is expensive? And where is the dividing line: how expensive does avoidable error have to be before we will say that we shouldn’t care about eradicating it?

    Defining the goal as overcoming bias, and then needing to define what bias is, is something of a double negative. Maybe it would be better to state things in positive terms: our goal is to minimize belief errors. This would then automatically lead us to focus on reducing those sources of error which provide the best cost/benefit payoff.

  • Hal, the idea is that we don’t want to require infinite effort to obtain info, analyze it, etc. Typically there is an opportunity cost for reducing error; we are interested in ways to reduce error that cost less than the usual cost we pay. Also, yes, as I said, naming this “Overcoming Error” would be a cleaner definition; it would just less clearly connote the reasons we think our goal is achievable.