17 Comments

Let me second Robin's suggestion.

The discussion here clearly indicates that each of us is using the term slightly differently. When I think of bias, I think of something closer to the statistical sense, whereas my reading of Eliezer's comments is that he thinks of cognitive bias as something relatively unconscious, and something that cannot be useful in the sense of leading to true beliefs.

Is there a better term than "useful bias" for the idea of adding a large error in a known direction to reduce the possibility of small (or, in the case of alcoholism, large) errors in an unknown direction?

Eliezer, that sounds like a useful post, to make clear those distinctions.

Eric Baum is using it in a different sense. "Inductive bias", "cognitive bias", and "statistical bias" are not the same thing.

For a description of useful biases, see Eric Baum's book What is Thought? A good example is the set of genetic biases that cause young humans to learn language with unusual ease, possibly better than a general-purpose Bayesian mind would given the same evidence. Is he using the word "bias" in a different way than those who deny biases can be useful?

It seems like we're talking about two different things here:

a. Sometimes, rationality and correct perception of reality aren't the best way to accomplish a goal. Evolution has outfitted us with biases that are probably based on this fact. For whatever reason, it paid off historically for, say, parents to perceive their kids as prettier and smarter and nicer than they really were, so we have a bias that was adaptive in some environment. I take Hal Finney's alcoholism example this way: even if each social drinking event has only a low probability of leading the alcoholic back into disaster, it may be more adaptive for him to believe that a single drink is death.

b. Sometimes, we use mental modules in our own minds, or in the minds of other people, as tools. Then it's often easier to introduce an intentional bias into the perceived or claimed facts fed into the module than to redesign it. A nice example is leading the target when you're shooting: instead of reprogramming your mind to calculate the paths of the bullet and the bird, you just lead the bird by a certain amount. I think this is the set into which the original example falls. There are all kinds of similar tricks, right? How many of us set a clock ahead 10 minutes to help ourselves get out the door on time? We're lying to some module or part of our brain in order to get it to function better.
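To make the lead-the-target trick concrete, here is a toy sketch (Python; every number and name is invented for illustration, not anyone's actual method): rather than redesigning the aiming module to solve the full intercept problem, we feed it an input biased by a known amount.

    # Toy illustration of "leading the bird": bias the aim point by roughly
    # the distance the target travels while the bullet is in flight.
    # All numbers are made up for illustration.

    def naive_aim(target_x):
        """Aim straight at where the target is now (lands behind it)."""
        return target_x

    def lead_aim(target_x, target_speed, bullet_speed, distance):
        """Deliberately bias the aim ahead of the target."""
        time_of_flight = distance / bullet_speed
        return target_x + target_speed * time_of_flight

    # A bird 40 m away flying at 10 m/s, with a 400 m/s projectile:
    print(naive_aim(0.0))                    # 0.0 -> misses behind the bird
    print(lead_aim(0.0, 10.0, 400.0, 40.0))  # 1.0 -> aim one metre ahead

Note that the "bias" lives in the input, not in the shooter's beliefs: the shooter knows exactly why, and by how much, the aim point is offset.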

In the statistical sense, a biased estimator may have greater likelihood (the maximum-likelihood estimator is often biased), and an unbiased estimator may sometimes have absurd properties, or a greater mean squared error than a biased one.

http://en.wikipedia.org/wik...

Are there perhaps analogies with beliefs?
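To pin down the estimator point: the maximum-likelihood variance estimator divides by n and is biased low, yet it beats the unbiased 1/(n-1) version on mean squared error. A rough simulation sketch (Python with NumPy assumed; the setup is invented for illustration):

    # Compare the biased (divide-by-n) and unbiased (divide-by-n-1)
    # variance estimators on many small normal samples.
    import numpy as np

    rng = np.random.default_rng(0)
    true_var, n, trials = 1.0, 10, 100_000

    samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
    biased = samples.var(axis=1, ddof=0)    # divide by n   (biased, MLE)
    unbiased = samples.var(axis=1, ddof=1)  # divide by n-1 (unbiased)

    print("bias:", biased.mean() - true_var, unbiased.mean() - true_var)
    print("MSE: ", ((biased - true_var) ** 2).mean(),
                   ((unbiased - true_var) ** 2).mean())
    # The biased estimator trades a known bias for lower variance,
    # and comes out ahead on mean squared error.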

There could be useful biases in cases where holding a rational belief is an optimal strategy for me but sub-optimal for society as a whole. For instance, it is possible that we have a useful bias to behave in a non-sociopathic manner even if I believe all morality is inherently relative and situational; thus, even given an opportunity with no chance of being caught, I would not steal someone's life savings. This bias (to behave according to current societal norms of morality, regardless of what I say I rationally believe) could be an important factor in enabling commerce and trade, thereby increasing the overall welfare of society.

Expand full comment

Hal, but in that case, they're right to be afraid of social drinking. The people who tell them it's safe would be simply wrong. That's why I don't see how you can have it both ways.

Maybe another way to think of it is that a bias could be "useful" if it compensates for another, harmful bias.

That is one way to think of the alcoholism situation. Alcoholics who had an unbiased view of the possibilities of safe social drinking might imbibe to an unhealthy degree, because they would overestimate the short-term pleasures of social drinking relative to the long-term harm of relapsing into alcoholism. By adopting the biased view that no amount of social drinking is safe, they avoid falling into the trap induced by the other bias which over-weights short term pleasures compared to long-term harm.
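A toy version of that cancellation (all numbers invented for illustration): an agent that over-weights short-term pleasure decides to drink, but inflating the perceived relapse risk by a compensating amount restores the abstain decision.

    # Two distortions canceling: an inflated pleasure weight is offset
    # by an inflated belief about the relapse risk. Numbers are invented.
    pleasure = 1.0         # short-term utility of one social drink
    relapse_cost = -100.0  # long-term utility of relapsing
    # drinking looks good iff weight * pleasure + believed_p * relapse_cost > 0

    def drink_value(pleasure_weight, believed_p_relapse):
        return pleasure_weight * pleasure + believed_p_relapse * relapse_cost

    print(drink_value(1.0, 0.05))   # -4.0: an unbiased agent abstains anyway
    print(drink_value(10.0, 0.05))  # +5.0: the pleasure bias alone says "drink"
    print(drink_value(10.0, 0.12))  # -2.0: the inflated risk belief cancels it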

Hal, the decision to abstain completely from alcohol can be "useful" or it can be a "bias", but I don't see how you can have it both ways.

Please note: decisions and beliefs are not the same thing. The artillery officer who overshoots knows that he is overshooting - he does not have a mistaken, non-veridical, false belief. Instead he makes a decision, as a matter of calculated utility, to shoot higher than he believes will hit the target. Since this whole method relies on knowing - having veridical beliefs - about the direction in which your action overshoots the target, it would be calamitous indeed were the artillery officer to actually bias his beliefs.
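A loose sketch of the bracketing idea (the procedure here is simplified and invented for illustration, in Python): fire long on purpose so the first round's direction of error is known, then halve the bracket between a known "short" and a known "over". The beliefs stay accurate throughout; only the action is offset, and in a known direction.

    # Bracketing: a large error in a known direction gives the correction
    # procedure something to work with; a small miss of unknown direction
    # would not. The observer only reports "over" or "short".

    def walk_fire(spot, deliberate_over=4000.0, rounds=12):
        short, over = 0.0, deliberate_over  # first round fired long on purpose
        for _ in range(rounds):
            shot = (short + over) / 2.0
            if spot(shot) == "over":
                over = shot
            else:
                short = shot
        return (short + over) / 2.0

    target = 3172.0
    spot = lambda r: "over" if r > target else "short"
    print(round(walk_fire(spot), 1))  # converges to roughly 3172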

Here's an example of what I mean when I refer to the belief among some alcoholics that no amount of drinking is "safe" as a bias:

http://www.habitsmart.com/cntrldnk.html

'It is truly shocking that the issue of controlled drinking still evokes such violent debate. Most people simply don’t know the facts regarding controlled drinking as a viable alternative for some problem drinkers. Popular press, driven by a strong 12-step coalition, has created an abstinence-only public mind set: Those with the disease of alcoholism (defined nebulously as "people with drinking problems") simply should not drink at all and that for professionals to advocate anything but abstinence and AA attendance is tantamount to malpractice. The purpose of this article is to set the record straight with regard to the controlled drinking debate.'

It's an emotional issue for many alcoholism support groups and they stick very firmly to the position that no amount of drinking is safe even though the evidence is (apparently) in the other direction. That's why I call this a bias among such people, and it is arguably a useful one as well.

I think there are more examples of a "useful bias", in fact. As pointed out elsewhere, biases are not only pitfalls but can be seen as useful shortcuts too...

...at least if we choose to look at them that way.

Meaning: you don't really need, IMHO, to classify them as "biases". To me, the examples you provide are more like the application of a higher level of reasoning. In computational terms (pardon me, I'm really stretching the comparison here) it could be seen as the difference between bubble sort and, say, quicksort: one is the obvious and maybe easiest approach to sorting, the other adds some complexity but greatly improves efficiency.
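The analogy, sketched in Python for concreteness (both are toy implementations written here for illustration): bubble sort is the obvious approach; quicksort adds a deliberate trick up front and wins on efficiency.

    def bubble_sort(xs):
        """The obvious approach: O(n^2) comparisons."""
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    def quick_sort(xs):
        """More structure up front, O(n log n) on average."""
        if len(xs) <= 1:
            return list(xs)
        pivot, rest = xs[0], xs[1:]
        return (quick_sort([x for x in rest if x < pivot]) + [pivot]
                + quick_sort([x for x in rest if x >= pivot]))

    print(bubble_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
    print(quick_sort([3, 1, 4, 1, 5]))   # [1, 1, 3, 4, 5]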

Hal, provided that one's model of reality includes the fundamental fact that one is not in complete control of one's behavior (so that one's actual behavior might deviate from one's policy), straightforward decision theory will arrive at the same decision (namely, abstinence), so IMHO your example is not a bias in the sense we have been using the word on this blog.
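A toy expected-utility version of that point (all numbers invented for illustration): an agent with accurate beliefs, including an accurate belief about its own chance of deviating from policy, already picks abstinence. No biased belief is required.

    # Straightforward decision theory with true beliefs about one's own
    # unreliability. Numbers are invented.
    u_moderate = 2.0        # enjoyment from a policy of moderate drinking
    u_abstain = 0.0         # baseline
    relapse_cost = -500.0
    p_lose_control = 0.05   # honest estimate of failing to follow the policy

    ev_moderate = u_moderate + p_lose_control * relapse_cost  # 2 - 25 = -23
    ev_abstain = u_abstain                                    # 0

    print("moderate:", ev_moderate, "abstain:", ev_abstain)  # abstain wins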

This comment replaces an earlier comment that Richard asked that I delete. - Adrian

I would put a slightly different spin on Eliezer's example. Suppose erring in one direction were far more dangerous and costly than erring in the other. Then wouldn't it be useful to bias ourselves in favor of the safer direction?

For example, alcoholics often respond to their problem by abstaining completely, even though many of them could probably maintain a safe level of alcohol consumption with sufficient vigilance, now that they are aware of their problem. They err on the side of caution, perhaps by intentionally adopting the view that any level of drinking is unsafe. This could be said to be a useful bias.
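There is a standard decision-theoretic way to see this (a rough sketch, Python with NumPy assumed, numbers invented): when overshooting costs far more than undershooting, the loss-minimizing choice sits well to the safe side of the expected value. It looks like a bias but is just an asymmetric loss function at work.

    # With linear but asymmetric costs, the optimal guess is a low
    # quantile of the distribution, not its mean.
    import numpy as np

    rng = np.random.default_rng(1)
    outcomes = rng.normal(100.0, 15.0, size=100_000)  # uncertain quantity

    def expected_loss(guess, over_cost=10.0, under_cost=1.0):
        err = guess - outcomes
        return np.where(err > 0, over_cost * err, -under_cost * err).mean()

    guesses = np.linspace(60.0, 140.0, 161)
    best = guesses[np.argmin([expected_loss(g) for g in guesses])]
    print(best)  # roughly 80, well below the mean of 100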

Someone pointed out that my earlier remark needs clarification. Indeed it does. Sorry.

I'm a general skeptic of the notion that cognitive biases should be co-opted or tolerated rather than overcome; or at least I'm very skeptical that they can act to produce true beliefs, which is my sole criterion of epistemological utility. Some biases, mostly motivated and political biases, seem obviously "adaptive" from evolution's perspective (which doesn't give a damn about you or your goals, remember). But many other biases seem to be the result of evolutionary hackery or cognitive computing limitations, and yet people try to make these out to be adaptive too. In short, people seem unwilling to accept that brains are as screwed up as they really are.

The cases that Tschoegl cites are not examples of what I usually mean by cognitive bias - though he didn't say they were. These are cases where the error-correcting procedure deals more easily with a large error in a known direction than a small error in an unknown direction, so a large error in a known direction is deliberately added. Could there be analogues in human cognitive bias? It's an interesting question, worth investigating in specific cases.

It's the phrase "useful bias" that I object to. Even if, say, some particular amount of optimism was adaptive in the ancestral environment, not for political reasons, but because people have an easier time recovering from optimistic errors than pessimistic ones - note that this itself strikes me as a total contradiction of experience - it wouldn't follow that such a bias is adaptive in this unancestral environment, useful to our declared goals, or helpful in arriving at veridical beliefs. Each of these would require a large burden of proof, especially the last.

Biases are lemons, not lemonade, and I wish people would stop trying to make lemonade out of them.

Seems that this isn't a useful bias as much as a way of compensating for a known bias. Maybe I am just getting caught up in semantics.
