# Useful bias

I would like to introduce the notion of useful bias, which may be heretical in this forum.  By useful bias I mean the deliberate introduction of an error as a means of solving a problem.  The two examples I discuss below are concrete rather than abstract and come from my training as an infantry officer many years ago.  Technology now solves the problems they solved, but the examples may still serve to illustrate the notion.

The first example comes from land navigation, the use of compass and map to get from one point to another.  One standard problem is to get from a point in a wood, or other occluded terrain, to a point some distance away on a road or the like.  The unbiased approach is to take a bearing (i.e., determine a direction) from where one is to where one wants to go, and then follow it.  The problem is that as one follows the bearing, a little random lateral error creeps in with each step, so that when one reaches the road one may not be sure whether the point one is seeking lies to the right or to the left.  The biased approach, sometimes called "aiming off," is to follow a bearing deliberately offset to one side of the objective by enough that, on reaching the road, one can assume with a high degree of probability that the objective lies in the known direction.
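The required offset can be sketched with a simple model. This is an illustrative calculation, not from the original post: assume each pace adds independent lateral drift with standard deviation `sigma_step`, so the accumulated error after n paces has standard deviation proportional to the square root of n, and an offset of about three such standard deviations makes the side of arrival nearly certain.

```python
import math

def aim_off_distance(n_steps, sigma_step, k=3.0):
    """Lateral offset (meters) to aim to one side of the objective.

    Models per-step drift as independent noise of std sigma_step, so the
    accumulated lateral error after n_steps has std sigma_step * sqrt(n_steps).
    Offsetting by k of those standard deviations (k=3 covers ~99.9% of cases
    for roughly normal error) makes the arrival side predictable.
    """
    return k * sigma_step * math.sqrt(n_steps)

# e.g. 2000 paces with 0.25 m of lateral drift per pace:
offset = aim_off_distance(2000, 0.25)  # roughly 34 m to one side
```

The numbers (`2000` paces, `0.25` m drift) are invented for illustration; the point is only that the deliberate offset grows with the square root of the distance walked, not linearly.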

The second example comes from directing artillery fire at a target that one can observe but whose distance is unknown.  The unbiased approach is to estimate (guess) the distance, notify the gunners, observe the first shot, and then walk subsequent shots toward the target in fixed increments of distance.  ("Up 100. Up 100." And so on.)  The biased approach is "bracketing" the target.  In bracketing, the observer estimates (guesses) the distance and then adds a large increment to the estimate to ensure that the first shot will fall beyond the target.  The observer then adjusts each subsequent shot by halving the previous correction, ideally cycling through a sequence of over and under shots.  If X is the unknown range and β the unknown error of the initial aim point, the aim point after n corrections is approximately X + (1/2)ⁿβ, which converges on X as n increases.  Experiments have shown that, on average, bracketing brings fire onto the target faster than walking does.
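The difference in convergence can be simulated in a few lines. This is a hypothetical sketch, not from the original post: the parameters (true range 3000 m, initial estimate 2200 m, 100 m walking steps, 1000 m deliberate overshoot) are invented, and the only feedback assumed is whether each shot lands over or under the target.

```python
def walking_shots(true_range, estimate, step=100, tol=50):
    """Walk fire toward the target in fixed increments; return shots fired."""
    shots, aim = 1, estimate                 # first shot at the raw estimate
    while abs(aim - true_range) > tol:
        aim += step if aim < true_range else -step
        shots += 1
    return shots                             # linear in the initial error

def bracketing_shots(true_range, estimate, initial_add=1000, tol=50):
    """Overshoot deliberately, then halve the correction after each shot.

    Assumes initial_add is large enough that the first shot brackets the
    target; each halved correction then at least halves the remaining error.
    """
    shots, aim, step = 1, estimate + initial_add, initial_add
    while abs(aim - true_range) > tol:
        step /= 2
        aim += step if aim < true_range else -step
        shots += 1
    return shots                             # logarithmic in the initial error

print(walking_shots(3000, 2200))     # 9 shots to close an 800 m error
print(bracketing_shots(3000, 2200))  # 3 shots for the same error
```

The contrast is the familiar one between linear search and binary search: walking needs a number of shots proportional to the initial error, while bracketing needs a number proportional to its logarithm.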

So long as satellites and batteries don’t go dead, GPS and laser range finders now solve the land navigation and ranging problems in an unbiased manner.  Still, the questions that motivated this post remain: is the notion of useful bias itself useful?  That is, are there other, more pacific examples in the cognitive realm?

• http://profile.typekey.com/sentience/ Eliezer Yudkowsky

When life hands you a lemon, burn it.

• Chris Lloyd

Seems that this isn’t a useful bias as much as a way of compensating for a known bias. Maybe I am just getting caught up in semantics.

• http://profile.typekey.com/sentience/ Eliezer Yudkowsky

Someone pointed out that my earlier remark needs clarification. Indeed it does. Sorry.

I’m a general skeptic of the notion that cognitive biases should be co-opted or tolerated rather than overcome; or at least I’m very skeptical that they can act to produce true beliefs, which is my sole criterion of epistemological utility. Some biases, mostly motivated and political biases, seem obviously “adaptive” from evolution’s perspective (which doesn’t give a damn about you or your goals, remember). But many other biases seem to be the result of evolutionary hackery or cognitive computing limitations, and yet people try to make these adaptive too. In short, people seem unwilling to accept that brains are as screwed up as they really are.

The cases that Tschoegl cites are not examples of what I usually mean by cognitive bias – though he didn’t say they were. These are cases where the error-correcting procedure deals more easily with a large error in a known direction than a small error in an unknown direction, so a large error in a known direction is deliberately added. Could there be analogues in human cognitive bias? It’s an interesting question, worth investigating in specific cases.

It’s the phrase useful bias that I object to. Even if, say, some particular amount of optimism was adaptive in the ancestral environment, not for political reasons, but because people have an easier time recovering from optimistic errors than pessimistic ones – note that this itself strikes me as a total contradiction to experience – it wouldn’t follow that such bias was adaptive in this unancestral environment, useful to our declared goals, or helpful in arriving at veridical beliefs. These would require large burdens of proof, especially the last.

Biases are lemons, not lemonade, and I wish people would stop trying to make lemonade out of them.

• http://profile.typekey.com/halfinney/ Hal Finney

I would put a little different spin on Eliezer’s example. Suppose erring in one direction was far more dangerous and costly than erring in the other. Then wouldn’t it be useful to bias ourselves in favor of the safer direction?

For example, alcoholics often respond to their problem by completely abstaining. This even though probably many of them could maintain a safe level of alcohol consumption by sufficient vigilance, now that they are aware of their problem. However, they err on the side of caution by perhaps intentionally adopting a view that any level of drinking is unsafe. This could be said to be a useful bias.

• http://profile.typekey.com/hollerith/ Richard Hollerith

Hal, provided that one’s model of reality includes the fundamental fact that one is not in complete control of one’s behavior (so that one’s actual behavior might deviate from one’s policy), straightforward decision theory will arrive at the same decision (namely, abstinence), so IMHO your example is not a bias in the sense we have been using the word on this blog.

This comment replaces an earlier comment that Richard asked that I delete. – Adrian

• http://luigig.blogspot.com Luigi Galli

I think there are in fact more examples of a “useful bias.”  As pointed out elsewhere, biases are not only pitfalls but can be seen as useful shortcuts too…

…at least if we choose to look at them that way.

Meaning: you don’t really need, IMHO, to classify them as “biases”. To me, the examples you provide are more like the application of a higher level of reasoning. In computational terms (pardon me, I’m really stretching the comparison here) it could be seen as the difference between bubble sort and, say, quicksort. One is the obvious and maybe easiest approach to sorting; the other adds some complexity but greatly improves efficiency.

• http://profile.typekey.com/halfinney/ Hal Finney

Here’s an example of what I mean when I refer to the belief among some alcoholics that no amount of drinking is “safe” as a bias:

http://www.habitsmart.com/cntrldnk.html

‘It is truly shocking that the issue of controlled drinking still evokes such violent debate. Most people simply don’t know the facts regarding controlled drinking as a viable alternative for some problem drinkers. Popular press, driven by a strong 12-step coalition, has created an abstinence-only public mind set: Those with the disease of alcoholism (defined nebulously as “people with drinking problems”) simply should not drink at all and that for professionals to advocate anything but abstinence and AA attendance is tantamount to malpractice. The purpose of this article is to set the record straight with regard to the controlled drinking debate.’

It’s an emotional issue for many alcoholism support groups and they stick very firmly to the position that no amount of drinking is safe even though the evidence is (apparently) in the other direction. That’s why I call this a bias among such people, and it is arguably a useful one as well.

• http://profile.typekey.com/sentience/ Eliezer Yudkowsky

Hal, the decision to abstain completely from alcohol can be “useful” or it can be a “bias” but I don’t see how you can have it both ways.

Please note: decisions and beliefs are not the same thing. The artillery officer who overshoots knows that he is overshooting – he does not have a mistaken, non-veridical, false belief. Instead he makes a decision, as a matter of calculated utility, to shoot higher than he believes will hit the target. Since this whole method relies on knowing – having veridical beliefs – about the direction in which your action overshoots the target, it would be calamitous indeed were the artillery officer to actually bias his beliefs.

• http://profile.typekey.com/halfinney/ Hal Finney

Maybe another way to think of it is that a bias could be “useful” if it compensates for another, harmful bias.

That is one way to think of the alcoholism situation. Alcoholics who had an unbiased view of the possibilities of safe social drinking might imbibe to an unhealthy degree, because they would overestimate the short-term pleasures of social drinking relative to the long-term harm of relapsing into alcoholism. By adopting the biased view that no amount of social drinking is safe, they avoid falling into the trap induced by the other bias which over-weights short term pleasures compared to long-term harm.

• http://profile.typekey.com/sentience/ Eliezer Yudkowsky

Hal, but in that case, they’re right to be afraid of social drinking. The people who tell them it’s safe would be simply wrong. That’s why I don’t see how you can have it both ways.

• ChrisA

There could be useful biases in cases where holding to a rational belief is an optimal strategy for me, but sub-optimal for society as a whole. For instance, it is possible that we have a useful bias to behave in a non-sociopathic manner, even if I believe all morality is inherently relative and situational. Thus, even given the opportunity with no chance of being caught, I would not steal someone’s life savings. This bias (to behave according to current societal norms of morality, regardless of what I say I rationally believe) could be an important factor in allowing commerce and trade, so increasing the overall welfare of society.

• James Wetterau

In the statistical sense, a biased estimator may have greater likelihood than an unbiased one, and an unbiased estimator may sometimes have absurd properties or a greater mean squared error than a biased estimator.

http://en.wikipedia.org/wiki/Bias_of_an_estimator

Are there perhaps analogies with beliefs?
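Wetterau’s statistical point can be made concrete with a small simulation (the numbers here are invented for illustration): an estimator that deliberately shrinks the sample mean toward zero is biased, yet on small noisy samples its reduced variance can give it a lower mean squared error than the unbiased sample mean.

```python
import random

def mse(estimator, true_mean=1.0, sigma=3.0, n=5, trials=100_000, seed=0):
    """Monte Carlo estimate of an estimator's mean squared error."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(true_mean, sigma) for _ in range(n)]
        total += (estimator(sample) - true_mean) ** 2
    return total / trials

def sample_mean(xs):
    return sum(xs) / len(xs)           # unbiased

def shrunk_mean(xs):
    return 0.5 * sample_mean(xs)       # biased toward zero, lower variance

print(mse(sample_mean))  # ~ sigma^2/n = 1.8 (all variance, no bias)
print(mse(shrunk_mean))  # ~ 0.25*1.8 + 0.5^2 = 0.70 (bias^2 + less variance)
```

The shrinkage factor `0.5` is an arbitrary illustrative choice; the general phenomenon is the bias–variance trade-off, in which accepting a known bias buys a larger reduction in variance.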

• albatross

It seems like we’re talking about two different things here:

a. Sometimes, rationality and correct perception of reality isn’t the best way to accomplish a goal. Evolution has outfitted us with biases that probably are based on this fact. For whatever reasons, it paid off historically for, say, parents to perceive their kids as prettier and smarter and nicer than they really were. So we have this bias which was adaptive in some environment. I take Hal Finney’s alcoholism example this way: even if each social drinking event has only a low probability of leading the alcoholic back into disaster, it may be more adaptive for him to believe that a single drink is death.

b. Sometimes, we use mental modules in our own minds, or the minds of other people, as tools. Then, it’s often easier to introduce an intentional bias in the perceived or claimed facts input into the module than to redesign it. A nice example of this is leading the target when you’re shooting. Instead of reprogramming your mind to calculate the path of the bullet and the bird, you just lead the bird by a certain amount. I think this is the set into which the original example falls. There are all kinds of similar tricks, right? How many of us set a clock ahead 10 minutes to help ourselves get out the door on time? We’re lying to some module or part of our brain, in order to get it to function better.

• http://profile.typekey.com/bayesian/ Peter McCluskey

For a description of useful biases, see Eric Baum’s book What is Thought?
A good example is the set of genetic biases that cause young humans to learn language with unusual ease, possibly better than a general-purpose Bayesian mind would given the same evidence.
Is he using the word bias in a different way than those who deny biases can be useful?

• http://profile.typekey.com/sentience/ Eliezer Yudkowsky

Eric Baum is using it in a different sense. “Inductive bias”, “cognitive bias”, and “statistical bias” are not the same thing.

• http://profile.typekey.com/robinhanson/ Robin Hanson

Eliezer, that sounds like a useful post, to make clear those distinctions.