In cautious defense of bias

I think it might be worthwhile to speculate on ways that bias might have beneficial effects, in the course of asking ourselves how committed we ought to be to its elimination. I can think of four effects that seem to be particularly interesting, and I’ve outlined them beneath the fold.

In summary, the possible benefits I’d like to kick around are as follows: (a) random error (“noise”) might permit truth to develop by an evolutionary process; (b) bias-originated views might break the hegemony of other bias-originated views; (c) some biases might generate beneficial self-fulfilling prophecies; and (d) bias-originated errors might help us exercise and develop our argumentative and educative capacities. (Warning: this is a fairly long post.)

1. Exogenous Shock to Evolutionarily Stable Strategies
It seems fair to say that the truth-finding technology of any given society is always going to be imperfect. In Socrates’ time, there was no probability theory, no game theory, no evolutionary theory, and so forth. In our time, statisticians, mathematicians, and computer scientists are continuing to extend our capacity for inductive and deductive reasoning, as well as our computational power. Thus, there are, in principle, some truths that are not reachable by current truth-finding methods. At the same time, our truth-finding methods are stable: they work extremely well as far as they go, and there’s no major, disruptive dispute (although there are plenty of non-disruptive disputes) about the central ideas of rationality.

It seems like we can accordingly model our current best beliefs about the world (the hypothetical set of beliefs x1…xn such that, for each xi, no alternative belief would better match our truth-finding processes) as an evolutionarily stable strategy, in the loose sense that every item in the set should be selected over every item (mutation) outside the set, such that over time, absent failure to follow the truth-finding procedures, they should be universally accepted.

Now consider a being who has evolved to include a paradigm case of biasing — say that .001% of the time, it chooses its beliefs at random rather than following a rational process. In a society with a stable but incomplete truth-finding procedure, it has some non-zero probability of hitting on a belief that happens to be true but is not in the “best beliefs” set because it is not reachable by current procedures. If that belief happens to be conducive to survival (either because it leads to physical or superior social success), evolution might select for it wholly apart from society’s incomplete truth-finding process.
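To make that mechanism a little more concrete, here’s a minimal toy simulation (a sketch of my own with made-up numbers: the 5% fitness bonus is assumed, and the mutation rate is scaled up from the .001% above so the effect shows up in a short run). Nearly everyone starts with the conventional “best belief,” a tiny fraction of belief transmissions are random, and holders of the true-but-unreachable belief reproduce slightly more.

```python
import random

# Toy simulation of the mechanism above (my own sketch, not anything in the post).
# Nearly everyone holds the conventional "best belief"; a small fraction of belief
# transmissions are random; holders of the true-but-unreachable belief get an
# assumed survival edge. Selection can then spread that belief even though no
# truth-finding procedure ever endorses it.

MUTATION_RATE = 0.002  # scaled up from the post's .001% so a short run shows the effect
BELIEF_SPACE = ["conventional", "unreachable_truth"] + [f"error_{i}" for i in range(98)]
FITNESS = {"unreachable_truth": 1.05}  # assumed 5% survival advantage; everything else 1.0

def next_generation(population):
    # Fitness-proportional reproduction (selection)...
    weights = [FITNESS.get(belief, 1.0) for belief in population]
    children = random.choices(population, weights=weights, k=len(population))
    # ...followed by the rare adoption of a random belief (the "bias").
    return [random.choice(BELIEF_SPACE) if random.random() < MUTATION_RATE else belief
            for belief in children]

population = ["conventional"] * 2_000
for _ in range(1_000):
    population = next_generation(population)

share = population.count("unreachable_truth") / len(population)
print(f"share holding the unreachable truth after selection: {share:.2f}")
```

When a lucky mutant survives its first few generations, selection carries the unreachable truth toward fixation; when it doesn’t, the conventional belief persists.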

2. Space-Clearing against Previous Biases
Closely related to the previous idea is the possibility that one false, biased idea might sufficiently unsettle another false, biased idea, such that the social dominance of the original idea is mitigated and truth can be accepted. For example, we might tell the following story about the Protestant Reformation: the Catholic Church was dominant and suppressed all dissent. When Luther nailed his theses to the door, he was motivated by bias (assume arguendo that religion is inherently motivated by bias). The ensuing conflict between the Catholics and the Protestants created enough instability in the system of social control to permit things like the Enlightenment, which greatly enhanced our truth-finding processes and could never have happened in the face of complete single-church hegemony. Because the Protestant story was so compelling (partially as a result of biases which demanded a religion but wanted to avoid some of the abuses of the main church), it could defeat the hegemony of the church in a way that mere rational argument might not have achieved.

(I have no idea if this story, which I just invented, is true, but the example suggests that the effect is in principle possible. I’d also appreciate any pointers to good historical scholarship on this kind of effect.)

3. Self-Fulfilling Prophecies
I suggested this point in the comments to the earlier post about teaching altruism. There might be some non-empty set of beliefs such that each belief in the set, xi, meets the following conditions: (a) xi is currently false; (b) xi would become true if enough people believed it; and (c) we would all be better off if xi were true, including the people who were initially tricked into believing it. It seems that the belief that people are generally altruistic might fall into this category, and we can imagine others too. To the extent this is true, perhaps we ought to encourage those beliefs? I think there’s basically a collective action problem argument to be made here: no individual has an incentive to adopt the (currently false) belief that others are altruistic, but society would be made better off if we all did.
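To put a little structure on that collective action claim, here’s a toy payoff sketch (again my own, with purely illustrative numbers for the cost, the benefit, and the critical mass): below the threshold the belief is still false and a lone believer pays a cost; above it the belief becomes true and everyone, believer or not, is better off.

```python
# Toy payoff sketch of the collective action problem above (my own numbers,
# purely illustrative). Believing that "people are generally altruistic" while
# it is still false costs the lone believer, but once a critical mass believes,
# the belief becomes true and everyone is better off.

ADOPTION_COST = 1.0   # assumed cost of acting on the belief while it is still false
SOCIAL_BONUS = 5.0    # assumed per-person benefit once the belief is widely held
THRESHOLD = 0.6       # assumed share of believers needed to make the belief true

def payoff(believer: bool, share_believing: float) -> float:
    """Payoff to one individual, given the population share that believes."""
    belief_is_true = share_believing >= THRESHOLD
    bonus = SOCIAL_BONUS if belief_is_true else 0.0
    cost = ADOPTION_COST if (believer and not belief_is_true) else 0.0
    return bonus - cost

# No individual gains by adopting the belief alone...
print(payoff(believer=True, share_believing=0.01))   # -1.0
print(payoff(believer=False, share_believing=0.01))  #  0.0
# ...but everyone is better off once a critical mass adopts it.
print(payoff(believer=True, share_believing=0.80))   #  5.0
```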

4. Mill’s Argument
Toward the end of chapter 2 of On Liberty, John Stuart Mill argues that a state (that buys utilitarian theory) must not censor a view opposed to the prevailing dogma, even if it can be absolutely certain that the view is false. His argument, roughly, is that even false views provide overwhelming social benefits: they encourage the rest of society to develop arguments for the truth, thus developing everyone’s critical faculties and deepening their understanding of the true position.

It seems like a similar effect could counsel against society encouraging complete bias-elimination. Might we need someone who, for example, systematically fails to apply conditional probability appropriately, in order that we can learn from refuting their errors?

  • http://marknau.livejournal.com/39973.html Mark Nau

    The first item I thought of was similar to your “Self-Fulfilling Prophecies” item, except it operates on an individual. Overconfidence, for example, can have the effect of making a person work harder toward making some difficult thing happen, thereby increasing the probability of success (even though the true probability is still less than the individual believes).

    Another kind I can think of is biases which help to “lock down” useful commitment strategies. The fact that people are known to hold certain biases makes certain statements of commitment more credible than they would be otherwise, which can facilitate certain agreements that might not occur otherwise.

  • http://pdf23ds.net pdf23ds

    First, about number 1. What kind of bias introduces random error? That seems like an unusual thing among biases as we conceive of them here. And if the bias isn’t random, it’s not going to provide suitable variation for natural selection. Now, a new bias that wasn’t random could happen to correct some of the wrong “best beliefs” from before the mutation, but it would likely skew many more beliefs than it corrected and wouldn’t be adaptive.

    Your second point demonstrates that a new bias can be useful to overcome an old one, but not that the bias would be useful absent the power of the old one. But at this point we’re talking more about memetic warfare than elimination of bias.

    To your third point, I’d add the condition that (d) any particular individual would be worse off by adopting xi in isolation. Also, I don’t think a belief like xi would count in any sense as a bias. I don’t see how your third section is connected to bias.

    Also, it’s noteworthy, I think, that many beliefs similar to xi also have the property that even in a world where everyone believes xi, not believing xi can be advantageous. In a world of pacifists, one group could secretly develop advanced weapons and take over.

    I’ll leave your fourth point for others.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Paul, your point one confuses beliefs and actions. Even if we benefit by exploring the space of possible actions, that doesn’t mean we need biased beliefs about how those actions will turn out. Regarding four, we are nowhere near complete bias elimination. And self-fulfilling beliefs are not biased, if they work. There is no doubt that some biases could correct for other ones.

    A better case might be market failures corrected by biased beliefs. For example, if free-riding means there are too few public goods, biases toward overvaluing the public good or overestimating its ease of production might counter that. Of course these biases would be good for society, but at the expense of the individuals holding them.

  • http://pdf23ds.net pdf23ds

    “And self-fulfilling beliefs are not biased, if they work.”

    If one estimates the chance of success of a project at .8 when it’s really .4 (and it would be only .2 if one believed it to be .2), then the belief is still biased, but it still works. Isn’t that the essence of the mechanism of the Lake Wobegon effect?

  • Paul Gowder

    Robin: A rational actor will never engage in an action that doesn’t lead to the best result according to his/her beliefs. Experimenting with the action requires experimenting with the beliefs.

    pdf: I think your addition of (d) is right, but xi clearly constitutes a bias, in the sense of a disposition to adopt a (presently false) belief for non-truth-finding purposes (although on the social level, it would be for truth-creating purposes). That’s why I think everyone was fairly comfortable with Robin’s description of Barash’s strategy as “biasing” children.

    With respect to the secret non-pacifists (or non-altruists), this might be true, but that’s just a question of (c). It might not be better if a critical mass of people adopted a belief in pacifism (normative and positive) for just the reason you identify. But a similar threat wouldn’t necessarily exist in the case of altruism: a greedy minority might prosper for a while in an altruistic society, but maybe eventually they’d get caught and sanctioned, and/or the critical mass of altruistic people would still reach enough optimal solutions (because it would permit them to make credible commitments, as Mark aptly pointed out) that the injury caused by the altruistic defectors would just be noise. Anyway, these are all questions that would have to be answered about a particular belief before society decided to foster it.

    Yeah, I’m not sure if random-error biases exist, but might some of the classic cognitive psych heuristics (like salience, representativeness, etc.) fit in this category if we broaden the definition of “bias” to include that kind of cognitive-load-saving shortcut?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Paul, I think you are trying to cover too many topics in one post. It would be better to pick one at a time, and cover each one more carefully.

  • Paul Gowder

    You think? Maybe so… I’d intended this as nothing more than a thin sketch of some preliminary and loosely related thoughts, but if this isn’t productive, I’ll break it up over the next week or so and elaborate each point a little further.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    “Now consider a being who has evolved to include a paradigm case of biasing — say that .001% of the time, it chooses its beliefs at random rather than following a rational process.”

    That’s a paradigm case of variance, not a paradigm case of bias. (Of course, as a follower of E. T. Jaynes, I probably shouldn’t believe in the bias-variance decomposition because it isn’t Bayesian enough.)

    “In a society with a stable but incomplete truth-finding procedure, it has some non-zero probability of hitting on a belief that happens to be true but is not in the “best beliefs” set because it is not reachable by current procedures.”

    Not all non-zero probabilities are worth pursuing. Lottery tickets, and monkeys typing Shakespeare, both come to mind. Truth is a much smaller target to hit than error – of all possible ways to obtain it, a random number generator has got to rank among the least effective.

    “If that belief happens to be conducive to survival (either because it leads to physical or superior social success), evolution might select for it wholly apart from society’s incomplete truth-finding process.”

    All you’ve done is describe an additional truth-finding process, and not, it seems to me, a very good one: “Adopt beliefs produced by random number generators, and let natural selection take its course.” The problems being: (1) a random number generator is pretty unlikely to hit anything but gibberish; (2) not everything that correlates to the number of surviving offspring is interpretable as a belief, and those interpretable as beliefs aren’t necessarily true; (3) random beliefs aren’t necessarily heritable with digital fidelity; (4) even if the underlying trick worked, you could do much better by tracking census statistics on what people believe and how many children they have, and examining the statistical conclusions, rather than waiting thousands of generations for natural selection to take its course.

    I agree it’s worth noticing when an odd-seeming belief seems to correlate to, say, the ability to manipulate physical reality. But it seems to me that it’s much better to try to bring this criterion into the deliberate judgment process, than to embrace noise (much less bias) in our cognitive systems.