Overcoming Bias

In[]cautious defense of bias

Paul Gowder
Nov 28, 2006

I think it might be worthwhile to speculate on ways that bias might have beneficial effects, in the course of asking ourselves how committed we ought to be to its elimination. I can think of four effects that seem to be particularly interesting, and I’ve outlined them beneath the fold.

In summary, the possible benefits I’d like to kick around are as follows: (a) random error (“noise”) might permit truth to develop by an evolutionary process; (b) bias-originated views might break the hegemony of other bias-originated views; (c) some biases might generate beneficial self-fulfilling prophecies; and (d) bias-originated errors might help us exercise and develop our argumentative and educative capacities. (Warning: this is a fairly long post.)

1. Exogenous Shock to Evolutionarily Stable Strategies
It seems fair to say that the truth-finding technology of any given society is always going to be imperfect. In Socrates’ time, there was no probability theory, no game theory, no evolutionary theory, and so forth. In our time, statisticians, mathematicians, and computer scientists are continuing to extend our ability to engage in inductive and deductive reasoning, as well as our computational power. Thus, there are, in principle, some truths that are not reachable by current truth-finding methods. At the same time, our truth-finding methods are stable: they work extremely well as far as they go, and there’s no major, disruptive dispute (although there are plenty of non-disruptive disputes) about the central ideas of rationality.

It seems like we can accordingly model our current best beliefs about the world (the hypothetical set of beliefs x1…xn such that, for each xi, no alternative belief would better match our truth-finding processes) as an evolutionarily stable strategy, in the loose sense that every item in the set should be selected over every item (mutation) outside the set, so that over time, absent failures to follow the truth-finding procedures, those beliefs should be universally accepted.

Now consider a being who has evolved to include a paradigm case of bias — say that 0.001% of the time, it chooses its beliefs at random rather than following a rational process. In a society with a stable but incomplete truth-finding procedure, such a being has some non-zero probability of hitting on a belief that happens to be true but is not in the “best beliefs” set, because it is not reachable by current procedures. If that belief happens to be conducive to survival (either because it aids physical survival or confers superior social success), evolution might select for it wholly apart from society’s incomplete truth-finding process.
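To make that intuition concrete, here is a toy simulation of my own construction, not from the post. It assumes a population whose “rational” procedure can only ever reach beliefs 0 through 8, while the fittest belief is 9; the payoff values, population size, and mutation rate EPSILON are all illustrative assumptions. A tiny rate of random belief adoption (the “bias”) lets the unreachable truth enter the population, after which selection spreads it.

```python
# Toy sketch: a "rational" procedure that can only reach beliefs 0..REACHABLE-1,
# while the belief with the highest fitness lies outside that set. A small
# mutation rate EPSILON (random belief adoption, the "bias") lets the
# unreachable truth invade.
import random

N_AGENTS = 200
N_BELIEFS = 10          # beliefs are just labeled 0..9
REACHABLE = 9           # the rational procedure can reach beliefs 0..8
TRUE_BELIEF = 9         # ...but the fittest belief is 9
EPSILON = 0.001         # chance of adopting a belief at random (the "bias")
GENERATIONS = 20000

def fitness(belief):
    # Assumed payoffs: the unreachable true belief confers a survival edge.
    return 2.0 if belief == TRUE_BELIEF else 1.0

random.seed(0)
population = [random.randrange(REACHABLE) for _ in range(N_AGENTS)]

for _ in range(GENERATIONS):
    # Fitness-proportional reproduction: fitter beliefs spread.
    weights = [fitness(b) for b in population]
    population = random.choices(population, weights=weights, k=N_AGENTS)
    # Belief transmission: mostly the rational (but incomplete) procedure,
    # occasionally a random "biased" adoption.
    population = [
        random.randrange(N_BELIEFS) if random.random() < EPSILON else b
        for b in population
    ]

share = population.count(TRUE_BELIEF) / N_AGENTS
print(f"share holding the unreachable true belief: {share:.2f}")
```

In runs like this the population ends up almost entirely holding belief 9; delete the EPSILON mutation step and belief 9 never appears at all, since the rational procedure cannot reach it.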

2. Space-Clearing against Previous Biases
Closely related to the previous idea is the possibility that one false, biased idea might sufficiently unsettle another false, biased idea that the social dominance of the original idea is mitigated and truth can be accepted. For example, we might tell the following story about the Protestant Reformation: the Catholic Church was dominant and suppressed all dissent. When Luther nailed his theses to the door, he was motivated by bias (assume arguendo that religion is inherently motivated by bias). The ensuing conflict between the Catholics and the Protestants created enough instability in the system of social control to permit things like the Enlightenment, which greatly enhanced our truth-finding processes and could never have happened in the face of complete single-church hegemony. Because the Protestant story was so compelling (partly as a result of biases which demanded a religion but wanted to avoid some of the abuses of the main church), it could defeat the hegemony of the church in a way that mere rational argument might not have achieved.

(I have no idea if this story, which I just invented, is true, but the example suggests that the effect is in principle possible. I’d also appreciate any pointers to good historical scholarship on this kind of effect.)

3. Self-Fulfilling Prophecies
I suggested this point in the comments to the earlier post about teaching altruism. There might be some non-empty set of beliefs such that each belief xi in the set meets the following conditions: (a) xi is currently false; (b) xi would become true if enough people believed it; and (c) we would all be better off if xi were true, including the people who were initially tricked into believing it. It seems that the belief that people are generally altruistic might fall into this category, and we can imagine others too. To the extent this is true, perhaps we ought to encourage those beliefs? I think there’s basically a collective action problem argument to be made here: no individual has an incentive to adopt the (currently false) expectation that others are altruistic, but society would be better off if we all did.
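As a toy illustration of that collective action structure (again my own construction, not the post’s), consider a threshold model: each agent acts altruistically only if it expects enough others to do so, and the shared expectation then updates toward the behavior actually observed. The threshold range and starting beliefs below are arbitrary assumptions.

```python
# Toy sketch of a self-fulfilling belief: agents cooperate iff the expected
# cooperation rate exceeds their personal threshold; expectations track
# observed behavior. An initially *false* optimistic belief, once widely
# adopted, makes itself true.
import random

random.seed(1)
# Each agent cooperates iff the expected cooperation rate exceeds its threshold.
thresholds = [random.uniform(0.2, 0.6) for _ in range(1000)]

def run(initial_belief, rounds=30):
    belief = initial_belief
    for _ in range(rounds):
        observed = sum(belief > t for t in thresholds) / len(thresholds)
        belief = observed  # expectations update toward observed behavior
    return belief

print("accurate pessimism :", run(0.1))  # collapses to 0.0: nobody cooperates
print("'tricked' optimism :", run(0.7))  # locks in at 1.0: everyone cooperates
```

An accurate pessimistic expectation is self-confirming at zero cooperation; the initially false optimistic expectation is also self-confirming, but at the equilibrium where everyone is better off.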

4. Mill’s Argument
Toward the end of chapter 2 of On Liberty, John Stuart Mill argues that a state (that buys utilitarian theory) must not censor a view opposed to the prevailing dogma, even if it can be absolutely certain that the view is false. His argument, roughly, is that even false views provide overwhelming social benefits: they encourage the rest of society to develop arguments for the truth, thus developing everyone’s critical faculties and deepening their understanding of the true position.

It seems like a similar effect could counsel against society encouraging complete bias-elimination. Might we need someone who, for example, systematically fails to apply conditional probability appropriately, in order that we can learn from refuting their errors?
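For instance, here is the sort of error such a person might commit, worked through with made-up numbers of my own choosing: confusing P(disease | positive test) with P(positive test | disease), which a correct application of Bayes’ theorem refutes.

```python
# Worked example (mine, not the post's): the classic base-rate error of
# reading a test's sensitivity as the probability of having the disease.
prevalence = 0.01          # P(disease)
sensitivity = 0.95         # P(positive | disease)
false_positive = 0.05      # P(positive | no disease)

# Total probability of a positive test, then Bayes' theorem.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"naive reading, P(positive | disease): {sensitivity:.2f}")
print(f"Bayes, P(disease | positive):         {p_disease_given_positive:.2f}")
```

With these numbers the correct posterior is about 0.16, not 0.95; refuting the confusion is exactly the kind of exercise Mill’s argument says sharpens everyone’s understanding.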
