
"Now consider a being who has evolved to include a paradigm case of biasing -- say that .001% of the time, it chooses its beliefs at random rather than following a rational process."

That's a paradigm case of variance, not a paradigm case of bias. (Of course, as a follower of E. T. Jaynes, I probably shouldn't believe in the bias-variance decomposition because it isn't Bayesian enough.)
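For concreteness, the standard squared-error decomposition for an estimator $\hat{\theta}$ of $\theta$:

$$
\mathbb{E}\big[(\hat{\theta}-\theta)^2\big]
= \underbrace{\big(\mathbb{E}[\hat{\theta}]-\theta\big)^2}_{\text{bias}^2}
+ \underbrace{\operatorname{Var}\big(\hat{\theta}\big)}_{\text{variance}}
$$

Choosing at random .001% of the time leaves the average answer almost where it was, so it barely touches the bias term; what it adds is spread, i.e. variance.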

"In a society with a stable but incomplete truth-finding procedure, it has some non-zero probability of hitting on a belief that happens to be true but is not in the "best beliefs" set because it is not reachable by current procedures."

Not all non-zero probabilities are worth pursuing. Lottery tickets and monkeys typing Shakespeare both come to mind. Truth is a much smaller target to hit than error - of all possible ways to obtain it, a random number generator has got to rank among the least effective.

"If that belief happens to be conducive to survival (either because it leads to physical or superior social success), evolution might select for it wholly apart from society's incomplete truth-finding process."

All you've done is describe an additional truth-finding process, and not, it seems to me, a very good one: "Adopt beliefs produced by random number generators, and let natural selection take its course." The problems being: (1) a random number generator is pretty unlikely to hit anything but gibberish; (2) not everything that correlates with the number of surviving offspring is interpretable as a belief, and those interpretable as beliefs aren't necessarily true; (3) random beliefs aren't necessarily heritable with digital fidelity; and (4) even if the underlying trick worked, you could do much better by tracking census statistics on what people believe and how many children they have, and examining the statistical conclusions, rather than waiting thousands of generations for natural selection to take its course.
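To make (4) concrete, here is a minimal sketch of that census-statistics shortcut, with entirely made-up data standing in for real survey records (the 10% belief rate and the fertility bump are invented for illustration):

```python
import random
import statistics

# Hypothetical data standing in for census records: for each person,
# whether they hold the odd belief and how many surviving children they
# have. Every number here is an invented assumption.
random.seed(0)
people = []
for _ in range(10_000):
    holds_belief = random.random() < 0.10
    children = random.randint(0, 4) + (1 if holds_belief else 0)
    people.append((holds_belief, children))

believers = [kids for holds, kids in people if holds]
skeptics = [kids for holds, kids in people if not holds]

# The "wait thousands of generations" experiment compresses into a
# comparison of sample means (real work would add error bars and controls).
print(statistics.mean(believers) - statistics.mean(skeptics))
```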

I agree it's worth noticing when an odd-seeming belief seems to correlate with, say, the ability to manipulate physical reality. But it seems to me that it's much better to try to bring this criterion into the deliberate judgment process than to embrace noise (much less bias) in our cognitive systems.


You think? Maybe so... I'd intended this as nothing more than a thin sketch of some preliminary and loosely related thoughts, but if this isn't productive, I'll break it up over the next week or so and elaborate each point a little further.


Paul, I think you are trying to cover too many topics in one post. It would be better to pick one at a time, and cover each one more carefully.


Robin: A rational actor will never engage in an action that doesn't lead to the best result according to his/her beliefs. Experimenting with actions therefore requires experimenting with beliefs.

pdf: I think your addition of (d) is right, but xi clearly constitutes a bias, in the sense of a disposition to adopt a (presently false) belief for non-truth-finding purposes (although at the social level, it would be for truth-creating purposes). That's why I think everyone was fairly comfortable with Robin's description of Barash's strategy as "biasing" children.

With respect to the secret non-pacifists (or non-altruists), this might be true, but that's just a question of (c). It might not be better if a critical mass of people adopted a belief in pacifism (normative and positive), for just the reason you identify. But a similar threat wouldn't necessarily exist in the case of altruism: a greedy minority might prosper for a while in an altruistic society, but maybe eventually they'd get caught and sanctioned, and/or the critical mass of altruistic people would still reach enough optimal solutions (because altruism would permit them to make credible commitments, as Mark aptly pointed out) that the injury caused by the defectors would just be noise. Anyway, these are all questions that would have to be answered about a particular belief before society decided to foster it.
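Here is a toy payoff calculation of that "just noise" question, with every number invented for illustration (benefit b, cost c, sanction size, and catch rate q are all assumptions, not anything from the post):

```python
# Toy payoff model with invented numbers: altruists pay cost c to add
# benefit b to a shared pot; defectors skip the cost but are caught and
# fined with probability q per round.
b, c, fine, q = 3.0, 1.0, 5.0, 0.25

def payoffs(share_altruists: float) -> tuple[float, float]:
    """Expected per-round payoff for an altruist and for a defector."""
    public_good = b * share_altruists   # everyone enjoys the public good
    altruist = public_good - c          # altruists also bear the cost
    defector = public_good - q * fine   # defectors risk the sanction
    return altruist, defector

for share in (0.95, 0.50):
    a, d = payoffs(share)
    print(f"altruists at {share:.0%}: altruist={a:.2f}, defector={d:.2f}")
# Whenever q * fine > c, the defectors' edge disappears and their damage
# to the altruists really is just noise; if sanctions are weak, it isn't.
```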

Yeah, I'm not sure if random-error biases exist, but might some of the classic cognitive-psych heuristics (like salience, representativeness, etc.) fit in this category, if we broaden the definition of "bias" to include that kind of cognitive-load-saving shortcut?


"And self-fulfilling beliefs are not biased, if they work."

If one overestimates a project's chance of success as .8 when it's really .4, while the real chance would be only .2 if one believed it to be .2, then the belief is still a bias, but it still works. Isn't that the essence of the mechanism behind the Lake Wobegon effect?
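A tiny sketch that just encodes those two scenarios, taking the .8/.4 and .2/.2 pairs as given:

```python
# The two scenarios above, encoded directly: the key is the believed
# probability of success, the value the actual probability that belief
# induces (via effort). The mapping comes from the comment, not from data.
actual_given_belief = {0.8: 0.4, 0.2: 0.2}

for believed, actual in actual_given_belief.items():
    print(f"believe {believed:.1f} -> actual {actual:.1f} "
          f"(miscalibration {believed - actual:+.1f})")

# The overconfident belief is off by +0.4, yet it doubles the real chance
# of success (0.4 vs 0.2). Miscalibrated, but it works.
```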


Paul, your point one confuses beliefs and actions. Even if we benefit by exploring the space of possible actions, that doesn't mean we need biased beliefs about how those actions will turn out. Regarding point four, we are nowhere near complete bias elimination. And self-fulfilling beliefs are not biased, if they work. There is no doubt that some biases could correct for other ones.

A better case might be market failures corrected by biased beliefs. For example, if free-riding means there are too few public goods, biases toward overvaluing the public good or overestimating its ease of production might counter that. Of course these biases would be good for society, but at the expense of the individuals holding them.
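A stripped-down sketch of that free-riding story, using a standard linear public-goods game with invented payoff numbers and an invented decision rule:

```python
# Linear public-goods game (all numbers invented): n players, endowment e,
# contributions multiplied by m and shared equally. Since each contributed
# unit returns only m/n < 1 privately, an unbiased player contributes zero.
n, e, m = 10, 10.0, 3.0

def contribution(perceived_m: float) -> float:
    # Invented decision rule: contribute everything iff the *perceived*
    # private return per unit, perceived_m / n, exceeds 1.
    return e if perceived_m / n > 1 else 0.0

for label, perceived in (("unbiased", m), ("overvalues the good", 12.0)):
    c = contribution(perceived)
    payoff = e - c + m * (n * c) / n   # true payoff if all play this way
    print(f"{label}: contributes {c:.0f}, each ends up with {payoff:.0f}")
# The biased population gets 30 each instead of 10: good for society. But
# any one biased individual would do better free-riding on the rest, which
# is the sense in which the bias costs the people who hold it.
```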


First, about number 1. What kind of bias introduces random error? That seems like an unusual thing among biases as we conceive of them here. And if the bias isn't random, it's not going to provide suitable variation for natural selection. Now, a new bias that wasn't random could happen to correct some of the wrong "best beliefs" from before the mutation, but it would likely skew many more beliefs than it corrected and wouldn't be adaptive.

Your second point demonstrates that a new bias can be useful to overcome an old one, but not that the bias would be useful absent the power of the old one. But at this point we're talking more about memetic warfare than elimination of bias.

To your third point, I'd add the condition that (d) any particular individual would be worse off by adopting /xi/ in isolation. Also, I don't think a belief like /xi/ would count in any sense as a bias. I don't see how your third section is connected to bias.

Also, it's noteworthy, I think, that many beliefs similar to /xi/ also have the property that even in a world where everyone believes /xi/, not believing /xi/ can be advantageous. In a world of pacifists, one group could secretly develop advanced weapons and take over.

I'll leave your fourth point for others.


The first item I thought of was similar to your "Self-Fulfilling Prophecies" item, except that it operates on an individual. Overconfidence, for example, can make a person work harder toward making some difficult thing happen, thereby increasing the probability of success (even though the true probability is still less than the individual believes).

Others I can think of are biases that help to "lock down" useful commitment strategies. The fact that people are known to hold certain biases makes certain statements of commitment more credible than they would otherwise be, which can facilitate agreements that might not occur otherwise.
