7 Comments

Hal, the idea is that we don't want to require infinite effort to obtain info, analyze it, etc. Typically there is an opportunity cost to reducing error; we are interested in ways to reduce error that cost less than the usual cost we pay. Also, yes, as I said, naming this "Overcoming Error" would be a cleaner definition; it would just connote less clearly the reasons we think our goal is achievable.

I see some problems with distinguishing between cheap and non-cheap avoidable error, and defining bias as just the cheap kind: If bias is cheaply avoidable error, then what do we call expensive avoidable error? Why is it OK to have avoidable error that is expensive? And where is the dividing line: how expensive does avoidable error have to be before we say that we shouldn't care about eradicating it?

Defining the goal as overcoming bias, and then needing to define what bias is, is something of a double negative. Maybe it would be better to state things in positive terms: our goal is to minimize belief errors. This would then automatically lead us to focus on reducing those sources of error which provide the best cost/benefit payoff.

Hal, yes, the qualification you propose on "cheaply" is just the qualification I was trying to propose.

Nick, yes, it might be better to refer to a "tendency toward belief error" rather than "belief error," to avoid the issue of accidentally correct or incorrect beliefs. And yes, I did have in mind effects that would disappear, at least as a tendency, if we cared only about minimizing our belief errors. I do think we are very close, though we have not yet agreed on an exact wording.

RH: "belief error especially avoidable by sacrificing other belief functions"

If we ignore the idea of "cheaply avoidable," and if we take "avoidable by sacrificing other belief functions" to mean "would disappear if the other belief functions ceased to exert an influence," then this is very close to the explication I proposed, which we are trying to refine under the other thread.

One problem with phrasing it as avoiding "belief error," which I pointed out earlier, is the possibility that somebody might accidentally come to have less erroneous beliefs by being biased. E.g. Brown has a bias against people with red hair. This bias is produced by a mechanism whose function is to make Brown feel superior. The bias leads Brown to think that Red is stupid. The cause of this belief is Brown's bias against red-haired people (and Red has red hair). In fact, Red is stupid, so Brown's belief is accurate. Brown's belief error would increase if his prejudice were eliminated, because he would then become agnostic about Red's intellectual capacities, having no more information about him than having seen a picture.

When we speak of "cheaply" avoidable error, what kind of cost are we talking about here? Computational cost, so that we don't worry about errors that could only be corrected if we were vastly smarter than we are? Or reputational cost, so that we don't worry about errors that make us look bad in public?

Seems to me that we should not consider reputational cost in aiming to overcome bias. Otherwise we would probably just behave normally and not care about this topic at all, since evolution probably aims to optimize us for a balance between good reputation and true beliefs.

Guy, yes, the question is the value of true belief, relative to the values achieved by the other functions of our beliefs. Being here is a signal that we think we care more than most about the value of true belief, at least on many important topics, and so we are more willing than most to work to identify and correct our biases, at least when the cost of that work is relatively cheap. Yes, what matters is cheapness relative to the value of true beliefs on important topics.

It's pretty much a tautology to say that, if a belief is unjustified, then there is *epistemic* reason to correct it. And given the nature of belief, when we know that a belief of ours is unjustified we are disposed to correct it -- to respond to this epistemic reason. But it's another question whether we should aim to overcome bias in our beliefs -- whether we should actively aim to identify such biases and correct them. This is a question not about our epistemic reasons but about our practical reasons. In short, it's a question about the value of knowledge, justified belief, and true belief.

Like the question of definition, this is a question that has been hovering around some of the discussions. It probably deserves a separate post, but it also surfaces in Robin's post. Some of the mechanisms that shape our beliefs have non-epistemic functions (e.g. to increase others' respect). But whether we should correct the biases caused by these mechanisms depends, not on their function, but on our rational goals. Since having true beliefs isn't valuable in itself, it very much depends on whether the beliefs in question are about something independently important; biases about trivial matters aren't worth correcting. Robin is probably responding to this point when he refers to errors that are *cheaply* avoidable. But I'm not sure this is the right way to put it. If we suffer from biases about some crucially important matters, then surely we should be willing to make a great effort to find out whether this is the case, and if it is, to aim to overcome these biases?
