
Often someone will estimate something, and then someone else will complain "That’s wrong; our best answer is that we do not know." For example, Chris A commented:

> There is no way anyone can rationally calculate the probability of death by old age being solved by a certain date, all we can say is that it is a possibility. There are too many variables, and the variables interact in ways we don’t and can’t understand, therefore a rational person would not bet any sum on the probability.

Similarly, Odograph commented:

> The only bias-free answer on "peak oil" or "the future of oil prices" is to say you don’t know. The only way to begin a definite answer is to layer assumptions – assumptions about the future strength of your nation’s economy, assumptions about future fossil fuel discoveries, assumptions about future technologies, assumptions about future patterns of consumption, assumptions about future international relations, assumptions about global warming and global warming responses, and on and on.

To someone facing a concrete choice to take (or not take) action, "I don’t know" says little. Concrete estimates, such as event probabilities or point estimates for numbers, say a lot more. So **if you want to complain that an estimate is biased**, you *must* say where you think a better estimate can be found; at least **tell us the sign of the bias** or error you see.

Almost *any* estimate will *of course* have error. The true probability of any event is either zero or one; any other probability is wrong. And there is little chance that a point estimate of a real number will get it exactly right. So we almost always "don’t know" in the sense that our estimates are surely wrong.
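The point that an estimate can be informative even though it is "surely wrong" can be made concrete with a proper scoring rule such as the Brier score. A minimal sketch, with forecasts and outcomes invented purely for illustration:

```python
# Brier score: mean squared error between forecast probabilities and
# realized outcomes (0 or 1). Lower is better.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0, 1]                # what actually happened
informed = [0.8, 0.3, 0.7, 0.9, 0.2, 0.6]    # concrete, "surely wrong" estimates
agnostic = [0.5] * 6                         # "we just do not know"

# Every informed forecast is strictly between 0 and 1, so each is "wrong"
# in the zero-or-one sense, yet it scores better than pure agnosticism.
print(round(brier(informed, outcomes), 4))   # 0.0717
print(round(brier(agnostic, outcomes), 4))   # 0.25
```

Scoring rules are how the extra information in a non-extreme probability gets measured; "I don’t know" corresponds to the flat 0.5 forecast, and it loses.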

If you think academic or financial market estimates on lifespans or peak oil are biased, fine. But don’t complain these estimates make assumptions or require error-prone calculations; this is a given. Stick your neck out and tell us in which direction those estimates are wrong. Tell us lifespans will be longer, oil prices higher, or that the variance of these estimates is higher. But saying "that’s wrong, because we just do not know" seems to me worse than useless.

**Added:** Let me try to be clearer. You may claim to disagree with someone, but saying "you are wrong," "I disagree," or "we do not know" is just not enough to make this clear. You could say such things even if in fact you had exactly the same probability estimates they do.

I don’t see how you could make it clear you actually disagree without indicating at least one random variable for which you claim to disagree about its expected value. And I don’t see how you could make it clear that you did in fact disagree about this expected value without indicating the direction in which your opinion differs from theirs.
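This requirement can be made operational in a few lines: to register a real disagreement, name a random variable and state the sign of the difference in expected values. All names and numbers below are invented for illustration:

```python
# Two probability estimates for the same named random variable.
variable = "P(event X by date Y)"   # placeholder name, purely illustrative
their_estimate = 0.30
my_estimate = 0.55

if my_estimate == their_estimate:
    verdict = "no disagreement, whatever words we use"
else:
    direction = "higher" if my_estimate > their_estimate else "lower"
    verdict = f"I think {variable} is {direction} than your {their_estimate}"

print(verdict)  # I think P(event X by date Y) is higher than your 0.3
```

Nothing hinges on the particular numbers; the point is that a disagreement only becomes visible once a variable and a direction are named.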

## All Bias is Signed

Wow, I missed this conversation by a year. Good comments, and I think those who defended "no one can know" got part of what I was saying.

The other part was a simple reminder that "peak oil" cannot be directly measured. All we have as measurable data are prices and current production figures. The next step is always an extrapolation based on an assumption. One starts, for instance, with the assumption that Hubbert's method will hold for world production, and that a calculation done today will yield an accurate peak production level and peak date.

How do you put error bars on the assumption that Hubbert's method, a heuristic, will hold?
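The question is fair, and it helps to split the error into two kinds: a curve fit gives within-model error bars, but no fit can bound the error from the model choice itself. A minimal sketch with synthetic data (all parameters are invented, and for brevity only the peak year is fitted, with the other parameters held at their true values):

```python
import math
import random

def hubbert(t, q_total, k, t_peak):
    """Logistic-derivative ("Hubbert") production curve."""
    e = math.exp(-k * (t - t_peak))
    return q_total * k * e / (1.0 + e) ** 2

random.seed(0)
years = list(range(1950, 2005))
# Synthetic "observed" production: a Hubbert curve plus 5% noise.
data = [hubbert(t, 2000.0, 0.08, 2010) * (1 + random.gauss(0, 0.05))
        for t in years]

# Within-model fit: grid-search the peak year by least squares.
def sse(t_peak):
    return sum((d - hubbert(t, 2000.0, 0.08, t_peak)) ** 2
               for t, d in zip(years, data))

best = min(range(1990, 2031), key=sse)
print("fitted peak year:", best)  # lands near the true 2010
# These error bars quantify noise *given the model*; they say nothing
# about whether the Hubbert heuristic holds for world production at all.
```

The gap between the two kinds of error is exactly the comment's point: the second kind can only be judged by how the heuristic has performed when tested against outcomes.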

(And I might also comment that in the year since this post, the "Hubbert's date" for peak oil has been moved and re-argued again and again.)

Eliezer: I agree; I was only using the example as something we can all agree we can't know. If you would like an unarguable example where the distribution as well as the expected value is unknowable, how about the number of intelligent life forms in a galaxy outside our light cone? My point was really that there is a range, from things we can know well to things we can't know at all. But when we get a distribution from someone, how do we know how well, or how much, it is underpinned by real knowledge?

If we look at, say, the global warming predictions, we get a range of possible rises in average temperature; I have heard from 3 to 6 °C. But how much faith should we put in this distribution? Clearly it is of worse quality than if the same distribution were given for tomorrow's temperature in New York. How could we "measure" or otherwise agree on this quality factor? Could the measure include whether the model that produced the distribution can be tuned by real feedback?
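One way to make this "quality factor" operational is out-of-sample scoring: a distribution that can be tuned by repeated feedback (tomorrow's New York temperature) accumulates a track record against realized outcomes, while a one-shot distribution (century-scale warming) cannot. A toy sketch with invented numbers:

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log density of a normal distribution: the log score of a forecast."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# Realized New York temperatures (invented) and two forecast distributions.
observed = [21.0, 19.5, 22.3, 20.1, 18.8]
sharp = (20.0, 1.5)   # mean, std: a forecast tuned by daily feedback
vague = (20.0, 6.0)   # same mean, but much less confident

score_sharp = sum(gauss_logpdf(x, *sharp) for x in observed) / len(observed)
score_vague = sum(gauss_logpdf(x, *vague) for x in observed) / len(observed)

# The repeatedly tested, sharper distribution earns the higher average
# log score; that score is the "quality factor" made operational.
print(score_sharp > score_vague)  # True
```

The sharp forecast wins here only because it is also calibrated; an overconfident sharp forecast would lose, which is exactly the discipline that repeated feedback provides and a one-shot prediction escapes.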