All Bias is Signed

Often someone will estimate something, and then someone else will complain "That’s wrong; our best answer is that we do not know."   For example, Chris A  commented:

There is no way anyone can rationally calculate the probability of death by old age being solved by a certain date, all we can say is that it is a possibility. There are too many variables, and the variables interact in ways we don’t and can’t understand, therefore a rational person would not bet any sum on the probability.

Similarly, Odograph commented:

The only bias-free answer on "peak oil" or "the future of oil prices" is to say you don’t know. The only way to begin a definite answer is to layer assumptions – assumptions about the future strength of your nation’s economy, assumptions about future fossil fuel discoveries, assumptions about future technologies, assumptions about future patterns of consumption, assumptions about future international relations, assumptions about global warming and global warming responses, and on and on.

To someone facing a concrete choice to take (or not take) action, "I don’t know" says little.   Concrete estimates, such as event probabilities or point estimates for numbers, say a lot more.   So if you want to complain an estimate is biased, you must say where you think a better estimate can be found; at least tell us the sign of the bias or error you see. 

Almost any estimate will of course have error.   The true probability of any event is either zero or one; any other probability is wrong.   And there is little chance that a point estimate of a real number will get it exactly right.   So we almost always "don’t know" in the sense that our estimates are surely wrong.  

If you think academic or financial market estimates on lifespans or peak oil are biased, fine.  But don’t complain these estimates make assumptions or require error-prone calculations; this is a given. Stick your neck out and tell us in which direction those estimates are wrong.   Tell us lifespans will be longer, oil prices  higher, or that the variance of these estimates is higher.  But saying "that’s wrong, because we just do not know" seems to me worse than useless.

Added: Let me try to be clearer.   You may claim you disagree with someone, but saying "you are wrong," "I disagree," or "we do not know" is just not enough to make this clear.   You could say such things even if in fact you had exactly the same probability estimates that they do. 

I don’t see how you could make it clear you actually disagree without indicating at least one random variable for which you claim to disagree about its expected value.   And I don’t see how you could make it clear that you did in fact disagree about this expected value without indicating the direction in which your opinion differs from theirs.   

  • Kip Werking

    Oh come on.

    You do *not* have to know which way an estimate errs to know that it is too precise or insufficiently supported. I could think of a thousand examples to show this, but here’s one:

    Someone says Nostradamus told them that oil will peak in 2015. You say “Nostradamus is worthless, you don’t really know when oil will peak, if that is your only piece of evidence.” For that person to reply “well, you need to give me a directional arrow, to tell me in what direction I am erring” is ridiculous.

    Now, if that is the *only* evidence we have, then we can consider the Nostradamus fan to have a meaningless or *random* guess. In which case, the critic can’t do any better. But just because he can’t do better, doesn’t mean he can’t say “your evidence doesn’t support your conclusions either.” There is a distinction between the claim “your conclusion isn’t as supported or justified as you say it is” and the claim “I can do better.”

    And while it is true that “when facing a concrete choice to take (or not take) action, ‘I don’t know’ says little”, it is also true that we are sometimes faced with paralyzing choices. The fact that we don’t have a justified or supported conclusion shouldn’t inoculate unsupported conclusions from criticism. All of this should be pretty obvious (unless I’ve somehow gone horribly wrong).

  • Kip, in your example, your complaint is about the evidence, not about the estimate. But I don’t think you can say an estimate isn’t “supported or justified” if you aren’t willing to say what estimate you think would be better.

  • MaxEnt places a strong pressure towards rational uncertainty even when certainty is so psychologically appealing.

  • This raises the issue of the “uninformative prior” which is still in dispute. How do you model ignorance?

    Granted, you are never truly ignorant; you can always in principle put some evidence to work. But what if your beliefs are so uncertain that almost any new evidence could drastically change your probability estimates? If you say only, “I think lifespans will be longer than he does,” but you know that there is a good chance that you will be of the opposite opinion in the near future, then you have provided potentially misleading information. That’s a case where it might be more informative and helpful to say merely, “I don’t know.”

  • Perry E. Metzger

    I am sad to say, Robin, that I rather agree with the “sometimes you just don’t know” camp.

    Indeed, I’d go so far as to say that your “at least give us the sign of the error” idea is really bizarre. *REALLY* bizarre, in fact. Let’s say that 50 people out there give me their best guesses about some future event and their guesses are uniformly distributed. I claim that they’re all just pulling numbers out of the air and have no basis for them. You then say, “No, that’s not good enough, give us the sign of the error each has made”. Let’s say I give the sign of the error to each of them. What have I now done? I’ve predicted the real probability to within 2%, that’s what I’ve done, and you’ve asked me to do that when I claim in advance that there is no basis on which to do that.

    Sometimes, you just don’t have enough information for any meaningful prediction. You might be able to give a non-meaningful prediction, but that is uninteresting. What is a “meaningful prediction”? As an economically minded person, I’d say that is a prediction good enough that you would be willing to make it part of a profit making betting portfolio that you could live off of. I’d draw a distinction between “bets you believe well enough to make your living off of them” and “bets you are willing to make for sport”.

    If you have an array of propositions that you believe you have extremely accurate probability information for, you can take some capital and make diversified bets over these propositions and, with very precisely calculated probability, make consistent money on them. Insurance companies do this all the time by betting on a diverse array of events that they have very good probability distribution information on.
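    The insurance logic here can be sketched in a few lines of Python; every number below (claim probability, premium, payout, policy counts) is invented for illustration, and the point is only that diversification over propositions with accurately known probabilities turns a small per-bet edge into a nearly certain profit:

```python
import random

def simulate_insurer(n_policies, p_claim, premium, payout, seed=0):
    """Toy model: an insurer who truly knows p_claim collects a premium on
    each policy and pays out on the (independent) claims that occur."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    profit = 0.0
    for _ in range(n_policies):
        profit += premium
        if rng.random() < p_claim:
            profit -= payout
    return profit

# Expected loss per policy is 0.01 * 1000 = 10, so a 12.0 premium carries
# a ~2.0 margin. With 10 policies a single claim wipes out the margin;
# with 100,000 the law of large numbers makes the margin reliable.
few = simulate_insurer(10, p_claim=0.01, premium=12.0, payout=1000.0)
many = simulate_insurer(100_000, p_claim=0.01, premium=12.0, payout=1000.0)
```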

    On the other hand, let’s say that the best you know is that there is a chance of something and that it is almost certain to have happened by a very distant date. You think it is nearly certain that by 2200 if there is still a human race it will have overcome aging, but you think it is nearly completely improbable that it will have done it by, say, January of 2008. Can you make consistent money off of this and similar propositions by betting on them? That is, could you make a living doing this? I don’t think you can. Given the choice between a bet based on the information given, and using the money some other way, “some other way” offers a range of better investments. At best, you’ll make a wager of a sum you don’t really care about for entertainment purposes. You can’t base a money management firm on bets like this. You can make a guess, but not a particularly meaningful guess.

    Let’s say you claim to me, based on your best guesses, that you think there is a 23.5% chance of an appropriately defined “effective anti-aging therapy” being in place by 2015. I disbelieve — I don’t think you have any basis on which to assign a number like that. If you were to be extreme about your numbers — say you claimed 99.95% or 0.05% — I could give you a sign on your error, but only because there is only one direction your error could realistically go in. If you give me a number that’s middle of the road, like 23.5%, I honestly can’t say the sign of your error. I can only say that I think the number is silly because you lack any reasonable evidence on which to make said precise prediction.

    Now, here is what I *would* be willing to do. I would be willing to make bets on an array of several hundred propositions where I think you are being silly and making an inappropriate prediction, with such bets structured so that I get a payoff if it turns out that the probabilities you assigned were reasonably far off the mark in either direction. However, I would not be willing to bet on any given proposition, like a bogus specific probability of anti-aging therapies by 2015. Why? Statistics again. It is only possible to show that you’re wrong by trying the experiment repeatedly, not by doing it just once.

    Robin then says:

    “Kip, in your example, your complaint is about the evidence, not about the estimate. But I don’t think you can say an estimate isn’t “supported or justified” if you aren’t willing to say what estimate you think would be better.”

    Actually, that’s not the right approach, Robin. One can make the claim that someone’s guess is indistinguishable from a random variable with some distribution. The claim that, for example, particular fund managers are no better than dart throwing at predicting market outcomes does not require that we come up with better predictions, just that we show that we can’t distinguish their predictions from particular random distributions. Your claim seems to me to be equivalent to saying “no, if you’re going to claim that Joe’s prediction for the price of Exxon Mobil shares is wrong because he’s doing it without information, you are obligated to more correctly predict the price.” To say the least, Robin, that’s a very poorly supported idea.

    Anyway, I’ll summarize. Your “give us the sign of the error” proposition is insupportable, in my opinion. You’re asking for a bit, and in many cases there is no rational basis on which to calculate even that one bit of information. Sometimes people just don’t know an answer, at all.

    You claim this isn’t true. Well, to use one of your least favorite phrases, we’ll have to agree to disagree. (By the way, that’s how people say “I think you’re wrong but I don’t want to spend the rest of my life arguing”. A rational person says this when he feels the value of further time spent on the argument isn’t very high to him compared to, say, eating, or watching television, or just spending the time away from the annoying person who claims that it is always irrational to “agree to disagree”.)

  • Perry, if your standard of whether you know is whether you would be willing to bet on it in financial markets, I would say that is a standard of whether you think you know *more* than the other people trading in the financial market. Your saying that you don’t know enough to bet is saying that the market price that would exist without your bet is better than anything else you can come up with. It is also clear that you are invoking risk aversion here, in that you are saying you would bet on a bundle of a hundred claims you disagree with, but not on any single one of them. I’d say that if you are willing to include a different estimate as part of such a bundle, that is a sufficient willingness to state the sign of the error you see.

  • Maybe the right thing to do in these cases is to give, not the sign of your difference, but the range of your probability distribution. Then you might be disagreeing with someone not about the median value but about the standard deviation.

    Also, Perry, while I think you’re right about what people usually mean when they “agree to disagree”, in this case do you in fact think that Robin is *irrational* to hold his view? Or merely informed differently than you? (Don’t mean to hijack the thread by asking this, but it’s a rare treat to see such a clear example of one of my favorite puzzles.)

  • Perhaps we need a fancy internet etiquette way of stating probabilities, etc. I mean we have emoticons, slashes for italics, and other ways to express ourselves in text, but including in your everyday writing your probability ranges or whatever is not apparently standardized yet. Opportunity to invent something?

  • Perry E. Metzger

    Hal; I indeed think Robin’s belief that you can always know the sign of the error in a claimed prediction is clearly incorrect. I’ll leave the term “irrational” for others to decide on. Clearly, however, if I have an oracle that can always state the sign of the inaccuracy, someone can feed said oracle a series of predictions, do a binary search of the range 0..1, and get as many bits of accuracy out as they want. Robin’s claim is equivalent, from what I can tell, to asserting the existence of such an oracle.
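    The oracle argument above can be made concrete: given an oracle that reports only the sign of the error in any guess, a binary search over [0, 1] extracts one bit per query. The `sign_oracle` below is a hypothetical stand-in for such an oracle; the true value it “knows” is invented for the demonstration:

```python
def binary_search_estimate(sign_oracle, bits=30):
    """Recover a probability to `bits` bits of precision using only a sign
    oracle: sign_oracle(guess) returns +1 if the true value lies above the
    guess, -1 if below, 0 if exactly equal."""
    lo, hi = 0.0, 1.0
    for _ in range(bits):
        mid = (lo + hi) / 2
        s = sign_oracle(mid)
        if s > 0:
            lo = mid
        elif s < 0:
            hi = mid
        else:
            return mid
    return (lo + hi) / 2

# A toy oracle that "knows" the true probability is 0.235:
true_p = 0.235
oracle = lambda guess: (true_p > guess) - (true_p < guess)
estimate = binary_search_estimate(oracle, bits=30)
# After 30 queries the interval has width 2**-30, i.e. the "sign only"
# oracle has in effect handed over roughly nine decimal digits.
```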

    I would agree that you can indeed give, as an alternative to a sign, a range of probabilities or (if you have somewhat more information) a probability distribution. However, if the best information you have leads you to something like “I think that at best we can say the probability is between 0 and 1”, you’re effectively saying “I have no basis on which to meaningfully judge the likelihood of the outcome.” (Asserting the range 0..1 with uniform probability is another way of saying “I have zero bits of data”.)

    People seek predictions so that they can plan for the future. If the information available is so vague that it cannot be used to make plans that are observably better than you could make without the information, then I don’t think the information is terribly useful. If a prediction is no better than a dart against a board would give you, I don’t think it helps anyone plan.

    It is clear that, at least some of the time, the available information is so limited that the answer “I don’t know” is the most honest one. Perhaps Robin finds this unsatisfactory, but I do not. I think, in fact, that those are some of the most admirable words to hear from someone. Far too often people give opinions when they have no information at all. “I don’t know” is said all too rarely.

  • So if you want to complain an estimate is biased, you must say where you think a better estimate can be found; at least tell us the sign of the bias or error you see.

    Nope, I have to disagree with the “at least tell us the sign” statement. I think that you’re too quick to dismiss risk aversion. It is a simple enough claim to say “investments A and B have the same expected rate of return, but I feel that investment A has greater second and higher moments, so I prefer to invest in investment B because I am risk averse.”

    There are also plenty of other ways to have different probability distributions with the same expected value but different natures, places where the concept “sign of the error” makes little sense.

    Take the event “probability that economic growth will be positive in a country.” Suppose someone proves that for countries A and B, this probability is 90% for both. However, someone then points out that in the 10% where A has non-positive growth, the growth is essentially zero, whereas for B the growth is strongly negative. Of course, one can try to argue that this means that we’re talking about the probability of the “wrong” event, but that still demonstrates that “sign of the error” is a simplistic attitude. Similarly, one could demand a transformation from expected money to expected utility or other such functions in the risk-aversion example, but it’s also somewhat more complicated than “sign of the error.”

    And what about cross-correlations? Suppose that I am choosing between devoting resources to options A, B, and C. All have the same average rate of return, and I believe that those numbers are accurate. However, the uncertainties of A and C are related in such a way that A prospers when C does not, and vice versa. Then it makes sense to claim in a sense that “we don’t have enough information about B to choose it,” whereas we can reduce risk by investing partially in A and partially in C. There is no claim that there’s bias in the best estimate of the chance of success of B, and certainly no “sign of the error.”
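    The A/B/C point can be sketched numerically. The returns below are toy figures over four equally likely states of the world: all three options have the same mean return, A and C are perfectly anti-correlated, and B’s risk is independent of both:

```python
import statistics

# Returns per state of the world (states equally likely). All invented.
ret = {
    "A": [0.20, 0.00, 0.20, 0.00],
    "B": [0.20, 0.20, 0.00, 0.00],
    "C": [0.00, 0.20, 0.00, 0.20],
}
# Splitting capital half-and-half between A and C:
half_AC = [(a + c) / 2 for a, c in zip(ret["A"], ret["C"])]

means = {k: statistics.mean(v) for k, v in ret.items()}
risk = {k: statistics.pvariance(v) for k, v in ret.items()}
split_risk = statistics.pvariance(half_AC)
# means: all 0.10. risk: all positive. split_risk: 0.0 -- the A/C split
# matches B's expected return while the anti-correlation cancels the
# variance entirely, with no claim of bias in any single estimate.
```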

    The true probability of any event is either zero or one; any other probability is wrong.

    Shudder. I disagree with this statement, but that’s because I disagree with how you’re using the words “probability” and “event.”

  • Consider also events with several different possible outcomes, where the proper preparation for each outcome is quite different, and even rivalrous. You claim that the expected temperature tomorrow when I visit is 60 degrees Fahrenheit. Very well, I will bring my long sleeve shirt, pants, and a light jacket. Ah, but it turns out that there’s actually equal chance of it being 10 or 100 degrees, and zero chance of it being 60 degrees. In that case I will bring two different sets of clothing, none of which will include the light jacket I bring in the first instance.

    If tomorrow’s temperature is uniformly distributed over everything from absolute zero to the surface of the sun, then I should probably not go at all, nor even bother to prepare for tomorrow, but just enjoy today.

    Of course there are ways to consider different events and different probabilities so that one can come up with some single statistic whose sign is biased. However, I don’t see what’s necessarily so wrong with a shorthand argument.
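    The temperature example can be worked through explicitly. The utilities below are invented for illustration; the point is that two forecasts with the same expected temperature recommend different packing choices:

```python
# Two forecasts with the same mean (60F) but different distributions.
point = {60: 1.0}              # "expected 60" taken literally
bimodal = {10: 0.5, 100: 0.5}  # same mean, zero mass at 60

# utility[action][temp]: how well each packing choice works at each
# temperature (made-up numbers).
utility = {
    "light jacket": {10: -5, 60: 10, 100: -2},
    "both wardrobes": {10: 6, 60: 4, 100: 6},
    "stay home": {10: 0, 60: 0, 100: 0},
}

def expected_utility(action, dist):
    return sum(p * utility[action][t] for t, p in dist.items())

def best_for(dist):
    return max(utility, key=lambda a: expected_utility(a, dist))
# Under `point` the light jacket wins; under `bimodal` it is the worst
# choice, even though both forecasts report the same expected value.
```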

  • Perry E. Metzger

    By the way, I have to say that this “you must have an opinion even if you have zero bits of information” claim is my problem with the so-called “Doomsday Argument”, and with similar arguments based on single self samples. In many such arguments, the number of bits of information provided by the so-called “experiment” is zero.

    For example, in the “doomsday argument” experiment, I look at my “birth number” and find it is some number N, and then conclude the human race is doomed soon. However, since absolutely every human performing the experiment will conclude exactly the same thing, the probability of the experiment yielding “doom soon” is 1, and thus by information theory the experiment is giving me zero bits of information — it is effectively worthless. Nick Bostrom’s whole notion of the “Strong Self Sampling Assumption” rubs me the wrong way for this reason.

    (For those unfamiliar with what I’m talking about, see for more information.)

    Anyway, Robin seems to be claiming that we always have information, or at least enough bits of information that claiming ignorance is wrong. I cannot agree. Sometimes, you really do have zero bits, or at least so few bits that for all practical purposes you are fully ignorant, and the honest thing to do is to admit ignorance.
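    The “zero bits” claim above is just the standard Shannon surprisal calculation, which takes one line to state:

```python
import math

def surprisal(p):
    """Shannon information, in bits, of observing an outcome that had
    probability p: -log2(p). An outcome that was certain carries zero
    bits; a fair-coin outcome carries exactly one."""
    return -math.log2(p)

# Perry's point in these terms: if every human who runs the "experiment"
# reaches "doom soon" with probability 1, the observation has
# surprisal(1.0) bits of content -- i.e., none.
certain = surprisal(1.0)   # 0.0
coin = surprisal(0.5)      # 1.0
```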

  • Apologies for following this further off-topic, but Perry, I think there is a problem in your reasoning. Every human can examine himself and conclude that he is a mammal, hence the probability of that result is 1, but that doesn’t make it false. Similarly, if every human concludes “doom soon” but in fact it is true for the majority of humans (which is more-or-less predicted by the argument) then it can be said to be valid. Pick any phenomenon which has exponential growth but an abrupt ending (such as many fads) and you’ll find that the majority of participants would be correct to conclude “doom soon”.

  • Perry E. Metzger


    The issue is not with the direct result of such an experiment but with the conclusion afterward. If you look in the mirror and find you are a mammal, you are, indeed, a mammal. The direct result of noting your “birth number” is your birth number, and your “birth number” is indeed your birth number. The problem is not with the datum itself, it is with the conclusion you draw from it, which, in this case, is that you are most likely to be in the middle of the distribution.

    Now, it is indeed true (accepting the rest of the doom argument) that for the majority of humans (presuming nonlinear growth and a finite number of humans), “doom” is soon. The fallacy comes from assuming that you are one of that majority. You have no information either way with which to judge.

    An outside observer who had all members of the human race across all time in a giant urn (the same sort of urn used in introductory classes on combinatorics, presumably) and who could select a representative sample from said urn could determine what the distribution of humans through time was and come to a conclusion about doom. When he conducts the experiment, he gets bits out the other end. If he selects 100 humans from the complete pool of N humans, the probability of picking any given human is not 1, and indeed, the probability of picking a human from a particular era is not 1. Said experimenter can even do things like T-tests to tell you how confident he should be in his conclusions about the distribution — he can give you actual numbers for his confidence.

    A person who is self sampling under Nick Bostrom’s “Strong Self Sampling Assumption”, however, gains no information that they didn’t already have. He places himself in an urn, reaches in, and selects himself with probability 1. He learns nothing new from self sampling, unlike the man selecting from the urn with all humans. If he tries to do a T-test on his “sample” he can’t. That last bit, I think, is the key — he has a datum but no way at all to judge his confidence in it. He can’t say “doom with the following margin of error”, so he has no way to use the information to make further decisions. He can’t, say, rationally decide that he should take his vacation now rather than later because later there won’t be an opportunity.

    There are several equivalent ways to say the same thing here. We can say “although doom is not soon for every experimenter (clearly someone in 25AD should not have concluded “doom soon”), every experimenter will find “doom soon” with probability 1, so zero bits of Shannon information can be derived from the experiment”. We can say “the experimenter has no way of finding a confidence interval for the hypothesis from performing the experiment”. We can phrase this in terms of whether a reasonable person should alter their behavior based on the experiment. All, however, come to the same suspicious result — that you haven’t really learned anything from the exercise at all.

    I’ve seen other supposed paradoxes about probability in the past, and I have to say that, in general, people are very poor at reasoning about these things even if they’re very smart and well informed. Take, for example, how long it took to understand the importance of distinguishable vs. indistinguishable particles in statistical mechanics. The “Monty Hall Paradox” also comes to mind. I won’t claim to be special in this regard — I’m just like everyone else, and I find that I have to work through the whole thing carefully to make sure I really have all the bits accounted for. I’m inclined to be suspicious whenever I see an argument that hinges on how much information is available that something may be missed. The Monty Hall “paradox” hinges on how much information you have, as do the problems people had with quantum statistical mechanics. This has the same feel.

    The relevance, to me at least, of bringing all of this up is that Robin seems to be claiming that you always have information, and that does not seem correct. You do not, in fact, always know if the error is positive or negative. I encourage tracking the bits. If you don’t have them, you can honestly assert ignorance.

  • To be clear: When I say “estimate” I have in mind an expected value. When someone says they think they know the direction of a better estimate, that does not mean they are very confident the actual value is in that direction.

    I have in mind the idea that if forced to take an action that depended on an expected value, you would make a choice, and so in that sense you do have such an expected value in your mind. It is fine to be clear that your estimate of the variance (or any other moment) is very high, that your estimate would change easily in response to new info, or that you would defer to another source as offering a better estimate. You can also complain that someone else has too small a variance, or that their estimate would change too little in response to new info.

    But if someone else has stuck their neck out to state their estimate, then it seems wrong to complain that you think it is wrong, without your being willing to indicate at least the direction of your estimate.

  • conchis

    I wonder whether people are talking past each other here. I agree both with those who say that sometimes (when your guess is adding no information, and you actually have the option of not making a decision/betting on outcomes), it’s best to just say “I don’t know”. However, I also agree with Robin, that if you’re going to claim that an estimate is biased, you should be able to give a sign. People seem to be treating these as inconsistent, but I don’t really see how they’re connected. The claim that someone should just say “I don’t know” seems to depend on the uninformativeness of their estimate rather than the existence of bias.

    (The likelihood that an estimate is going to be uninformative in any case depends on the context, (positively) on the amount of information already possessed by others and (negatively) on the richness of the information you can communicate. I don’t think it’s difficult to construct games where some individuals should not rationally contribute an estimate to group discussion. On the other hand, if you have to bet, you have to bet.)

  • Perry E. Metzger


    If you believe that you can always give the sign of the bias, you are in effect claiming you can build an oracle that can always give the sign, and in the end you are in effect claiming you know the exact correct value to any number of places. See my argument above for why this is true.


    You speak of information revealed through forced choice. Say I have no information and I say so. You then put a gun to my head and say “choose”. I flip a coin or use some other arbitrary method of selection and choose. What does this tell us? It says that I have no better choice to make than an arbitrary one. You would then, doubtless, claim that I am in fact stating that I think the correct expectation is 50% (or 33% if we have three choices etc.) That is true in some sense, but it is also uninteresting in the same way as being told “tomorrow, the price of AAPL will either rise, or will fall, or will stay the same” is uninteresting. Nothing is learned from the experiment, other than (perhaps) that I was not lying when I claimed ignorance.

  • Here is another way to frame the question. Imagine we are scoring people on their forecasts, but for some questions some people say “I don’t know” as their forecast. How should these answers be scored? One standard scoring rule is the log rule; a person’s score is then the sum of the logs of the probabilities they assigned to the correct answer (or the log of the joint probability they assigned, if any). How should this formula be adjusted; when should “I don’t know” get a higher score than some specific number?
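    The log rule described above is short enough to state directly. The scoring function follows the rule as stated; the 50/50 treatment of “I don’t know” on a binary question is one possible convention for comparison, not something the rule itself dictates:

```python
import math

def log_score(probs, outcome):
    """Log scoring rule: the score for one question is the log of the
    probability the forecaster assigned to the outcome that actually
    occurred. Higher (less negative) is better; the total score over many
    questions is the sum, i.e. the log of the assigned joint probability."""
    return math.log(probs[outcome])

# A confident correct forecast versus "I don't know" scored as a
# maximum-entropy 50/50 default (an illustrative convention):
confident = log_score({"rain": 0.9, "dry": 0.1}, "rain")  # log 0.9
dont_know = log_score({"rain": 0.5, "dry": 0.5}, "rain")  # log 0.5
# Under this convention "I don't know" beats a confident forecast only
# when the confident forecast turns out badly wrong.
```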

  • I agree that if you are complaining that an estimate seems to be an incorrect value, you should be able to say which direction would make it more correct. However if your problem is that it is too precisely stated, or that it states (or implies) a range that is too wide or too narrow, then this is a different kind of complaint and you ought to be able to point to the kind of change that would remedy your complaint. You might say: it’s unreasonable to state this to 3 significant figures; 1 significant figure would be more appropriate.

    As far as the forecast-scoring question, I think it depends on your purpose. You could score “I don’t know” as a zero, worse than the most unreasonable guess, which would give people an incentive to come up with some kind of answer (assuming they sought high scores). OTOH you are discarding information that allowing a few “I don’t know”s might have given you, in terms of how certain people are of their estimates. You might allow people to skip a few questions and then not count those in computing their scores. This way the forecasts would come from people who knew at least a bit about the issue.

  • Perry E. Metzger


    To open, I’m not sure I entirely understand how your scoring rule works — an example or two would be enlightening.

    Second, one might ask what the purpose to which we intend to apply the score is. Presumably the desire is to be able to pick which prognosticator is most likely to be accurate so that one can decide, say, how to invest money (or how much to hedge or insure) based on the predictions. How does one apply the score you propose to assist with this task?

    Moving on, to the extent that I understand your proposal (which is to say, not fully), I’m not sure that it properly accounts for the difficulty of a prediction. Making an accurate claim with a narrow band about tomorrow’s oil price is much easier than making a claim of similar precision about oil prices in 2020. Perhaps scores should be weighted somehow by difficulty. (Or, perhaps not — it would depend on what role the scores serve in applications, as I mention above.)

    As for how to deal with “I don’t know”, once we’ve settled on an exact method for dealing with predictions that we feel gives us enough information to judge how seriously we should take a given prognosticator, we can likely figure out how to properly score “I don’t know”.

  • John Thacker

    I have in mind the idea that if forced to take an action that depended on an expected value, you would make a choice

    But not all choices depend on expected value in a linear, or even convex, way. See my temperature example. Consider an example of expected amount of rainfall or snow tomorrow. Suppose I prefer to wear a rain jacket if it rains at least .5″, to also bring an umbrella if it rains at least 2″, to wear galoshes if it rains 6″, and to do nothing if it barely rains.

    If the expected amount of rainfall is .4″, it makes a big difference how tight the estimate is in whether it makes sense to bring a rain jacket, regardless of how accurate that expectation is.
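    The rainfall point can be illustrated with two toy forecasts (all numbers invented) that share the .4″ expectation but disagree sharply about the .5″ rain-jacket threshold:

```python
# Two rainfall forecasts, each with expected rainfall 0.4 inches.
tight = {0.3: 0.5, 0.5: 0.5}  # tightly clustered around the mean
loose = {0.0: 0.8, 2.0: 0.2}  # usually dry, occasionally a downpour

def prob_at_least(dist, threshold):
    """Probability the rainfall meets or exceeds a decision threshold."""
    return sum(p for amount, p in dist.items() if amount >= threshold)

# P(need a rain jacket) = P(rain >= 0.5"): 0.5 under `tight` but only
# 0.2 under `loose`; P(need an umbrella) = P(rain >= 2"): 0.0 versus
# 0.2. Same expected value, different recommended actions.
```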

    Then it seems wrong to complain that you think it is wrong, without your being willing to indicate at least the direction of your estimate.

    Not necessarily. I think it’s legitimate to argue that the person is talking about the expected value of the wrong event, such as the expected value of total rainfall as opposed to the expected value of the probability of it raining at least .5″. Or the expected value of money as opposed to the expected value of utility. Uncertainty about one statistic, such as money, can change the expected value of any statistic which is a non-linear function of it, such as utility. Hence one can argue that the expected value is correct but not its utility, or, another way of saying it, that they’re considering the wrong statistic.

    There are certain restricted problems where your point applies, such as the survey problem you discuss in the last comment, but in general there are lots of situations where an increase in uncertainty in one statistic means a different recommended course of action, without changing that statistic’s expected value.

  • Robin Hanson says: “One standard scoring rule is the log rule; a person’s score is then the sum of the logs of the probabilities they assigned to the correct answer (or the log of the joint probability they assigned, if any). How should this formula be adjusted; when should “I don’t know” get a higher score than some specific number?”

    That standard scoring rule only applies to discrete probability spaces \Omega with n given mutually exclusive outcomes, and to probability spaces (of predictions) absolutely continuous with respect to the measure we’re judging against, where we then use the log of the density at the point rather than the log of the exact probability. Generally in the continuous case it’s preferable to use something nice like Lebesgue measure, but this breaks down for a distribution with mixed discrete and continuous properties; e.g., rainfall, which has a point mass at zero and a continuous distribution of possible rainfalls above zero. There are a few (non-analytic) proposals for scoring the mixed case.

    But the question of scoring is an entirely different problem (one of decision theory) from what you originally proposed; scoring is about eliciting best estimates. The claim that “we don’t have enough information” can, properly meant, mean something entirely different from saying the best estimate is wrong. It can, quite consistently, be an argument that one is considering the wrong problem (maximizing expected rainfall or temperature vs. utility).

  • conchis


    I’m not saying you can always give the sign of an *error*. What I’m saying is that you should always be able to give the sign if you think there’s a *bias*. If you can’t give the sign, then you’re making a claim about error, which is not the same thing. (On the other hand, it’s claims about error that I think are most relevant to whether someone should, in any circumstance, say “I don’t know”, which is why I struggle to see the relevance of the whole argument about the sign of biases.)

    More generally, my sense of the argument for saying “I don’t know” is that it’s really only likely to apply in some sort of group decision-making process, or other situation where the point is not to make a prediction or a bet immediately, but to contribute to a future prediction or bet. (Again, if you have to bet now, saying “I don’t know” is just not an option.) In such a case, I would think the criterion should be something along the lines of “say ‘I don’t know’ if doing so will make the group’s predictions (however they are reached) more accurate than giving information”, and whether the criterion is satisfied will depend on the characteristics of the decision procedure, the amount of information possessed by other agents involved etc. My impression is that scoring rules are intended to function only in cases where agents must (or at least do) make actual predictions or bets, and so aren’t really relevant in situations where one could reasonably consider saying “I don’t know”. However, I confess to being somewhat out of my depth on this last point.

  • The scoring rule is designed to elicit probabilities; the “don’t knows” could give responses indicating a wider variance without having a different expected value, but that could indeed change the argument if we’re discussing optimizing the wrong expected value. If the “don’t knows” are unable to give any estimate of their distribution, though, that is indeed unhelpful.

  • John, yes of course not every decision depends directly on every expected value. I have been presuming that it is clear which expected value the two people are disagreeing about.

  • “I have been presuming that it is clear which expected value the two people are disagreeing about.”

    Yes, as that restricted problem. However, the post as you originally phrased it attempted to make a claim not supported by that argument, not restricted to that situation.

    “But saying “that’s wrong, because we just do not know” seems to me worse than useless.”

    There are situations where the correct response is not related in a linear way to the expected value alone, nor is hedging possible. “That’s wrong, because we just do not know” is a legitimate response in certain of those situations. Particularly if the person argues that our knowledge will improve in the future, and that acting incorrectly now would commit us to an irreversible course.

    To take another example: one might agree with the expected temperature increase over the next hundred years, but think that there is a non-trivial (but fairly small) chance of the interglacial period ending and another ice age starting (with dramatic temperature drops), offset by slightly increasing one’s belief in a larger temperature rise. If the net costs of an ice age greatly outweigh the net costs of global warming, then increased uncertainty leads to very different suggested action, at least for the near future.

    Hence, in my opinion, you are wrong to rule it out as a general starting point for an argument, since it nearly always means something about risk aversion. (Especially since most predictions are quoted in a straight expected value not expected utility manner, or otherwise subject to the St. Petersburg Paradox.)
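    [The ice-age point can be sketched with made-up numbers: two beliefs with the same expected temperature change, where the second adds a small cold tail, and a purely illustrative loss function that penalizes cooling ten times more per degree than warming.]

```python
def expected(dist, f=lambda x: x):
    """Expected value of f(X) over a discrete distribution {value: prob}."""
    return sum(f(x) * p for x, p in dist.items())

def loss(t):
    # Assumed, illustrative loss: a degree of cooling costs 10x a degree of warming.
    return -10 * t if t < 0 else t

base  = {3.0: 1.0}                 # certain +3 C
mixed = {-10.0: 0.2, 6.25: 0.8}    # same mean (+3 C), but a fat cold tail

print(expected(base), expected(mixed))            # 3.0 3.0
print(expected(base, loss), expected(mixed, loss))  # 3.0 25.0
```

Identical expected temperature change, very different expected loss: with asymmetric costs, the spread of the distribution drives the recommended action, not its mean.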

  • If you loosened your comment to “tell us something about the distribution which is different” I would agree. However, attempting to reduce it to a single expected value statistic in an attempt to have a pithy “sign of the bias” phrase makes it inaccurate, I reckon.

    Yes, I know what you *mean*, but I also know what people (should, anyway) mean when they claim that “we don’t really know.”

  • Perry E. Metzger

    John does make an important point. The specifics of what is being predicted are crucial both to using the prediction as a basis for action and to assessing accuracy after the fact. The ideal prediction is a full probability distribution. It is hard, however, to know whether the probability distribution was correct without doing many, many trials, and in general the event being predicted is singular, so the results of only one trial are available.

    There is also the not insubstantial problem that for many of the things one might predict, a “distribution” is not really the outcome even if you could somehow do multiple trials. Some events really are deterministic, or at least for practical purposes deterministic. If one claims, say, that there is oil under a particular spot with 75% probability, what one is really saying is that in cases where the tests have results like current instance has, 75% of the time there will be oil beneath the field. However, whether there is oil beneath the field is fully determined before the tests are even done — it is either there or it isn’t. Sometimes this distinction is clear from context and sometimes it is not — I would argue that predictions must make such distinctions quite explicit if they are to be properly tested.

  • I agree that the phrase “You’re biased!” should be accompanied by a disagreement about a probability distribution – which does not equate to telling someone that a scalar estimator (representing what? the mean? the median?) is off by some particular sign.

    On the other hand, if we’re dealing with a single probability estimate – a scalar quantity, note – there’s not much to say except “This probability is wrong, here’s a better one”, from which you can implicitly extract the sign of the disagreement.

    You *will* assign some probability, whether you like it or not, whether you admit it or not, whether you associate it with a verbal description like “15 percent” or not, because in the real world, you have to choose actions based on degrees of anticipation. As Russell and Norvig say, “Every action (including inaction) is a kind of bet, and every outcome can be seen as a payoff of the bet. Refusing to bet is like refusing to allow time to pass.”

  • You can’t say something about your probability distribution without making a claim about an expected value; after all, the set of all expected values determines a probability distribution.

  • Perry E. Metzger

    I think that, in addition to all the other arguments, we’ve already seen the argument that, for example, a disagreement might be about a distribution and not an expectation value, or the disagreeing party might hold that only trivial predictions are possible. Anyway, I’m dropping out at this point.

  • “You can’t say something about your probability distribution without making a claim about an expected value; after all, the set of all expected values determines a probability distribution.”

    Okay, now you’re confusing terminology and changing your question again. What are you using “the set of all expected values” to mean? Certainly knowing the expected value of a single random variable does not determine the probability distribution of the random variable. That’s a totally wrong statement.

    On the other hand, if you mean by “the set of all expected values” the set of the expected values of all measurable functions of the random variable, then yes, that certainly determines the probability distribution. Similarly, knowing the value of the random variable on all members of the probability space, or all expected values of the random variable with the domain restricted to each of the members of the σ-algebra of the probability space, or the probability of the random variable lying in each Borel set (or even just each set unbounded below and bounded above only by each constant) determines the distribution. Also, I suppose that you could mean by “all the expected values” knowing all the moments, or, equivalently, the moment generating function.*

    *– Technically, knowing the last two formulations is only enough to know the probability distribution up to equality in distribution, which is not enough to specify the random variable completely, and which could be particularly important in the case of cross-correlations with other random variables. A and B might be equal in distribution, but for hedging purposes it is important whether they are perfectly correlated, perfectly anti-correlated, or whatever.

    I know of very few papers or meta-analyses that give the full moment generating function of the random variable that they study, or otherwise completely specify the distribution. Instead, they tend to give the best possible guess, whether mean, median, or even mode, sometimes a confidence interval or Bayesian credible interval, usually assuming or implying a normal distribution of some sort.

    However, in many of those cases the normal is implied by the fact that the average result of the experiments over many trials tends to a normal distribution by the Central Limit Theorem, not because the distribution in question being studied is actually a normal distribution.
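    [A small simulation of that CLT point, with arbitrary parameters: an exponential distribution is strongly skewed (skewness 2), yet averages of 100 draws are nearly symmetric; the near-normality belongs to the sample mean, not to the underlying quantity.]

```python
import random
import statistics

random.seed(0)

def skewness(xs):
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

# Underlying quantity: exponential, clearly non-normal (skewness ~2).
raw = [random.expovariate(1.0) for _ in range(20000)]

# Averages of 100 draws each: approximately normal by the CLT
# (skewness of the mean shrinks like 2 / sqrt(100) = 0.2).
means = [statistics.fmean(random.expovariate(1.0) for _ in range(100))
         for _ in range(2000)]

print(round(skewness(raw), 2))    # near 2
print(round(skewness(means), 2))  # near 0.2
```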

    I will grant, surely, that any case where someone has fully specified the probability distribution, including by giving the expected value of all integer powers of the random variable in question, that the distribution is fully specified for the random variable and indeed for all measurable functions of it– provided that we aren’t interested in cross-correlations, since this only specifies the distribution, but not correlations with other random variables that might be of interest for hedging purposes.

  • “You can’t say something about your probability distribution without making a claim about an expected value; after all, the set of all expected values determines a probability distribution.”

    Consider the infinite family of probability distributions P_x where P_x(x+2) = P_x(-x) = 1/2 for all x > 0. Let the random variables A_x have distribution P_x for all x > 0. Clearly E[A_x] = 1 for all x.

    Now, obviously if you have *all* the other moments, or otherwise know the distribution via some of the ways I pointed out above, then you can specify the distribution.

    Again, there’s still room for pointing out when we do or do not know enough to hedge. Let random variables B_x and C_x also have the same distributions. Let B_x(\omega) = 2 - A_x(\omega) for all \omega \in \Omega of the probability space (the reflection of A_x about its mean of 1, which swaps the two outcomes). I.e., B_x has the same distribution as A_x, but the two are perfectly anti-correlated. Let C_x be uncorrelated with both A_x and B_x.

    Then, it is perfectly legitimate for someone to argue that an investment strategy should avoid C_x and invest equally in A_x and B_x in order to hedge, guaranteeing a payoff of 1 (and avoiding any loss), since “we don’t know enough about C_x.” We know its *distribution* perfectly, but still not enough about the random variable.
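    [A quick simulation of that hedging claim. One way to realize a B_x with the same distribution as A_x but perfectly anti-correlated with it is B_x = 2 - A_x, the reflection about the common mean of 1; x = 5 is an arbitrary choice.]

```python
import random
import statistics

random.seed(1)
x = 5.0  # any x > 0: A takes value x + 2 or -x, each with probability 1/2

def draw_A():
    return x + 2 if random.random() < 0.5 else -x

# B = 2 - A swaps the two outcomes, so B has the same distribution as A
# but is perfectly anti-correlated with it. C is an independent draw.
samples = [(a, 2 - a, draw_A()) for a in (draw_A() for _ in range(10000))]

hedged   = [(a + b) / 2 for a, b, c in samples]  # constant: always 1.0
unhedged = [(a + c) / 2 for a, b, c in samples]  # mean ~1, large variance

print(min(hedged), max(hedged))              # 1.0 1.0
print(round(statistics.fmean(unhedged), 1))  # about 1, but risky
```

The hedged portfolio pays exactly 1 on every draw; the unhedged one only averages 1, despite C having a perfectly known distribution.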

  • John, I wasn’t talking about investment strategies in particular. All I said was that if you have a complaint about someone else’s expected value of something, you should at least give a sign of some expected value of yours, relative to what you think they said. I didn’t say which random variable you should talk about; I explicitly listed several possibilities. I don’t see how you could give any info about your probability distribution which does not implicitly tell us something about the sign of some expected value of yours. Can you find a counterexample?

  • John Thacker

    I don’t see how you could give any info about your probability distribution which does not implicitly tell us something about the sign of some expected value of yours. Can you find a counterexample?

    What do you mean by “some expected value of yours?” Do you mean the expected value of some function of the random variable of interest, such as one of its moments?

    I gave one counterexample above of random variables with different distributions but the same expected value. Such a thing is trivial. Of course, yes, they do have different moments. (Expected value of the random variable raised to different powers.) I’m sorry, I don’t really understand what you mean by “some expected value of yours;” are you saying that one should be able to give the difference in the expected value of some function of the original random variable? (Forgive me for coming at this from the perspective of a probabilist.)

    Of course, we can have random variables identical in distribution, but different because they have different values on different events. This leads to different values of cross-correlations and various joint statistics. If you expand to include all expected values of all possible random variables, including random variables not originally of interest, then I don’t suppose anyone could disagree.

    However, I don’t think that was how you originally phrased the problem. You talked about people not being able to say “we just don’t know” or “we don’t have enough information” without disagreeing about the expected value (or, as we’ve gone to, the distribution.) If it’s merely a case of me feeling that your original language was imprecise, then I apologize.

    I grant that there are natural meanings of “we just don’t know” that are precisely as you describe, and for which your objection is absolutely natural. However there are, in my opinion, some entirely natural meanings of “we just don’t know” that do not warrant your dismissal.

    The first is a reply of “we just don’t know the events which lead to the event very well, making it difficult to find other well-correlated events, making it difficult to hedge.”

    The second is a reply of “we just don’t know a great deal right now, but our information will improve in the future, so we should wait.”

    Now, in both cases it is of course possible to find some scalar expressing a belief, and that scalar will have an expected value which is different from what has previously been discussed. In both cases, however, the argument introduces a different random variable, a different statistic, or a different possible course of action than those that have previously been discussed. It is a way of changing the subject. I feel that “we just don’t know” is an acceptable way to introduce the argument “even if I grant yours as the best estimate for that particular problem, your figures contain a considerable amount of uncertainty and thus risk if we pursue your course of action; let me shift your attention to an alternative course of action for which we are able to hedge quite effectively and thus can guarantee doing some good.”

    I feel that you’ve stipulated the problem into something unrealistic, such as assuming that the people discussing are already considering *all* possible alternate courses of action. Not only would, IMO, the average person take your “sign of the bias” comment to mean that someone should offer a sign of the bias in the original prediction offered (rather than in some measurable function of the original random variable, or a complicated cross-correlation argument), but quite often people make arguments having considered only a finite number of courses of action, or feel it sufficient to demonstrate that their proposal is better than the status quo but not better than all possible alternatives.

    Alternatively, it is extremely common to find arguments which deal only with the expected result based on current best forecasts, and which have failed to consider the risk or uncertainty involved. In that case, a “tell me the sign of the bias” type comment will normally be interpreted as a request to offer a better expected value– such as a better expected price of oil, to use your original example– *not* a request to explain that, due to a larger than expected variance, the risk premium is particularly high for the recommended course of action, and thus that an entirely different course of action combining two hedging strategies (which may include waiting until information improves, since committing now to a radical course of action carries a high risk premium) has a different expected utility than the originally recommended action. Most people, I reckon, would feel that this second argument, rather than being best encapsulated by “tell me the sign of the bias of my estimate of the price of oil,” would be best summarized by “we just don’t know what the price of oil will be, so your suggestion is risky.”

    But YMMV.

  • In practical argument, I feel that “we just don’t know” also has a high chance of meaning “you haven’t properly considered the risks involved, you’ve only made a lot of assumptions and then done calculations based on your expected value for each assumption, when in reality if you included your full distribution at each step, you would obtain a resultant expected value which would give a very different answer than the one you’ve given.”

    People very frequently take random variable X, find its expected value, and then proceed to estimate X^2 or all sorts of other derived variables by pretending that E[X^2] = (E[X])^2. But that’s not true in general. Once you start throwing lots and lots of variables into the calculation, you can get some tremendously different answers.
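    [A two-line illustration of that fallacy, with made-up values: a variable that is 0 or 10 with equal probability.]

```python
# X is 0 or 10 with equal probability: E[X] = 5, but E[X^2] = 50, not 25.
xs = [0.0, 10.0]
EX  = sum(xs) / len(xs)                 # E[X]
EX2 = sum(x * x for x in xs) / len(xs)  # E[X^2]
print(EX ** 2, EX2)  # 25.0 vs 50.0: (E[X])^2 badly underestimates E[X^2]
```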

  • John, “some expected value of yours” means the expected value of some random variable given your probability distribution. I don’t see why you would think that I am “assuming that the people discussing are already considering *all* possible alternate course of actions,” nor why you think it reasonable to interpret “you are wrong because we don’t know” as “let’s change the subject and talk about something else.”

  • John, “more variance than you anticipate” -> “You are assigning too much probability density to the mean/mode/center of your distribution.” That’s a signed bias on the probability mass for that set of points – too much, rather than too little. In plain-language strategic thinking, this would come out as “You’re focusing too much on what you think is the most likely outcome, and not thinking about possible exceptions.”

  • John Thacker

    “some expected value of yours” means the expected value of some random variable given your probability distribution. I don’t see why you would think that I am “assuming that the people discussing are already considering *all* possible alternate course of actions,” nor why you think it reasonable to interpret “you are wrong because we don’t know” as “let’s change the subject and talk about something else.””

    In that case, I think that your original phrasing of the post was essentially useless and misleading. If someone says “you are wrong,” then it’s an entirely natural consequence to assume that they’re saying that expected utility (or whatever) is lower by taking the other person’s advice. According to the trivial interpretation of your argument, that’s a “sign of the bias” right there. It thus seems to me useless to insist that someone who says “you’re wrong” in *any* way provide a “sign of the bias of some random variable given a probability distribution.” Merely by saying “you’re wrong,” we can assume that they’re claiming that expected utility would be maximized by taking some other option.

    I agree that the person claiming “you’re wrong” has an obligation to offer an alternate course of action, but if one is not mentioned I believe that it is reasonable to assume that the alternative is the status quo to the position offered by the other party. Thus it seemed to me that your objection is essentially contentless and redundant, at least in this trivial fashion.

    Since it’s obvious to me that any claim that someone is wrong involves a claim that their expected utility claims are wrong and do not actually maximize utility, to me the natural interpretation of your original point “tell me the sign of the bias” was a claim that someone had to offer a point of dispute with the original new piece of information offered by the other party– i.e., dispute the price of oil itself or its distribution. I see that that’s not what you meant.

    “That’s a signed bias on the probability mass for that set of points – too much, rather than too little. In plain-language strategic thinking, this would come out as “You’re focusing too much on what you think is the most likely outcome, and not thinking about possible exceptions.””

    Yes, absolutely, but my point, as above, is that it’s “worse than useless” pedantry to deny that “you are wrong,” especially as used in casual rhetoric, automatically carries a claim that *some* sort of expected value (esp. expected utility) is different. Again, in casual rhetoric, I believe that a reply of “show me where my bias is” is generally going to be taken as a request to focus on the particular measurements of the random variables brought up originally, especially on the original expected value offered, such as a price of oil.

    I think fundamentally we agree that “you’re wrong” must rationally contain a disagreement about some expected value of some random variable. Where we disagree is in what the natural interpretations of rhetorical discourse are. To me, “you’re wrong, because we just don’t know” should automatically indicate that I disagree about some expected value of a random variable and prefer some alternate course to the one you’re suggesting– if not explicitly stated, then the status quo. It may be that the point you’re attempting to make is so trivially obvious to me that to demand that people state it in rhetorical discourse seemed useless and redundant, so I incorrectly searched for an alternate meaning. To me, “you’re wrong, because we just don’t know” is not “worse than useless,” because I automatically interpret “you’re wrong” as implying the necessary claim that *some* expected value is different and then interpret “because we just don’t know” as an imprecise statement of one of several possibilities.

    One possibility is a statement that the other person’s variance is wrong, or that the first person is not considering other alternatives, or that the random variable brought up by the first person has poorly understood causes, making it difficult to hedge against, unlike other alternatives. To me it is also reasonable to assume that the first person may have made one of several very common mathematical errors, especially that of deriving variables which are non-linear measurable functions of estimated random variables by using only the expected values of the estimates rather than the entire distributions. In particular, if the function is complicated enough, then it can be too computationally difficult or intensive to properly derive the variable of interest (such as utility) using the entire distributions of the estimated variables. In which case, “we just don’t know” also has, to me, a natural interpretation of “in order to perform this calculation given our resources, we must make many simplifying assumptions (often including using expected or maximum likelihood values for estimated parameters). However, the equation of interest is highly non-linearly dependent on the estimated parameters, and if the calculation were performed correctly without such mathematical simplifications, the results would be different.”

    Considering how poorly understood this mathematical point is, I think it’s a reasonable interpretation. (For example, much statistical literature recommends estimating a population’s standard deviation from a sample with an estimator which is not unbiased, but rather the square root of an unbiased estimator of the variance– which is not the same thing, though few people realize it. The entire concept of the unbiased estimator has problems for certain probability distributions, and so too do maximum likelihood estimators in certain situations.)
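    [A Monte Carlo check of that parenthetical claim, with arbitrary parameters (standard normal data, samples of size 5): the sample variance with denominator n-1 is unbiased for the variance, but its square root systematically underestimates the standard deviation.]

```python
import math
import random
import statistics

random.seed(2)
sigma = 1.0
n = 5

# Many small samples from N(0, 1): statistics.variance uses denominator n-1,
# so it is unbiased for sigma^2 -- but sqrt(variance) is biased low for sigma.
vars_, stds = [], []
for _ in range(20000):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    v = statistics.variance(xs)   # unbiased estimate of sigma^2
    vars_.append(v)
    stds.append(math.sqrt(v))

print(round(statistics.fmean(vars_), 2))  # ~1.00 (unbiased)
print(round(statistics.fmean(stds), 2))   # ~0.94 (biased low, by Jensen)
```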

  • John Thacker

    My claim then, is twofold:

    “You’re wrong” automatically implies that some expected value of some random variable is different. In the absence of explanation, there are generally obvious default ones such as expected utility.

    “We just don’t know” has several possible rational interpretations as used in rhetorical discourse as mentioned above, any one of which may be meant.

    It is of course preferable to fully explain what you mean in the course of subsequent argument, but I do not think that the summarized statement is “worse than useless.”

    It is also possible that the person making the argument means it in an irrational form; however, it seems to me that you stated that we’re already assuming that the participants are both rational and can agree on what random variables to measure. If you insist on translating the rhetorical into that particular irrational meaning, then I must agree with you that it is an impermissible statement. If that’s what your entire argument is, then I totally agree with you.

    But as a statement about actual argumentation and rhetoric, I disagree, as outlined above.

  • ChrisA

    Just to note that there is a difference between saying “I don’t know” and “no-one can know”. The “I don’t know” statement can be made for many different reasons and I agree is not terribly helpful in moving the debate forward. The “no-one can know” statement is more useful – it puts the problem into a particular class. There are things that everyone can agree no-one can know the answer to – next week’s lottery numbers, for instance – but there is a range from this to the case of an absolutely known probability function. An estimate of next week’s lottery numbers should be met by a response of “no one can know”. You can have a very useful debate about whether something can be partly, or not at all, estimated or known.

    Any prediction market which is trying to estimate a “no-one can know” factor should usually be recognisable by a very flat distribution, though not necessarily. So perhaps prediction markets should routinely include a “no one can know” option. This would pay off if the prediction market’s result turned out to be false (say, outside a sigma or two). If this option attracted a lot of money, that would perhaps say that the prediction made by the market is not very useful.

  • Interesting thread.

    ChrisA: The intent of having a “no one can know” option is interesting. It’s just going to be very difficult to practically implement or realistically price. Nice theoretical concept though.

    My simplistic view is that “I don’t know” is equivalent to assigning equal odds to all outcomes. Solves Robin’s scoring rule and sign challenge.
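    [A sketch of how the log scoring rule from earlier in the thread treats “I don’t know” read as equal odds, using illustrative probabilities for a two-outcome event.]

```python
import math

# Log score: the log of the probability assigned to the outcome that occurred.
def log_score(probs, outcome):
    return math.log(probs[outcome])

confident = {"rain": 0.9, "no rain": 0.1}
uniform   = {"rain": 0.5, "no rain": 0.5}  # "I don't know" as equal odds

# If it rains, the confident forecaster scores better...
print(log_score(confident, "rain"), log_score(uniform, "rain"))
# ...but if it doesn't, the uniform "I don't know" forecast wins.
print(log_score(confident, "no rain"), log_score(uniform, "no rain"))
```

So under this reading, “I don’t know” gets a definite score like any other forecast; it is simply outperformed whenever someone else has, and uses, real information.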

  • Daniel Greco


    Sometimes there are technical problems with assigning equal odds to all outcomes. If I know that a cube factory produces cubes with edges that vary between 1 and 2 units of length, then I also know that the factory produces cubes whose volumes vary between 1 and 8 units cubed. Suppose that’s all I know about the factory. If my probability distribution for lengths is uniform, then my probability distribution for volumes is not uniform, and vice versa. For example, if I think that there’s a 0.5 chance that a randomly selected cube will have edge length > 1.5 units, then I must think that there’s a <0.5 chance that a randomly selected cube will have volume > 4 units cubed. I cannot consistently assign equal odds to all possible lengths, and assign equal odds to all possible volumes.

    We might think this sort of a case is a good candidate for an “I don’t know” response. If somebody else has the same information that I do, and he says his estimate that a randomly selected cube has an edge of length >1.5 is 0.5, I might say I think he’s doing something wrong. Namely, he’s decided to be uniform over length rather than volume for no reason. I would have the same objection if he had a degree of belief 0.5 that the volume of a random cube would be >4. You might think that in this situation, it would be inappropriate to have precise degrees of belief about the expected length, or the expected volume, of a random cube in the factory.

    If I were asked what my expected length for a random cube in the factory was, I think I’d just say that I expect that the length of a cube is between 1 and 2. I don’t think it would be reasonable to offer any particular expected length between 1 and 2.
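    [A simulation of the cube-factory tension above: if edge length is uniform on [1, 2], then volume = length³ is not uniform on [1, 8], so the two “equal odds” assignments contradict each other.]

```python
import random

random.seed(3)

# Edge length uniform on [1, 2] implies volume = L**3 on [1, 8].
lengths = [random.uniform(1.0, 2.0) for _ in range(100000)]

p_len = sum(L > 1.5 for L in lengths) / len(lengths)       # ~0.5 by symmetry
p_vol = sum(L ** 3 > 4.0 for L in lengths) / len(lengths)  # P(L > 4**(1/3)) = 2 - 4**(1/3) ~ 0.41

print(round(p_len, 2), round(p_vol, 2))
```

If volume were also uniform, P(volume > 4) would be 4/7 ≈ 0.57, not ≈ 0.41: one cannot consistently assign equal odds over both parameterizations.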

  • ChrisA, “No one can know next week’s lottery numbers” is what fools say when they buy tickets; “You can’t know I *won’t* win.” Lottery numbers are something we have very exact probability distributions over, and anyone who departs from this probability distribution is fooling themselves and losing money in the process. Computing the expectation, or any other simple derived quantity, would be no trouble at all. “I don’t know, and no one can know” is a very dangerous thing to say about a lottery ticket; “That ticket has an exactly 1 in 28,203,400 chance of winning and if you bet at other odds you will lose money” is much more helpful to say to someone considering buying a ticket.

  • chrisA

    I agree; I was only using the example as something we can all agree we can’t know. If you would like an unarguable example where the distribution as well as the expected value is unknowable, how about the number of intelligent life forms in a galaxy outside our light cone? My point was really that there is a range from things that we can know well to things that we can’t know at all. But when we get a distribution from someone, how do we know how well or how much it is underpinned by real knowledge?

    If we look at, say, the global warming predictions, we get a range of possible rises in average temperatures – I have heard from 3 to 6 deg C. But how much faith should we put in this distribution? Clearly it is of worse quality than if the same distribution were provided for the temperature in New York tomorrow. How could we “measure” or otherwise agree on this quality factor? Could the measure include whether the model that produced the distribution can be tuned by real feedback?

  • Wow, I missed this conversation by a year. Good comments, and I think those who defended “no one can know” got part of what I was saying.

    The other part was a simple reminder that “peak oil” cannot be directly measured. All we have as measurable data are price and current production data. The next step is always an extrapolation based upon an assumption. One starts, for instance, with the assumption that Hubbert’s method will hold for world production, and that a calculation done today will yield an accurate “high production” and “high production date.”

    How do you put error bars on that assumption, that Hubbert’s method, a heuristic, will hold?

    (And I might also comment that in the year since this post, the “Hubbert’s date” for peak oil has been moved and argued over again and again.)