Respect Forecast Accuracy

The topic at Cato Unbound this month is “What’s Wrong with Expert Predictions.” Dan Gardner and Philip Tetlock’s lead essay points out a puzzling lack of interest in forecast accuracy:

Corporations and governments spend staggering amounts of money on forecasting, and one might think they would be keenly interested in determining the worth of their purchases and ensuring they are the very best available. But most aren’t. They spend little or nothing analyzing the accuracy of forecasts and not much more on research to develop and compare forecasting methods. Some even persist in using forecasts that are manifestly unreliable. … This widespread lack of curiosity … is a phenomenon worthy of investigation.

My response essay considers this puzzle. The editor summarizes:

Robin Hanson argues that most people aren’t interested in the accuracy of predictions because predictions often aren’t about knowing the future. They are about affiliating with an ideology or signaling one’s authority. … He suggests that one way to make predictions more accurate might be to lift both the social stigma and legal prohibitions against gambling.

Key quotes:

Even if disinterest in forecast accuracy is explained by forecasting being only a minor role for pundits, academics, and managers, might we still hope for reforms to encourage more accuracy? …

Hope … mainly comes from the fact that we pretend to care more about forecast accuracy than we actually seem to care. We don’t need new forecasting methods so much as a new social equilibrium, one that makes forecast hypocrisy more visible to a wider audience, and so shames people into avoiding such hypocrisy. …

It isn’t enough to devise ways to record forecast accuracy—we also need a new matching social respect for such records. Might governments encourage a switch to more respect for forecast accuracy? Yes: by not explicitly discouraging it!
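The essay calls for records of forecast accuracy without specifying a scoring method. A minimal sketch of how such a record could be kept, assuming probabilistic yes/no forecasts scored with the Brier score (my choice of measure, not one named in the essay):

```python
# Hypothetical sketch: keeping a forecast-accuracy record with the
# Brier score, the mean squared error between probability forecasts
# and 0/1 outcomes. Lower is better; an uninformative forecaster who
# always says 50% scores exactly 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    assert len(forecasts) == len(outcomes) and forecasts
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated, informative record beats the 50% baseline.
pundit = brier_score([0.9, 0.8, 0.2, 0.7], [1, 1, 0, 1])  # 0.045
coin = brier_score([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 1])    # 0.25
```

A public ledger of scores like these is the kind of "matching social respect for such records" the essay argues is missing: the score itself is trivial to compute; the hard part is getting anyone to check it.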

  • Dan Hill

    There was a study done in the software industry a while back (sorry, can’t remember the details) which found that project managers who forecast with narrower margins of error (“the software will be ready in 5 to 6 months”) were regarded as more competent and better rewarded than project managers whose forecasts were less precise (“the software will be ready in 8 to 12 months”), even though the observed results fell overwhelmingly into the ranges forecast by the latter group.

    This is consistent with everything I’ve observed in multiple industries. Forecasting is a political process designed to get approval. Executives and politicians reward confidence over competence.

    • http://hanson.gmu.edu Robin Hanson

      Perhaps you have this in mind.

  • nw

    I agree with the affiliation-to-authority theory. You’ve discussed this in your post on “connections” vs. “insights”. A prediction, even an accurate one, shows that the predictor lacks the power to bring about the outcome they’re predicting. The outcome matters (otherwise the predictor wouldn’t care), so the person capable of producing it has higher status than the predictor. The only way to make the prediction worthwhile to the predictor is affiliation with leading authorities.

    People like accuracy if it works toward some end. So a Republican would tout the accuracy of a fiscal policy report if it supports tax cuts, but ignore it completely if it doesn’t.

    Also, forecast inaccuracy is useful if it helps discredit a conclusion you don’t like, whether or not it’s true. Banks only buy bond ratings if they’re positive.

    To most people, the truth is a means and not an end.

  • http://www.nancybuttons.com Nancy Lebovitz

    An additional twist to the theory: in addition to affiliation, predictions are mostly about lowering anxiety.

  • RandomReal[]

    The flip side of this process is planning. Planning is difficult work: the longer the time period you have to plan for, the more difficult it becomes. I suspect that, like the general who still wanted the admittedly inaccurate monthly weather forecasts, managers like one definite scenario to base their plans upon. Trying to plan for several different scenarios with varying probabilities can result in a complex set of instructions that few will comprehend or follow. Employees may ask, “What are we supposed to do?” It is much easier to say, “Do this, this and that,” all according to a plan.

    Planners, nevertheless, need cover. A definite forecast is their security blanket, something to fall back upon to say, “How could I have known that could happen? It wasn’t in the forecast.” It’s sort of the ultimate in CYA.

    It’s also probably better that the forecaster is not part of the company when the predictions fail. And, just like cheap wine tastes better the more expensive the consumer thinks it is, the more expensive a forecast, the “better” it becomes. It’s no wonder there is a thriving industry in consultancy, given the signalling rewards that come from paying money to a respected firm.

    Ultimately it boils down to: set a goal, plan, muddle through.

  • Jeremy

    Perhaps forecast accuracy is largely ignored because the most accurate estimates are not necessarily the best estimates. The future holds such uncertainty that the most accurate estimates are probably accurate because of “luck” or fortuitous assumptions made by the forecaster. Such assumptions may have been baseless at the time of the forecast.

    To take an extreme example, suppose two forecasters are asked to predict next year’s oil price. One of them conducts rigorous statistical, geopolitical, and industry survey analysis, and decides the price will be $65/Bbl. The other has a hunch that there will be a new war in the Middle East, and also his grandmother just turned 76 years old, so he estimates the price will be $76/Bbl. If next year’s oil price turns out to be $78/Bbl, what does that tell us about the quality of the two forecasts? Nothing.

    Managers should judge a forecast by HOW it was obtained, not by its realized accuracy. The real-world outcome of the variable of interest should be irrelevant when judging its forecast. Otherwise, lottery winners would be praised as genius strategists, even though their expected return was negative when they played the number.

  • Douglas Knight

    How can we distinguish the affiliation theory from other theories mentioned in these comments? (eg, that specific predictions are bought to justify existing choices)

  • Drewfus

    Confidence in, and the authority of, high-status individuals are crucial in any organization. Advertising confidence by claiming to know the future is one way society has devised to deal with what I call the “first party problem”: how do we make those at the top of a hierarchy accountable?

    Predicting the future with accuracy is key to this, and so is avoiding dud predictions. The demand for prediction is fundamental, and (ironically) highly predictable in its own right. Consequently, there is no real need for accurate predictions: as long as those making the predictions have adequate credentials, any plausible prediction will do the job. The relationship between determinicity of demand and quality of supply is apparent:

    Determinicity of demand = Inverse quality of product

    Ceteris paribus.

  • http://daedalus2u.blogspot.com/ daedalus2u

    Most authorities only use forecasts to justify what they want to do anyway. If forecasts don’t tell them what they want to hear they ignore them and wave money around asking for forecasts that predict what they do want to hear.

    What was Ryan’s basis for estimating 5% economic growth and 2.5% unemployment? Nothing beyond wishful thinking to justify cutting taxes more. Why does every Republican forecast that lowering taxes will increase employment? Is there any evidence that lower taxes will increase employment? No, there isn’t.

    The problem the Cato Institute has with bad forecasts is that the Cato Institute only produces the forecasts that its benefactors want to hear. If you want to make forecasts you have to base them on reality and not wishful thinking. Why is the Cato Institute still denying global warming?

    http://www.politifact.com/truth-o-meter/statements/2009/apr/01/cato-institute/cato-institutes-claim-global-warming-disputed-most/

    Why? Because that is what those who are funding the Cato Institute want to hear. They don’t want to hear the truth, they don’t want accurate forecasts.

    Authorities have a tendency to kill the messenger that brings bad news. That doesn’t change reality, it just discourages honesty with authorities. If authorities are willing to kill messengers who bring bad news, what do they do with forecasters who forecast bad news?

    The problem is that it takes a leader who wants accurate forecasts and is willing to listen to them, and such people rarely become leaders, because they make accurate promises too. People don’t want accurate promises; they want wishful thinking and impossible promises: 5% growth, 2.5% unemployment, low taxes, high military spending.

    • http://hertzlinger.blogspot.com Joseph Hertzlinger

      Is this the same Cato Institute that used to be headed by William Niskanen, who was fired by the Ford Motor Company for going against what his alleged corporate masters wanted to hear? Is this the same Cato Institute that frequently mentions regulatory capture, a topic that real corporate shills will avoid?

      There are, of course, other instances of right-wing intellectuals going against the results their benefactors want.

  • cournot

    Jeremy is onto something. Has Robin considered that forecast accuracy is OVERVALUED in one field: stock or mutual fund picking? The leading magazines highlight the mutual funds that did well in recent months or years. These (usually lucky) outliers are rewarded disproportionately. Of course, you could say that’s because the stupid mags highlight the wrong sort of forecast accuracy. But it doesn’t matter. Enough people glom onto the “latest” thing that there are huge rewards to getting the market “right” for a few years and then closing up shop once reversion to the mean sets in.

    This takes advantage of the fact that people don’t understand random walks and like to play the lottery.

    I’m not saying that persistent funds aren’t also rewarded. But the current system of highlighting short term winners seems to be a sort of perverse publicity that encourages stupid investors even more.

    Industries that wish to avoid this effect will allow prediction markets only with the greatest of care.

  • http://entitledtoanopinion.wordpress.com TGGP

    I thought John Cochrane’s response was terrible. I agree with Scott Sumner that the EMH has a lot going for it and so price changes are hard to predict, but that doesn’t explain forecasters doing WORSE than Tetlock’s “dart-throwing monkey”. Instead they should perform like a dart-thrower themselves: no better and no worse. Tetlock did a lot to show the superiority of the “fox” style of thinking at unconditional predictions; Cochrane gives very little evidence to back up his claims about the worth of “hedgehog” thinking for conditional predictions.

    • http://entitledtoanopinion.wordpress.com TGGP

      I might as well note that the final responder, Bruce Bueno de Mesquita, now has his essay up.
