13 Comments

I might as well note that the final responder, Bruce Bueno de Mesquita, now has his essay up.


Is this the same Cato Institute that used to be headed by William Niskanen, who was fired by the Ford Motor Company for going against what his alleged corporate masters wanted to hear? Is this the same Cato Institute that frequently mentions regulatory capture, a topic that real corporate shills will avoid?

There are, of course, other instances of right-wing intellectuals going against the results their benefactors want.


I thought John Cochrane's response was terrible. I agree with Scott Sumner that the EMH has a lot going for it, and so price changes are hard to predict, but that doesn't explain forecasters doing WORSE than Tetlock's dart-throwing monkey. Instead they should act like dart-throwers themselves and be no better and no worse. Tetlock did a lot to show the superiority of the "fox" style of thinking at unconditional predictions; Cochrane gives very little evidence to back up his claims about the worth of "hedgehog" thinking for conditional predictions.


Jeremy is onto something. Has Robin considered that forecast accuracy is OVERVALUED in one field -- stock or mutual fund picking? The leading magazines highlight the mutual funds that did well in recent months or years. These -- usually lucky -- outliers are rewarded disproportionately. Of course, you could say that's because the stupid mags highlight the wrong sort of forecast accuracy. But it doesn't matter. Enough people glom onto the "latest" thing that there are huge rewards to getting the market "right" for a few years and then closing up shop once you get reversion to the mean.

This takes advantage of the fact that people don't understand random walks and like to play the lottery.

I'm not saying that persistent funds aren't also rewarded. But the current system of highlighting short term winners seems to be a sort of perverse publicity that encourages stupid investors even more.

Industries that wish to avoid this effect will allow prediction markets only with the greatest of care.
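The reversion-to-the-mean point can be sketched with a toy simulation (all the numbers here are assumptions for illustration, not market data): give 1,000 funds identical expected returns, crown the luckiest one over a three-year ranking period, and watch it perform like everyone else afterwards.

```python
import random

random.seed(0)

N_FUNDS = 1000
YEARS_RANKED = 3   # period the magazine uses to pick the "winner"
YEARS_AFTER = 3    # subsequent period

def annual_return():
    # Every fund has the same true expected return (7%) plus noise;
    # skill plays no role at all in this toy model.
    return random.gauss(0.07, 0.15)

# Performance in the ranking period.
past = [sum(annual_return() for _ in range(YEARS_RANKED))
        for _ in range(N_FUNDS)]
winner = max(range(N_FUNDS), key=lambda i: past[i])

# The magazine-cover fund's future is just another random draw.
future_winner = sum(annual_return() for _ in range(YEARS_AFTER))
future_avg = sum(sum(annual_return() for _ in range(YEARS_AFTER))
                 for _ in range(N_FUNDS)) / N_FUNDS

print(f"winner's past 3y return: {past[winner]:+.1%}")
print(f"winner's next 3y return: {future_winner:+.1%}")
print(f"average fund next 3y:    {future_avg:+.1%}")
```

The "winner" looks spectacular in the ranking window purely by selection, then its subsequent returns cluster around the same 7%-a-year expectation as everyone else's.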


Most authorities only use forecasts to justify what they want to do anyway. If forecasts don't tell them what they want to hear, they ignore them and wave money around asking for forecasts that do.

What was Ryan's basis for estimating 5% economic growth and 2.5% unemployment? Nothing beyond wishful thinking to justify cutting taxes further. Why does every Republican forecast that lowering taxes will increase employment? Is there any evidence that it will? No, there isn't.

The problem the Cato Institute has with bad forecasts is that the Cato Institute only produces the forecasts that its benefactors want to hear. If you want to make forecasts you have to base them on reality and not wishful thinking. Why is the Cato Institute still denying global warming?

http://www.politifact.com/t...

Why? Because that is what those who are funding the Cato Institute want to hear. They don't want to hear the truth, they don't want accurate forecasts.

Authorities have a tendency to kill the messenger that brings bad news. That doesn't change reality, it just discourages honesty with authorities. If authorities are willing to kill messengers who bring bad news, what do they do with forecasters who forecast bad news?

The problem is that it takes a leader who wants accurate forecasts and is willing to listen to them, and such people rarely become leaders, because they make accurate promises too. People don't want accurate promises; they want wishful thinking and impossible promises like 5% growth, 2.5% unemployment, low taxes, and high military spending.


Confidence in and the authority of high status individuals is crucial in any organization. Advertising confidence, by claiming to know the future, is one way society has devised to deal with what I call the "first party problem" - how do we make those at the top of a hierarchy accountable?

Predicting the future with accuracy is key to this, and so is avoiding the dud predictions. The demand for prediction is fundamental and highly predictable in its own right (ironically). Consequently, there is no real need for accurate predictions: as long as those making the predictions have adequate credentials, any plausible prediction will do the job. The relationship between the determinicity of demand and the quality of supply is apparent:

Determinicity of demand = Inverse quality of product

Ceteris paribus.


How can we distinguish the affiliation theory from the other theories mentioned in these comments (e.g., that specific predictions are bought to justify existing choices)?


Perhaps forecast accuracy is largely ignored because the most accurate estimates are not necessarily the best estimates. The future holds such uncertainty that the most accurate estimates are probably accurate because of "luck" or fortuitous assumptions made by the forecaster; such assumptions may have been baseless at the time of the forecast.

To take an extreme example, suppose two forecasters are asked to predict next year's oil price. One of them conducts rigorous statistical, geopolitical, and industry survey analysis, and decides the price will be $65/Bbl. The other has a hunch that there will be a new war in the Middle East, and also his grandmother just turned 76 years old, so he estimates the price will be $76/Bbl. If next year's oil price turns out to be $78/Bbl, what does that tell us about the quality of the two forecasts? Nothing.

Managers should judge the forecast by HOW it was obtained and not its realized accuracy. The real-world outcome of the variable of interest should be irrelevant when judging its forecast. Otherwise, lottery-winners would be praised as genius strategists, even though when they played the number, their expected return was negative.
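The oil-price example can be made quantitative with a toy simulation (the price distribution is an assumption invented for illustration): even when the "rigorous" forecast equals the true mean, the hunch forecast will look better in a sizeable fraction of individual years, while its average error over many years is clearly worse.

```python
import random

random.seed(1)

# Assume the true price process is noise around $65/Bbl, i.e. the
# rigorous analysis actually found the correct mean (a toy assumption).
TRUE_MEAN, NOISE = 65.0, 10.0

rigorous_forecast = 65.0   # from careful analysis of the process
hunch_forecast = 76.0      # grandmother's birthday

trials = 10_000
rigorous_err = hunch_err = 0.0
hunch_wins = 0
for _ in range(trials):
    price = random.gauss(TRUE_MEAN, NOISE)
    e_r = abs(price - rigorous_forecast)
    e_h = abs(price - hunch_forecast)
    rigorous_err += e_r
    hunch_err += e_h
    hunch_wins += e_h < e_r   # years where the hunch looks "right"

print(f"mean abs error, rigorous: {rigorous_err / trials:.1f}")
print(f"mean abs error, hunch:    {hunch_err / trials:.1f}")
print(f"years the hunch looks better: {hunch_wins / trials:.0%}")
```

A manager who judges on a single realized year will reward the hunch forecaster a substantial fraction of the time; judging the process, or the error averaged over many forecasts, picks the rigorous one.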


The flip side of this process is planning. Planning is difficult work. The longer the time period that you have to plan, the more difficult it becomes. I suspect, like the general and the inaccurate monthly weather forecasts, that managers like one definite scenario to base their plans upon. Trying to plan for several different scenarios with varying probabilities can result in a complex set of instructions that few will comprehend or follow. Employees may ask, "What are we supposed to do?" It is much easier to say, "Do this, this and that." -- all according to a plan.

Planners, nevertheless, need cover. A definite forecast is their security blanket, something to fall back upon to say, "How could I have known that could happen? It wasn't in the forecast." It's sort of the ultimate in CYA.

It's also probably better that the forecaster is not part of the company when the predictions fail. And, just like cheap wine tastes better the more expensive the consumer thinks it is, the more expensive a forecast, the "better" it becomes. It's no wonder there is a thriving industry in consultancy, given the signalling rewards that come from paying money to a respected firm.

Ultimately it boils down to: set a goal, plan, muddle through.


An additional twist to the theory: in addition to affiliation, predictions are mostly about lowering anxiety.


I agree with the affiliation-to-authority theory. You've discussed this in your post on "connections" vs. "insights". A prediction, even an accurate one, shows that the predictor doesn't have the power to produce the outcome they're predicting. The outcome is important, otherwise the predictor wouldn't care, so the person capable of bringing it about is higher status than the predictor. The only way to make the prediction worthwhile to the predictor is affiliation with leading authorities.

People like accuracy if it works toward some end. So a Republican would tout the accuracy of a fiscal policy report if it supports tax cuts, but ignore it completely if it doesn't.

Also, forecast inaccuracy is useful if it helps discredit a conclusion you don't like, whether or not it's true. Banks only buy bond ratings if they're positive.

To most people, the truth is a means and not an end.


Perhaps you have this in mind.


There was a study done in the software industry a while back (sorry, I can't remember the details) which basically found that project managers who forecast with narrower margins of error ("the software will be ready in 5 to 6 months") were regarded as more competent and better rewarded than project managers whose forecasts were less precise ("the software will be ready in 8 to 12 months"), even though the observed results fell overwhelmingly into the ranges forecast by the latter group.

This is consistent with everything I've observed in multiple industries. Forecasting is a political process designed to get approval. Executives and politicians reward confidence over competence.
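The narrow-versus-wide contrast can be put in code (the duration distribution and intervals below are invented for illustration; the study's actual data isn't available to me): measure each manager's interval by its coverage, i.e. how often the realized duration actually falls inside it.

```python
import random

random.seed(2)

# Hypothetical realized project durations, in months (an assumed
# distribution, not the study's data).
durations = [random.uniform(7, 13) for _ in range(1000)]

narrow = (5, 6)    # the "confident" manager's forecast interval
wide = (8, 12)     # the "vague" manager's forecast interval

def coverage(interval, data):
    """Fraction of realized durations falling inside the interval."""
    lo, hi = interval
    return sum(lo <= d <= hi for d in data) / len(data)

print(f"narrow interval coverage: {coverage(narrow, durations):.0%}")
print(f"wide interval coverage:   {coverage(wide, durations):.0%}")
```

Under these assumptions the confident interval contains essentially none of the outcomes while the vague one contains most of them, yet, per the study, the confident forecaster is the one who gets rewarded: precision is being read as competence, independent of calibration.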
