Tetlock Wisdom

Both private- and public-sector prognosticators must master the same tightrope-walking act. They know they need to sound as though they are offering bold, fresh insights into the future not readily available off the street. And they know they cannot afford to be linked to flat-out mistakes. Accordingly, they have to appear to be going out on a limb without actually going out on one. That is why … they so uniformly appear to dislike affixing “artificially precise” subjective probability estimates to possible outcomes—the only reliable method we have of systematically tracking accuracy across pundits, methods, time and contexts. It is much safer to retreat into the vague language of possibilities and plausibilities—things that might or could happen if various difficult-to-determine preconditions were satisfied. The trick is to attach so many qualifiers to your vague predictions that you will be well positioned to explain pretty much whatever happens. China will fissure into regional fiefdoms, but only if the Chinese leadership fails to manage certain trade-offs deftly, and only if global economic growth stalls for a protracted period, and only if . . .

More here.  Hat tip to Henry at Crooked Timber.

Philip Tetlock seems to suggest that prognosticators are fooling us via this strategy, as if we would not tolerate such gaming if only we understood what they were up to.  I fear that instead we don’t much mind these games.  I suspect that we mostly want to affiliate with impressive folks, and reading their provocative forecasts gives us yet another excuse to do so.  As long as no one else notices their failed forecasts enough to make them seem less impressive, we don’t really care whether they were proved right or wrong.

  • Pingback: Looking for a good forecast? « Knowledge Problem

  • Michael Bishop

    Prof. Hanson, the link to crooked timber is broken.

  • Robert Koslover

    I guess I’m an exception to the masses here. If I didn’t care whether the prognosticators were right or wrong, I wouldn’t pay attention to them. In the public fog of information and misinformation, wishy-washy predictions by would-be wise men seriously reduce the signal-to-noise ratio. I find it even more annoying when these same noise sources later claim they were “right” even though a review of their actual predictions shows so much ambiguity that they could just as easily be judged “wrong.” I would prefer that the public more strongly demand of its wannabe sages that they take a clear and unambiguous stand on any topic in which they claim to have greater prognosticative power than the rest of us unwashed masses. Put up or shut up, I say. It’s certainly OK to be wrong some of the time. Those who are wrong more often deserve less respect than those with objectively better track records. But those pundits who deliberately endeavor to hedge, obfuscate, and manipulate the public’s perceptions via faux-scholarship and trickery deserve to be shunned as the worthless con men that they are.

  • Eric Falkenstein

    One problem is that if a lot of your value lies in your perceived expertise, you don’t want to expose yourself to a small number of bets, because your ‘estimated value’ will have a large standard error compared to your ‘true value’. Think of Julian Simon and Paul Ehrlich, who entered into a famous wager in 1980, betting on a mutually agreed-upon measure of resource scarcity over the decade leading up to 1990. Ehrlich ultimately lost the bet, and all five commodities selected as the basis for the wager have continued to trend downward since 1980. Did Ehrlich change his mind? No. He thinks the timescale was simply too short. He may be right (it’s not obviously wrong).
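    The standard-error point here can be made concrete with a quick back-of-the-envelope calculation (this is an illustration, not from the original comment; the 60% hit rate is a made-up figure). If a forecaster's true hit rate is p, the observed hit rate over n independent bets has standard error sqrt(p(1 − p)/n):

    ```python
    import math

    # Hypothetical forecaster whose true hit rate is 60% -- a bit better
    # than a coin flip.
    p_true = 0.6

    # Standard error of the observed hit rate after n independent bets:
    #   SE = sqrt(p * (1 - p) / n)
    for n in (5, 50, 500):
        se = math.sqrt(p_true * (1 - p_true) / n)
        print(f"n = {n:3d} bets: observed hit rate = {p_true:.2f} +/- {se:.3f}")
    ```

    After only 5 bets the standard error is about 0.22, so a genuinely skilled 60% forecaster is statistically indistinguishable from a coin-flipper; only after hundreds of bets does the noise shrink enough to separate the two. Hence the reluctance to stake a reputation on a handful of wagers.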

    There is an intrinsically small number of ‘factors’ related to your expertise. Say you believe in global warming: most relevant bets over the next 5 years are highly correlated, all with a large error component. Alternatively, you may think immigration is great in general, yet dislike all the current immigration plans in practice. Any big policy, in practice, is commingled with other policies, and often you are comparing against the hypothetical ‘what would have happened if we had not done X’.

    One solution most people apply is to ignore these issues because they are non-empirical, but clearly this can be disastrous.

  • Ariel

    If you haven’t already come across this, then I think you’ll find it both interesting and pertinent to the topic of the above post. I have difficulty figuring out by what methods the predictions are really being made, but it does seem that D. Sornette is in fact going out on a limb. This is definitely the right forum to evaluate this.

    Dragon-Kings, Black Swans and the Prediction of Crises

    Common group dynamic drives modern epidemics across social, financial and biological domains

  • Pingback: Philip Tetlock and big questions for futurists and forecasters – Alex Soojung-Kim Pang, Ph.D.