Elusive Conflict Experts

Recently published in Interfaces:

[Regarding] the decisions that adversaries will make, we compared the accuracy of 106 forecasts by experts [e.g., domain experts, conflict experts, and forecasting experts] and 169 forecasts by novices about [choices in] eight real conflicts. The forecasts of experts who used their unaided judgment were little better than those of novices, and neither group’s forecasts were much better than simply guessing. The forecasts of experts with more experience were no more accurate than those with less. The experts were nevertheless confident in the accuracy of their forecasts. … We obtained 89 sets of frequencies from novices instructed to assume there were 100 similar situations. Forecasts based on the frequencies were no more accurate than 96 forecasts from novices asked to pick the single most likely decision.

Maybe conflict games are full of mixed strategies? Hat Tip to WSJ Online, via Tyler Cowen.
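If the mixed-strategy conjecture is right, poor expert accuracy is exactly what game theory predicts: in a mixed-strategy equilibrium the adversary randomizes, so no forecaster, however sophisticated, can beat the base rate. A minimal sketch, using matching pennies (the forecasting heuristics here are hypothetical, just for illustration):

```python
import random

random.seed(42)

# Matching pennies: in the unique equilibrium the adversary plays
# heads with probability 0.5, independently each round.
N = 200_000
plays = [random.random() < 0.5 for _ in range(N)]

# Forecaster A: always predicts heads (a fixed rule).
acc_fixed = sum(plays) / N

# Forecaster B: predicts the adversary will repeat their last move
# (a plausible "expert" pattern-seeking heuristic).
acc_pattern = sum(plays[i] == plays[i - 1] for i in range(1, N)) / (N - 1)

# Against a true equilibrium mixer, both rules converge to 50%.
print(round(acc_fixed, 2), round(acc_pattern, 2))
```

Both forecasters hover around 0.50: when the target's optimal play is random, expertise in spotting patterns buys nothing.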

  • Stuart Armstrong

    That whole article made me understand a bit why you like prediction markets so much, Robin — when confronted with experts who won’t take risk, won’t admit error, and won’t learn, there really is the urge to force them to put their money where their mouth is.

    Since markets have a bit of an image problem in some quarters (and this seems to be one of the factors blocking their use), are there some more ideologically neutral alternatives to prediction markets that we can try?

  • Stuart, reputational markets and bets can affect status without affecting wealth distribution. I think most ideological opponents of markets aren’t opposed to unevenly distributed status and reputation: they’re opposed to unevenly distributed wealth. So those could be a more ideologically neutral approach.

  • michael vassar

    Funny, Tetlock’s data showed experts to be much more accurate than novices, though still much less accurate than simple statistical regressions (which depend, of course, on experts to suggest measurable quantities to include).

  • Floccina

    Doesn’t this: “The experts, who were asked not to use the aid of forecasting models or other formal techniques,* were right 32% of the time, barely beating out novices, who were right 29%, and random guessing, which should have yielded an average accuracy of 28% (the last varies because some questions had three choices, some four and one six). The paper was published in the journal Interfaces last week (here’s a draft version; the journal’s version isn’t free online).”

    Show that the experts, bad as they were, were roughly four times better than the novices at beating chance?
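That ratio is quick to check from the figures quoted above: what matters is each group's edge over chance, not raw accuracy.

```python
# Accuracy figures (in percent) from the WSJ summary quoted above.
expert, novice, chance = 32, 29, 28

expert_edge = expert - chance   # 4 points above chance
novice_edge = novice - chance   # 1 point above chance

# Experts' edge over chance is ~4x the novices' edge --
# though both edges are tiny in absolute terms.
print(expert_edge / novice_edge)
```

Of course, a 4-point edge over chance is still very weak performance, which is the article's point.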

  • Scott Armstrong (the co-author of the article Robin cites) and I have published a review of Tetlock’s book in International Journal of Forecasting. If interested, one can access our review via ScholarlyCommons at: http://repository.upenn.edu/marketing_papers/50/
    We don’t, however, address Robin’s conjectural query about conflict involving mixed strategies and so being hard to predict.

    As the WSJ Online piece points out, Scott Armstrong does believe that experts can make better forecasts than amateurs or the uninformed when the experts make the forecasts in a structured, systematic way that forces them out of habitual patterns of thought.

  • Adrian, since you are so well-connected in this field, please feel free to post summaries here of relevant recent, or classic, results.

  • Adrian Tschoegl

    Robin: I have been meaning for some time to prepare a post on Tetlock and some related matters, but was waiting for the review article to see print first. I have some academic stuff on the front burner right now (a revise & resubmit and 180 term papers, inter alia), but rest assured that I will post on this.