Author Archives: Chris Hibbert

Philip Tetlock’s Long Now Talk

Philip Tetlock, author of the book Expert Political Judgment, gave a talk a couple of weeks ago for the Long Now Foundation’s Seminars About Long-Term Thinking.  I listened to the podcast.  Tetlock has been collecting experts’ judgments for a couple of decades in order to evaluate what makes it possible for people to make good predictions.

The only attribute of a person’s outlook that seems to correlate with better prediction ability is having an eclectic outlook: what Tetlock calls being a fox rather than a hedgehog, after Archilochus’ line, "The fox knows many things, but the hedgehog knows one big thing."  Foxes are less likely to make extreme predictions, but when they do, they’re better calibrated.

There were a couple of interesting questions, which are worth mentioning because you won’t find the material in the book.  At about 47 minutes, Stewart Brand asks whether any of the experts were notably better than the others.  Tetlock responds that no one in his sample was reliably right, and then volunteers that there are mechanisms that improve people’s performance.  Brand and Tetlock go on to talk about prediction markets and Surowiecki’s Wisdom of Crowds, and how aggregation helps improve forecasts, particularly when there are significant numbers of hedgehogs in the sample.

At the end of the Q&A, Brand pressed Tetlock pretty hard to give his assessment of the experts’ views of the state of events in Iraq.  Worth listening to, whatever your views.


Fair betting odds and Prediction Market prices

The discussion about "agreeing to disagree" assumes ideal Bayesians, and the preferred resolution requires that the disputants are willing to spend the time to reach agreement.  Prediction markets are one of the mechanisms used by imperfect Bayesians to short-circuit the long discussion and find a reasonable compromise.  Markets seem to provide good estimates for outsiders to use as the updated value coming out of these disagreements.  In a recent conversation with Dan Reeves, I found another reason to doubt that, one that seems to play into the discussion started by Manski on what prediction market odds mean.

Dan, in his Yootles system, has a facility that supports bets between two or more people.  When two people disagree, and want to subject the disagreement to a wager, they each submit their estimate of the correct odds to the system.  The system then uses the arithmetic mean of their percentage odds as the fair odds.  Dan argues, convincingly, that the arithmetic mean gives each party the same expectation of gain, and that is what fairness requires.

On the other hand, the way that Bayesians would update their odds is to use the geometric mean of their odds.  (Robin Hanson points out that this is equivalent to the arithmetic mean of the log odds.)  With estimates in the range of 10% to 90%, it doesn’t make much difference which of these you use, but when one of the parties has an extreme view of the possibility of the event, the geometric mean is sensitive to changes in a way that the arithmetic mean is not.

If Alice believes that the chances of some event are 30% (odds of 3:7) and Bob’s estimate is 80% (4:1), the arithmetic mean is 55%, while the geometric mean of the odds gives 57%; the results are quite close.  You start to see noticeable differences when one estimate gets above 95% or below 5%.  One intuitive explanation for the difference is that the arithmetic mean operates on the percentages, which don’t have the resolution to change much once above 95%.  The underlying odds, though, can still move from 19:1 to 99:1 to a million to one, representing very significant differences in a visible way, and that allows the geometric mean to stay sensitive in this range.
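A short sketch makes the two combination rules concrete (the helper names are mine for illustration, not anything from Dan’s Yootles system):

```python
import math

def odds(p):
    """Convert a probability to odds in favor: p / (1 - p)."""
    return p / (1 - p)

def prob(o):
    """Convert odds in favor back to a probability."""
    return o / (1 + o)

def arithmetic_mean_prob(p1, p2):
    """The 'fair bet' rule: average the two percentage estimates."""
    return (p1 + p2) / 2

def geometric_mean_prob(p1, p2):
    """Bayesian-style pooling: geometric mean of the odds, which is the
    same as exponentiating the arithmetic mean of the log odds."""
    return prob(math.sqrt(odds(p1) * odds(p2)))

# Alice at 30% (odds 3:7), Bob at 80% (odds 4:1)
print(round(arithmetic_mean_prob(0.30, 0.80), 3))  # 0.55
print(round(geometric_mean_prob(0.30, 0.80), 3))   # 0.567
```

With both estimates in the middle of the range, the two rules land within a couple of percentage points of each other, as the numbers above show.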

Another intuitive presentation is that when Bob’s estimate of the odds changes by a factor of 5 or 10, the Bayesian combination of Bob’s and Alice’s estimates should move significantly (a factor of 2 or 3).  The arithmetic mean, used to compute the respective expected values of a bet, moves by at most a few percentage points when both estimates are below 95%, and when one is above 99% (or below 1%), by less than a percentage point.  But that’s where the most interesting changes in the individual estimates take place.

The implications of this difference between the odds that appear fair to bettors and the expectations of Bayesian observers seem to touch on a few well-known conundrums.  The commonly observed drop-off in predictiveness when prediction market odds are above 90% or below 10% could be due in part to the participants’ lack of incentive to push the odds further toward the end points.  The fact that we mostly use percentage odds may also contribute: with whole-number percentages, you can’t express odds more extreme than 99:1; with tenths, you can express up to 999:1.  BetFair’s use of odds rather than percentages may actually be an advantage here.  (I usually complain that I find betting odds opaque; the increased resolution at the ends of the spectrum may be worth the confusion.)

I’m not sure how to integrate this into the discussion, but this idea that the participants’ betting incentives don’t lead directly to Bayesian updates may also have implications for the discussion started by Manski, and picked up by Wolfers and Zitzewitz and by Ottaviani and Sørensen.  If prediction market participants don’t have sufficient incentive to move the odds to the extremes they believe are true, then the market outcomes may have reduced fidelity in those ranges as well.
