Could Gambling Save Psychology?

A new PNAS paper:

Prediction markets set up to estimate the reproducibility of 44 studies published in prominent psychology journals and replicated in The Reproducibility Project: Psychology predict the outcomes of the replications well and outperform a survey of individual forecasts. … Hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%). … Prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications. (more; see also coverage at 538, Atlantic, Science, Gelman)

We’ve had enough experiments with prediction markets over the years, both lab and field experiments, to not be at all surprised by these findings of calibration and superior accuracy. Given that, you might ask: what is the intellectual contribution of this paper?

When one tries to persuade groups to adopt prediction markets, one encounters consistent skepticism toward experimental data on topics not very close to the proposed ones. So one value of this new data is to help persuade academic psychologists to use prediction markets to forecast lab experiment replications. Of course, for this purpose the key question is whether enough academic psychologists were close enough to the edge of making such markets a continuing practice that it was worth the cost of a demonstration project to create closely related data, and so push them over the edge.

I expect that most ordinary academic psychologists need stronger incentives than personal curiosity to participate often enough in prediction markets on whether key psychology results will be replicated (conditional on such replication being attempted). Such additional incentives could come from:

  1. direct monetary subsidies for market trading, such as via subsidized market makers,
  2. traders with better-than-average trading records bragging about them on their vitae, and getting hired etc. more because of that, or
  3. prediction market prices influencing key decisions such as what articles get published where, who gets what grants, or who gets what jobs.
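On subsidized market makers (item 1 above): one standard design is the logarithmic market scoring rule (LMSR), in which a sponsor-funded automated agent always quotes prices, so traders never need a counterparty. Below is a minimal, hypothetical sketch; the class name, the liquidity parameter `b`, and the example numbers are all illustrative assumptions, not anything from the paper. The sponsor's maximum loss (the subsidy) is bounded by `b * ln(number of outcomes)`.

```python
import math

class LMSRMarketMaker:
    """Sketch of a logarithmic market scoring rule (LMSR) market maker.

    The sponsor's worst-case loss -- i.e. the subsidy paid out to
    informed traders -- is bounded by b * ln(n_outcomes).
    """

    def __init__(self, n_outcomes: int, b: float = 100.0):
        self.b = b
        self.q = [0.0] * n_outcomes  # net shares sold of each outcome

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, i: int) -> float:
        """Current market probability estimate for outcome i."""
        z = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[i] / self.b) / z

    def buy(self, i: int, shares: float) -> float:
        """Buy `shares` of outcome i; returns the trader's cost."""
        new_q = list(self.q)
        new_q[i] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

# A binary "will this study replicate?" market starts at 50/50.
# A trader who expects replication buys "yes" shares, which
# moves the quoted probability up.
mm = LMSRMarketMaker(n_outcomes=2, b=100.0)
cost = mm.buy(0, 50.0)        # buy 50 "yes" shares
print(round(mm.price(0), 3))  # → 0.622
```

The point of the design is that the subsidy is a fixed, known budget: a funder can decide in advance how much accuracy is worth buying by choosing `b`.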

For example, imagine that one or more top psychology journals used prediction market chances that an empirical paper’s main result(s) would be confirmed (conditional on an attempt) as part of deciding whether to publish that paper. In this case the authors of a paper and their rivals would have incentives to trade in such markets, and others could be enticed to trade if they expected trades by insiders and rivals alone to produce biased estimates. This seems a self-reinforcing equilibrium; if good people think hard before participating in such markets, others could see those market prices as deserving of attention and deference, including in the journal review process.

However, the existing equilibrium also seems stable: a few small markets on such topics off to the side, which few pay much attention to and where there are few resources and little status to be won. This equilibrium arguably yields less intellectual progress for any given level of research funding, but of course progress-inefficient academic equilibria are quite common.

Bottom line: someone is going to have to pony up some substantial scarce academic resources to fund an attempt to move this part of academia to a better equilibrium. If whoever funded this study didn’t plan on funding this next step, I could have told them ahead of time that they were mostly wasting their money. This next move won’t happen without a push.

  • Jacob

    My guess is that the market would be so underpopulated that authors could always bet on replication with high odds and get their paper published. Maybe they would lose a bit of money but it’s likely worth it to get a publication out.

    It might be a good thing to do for high-profile studies getting a lot of press coverage (post-publication). Whenever that happens there’s always a lot of commentary about why the study is amazing/terrible; a prediction market would be a good way to quantify general opinion and sift through some of that noise.

  • lump1

    If there’s real money on the line, couldn’t this lead to certain replication attempts being fixed, like fights are sometimes fixed? It could be pretty easy to hide.

  • Alas, even though the manipulation concern has been dealt with in great detail, everyone reinvents the concern as if it were new, without bothering to look at or respond to the literature.

  • Dave Lindbergh

    How are traders in the prediction market supposed to evaluate the reproducibility of papers that haven’t yet been published?

    It seems there’s a chicken-and-egg problem here.

    Authors can’t publish until the prediction market approves, and the prediction market can’t approve something it hasn’t seen.

    Or am I missing something? Is publication in a “top psychology journal” intended as sign of prestige/approval/kudos/credit to the authors, instead of primarily distribution of the results? [Actual publication would be via unfiltered online preprint?]

    Or do you intend that the prediction market would evaluate reproducibility solely based on the claimed conclusions, without seeing the methodology and results?

    • The submitted papers can be made public for bettors to evaluate.

      • Dave Lindbergh

        So then “publication” in a journal is not really “publication” anymore – it’s reprinting something that was already public.

        Which is fine if you think of journals as selected “Reader’s Digests” of the “best” papers in a field.

        But perhaps a better reform would be just to have a reputation mechanism (perhaps driven by the prediction market) that votes up the “best” papers.

        Without having traditional journals at all.

      • In many fields papers already typically appear as “working papers” before being published in journals.

  • Christian Kleineidam

    I am afraid that setting up those markets will corrupt the replications. When there is a lot of money at stake on a certain result not replicating, a researcher might sabotage his own replication experiment so that it fails.

    • There’s not remotely a chance of that much money being at stake in these markets.

      • Christian Kleineidam

        How much money do you think there would need to be involved for it to corrupt researchers?

        How much money do you think should be involved in such a market to provide reasonable predictions?


  • I understand that this is far from the biggest hurdle, but are there good phone apps for quickly creating prediction markets and allowing participants to bet with minimal fuss? There are a lot of people who might be willing to code one.

    • Code isn’t really the limiting factor, but the firms that write this software are small w/ few resources, so doing that well hasn’t risen to a high priority.