On October 15, I spoke at the Rutgers Foundations of Probability Seminar on Uncommon Priors Require Origin Disputes. While visiting that day, I talked with seminar host Harry Crane about how the academic replication crisis might be addressed by prediction markets, and by his related proposal to have authors offer bets supporting their papers. I mentioned to him that I’m now part of a project that will induce a great many replication attempts and set up prediction markets about them beforehand, and that we would love to get journals to include our market prices in their review process. (I’ll say more about this when I can.)
When the speaker scheduled for the next week’s slot of the seminar cancelled, Crane took the opening to give a talk comparing our two approaches (video & links here). He focused on papers for which a replication attempt is possible, and said “We don’t need journals anymore.” That is, he argued that we should not use which journal is willing to publish a paper as a signal of paper quality, but should instead use the signal of what bet authors offer in support of their paper.
That author betting offer would specify what counts as a replication attempt and as a successful replication, and would include an escrowed amount of cash and betting odds that set the amount a challenger must put up to try to win that escrowed amount. If the replication fails, the challenger wins both amounts, minus the cost of doing the replication attempt; if it succeeds, the authors win the challenger’s stake.
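For concreteness, here is a minimal sketch of the payoffs such an offer might imply. The specific numbers, variable names, and the assumption that the challenger’s own stake is simply returned when a replication fails are my own illustration, not details of Crane’s proposal.

```python
# Hypothetical illustration of one author bet offer (all numbers made up).
escrow = 1000.0            # cash the authors escrow in support of their paper
odds = 4.0                 # authors offer 4-1 odds in favor of replication
replication_cost = 600.0   # challenger's cost of actually running a replication

challenger_stake = escrow / odds   # amount a challenger must put up to accept

def challenger_net(replication_succeeded: bool) -> float:
    """Challenger's net outcome, after paying for the replication attempt."""
    if replication_succeeded:
        # Authors win the challenger's stake; the challenger also bears the cost.
        return -challenger_stake - replication_cost
    # Replication failed: challenger wins the escrowed amount, minus the cost.
    return escrow - replication_cost

print(challenger_net(False))  # 400.0 profit if the paper fails to replicate
print(challenger_net(True))   # -850.0 loss if the paper replicates
```

On these assumed numbers, a challenger only profits if they think the paper is fairly likely to fail replication, which is how the offered odds and escrow serve as a quality signal.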
In his talk, Crane contrasted his approach with an alternative in which the quality signal would be the odds, in an open prediction market, of replication conditional on a replication attempt. In comparing the two, Crane seems to think that authors would not usually participate in setting market odds. He lists three advantages of author bets over betting market odds: 1) Author bets give authors better incentives to produce non-misleading papers. 2) Market odds are less informed, because market participants know less than paper authors about their paper. 3) Relying on market odds allows a mistaken consensus to suppress surprising new results. In the rest of this post, I’ll respond.
I am agnostic on whether journals should remain a signal of article quality. If that signal goes away, then we are asking how useful other signals could be. And if that signal remains, then we can talk about other signals that journals might use to make their decisions, and that other observers might use to evaluate article quality. But whatever signals are used, I’m pretty sure that most observers will demand that a few simple, easy-to-interpret signals be distilled from the many complex signals available. Tenure review committees, for example, will need signals nearly as simple as journal prestige.
Let me also point out that these two approaches, market odds and author bets, can be applied to non-academic articles, such as news articles, and to many other kinds of quality signals. For example, we could have author or market bets on how many future citations or how much news coverage an article will get, whether any contained math proofs will be shown to be in error, whether any names or dates will be shown to have been misreported in the article, or whether coding errors will be found in supporting statistical analysis. Judges or committees might also evaluate overall article quality at some distant future date. Bets on any of these could be conditional on whether serious attempts were made in that category.
Now, on the comparison between author and market bets, an obvious alternative is to offer both author bets and market odds as signals, either to ultimate readers or to journals reviewing articles. After all, it is hard to justify suppressing any potentially useful signal. If a market exists, authors could easily make betting offers via that market, and those offers could easily be flagged for market observers to take as signals.
I see market odds as easier for observers to interpret than author bet offers. First, author bets are more easily corrupted via authors arranging for a collaborating shill to accept their bet. Second, it can be hard for observers to judge how author risk-aversion influences author odds, and how replication costs and author wealth influence author bet amounts. For market odds, in contrast, amounts take care of themselves via opposing bets, and observers need only judge overall differences in wealth and risk-aversion between the two sides, differences that tend to be smaller, vary less, and matter less for market odds.
Also, authors would usually participate in any open market on their paper, which gives those authors bet incentives and makes market odds include their info. The reason authors will bet is that other participants will expect authors to bet to puff up their odds, and so will push the odds down to compensate. So if authors don’t in fact participate, the odds will tend to look bad for them. Yes, market odds will be influenced by views other than those of the authors, but when evaluating papers we want our quality signals to be based on the views of people other than the paper authors. That is why we use peer review, after all.
When there are many possible quality metrics on which bets could be offered, article authors are unlikely to offer bets on all of them. But in an open market, anyone could offer to bet on any of those metrics. So an open market could show estimates regarding any metric for which anyone made an offer to bet. This allows a much larger range of quality metrics to be available under the market odds approach.
While the simple market approach merely bets conditional on someone making a replication attempt, an audit lottery variation that I’ve proposed would instead use a small fixed percentage of the amounts bet to pay for replication attempts. If the amount collected is insufficient, then it and all the betting amounts are gambled so that either a sufficient amount is created, or all these assets disappear.
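Here is a minimal sketch of that audit lottery logic. The fee rate, the fair-gamble construction, and my reading that a winning gamble leaves the bet pool intact are all my own assumptions; the proposal itself only says that an insufficient fee plus the bet amounts are gambled for a sufficient amount or lost.

```python
import random

def audit_lottery(total_bets: float, fee_rate: float, replication_cost: float,
                  rng: random.Random) -> tuple[float, float]:
    """Return (replication_funds, surviving_bet_pool).

    A small fixed percentage of amounts bet funds replication attempts. If that
    fee is insufficient, the fee plus the whole bet pool are staked on a fair
    gamble that either yields enough to fund the replication (with the bet pool
    intact) or makes all of these assets disappear.
    """
    fee = fee_rate * total_bets
    if fee >= replication_cost:
        return replication_cost, total_bets
    stake = fee + total_bets
    target = replication_cost + total_bets
    if rng.random() < stake / target:   # fair gamble: expected value equals the stake
        return replication_cost, total_bets
    return 0.0, 0.0

# Example: a 1% fee on $10,000 of bets can't cover a $2,000 replication, so the
# gamble fires, succeeding with probability (100 + 10000) / (2000 + 10000), about 0.84.
print(audit_lottery(10_000.0, 0.01, 2_000.0, random.Random(0)))
```

The point of the fair gamble is that, on average, no money is created or destroyed, yet every replication attempt that does happen is fully funded.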
Just as 5% statistical significance is treated today as a threshold for publication decisions, I can imagine particular bet reliability thresholds becoming important for evaluating article quality. News articles might even be filtered, or shown with simple icons, based on a reliability category. In this case the author-bet and market approaches would tend to merge.
For example, an article might be considered “good enough” if it had no more than a 5% chance of being wrong, if checked. The standard for checking this might be whether anyone was currently offering to bet at 19-1 odds in favor of reliability. For as long as the author or anyone else maintained such offers, the article would qualify as at least that reliable, and so could be shown via filters or icons as meeting that standard. For this approach we don’t need to support a market with varying prices; we only need to track how much has been offered and accepted on either side of this fixed-odds bet.
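To make the arithmetic explicit: a standing offer at 19-1 odds in favor of reliability implies a failure chance of at most 1/(19+1) = 5%. Below is a minimal sketch of the fixed-odds bookkeeping this would require; the class and method names are my own illustration, as the post only requires tracking how much has been offered and accepted on either side.

```python
def implied_failure_prob(odds_for: float) -> float:
    """Failure probability implied by odds of `odds_for` to 1 in favor of reliability."""
    return 1.0 / (odds_for + 1.0)

class FixedOddsBook:
    """Track amounts offered and accepted on each side of a single fixed-odds bet."""

    def __init__(self, odds_for: float):
        self.odds_for = odds_for
        self.open_for = 0.0    # unmatched offers backing the article's reliability
        self.matched = 0.0     # offers already matched by challengers

    def offer_for(self, amount: float) -> None:
        """The author, or anyone else, escrows more money at these odds."""
        self.open_for += amount

    def accept_against(self, challenger_stake: float) -> None:
        """A challenger's stake ties up odds_for times as much of the open offers."""
        take = min(challenger_stake * self.odds_for, self.open_for)
        self.open_for -= take
        self.matched += take

    def qualifies(self) -> bool:
        """The article meets the standard while any offer at these odds remains open."""
        return self.open_for > 0.0

book = FixedOddsBook(odds_for=19.0)
book.offer_for(1900.0)      # authors back their article with $1,900
book.accept_against(50.0)   # a challenger matches $50, tying up $950 of the offers
print(implied_failure_prob(19.0), book.qualifies())  # 0.05 True
```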
I agree that this approach isn't very attractive unless one can find simple, standard, and useful ways to decide how to replicate a paper.
I can see how that could be useful, but it doesn’t seem especially well-suited to the particular task discussed here of giving a quality measure for a replicable paper.