Mike Thicke of Bard College has just published a paper that concludes: "The promise of prediction markets to solve problems in assessing scientific claims is largely illusory, while they could have significant unintended consequences for the organization of scientific research and the public perception of science. It would be unwise to pursue the adoption of prediction markets on a large scale, and even small-scale markets such as the Foresight Exchange should be regarded with scepticism."
Sure, in a world of frequently improved system versions, you sometimes want to skip versions to jump ahead instead of adopting every new one. But the situation of prediction markets in science isn't remotely like that.
It sometimes makes sense to hold out for a better solution.
Suppose that an organization is evaluating prediction markets, and regardless of whether it decides to use them, 10 years from now a better solution will be discovered. The better solution may face steeper barriers to adoption if prediction markets are adopted first, because its advantage over prediction markets is smaller than its advantage over the status quo. Then if prediction markets are adopted now, and the better solution fails to be adopted 10 years later, we might say, "Too bad we decided to adopt prediction markets."
I think "comparing to perfection" makes sense in that it provides a way to decide whether a proposed solution is sufficiently better than the status quo to be worth adopting. If we can determine that an optimal solution satisfies some set of criteria and that the status quo satisfies, say, 5% of them and prediction markets satisfy, say, 20%, we may decide to hold out until we discover something that satisfies 50%. Depending on how likely we think we are to discover such a thing, that decision would be reasonable.
There is a third alternative: Question the need for institutions on a philosophical level.
While it's true that as long as we have a civilization on Earth we need some institutions, we can still question whether we should really spend our own money to build a future spacefaring civilization, or a far-future Earth civilization that will only exist after we're all dead.
For example, we could ask, "What institutions should govern a Mars colony?" and then either look to perfection or realistic human answers.
But instead, we could also question the philosophical and personal value of building a Mars colony in the first place, or ask whether we'd rather save the hundreds of trillions of dollars and spend them on something that actually enhances our quality of life before we die. The same goes for other kinds of far-future fanaticism.
Elon Musk's fanboys have promised me 12 times that he doesn't want tax funds. A few weeks later, he goes on a stage and demands tax funds. Fun!
Not to mention, each of us gets very little individual say in which institutions will actually be implemented. Even collectively, we don't really control the dynamics that shape the actual outcomes. We can only look to the worst flaws and make some marginal shifts, e.g., recognizing the empirical fact that democracies are typically dysfunctional, but much less so than autocracies and dictatorships (duh).
Thanks for your comments. For those interested, you can download a preprint version of the article on my website here: http://www.mikethicke.com/r... . Aside from typos, it is the same as the published version.
First, I should say that I think we qualitatively agree about many of the problems with science as it currently operates. I was careful in my paper to argue that, if they fulfilled their promises, prediction markets would be useful tools for scientists, as one commenter put it. I also don't dispute that prediction markets can be useful in some contexts, such as the prediction market over replication studies in psychology.
Where we probably differ regarding the current operation of science is on the importance of those problems. Despite problems with peer review and potentially biased consensuses, I think science works pretty well. So an implicit premise of my paper is that any proposal to significantly alter the institutional structure of science needs to clearly demonstrate the advantages of doing so, especially as there could be many unanticipated consequences of making such changes.
In the negative part of my paper, I offer three arguments against large-scale adoption of prediction markets. You accurately quote those above. The first two are meant to counter an inference from the (relative) success of currently operating prediction markets in (primarily) sports and politics to a generalized assumption of similar success in other domains, especially science. Current markets (or the ones with any liquidity, anyway) are short-term and easily judicable. Much of science is neither short-term nor easily judicable. Sure, there might be solutions to these problems, but I don't think it's fair to just assume that they will be effective or that prediction markets can be adapted to any and all questions or domains.
The third argument is meant to offer reasons why the bar should be high for large-scale adoption of prediction markets by entities such as the NSF, potentially supplanting the current grant-based method of funding scientific research. Other incursions of the market into science have had effects on scientific practice far greater than you'd expect given the relatively modest profits earned by most universities in, for example, licensing patents. I argue that subsidized prediction markets could have similar effects, and so we need very strong evidence that they would work well, which currently we don't have.
It's fair of you to argue that the problems I allege for prediction markets exist already in other institutions, including science. However, this isn't necessarily an argument in favor of adopting prediction markets if they would, while perhaps countering some biases, serve to exacerbate already existing problems. If science already over-emphasizes the short term, why should we adopt yet another incentive for short term research?
I'm sure this won't be entirely convincing to you, but hopefully it helps to clarify my position.
Let me count the ways ...
I used the form at his website to tell him about this post.
These prediction markets are merely tools, and why would a scientist not use a tool that, however inaccurate, yields knowledge, understanding, or efficiency?
Did you send Mike Thicke this blogpost and request him to respond here? I'm curious what he has to say in reply.
That should be the focus, and there's lots on that. Sad that a critique wouldn't focus on that.
If the relevant question is the comparison of prediction markets to existing institutions, shouldn't the focus of prediction market research be on that comparison? Don't advocates of prediction markets bear the burden of proof?