Compare Institutions To Institutions, Not To Perfection

Mike Thicke of Bard College has just published a paper that concludes:

The promise of prediction markets to solve problems in assessing scientific claims is largely illusory, while they could have significant unintended consequences for the organization of scientific research and the public perception of science. It would be unwise to pursue the adoption of prediction markets on a large scale, and even small-scale markets such as the Foresight Exchange should be regarded with scepticism.

He gives three reasons:

[1.] Prediction markets for science could be uninformative or deceptive because scientific predictions are often long-term, while prediction markets perform best for short-term questions. .. [2.] Prediction markets could produce misleading predictions due to their requirement for determinable predictions. Prediction markets require questions to be operationalized in ways that can subtly distort their meaning and produce misleading results. .. [3.] Prediction markets offering significant profit opportunities could damage existing scientific institutions and funding methods.

Imagine that you want to travel to a certain island. Someone else tells you to row a boat there, but I tell you that a helicopter seems more cost-effective for your purposes. So the rowboat advocate replies, “But helicopters aren’t as fast as teleportation, they take longer and cost more to go longer distances, and you need more expert pilots to fly in worse weather.” All of which is true, but not very helpful.

Similarly, I argue that with each of his reasons, Thicke compares prediction markets to some ideal of perfection, instead of to the actual current institutions they are intended to supplement. Let’s go through them one by one. On 1:

Even with rational traders who correctly assess the relevant probabilities, binary prediction markets can be expected to have a bias towards 50% predictions that is proportional to their duration. .. it has been demonstrated both empirically and theoretically .. long-term prediction markets typically have very low trading volume, which makes it unlikely that their prices react correctly to new information. .. [Hanson] envisions Wegener offering contracts ‘to be judged by some official body of geologists in a century’, but this would not have been an effective criterion given the problem of 50%-bias in long-term prediction markets. .. Prediction markets therefore would have been of little use to Wegener.

First, a predictable, known distortion isn’t a problem at all for forecasts; just invert the distortion to recover the accurate forecast. Second, this is much less of an issue in combinatorial markets, where all questions are broken into thousands or more tiny questions, all of which have tiny probabilities, and a global constraint ensures they all add up to one. But more fundamentally, all institutions face the same problem: all else equal, it is easier to give incentives for accurate short term predictions than for long term ones. This doesn’t show that prediction markets are worse in this case than status quo institutions. On 2:

Even if prediction markets correctly predict measured surface temperature, they might not predict actual surface temperature if the measured and actual surface temperatures diverge. .. Globally averaged surface air temperature [might be] a poor proxy for overall global temperature, and consequently prediction market prices based on surface air temperature could diverge from what they purport to predict: global warming. .. If interpreting the results of these markets requires detailed knowledge of the underlying subject, as is needed to distinguish global average surface air temperature from global average temperature, the division of cognitive labour promised by these markets will disappear. Perhaps worse, such predictions could be misinterpreted if people assume they accurately represent what they claim to.

All social institutions of science must deal with the facts that there can be complex connections between abstract theories and specific measurements, and that ignorant outsiders may misinterpret summaries. Yes, prediction market summaries might mislead some, but so can grant and article abstracts, or media commentary. No, prediction markets can’t make all such complexities go away. But this hardly means that prediction markets can’t support a division of labor. For example, in combinatorial prediction markets different people can specialize in the connections between different variables, together managing a large Bayesian network of predictions.

If scientists anticipate that trading on prediction markets could generate significant profits, either due to being subsidized .. or due to legal changes allowing significant amounts of money to be invested, they could shift their attention toward research that is amenable to prediction markets. The research most amenable to prediction markets is short-term and quantitative: the kind of research that is already encouraged by industry funding. Therefore, prediction markets could reinforce an already troubling push toward short-term, application-oriented science. Further, scientists hoping to profit from these markets could withhold salient data in anticipation of using that data to make better informed trades than their peers. .. If success in prediction markets is taken as a marker of scientific credibility, then scientists may pursue prediction-oriented research not to make direct profit, but to increase their reputation.

Again, all institutions work better on short term questions. The fact that prediction markets also work better on short term questions does not imply that using them creates more emphasis on short term topics, relative to using some other institution. Also, every institution of science must offer individuals incentives, incentives which distract them from other activities. Such incentives also imply incentives to withhold info until one can use that info to one’s maximal advantage within the system of incentives. Prediction markets shouldn’t be compared to some perfect world where everyone shares all info without qualification; such worlds don’t exist.
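The two mechanisms invoked above — inverting a known distortion, and dividing labour across a combinatorial market — can be sketched in a few lines. The linear shrinkage model and all the numbers below are invented for illustration; they come from neither Thicke’s paper nor any real market.

```python
# Sketch 1 (hypothetical): suppose long-term binary market prices are known
# to be shrunk toward 50% by a calibration factor k, so that
#   price = 0.5 + k * (p - 0.5).
# A known distortion of this form can simply be inverted:
def debias(price: float, k: float) -> float:
    return 0.5 + (price - 0.5) / k

# Sketch 2 (hypothetical): division of labour in a combinatorial market.
# One specialist maintains P(theory); another maintains P(evidence | theory);
# the chain rule combines their separate work into one coherent forecast.
p_theory = 0.30                               # specialist 1's price
p_evidence_given = {True: 0.90, False: 0.20}  # specialist 2's prices

p_evidence = (p_evidence_given[True] * p_theory
              + p_evidence_given[False] * (1 - p_theory))

print(round(debias(0.60, 0.5), 2))  # a shrunken 60% price implies ~70%
print(round(p_evidence, 2))         # ~0.41 marginal forecast
```

The point of the second sketch is that neither specialist needs to understand the other’s variable in detail; the market’s global consistency constraint does the combining.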

Thicke also mentioned:

Although Hanson suggests that prediction market judges may assign non-binary evaluations of predictions, this seems fraught with problems. .. It is difficult to see how such judgements could be made immune from charges of ideological bias or conflict of interest, as they would rely on the judgement of a single individual.

Market judges don’t have to be individuals; there could be panels of judges. And existing institutions are also often open to charges of bias and conflicts of interest.

Unfortunately many responses to reform proposals fit the above pattern: reject the reform because it isn’t as good as perfection, ignoring the fact that the status quo is nothing like perfection.

  • If the relevant question is the comparison of prediction markets to existing institutions, shouldn’t the focus of prediction market research be on that comparison? Don’t advocates of prediction markets bear the burden of proof?

    • That should be the focus, and there’s lots on that. Sad that a critique wouldn’t focus on that.

  • SK

    Did you send Mike Thicke this blogpost and request him to respond here? I’m curious what he has to say in reply.

    • I used the form at his website to tell him about this post.

  • Steven Easley

    These prediction models are merely tools, and why would a scientist not use a tool that, no matter how inaccurate, yields knowledge, understanding, or efficiency?

  • asdf

    There is a third alternative: Question the need for institutions on a philosophical level.

    While it’s true that as long as we have a civilization on earth, we need to have some institutions, we can question whether we should really spend our own money to build a future spacefaring civilization or a far-future earth civilization after we’re all dead.

    For example, we could ask, “What institutions should govern a Mars colony?” and then either look to perfection or realistic human answers.

    But instead, we could also question the very philosophical and personal value of building a Mars colony in the first place, or if we maybe want to save the hundreds of trillions of dollars and spend them on something that actually enhances our quality of life before we die. Same for far-future fanaticism of other types.

    Elon Musk’s fanboys have promised me 12 times that he doesn’t want tax funds. A few weeks later, he goes on a stage and demands tax funds. Fun!

    Not to mention, each of us gets very little individual say in what institutions will actually be implemented. Even collectively, we don’t really rule the dynamics that shape the actual outcomes. You can only look to the worst flaws and make some marginal shifts, e.g. recognizing the empirical fact that democracies are typically dysfunctional, but much less so than autocracies and dictatorships (duh).

  • Hu Chu

    It sometimes makes sense to hold out for a better solution.

    Suppose that an organization is evaluating prediction markets, and regardless of whether it decides to use them, 10 years from now a better solution will be discovered. The better solution may face steeper barriers to adoption if prediction markets are adopted, because its improvement over prediction markets is smaller than its improvement over the status quo solution. Then if prediction markets are adopted now, and the better solution fails to be adopted 10 years later, we might say “too bad we decided to adopt prediction markets.”

    I think “comparing to perfection” makes sense in that it provides a way to decide whether a proposed solution is sufficiently better than the status quo to be worth adopting. If we can determine that an optimal solution satisfies some set of criteria and that the status quo satisfies, say, 5% of them and prediction markets satisfy, say, 20%, we may decide to hold out until we discover something that satisfies 50%. Depending on how likely we think we are to discover such a thing, that decision would be reasonable.

    • Sure, in a world of frequently improved system versions, you sometimes want to skip versions to jump ahead instead of adopting every new version. But the situation of prediction markets in science isn’t remotely like that.