Prediction markets are financial markets, but compared to typical financial markets they are intended more to aggregate info than to hedge risks. Thus we can use our general understanding of financial markets to understand prediction markets, and can also try to apply whatever we learn about prediction markets to financial markets more generally.
With this in mind, consider the newly published paper Crowd prediction systems: Markets, polls, and elite forecasters, by Atanasov, Witkowski, Mellers, and Tetlock.
They use data from 1300 forecasters on 147 questions over four years from the Good Judgment Project’s entry in the IARPA ACE tournament, 2011-2015. (I was part of another team in that tournament.) They judge accuracy by quadratic Brier scores, averaged over questions and time.
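As a concrete illustration (my own sketch, not code from the paper), the quadratic Brier score for a binary question, averaged over the days the question is open, might look like:

```python
def brier(p, outcome):
    # Quadratic Brier score for a binary question: squared error between
    # the forecast probability p and the realized 0/1 outcome.
    # 0 is a perfect score; lower is better. (Conventions differ; some
    # versions sum over both answer options, doubling this value.)
    return (p - outcome) ** 2

def mean_brier(daily_forecasts, outcome):
    # Average the score over the days a question was open; the study
    # then also averages over questions.
    return sum(brier(p, outcome) for p in daily_forecasts) / len(daily_forecasts)
```

So a forecaster who said 60%, then 80%, then 100% on a question that resolved yes would average (0.16 + 0.04 + 0) / 3 ≈ 0.067.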
They find that:
Participants who used my logarithmic market scoring rule (LMSR) mechanism did better than those using a continuous double auction market (mainly when there were few traders), and did about as well as those using a complex poll aggregation mechanism, which in turn did better than simpler polling aggregation methods.
One element of the complex polling mechanism, an “extremization” power-law transformation of probabilities, also made market prices more accurate.
Participants who were put together into teams did better than those who were not.
Accuracy is much (~14-18%) better if you take the 2% of participants most accurate in a year, and then use only these “elites” in future years. It didn’t matter which mechanism was used when selecting that 2% elite.
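For concreteness, here is a minimal sketch of the mechanisms in these findings: LMSR pricing, power-law extremization, and performance-based elite selection. This is my own illustration under standard definitions, not code from the study, and the parameter defaults are arbitrary placeholders:

```python
import math

def lmsr_price(q_yes, q_no, b=100.0):
    # LMSR instantaneous YES price, derived from the cost function
    # C(q) = b * ln(exp(q_yes/b) + exp(q_no/b)); b sets market liquidity.
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def lmsr_trade_cost(q_yes, q_no, dq_yes, b=100.0):
    # What a trader pays to buy dq_yes YES shares: C(after) - C(before).
    cost = lambda y, n: b * math.log(math.exp(y / b) + math.exp(n / b))
    return cost(q_yes + dq_yes, q_no) - cost(q_yes, q_no)

def extremize(p, a=2.0):
    # Power-law extremization: for a > 1, pushes p away from 0.5,
    # toward whichever extreme it already leans.
    return p ** a / (p ** a + (1 - p) ** a)

def select_elite(scores_by_forecaster, top_frac=0.02):
    # Keep the top_frac of forecasters with the lowest (best) mean
    # Brier scores from a prior year; the study found ~2% worked well.
    ranked = sorted(scores_by_forecaster, key=scores_by_forecaster.get)
    k = max(1, int(len(ranked) * top_frac))
    return ranked[:k]
```

In the actual study the extremization exponent and selection threshold were fit to or chosen from the data, not fixed in advance as here.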
The authors see this last result as their most important:
The practical question we set to address focused on a manager who seeks to maximize forecasting performance in a crowdsourcing environment through her choices about forecasting systems and crowds. Our investigation points to specific recommendations. …Our results offer a clear recommendation for improving accuracy: employ smaller, elite crowds. These findings are relevant to corporate forecasting tournaments as well as to the growing research literature on public forecasting tournaments. Whether the prediction system is an LMSR market or prediction polls, managers could improve performance by selecting a smaller, elite crowd based on prior performance in the competition. Small, elite forecaster crowds may yield benefits beyond accuracy. For example, when forecasts use proprietary data or relate to confidential outcomes, employing a smaller group of forecasters may help minimize information leakage.
This makes sense for a manager who plans to ask ~>1300 participants ~>150 questions over ~>4 years, and who trusts some subordinate to judge how exactly to select this elite, and how to set the complex polling parameters, if they use a polling mechanism. But I’ve been mainly interested in using prediction markets as public institutions for cases where there’s a lot of distrust re motives and rationality. Such as in law, governance, policy, academia, and science. And in such contexts, I worry a lot more about the discretionary powers required to implement an elite selection system.
To see my concern, consider stock markets, whose main social function is to channel investment into the most valuable opportunities. More accurate stock prices better achieve this function, and the above results suggest that we’d get much more accurate stock prices by greatly limiting who can speculate in stock markets. Hold some contests where applicants compete with fake trades to grow small initial trading budgets, and only let the top, say, 2% of such contestants make speculative price-influencing trades in real stock markets. Maybe also force them to join teams, instead of trading individually. (Forcing extremization seems unnecessary, as specialists can profit by making those price adjustments.) (Note: these are my suggestions; study authors didn’t discuss this.)
Yes, stock market speculators today are already far from randomly selected from the general population, and are thus already “elite” in that sense. Even so, while the 1300 forecasters in the above study were far from random samples of the public, only letting the top 2% of them participate was a win in the above study. Thus only letting the top 2% of wannabe stock speculators trade in real stock markets is plausibly also a win for stock market price accuracy. (Note: the study authors admit they chose the 2% figure somewhat arbitrarily, so a wide range of selectivity, maybe 0.5% to 10%, might work about as well.)
To prevent others from speculating, we might force hedging trades to be made via long-delay call markets, which pushes out speculators with shorter-fuse info, or via regulators verifying their legit hedging needs (e.g., a regular paycheck deposit, a withdrawal for retirement, or a big medical expense). And we might insist that hedgers focus on general index funds unless they can show reasons to trade more specific assets.
One problem with this is that most of the profits made by winning speculators today come from the losses of less elite speculators, not from hedge traders. And if we could better segregate hedge traders into call markets, elite speculator profits would be even smaller. Thus unless we could subsidize elite-only stock markets, perhaps via automated market makers, elite speculators would be fighting over a much smaller pool of profits than today, which would likely cut into price accuracy.
Another problem is that it would be hard to prevent speculator contestants from privately buying winning trades, turning this elite selection process into more of a monetary auction. Other professional credentialing processes today use schools and tests, where it is harder to just buy success.
But even if we can prevent contest cheating, and subsidize elite stock market speculation, I fear corruption of elite speculator certification. That is, the official organization in charge of deciding who qualifies as elite speculators may succumb to pressures to favor some groups, and to be overly restrictive to favor insiders. And once you imagine official consensus on legal, policy, or science questions being set by financial markets prices, you can imagine all the more possible pressures to control who is allowed to influence such prices.
For now, I recommend that robust public institutions built using financial markets let as many parties as possible trade in them, even foreigners. But yes, I remain open to the possibility that we could eventually learn well enough how to usefully constrain participation, and to prevent special interests from capturing selection powers. And if I were running a large set of markets for some private owner, I’d be more open to constraining participation to speculative elites.
I basically agree: you need the open market to find the elite 2% in the first place. You can’t just go hire the elite forecasters; that’s like saying, instead of having the stock market, simply let the good stock pickers allocate capital.
What this does seem like evidence for is that prediction markets should try hard to eliminate caps on activity. Ideally the elite 2% makes money and starts doing even more trading on the site. You could end up with the marginal dollar being allocated by one of the elite 2%. As opposed to the sort of market where each person has a similar amount of influence.
Mellers and Tetlock founded a company (Good Judgment Inc) that sells forecasting services to the private sector. As such they have an obvious financial interest in the outcome of this research. This doesn't appear to be disclosed in the "Declaration of competing interest" in the paper (although in fairness I can't read the entire statement).