Most democratic systems are pretty simple. To a first approximation, anyone can run for office, any adult citizen can vote, and voters can use all their usual ways to associate and talk to evaluate and coordinate on who to vote for in upcoming elections.
Imagine that some academics instead develop and advocate for a “curated” system of democracy. They research how democratic outcomes vary with the candidates, who votes on which candidates, and who talks to who about which candidates and topics. These academics say that if you put someone who knows this literature well in charge of “curating” democracy, you can get better outcomes.
Assume that these researchers have the usual level of academic competence at doing their research. They study real phenomena and make real progress, but have the usual academic biases, such as playing the usual games of hindering rivals via insider clubs and method fashions. Their results tend to be complex, though news media can sometimes offer deceptively simple summaries of them.
How eager are you to replace your simple democratic system with a voting system curated by an expert credentialed by these academics? That is, to put these curators in charge of who can run for office, who can vote on what, and who can talk to whom, how, and about which political topics? They wouldn’t suggest simple rules that we could then debate and choose whether to adopt. No, they’d make many detailed context-dependent choices that they couldn’t well explain to us; we’d just have to trust them.
Most of us wouldn’t trust them, and thus would be wary of such curated democracy. That’s because democracy is less about having a well-oiled machine and more about having a simple, neutral system that we can trust when we don’t together trust any particular people that much to run our system.
This is how I feel about the forecasting systems and contests that are now popular among academics, relative to simple prediction markets. In a simple prediction market, you set up a topic on which to bet cash, and then let any individual or group bet cash there at any time, in any amount, and the current price is your best estimate. Biases are to be fixed by traders profiting from finding and correcting them. Yes, each market has some mechanical details, but those matter less when there is lots of trading, and it usually works okay to let people compete to pick details of the markets they pay to create.
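To make that concrete, here is a minimal sketch of such a market in Python, using a logarithmic market scoring rule as an automated market maker; the two-outcome setup, the liquidity parameter, and the class name are illustrative assumptions, since a real market would add an order book or fees and a settlement process.

```python
import math

class LMSRMarket:
    """Minimal two-outcome prediction market using a logarithmic
    market scoring rule (LMSR) automated market maker (a sketch)."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity          # higher b = prices move less per bet
        self.shares = [0.0, 0.0]    # outstanding shares for [YES, NO]

    def _cost(self, shares):
        # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome):
        # Current price of an outcome = the market's probability estimate.
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, amount):
        """Buy `amount` shares of `outcome`; returns the cash cost."""
        new = list(self.shares)
        new[outcome] += amount
        cost = self._cost(new) - self._cost(self.shares)
        self.shares = new
        return cost

# Anyone can bet at any time, in any amount; the current price is the estimate.
m = LMSRMarket(liquidity=100.0)
print(round(m.price(0), 3))   # 0.5 before any trades
m.buy(0, 60.0)                # a trader backs YES with 60 shares
print(round(m.price(0), 3))   # price (estimate) rises to about 0.65
```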
In contrast, in curated prediction contests, the curators pick who participates on which questions, assign them to teams in which they work together, assign them each a weight in a final consensus function that the curators choose, say how and in what units each is rewarded as a function of their predictions and outcomes, and adjust the consensus for various “biases” they see. Curators say that in their studies this approach gives more accurate predictions.
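For contrast, here is a rough sketch of the kind of consensus function a curator might choose, assuming weighted log-odds pooling with an “extremizing” exponent; the weights, the pooling rule, and the exponent are all illustrative assumptions, not any particular curator’s actual method.

```python
import math

def curated_consensus(forecasts, weights, extremize=1.0):
    """Combine individual probability forecasts into one consensus number
    via a weighted average of log-odds, optionally pushed toward 0 or 1
    by an 'extremizing' exponent. The weights, pooling rule, and exponent
    are discretionary choices a curator would make, not fixed by theory."""
    assert len(forecasts) == len(weights)
    total = sum(weights)
    # Weighted mean of log-odds.
    pooled = sum(w * math.log(p / (1 - p))
                 for p, w in zip(forecasts, weights)) / total
    # Extremizing: an exponent > 1 pushes the consensus away from 0.5.
    pooled *= extremize
    return 1 / (1 + math.exp(-pooled))

# Three forecasters, curator-assigned weights, mild extremizing.
# Prints roughly 0.73 for these illustrative inputs.
print(round(curated_consensus([0.70, 0.55, 0.80], [2.0, 1.0, 1.0], extremize=1.2), 3))
```

Each of those knobs can shift the consensus, and few of them are easy for outsiders to audit.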
That accuracy claim may well be true. Except they don’t run the crucial test, where a lot is at stake in the decisions that the forecasts influence, so much that interested parties try to corrupt the curators themselves: by bribing curators, threatening to get them fired, or just taking over the whole process by which curators are trained and selected. The more details that curators control, and the harder it is to understand their reasons for making adjustments, the more room there’d be for curator corruption.
Institution/system/mechanism design is a very different problem when (a) you can trust someone to run it and make discretionary adjustments as needed, versus when (b) there is no one we can agree to trust, so we need to agree on something simple and clear that will run with few such adjustments. I’m most interested in that second kind of design problem.
What is a "contest"? What are these contests popular among academics?
Yes, simpler is better, so I prefer an open contest to a curated contest, but I'd expect anything that honestly deserves the name "contest" to be most of the way there. I think the most important details are keeping track records and actually being adversarial.
The context really matters. One of the big issues with curation is that the more influence someone's recommendations have on real-world power etc., the stronger the incentives to tilt your views to present a common front. Yes, in general it's a problem that, fearing your opponents are being tricksy, you can't afford to acknowledge or raise drawbacks in your own positions, since they surely wouldn't. However, the more you are part of the business of governing, the relatively greater those incentives are, since whether or not you toe the line has much more influence on both the laws passed and your own future.
So I'm happy using a curated prediction market in circumstances where I don't feel the curators have much stake in anything besides good outcomes. Tetlock's superforecasters are a good example, as they predict events that are largely abroad and there is no obvious way any position would benefit the curators over others (apart from accurate prediction). I think some corporate contexts would be like this; in many others, and in government, it has all the features that raise the risks you describe so well.
Note that I don't think it's quite that the stakes aren't high. Rather, it's that the defenses are strong against the threats. The CIA uses some of these methods, so I'm sure people could benefit by affecting them, but no more so than they could by affecting government agents generally, and it would be impractical to infer just what answers would benefit them.
So it's not just when things are low stakes. It's when it's hard to gain by offering corruption, since the defenses are strong, the outcomes you'd want predicted are hard to identify, and you have easy contact with experts.