Tyler against bets:
On my side of the debate I claim a long history of successful science, corporate innovation, journalism, and also commentary of many kinds, mostly not based on personal small bets, sometimes banning them, and relying on various other forms of personal stakes in ideas, and passing various market tests repeatedly. I don’t see comparable evidence on the other side of this debate, which I interpret as a preference for witnessing comeuppance for its own sake (read Robin’s framing or Alex’s repeated use of the mood-affiliated word “bullshit” to describe both scientific communication and reporting). The quest for comeuppance is a misallocation of personal resources. (more)
My translation:
Most existing social institutions tolerate lots of hypocrisy, and often don’t try to expose people who say things they don’t believe. When competing with alternatives, the disadvantages such institutions suffer from letting people believe more falsehoods are likely outweighed by other advantages. People who feel glee from seeing the comeuppance of bullshitting hypocrites don’t appreciate the advantages of hypocrisy.
Yes existing institutions deserve some deference, but surely we don’t believe our institutions are the best of all possible worlds. And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself. Yes it is possible that such squelching is all for the best, but isn’t it just as likely that some folks are trying to hide things for private, not social, gains? Isn’t this a major reason we often rightly mood-affiliate with those who gleefully expose bullshit?
For example, if you were inspecting a restaurant and they seemed to be trying to hide some things from your view, wouldn’t you suspect they were doing that for private gain, not to make the world a better place? If you were put in charge of a new organization and subordinates seemed to be trying to hide some budgets and activities from your view, wouldn’t you suspect that was also for private gain instead of to make your organization better? Same for if you were trying to rate the effectiveness of a charity or government agency, or evaluate a paper for a journal. The more that people and habits seemed to be trying to hide something and evade incentives for accuracy, the more suspicious you would rightly be that something inefficient was going on.
Now I agree that people do often avoid speaking uncomfortable truths, and coordinate to punish those who violate norms against such speaking. But we usually do this when we have a decent guess of what the truth actually is that we don't want to hear.
If it were just bad in general to encourage more accurate expressions of belief, then it seems pretty dangerous to let academics and bloggers collect status by speculating about the truth of various important things. If that is a good idea, why are more bets a bad idea? And in general, how can we judge well when to encourage accuracy and when to let the truth be hidden, from the middle of a conversation where we know lots of accuracy has been sacrificed for unknown reasons?
Contests for status are brutally important for those who are engaging in them. Some participants will be culled in each generation, and it's crucial not to be one of them.
"Consider the matter of status competition. Mr. Roberts, like so many before him, argues that conspicuous consumption is an unhappy zero-sum game. But this is of course true of most forms of competition: Most academics I know can rank-order everyone in the room at a professional conference with the speed and precision of a courtier at Versailles. Any competition, from looks to money to academic credentialing, both consumes a lot of resources and makes many of the participants feel bad about themselves. Why, then, does the literature on status competition always tell us that we should redistribute capital gains or inheritances and never tell us that we should redistribute academic chairs or book contracts?"-- Megan McArdle reviews "Shiny Objects" by James A. Roberts
"Warren Buffett is a recognized "prediction champion." "
How do we know that? Because he got rich? To distinguish between a game of chance and a game of skill you need to repeat the game many times. Major investment deals are too rare to repeat enough times during a human lifetime to get statistically significant results (it gets even worse when you realize many advisors are middle-aged at most). Resolutions of prediction markets will be equally rare.
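A quick simulation makes the sample-size worry concrete. This is a minimal sketch in plain Python, with every number (10,000 advisors, 10 big calls per career) invented for illustration rather than taken from anything in the debate:

```python
import random

# Give many "advisors" pure coin-flip predictions and count how many
# end up with a track record that looks like championship skill.
random.seed(0)

n_advisors = 10_000  # illustrative population of forecasters
n_calls = 10         # major, rare calls per career -- the sample size at issue

perfect_records = sum(
    1 for _ in range(n_advisors)
    if all(random.random() < 0.5 for _ in range(n_calls))
)

# Expected by chance alone: 10_000 * 0.5**10, i.e. about 10 "champions".
print(f"Advisors with perfect records by luck: {perfect_records}")
```

With only ten resolutions per career, even a spotless record is weak evidence of skill: roughly ten pure coin-flippers out of ten thousand will have one anyway.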
"I'd say there is non-trivial evidence that Paul Ehrlich is poor at making predictions about natural resource shortages."
Keeping a record can point out idiots, but that does not prove it can point out champions. And really, how do we know Ehrlich wasn't right on a host of other subjects, just ones that weren't so influential in history?
"Moreover, the entire point of a properly structured prediction market is that the "wisdom of crowds" tends to provide us with better information than would result from individual predictions,
Thus we have no need to fear, "policy based on this wishful championing starts leading to bad things." "
If we base decisions on the wisdom of the crowd we expose ourselves to market manipulation aimed at making us lean toward a certain side. If we base decisions on people who beat the market we might very well be listening to lucky idiots. Maybe we're better off getting used to the inherent fuzziness of the social "sciences" and should stop looking for crystal balls that do not exist, just as we don't take a meteorologist seriously when he claims he can predict the weather on August 18, 2050.
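For what it's worth, both halves of this worry show up in a toy simulation. The sketch below is plain Python with every parameter made up for illustration: averaging honest noisy guesses does beat a typical individual, but a small coordinated bloc shifts the average in whichever direction the manipulator picks:

```python
import random
import statistics

random.seed(0)

true_value = 100.0
# 1,000 honest but noisy individual estimates of the true value.
honest = [random.gauss(true_value, 20) for _ in range(1000)]

typical_error = statistics.mean(abs(x - true_value) for x in honest)
crowd_error = abs(statistics.mean(honest) - true_value)

# A manipulator injects 50 entries (5% of the crowd), all biased upward.
manipulated = honest + [true_value + 50] * 50
biased_error = abs(statistics.mean(manipulated) - true_value)

print(f"typical individual error: {typical_error:.1f}")  # roughly 16
print(f"honest crowd average:     {crowd_error:.1f}")    # typically well under 1
print(f"manipulated crowd:        {biased_error:.1f}")   # pulled systematically off target
```

The honest average beats the typical guesser by a wide margin, which is the crowd's defenders' point; but the manipulated bloc introduces a systematic bias rather than mere noise, which is exactly the exposure described above.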