
Contests for status are brutally important for those who are engaging in them. Some participants will be culled in each generation, and it's crucial not to be one of them.

"Consider the matter of status competition. Mr. Roberts, like so many before him, argues that conspicuous consumption is an unhappy zero-sum game. But this is of course true of most forms of competition: Most academics I know can rank-order everyone in the room at a professional conference with the speed and precision of a courtier at Versailles. Any competition, from looks to money to academic credentialing, both consumes a lot of resources and makes many of the participants feel bad about themselves. Why, then, does the literature on status competition always tell us that we should redistribute capital gains or inheritances and never tell us that we should redistribute academic chairs or book contracts?"-- Megan McArdle reviews "Shiny Objects" by James A. Roberts


"Warren Buffett is a recognized "prediction champion." "

How do we know that? Because he got rich? To distinguish between a game of chance and a game of skill you need to repeat the game many times. Major investment deals are too rare to repeat enough times during a human lifetime to get statistically significant results (it gets even worse when you realize many advisors are middle-aged at most). Resolutions of prediction markets will be equally rare.
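To make that luck-versus-skill point concrete, here is a minimal simulation sketch; the population size and the number of "major deals" per career are invented purely for illustration, not taken from any real data:

```python
# A minimal sketch of the luck-vs-skill point; the population size and the
# number of "major deals" per career are assumed purely for illustration.
import random

random.seed(0)

N_BETTORS = 10_000   # hypothetical population of advisors
N_DEALS = 8          # major deals per career -- deliberately small

# Every bettor calls each deal by a pure coin flip; count correct calls.
records = [sum(random.random() < 0.5 for _ in range(N_DEALS))
           for _ in range(N_BETTORS)]

perfect = sum(1 for r in records if r == N_DEALS)
print(f"Bettors with a perfect {N_DEALS}/{N_DEALS} record by chance alone: {perfect}")
# Expect roughly 10_000 / 2**8, i.e. about 39 flawless records -- none of
# which reflects any skill at all.
```

Picking out the best-looking record after the fact therefore tells us little by itself; the record has to be long enough, and specified in advance, before it distinguishes a champion from a lucky guesser.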

"I'd say there is non-trivial evidence that Paul Ehrlich is poor at making predictions about natural resource shortages."

Keeping a record can point out idiots, but that does not prove it can point out champions. And really, how do we know Ehrlich wasn't right on a host of other subjects (just ones that weren't so influential in history)?

"Moreover, the entire point of a properly structured prediction market is that the "wisdom of crowds" tends to provide us with better information than would result from individual predictions,

Thus we have no need to fear, "policy based on this wishful championing starts leading to bad things." "

If we base decisions on the wisdom of the crowd we expose ourselves to market manipulation aimed at making us lean towards a certain side. If we base decisions on people who beat the market we might very well be listening to lucky idiots. Maybe we're better off getting used to the inherent fuzziness of the social "sciences" and should stop looking for crystal balls that do not exist, just as we don't take a meteorologist seriously when he says he can predict the weather of August 18, 2050.


"I doubt that, I'd say that averaged over many prediction markets few, if any, people would emerge as prediction champions."

Warren Buffett is a recognized "prediction champion." I'd say there is non-trivial evidence that Paul Ehrlich is poor at making predictions about natural resource shortages.

The salient question is, "Would information about the patterns of prediction made by an individual add to our knowledge of that individual's judgment?" At present, as a society we tend to use academic reputation as a proxy measure for "good judgment," at least in the relevant academic field. As individuals we might also use our personal evaluations of particular combinations of evidence and reasoning to evaluate the quality of an individual's judgment.

Given the limitations of both of those approaches, how can it not improve our understanding of the quality of individual judgment to have records of a particular individual's predictions and the outcomes of those predictions?

Moreover, the entire point of a properly structured prediction market is that the "wisdom of crowds" tends to provide us with better information than would result from individual predictions, so that as a society we would more likely use the overall prediction market outcomes than the decisions made by particular individuals. Even if the result was a general decrease in respect for academic expertise, combined with a greater respect for market predictions, that would be a positive outcome.

Thus we have no need to fear, "policy based on this wishful championing starts leading to bad things."

And anytime an intellectual wanted to claim that the market outcomes are consistently worse than his or her own, then we can simply ask the intellectual to prove it - demonstrate that he or she can consistently outperform market-based predictions.
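One hedged way to make "prove it" concrete would be to score the intellectual's probability forecasts against market prices on the same resolved questions, for example with Brier scores. A sketch follows; every forecast, price, and outcome in it is invented for the example:

```python
# Illustrative sketch only: score an individual's probability forecasts
# against market prices on the same resolved questions with the Brier score
# (lower is better). Every number below is invented for the example.

def brier(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes         = [1, 0, 0, 1, 1]            # how five hypothetical questions resolved
expert_forecasts = [0.9, 0.4, 0.3, 0.6, 0.5]  # the intellectual's stated probabilities
market_prices    = [0.8, 0.2, 0.1, 0.7, 0.8]  # closing prediction-market prices

print("expert Brier score:", brier(expert_forecasts, outcomes))
print("market Brier score:", brier(market_prices, outcomes))
# The claim to beat the market only stands if the expert's score comes in
# consistently lower across many such resolved questions.
```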

Given the sustained enthusiasm that intellectuals had for communism over a seventy-year period, despite repeated mass famines and police states, it really would be hard to do worse.


Contests for status are a misallocation of resources. This is really the main point.


"If more professors were expected to make reputational bets, we would gradually see which ones were able to make decent judgments about reality and which were not."

I doubt that, I'd say that averaged over many prediction markets few, if any, people would emerge as prediction champions. Sure, we might see someone get it right three times in a row and be in awe of that, but it really would not be statistically significant. The problem is that a human lifespan may very well be too short to get statistically significant results about a single person's prediction skills, so human nature takes over: we champion those who get it right a couple of times in a row, and we only find out our mistake when policy based on this wishful championing starts leading to bad things.
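For what it's worth, the arithmetic behind "three in a row is not statistically significant" is easy to check. A back-of-envelope sketch, assuming each call is a 50/50 guess under the null hypothesis:

```python
# Back-of-envelope check, assuming each call is a 50/50 guess under the null:
# how long must a perfect streak be before chance stops being a plausible
# explanation at the conventional 0.05 threshold?
for streak in range(1, 11):
    p_by_chance = 0.5 ** streak   # probability that a pure guesser nails the whole streak
    flag = "  (significant at 0.05)" if p_by_chance < 0.05 else ""
    print(f"{streak} correct in a row: p = {p_by_chance:.4f}{flag}")
# Three in a row gives p = 0.125, well short of significance; it takes five
# consecutive correct calls (p = 0.03125) before luck looks unlikely, and far
# more if we get to cherry-pick which streak to report after the fact.
```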


I'm very glad to see Robin persisting in this argument, and flabbergasted that Tyler doesn't see the value of betting as a way to improve the extent to which more accurate information becomes disseminated more quickly.

There are many topics on which elite academics have been mistaken for long periods of time, whereas those with less elite reputations were significantly more accurate. For instance, economists such as Milton Friedman and Peter Bauer, who emphasized in the 1950s, 60s, 70s, and 80s that market-oriented economies would grow more quickly than various statist economies, were widely regarded as "ideological." In the meantime, economists such as Samuelson and Galbraith predicted convergence of GDP per capita between communist and "capitalist" economies. Had predictions been a relevant factor in academic reputation, the views of Friedman and Bauer would have replaced those of Samuelson and Galbraith far more quickly than actually took place - and hundreds of millions, perhaps billions, of people might have escaped poverty more quickly.

This is not to imply that Friedman was always right nor that Samuelson was always wrong. Nor is this to claim that there is not a role for intellectual speculation removed from empirical prediction.

But I do see Robin's proposals as a significant improvement upon academic publishing alone as the basis for identifying which propositions about reality are more likely to serve as a sound basis for taking action. I would trust prediction markets over the opinions of "reputable scholars" in most of the social sciences most of the time.

The only reason we grant academia money and status is the belief that academia, as an institution, is an efficient mechanism for identifying "truths." In the sciences, this assumption seems largely accurate. Outside the hard sciences, results may vary, to say the least. I see prediction markets and reputational bets as the best strategy for improving the signal-to-noise ratio that currently exists in academic social science.

If more professors were expected to make reputational bets, we would gradually see which ones were able to make decent judgments about reality and which were not. I suspect we would see little correlation between academic prestige and empirical insight. The ultimate result, sorely needed, would be a lowering of prestige for academics who exhibited little empirical insight and improved prestige for individuals who did - regardless of credentials.


Some does, not all.


But doesn't all the noise at the micro level cancel out?


First, I'm really more concerned about arguers expressing their confidence rather than admitting their lack of confidence. If you have definite doubts, you should say so. That's just honesty. If you're unsure of a factual matter, you shouldn't pretend to confidence--not if you're interested in truth. But being confident of an intellectual position tells us much more about the person than about the belief itself. When the discussion is about _opinion_, appeals to authority are improper, and that's not because they're "uninformative." "Information" can serve as diversion rather than elucidation in an ongoing discourse.

Second, I didn't say in the comment you're responding to that there's no information in expressing confidence. I did say there's no "value" in doing so. But that's highly relative to the kind of discussion. In discussions of controversial abstract matters, it's usually a distraction.

Do you think scientific reports should include information about the researcher's [personal, not statistical] degree of confidence in his conclusions? Even if we had the technology to discover it objectively using brain scans or something, would that information be helpful? It's simply confusion to say, as Robin seems to, that such information helps make the belief itself clearer, which was my main point in the above comment. I would be frankly appalled to see such information in a research report. Why? Because rather than being some Bayesian advance on weighing evidence, it would signal a reversion to unsophisticated primal attitudes where intellectual controversy is really about a personal conflict involving the proponent. It creates a status issue where we should strive to minimize status as a consideration.


Futarchy is an extremely strong claim, and let's say someone is (reasonably) skeptical that it will work. He's also reasonably skeptical that you actually believe this will work.

How do you reveal your true belief that this will work? Seems like "betting" on it won't work, because your whole theory assumes betting works, and that is just begging the question.


An aggregation of bets could not give us info if each one did not give us info on average as well.


Follow up post: I want to defend both Tyler and Robin by explaining why each one might be right, depending on whether we look at bets at a “micro” or individual level or a “macro” or aggregate level. In summary, if we look at a large number of bets on a given topic or question at a macro level, then Robin is right: all these bets in the aggregate tell us something, and this is why prediction markets are so powerful and should be legalized. But at the same time, each individual bet on a micro scale may not necessarily reveal all that much information, for the reasons Noah and others have given (and, I would add, because individuals might make inconsistent bets over time), so Tyler is right when we look at bets on a micro or individual scale ...

- See more at: http://marginalrevolution.c...
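A toy simulation of that micro/macro distinction (the true value, the noise level, and the crowd sizes below are all assumed for the sake of the example): each individual estimate is noisy, yet the average of many independent estimates becomes accurate as the crowd grows.

```python
# Toy illustration of the micro/macro point; the true value, the noise level,
# and the crowd sizes are all assumed for the sake of the example.
import random

random.seed(1)
TRUE_VALUE = 0.70   # the quantity everyone is trying to estimate (assumed)
NOISE_SD = 0.15     # spread of each individual's error (assumed)

def individual_estimate():
    """One noisy, independent estimate of the true value."""
    return TRUE_VALUE + random.gauss(0, NOISE_SD)

for n in (1, 10, 100, 1000):
    errors = []
    for _ in range(2000):                       # repeat to measure typical error
        estimates = [individual_estimate() for _ in range(n)]
        crowd_average = sum(estimates) / n
        errors.append(abs(crowd_average - TRUE_VALUE))
    print(f"crowd of {n:4d}: mean absolute error of the average = {sum(errors)/len(errors):.3f}")
# The error of the crowd average shrinks roughly as 1/sqrt(n): individual
# noise largely cancels in the aggregate, provided the errors are independent
# rather than shared (e.g. not driven by a common manipulation or bias).
```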


"Consider that in a court of law, an attorney is prohibited from expressing his personal beliefs about the case." True, so why apply the Turing Test to law, see http://papers.ssrn.com/sol3...


Surely there must be a distinction between "someone in control is hiding something" and "by and large the members of an institution conspire not to tell each other the truth".

This is why the restaurant example appears misleading to me: there is a clearly identifiable actor in possession of a (knowably accurate) piece of information who is actively suppressing it (keeping others from obtaining it). But most scenarios people have in mind when discussing prediction markets are very different from that situation: while there is a whole lot of information, it is distributed among many actors and carries a certain degree of uncertainty. It is not even necessarily the case that anybody is holding information back consciously.

In particular, the moral implications are then quite different: while in the restaurant case there is something akin to lying, the more complex scenario is marked by something closer to (possibly feigned) indifference.


"And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself."

Is this really true? I know I am paraphrasing here, but I recall a discussion by the Bush administration where they were defending some of their inaccuracies because they believed that they were "creating" the reality on the ground. Is "truth" really a positive good? For some people (for example, religious fanatics) I would think that their preference is actually to deny the physical "truth" around them. One could say the same about the use of drugs (even just alcohol).

I am sure I am straying far off topic. I appreciate the high quality of discussion on this site. However, I felt compelled to write because your post really got me thinking about this.


Tyler writes, "The quest for comeuppance is a misallocation of personal resources."

That's certainly not true when the contest is for status within the chattering class!
