7 Comments

This may not be the best type of contest to analyze this question. Everyone has equal access to all sorts of handicapping data for NFL games. To win this type of game, a contestant has to be at least as good as the conventional wisdom. There is no use in selecting the most "probable" probabilities, because out of thousands of contestants, many people will do the same thing, plus get lucky on a few other games. You need to start with the obvious games, then bet on several unlikely outcomes. In other words, with so many contestants, some people will naturally end up out on the tails of the curve, and you need to take some chances to end up there too.

Coming in the middle of the pack wins you nothing, so taking some chances in order to come out on top costs you nothing. Consequently, lots of contestants guess "improbable" probabilities on occasion, and the result is that 64% of them are below zero.

Averaging out all the "improbable" guesses people make (and home-team biases) would be expected to place you relatively high in the pack. But admittedly, 7th place is pretty stunning.


You might want to give yourself more credit than a crowd if you know you have better access to information than that crowd. For example, if you just saw the star player on a football team get hit by a car and the news hadn't gotten out yet, you could justly assume that your expectation of that team's performance would be better than the crowd that didn't know about it yet. Another example is stock trading based on non-public insider information, which is illegal in the United States.

Here's a final example. I was one of the top students in my first-year Chemistry class at college. All the exams were multiple choice with five possible answers. On one exam, the average grade was a 40. I got something like an 85. (The professor graded on a curve, so that 40 would be a passing grade.) Also, when I was taking the final, there was a mistake on one of the multiple choice problems; none of the answers were correct. I was the only one confident enough in my own work to bring this to the attention of the professor. As it turned out, there was a typo in the problem.

Finally, I'd like to direct everyone to this article: "Information Cascades in Magic." It's a really good discussion of the issue, although I don't know how much it can be understood by someone who does not play Magic: The Gathering.

If you know that Magic: The Gathering is a game in which players first build a deck of cards by choosing from among the hundreds of different cards available, and that decks that win tournaments are routinely discussed and copied by other players, you'll have some idea what he's talking about. I don't have the time right now, but if you ask, I could try to work on a version that doesn't require any understanding of Magic at all, because it might be very significant to the topic of this blog.


"The percentages can be interpreted as follows: Given access to, for example, nine other random assessments, you should ignore them only if you believe you are among the top five percent of participants. And only the cream of the cream -- 6 of the 2231 players -- outperformed the crowd: You are justified to ignore the crowd only if you believe you're in the 99.73 percentile of predictors."

This seems very iffy to me. Do we really expect those guys to outperform the crowd again if we run a new trial?

The standard deviation in these things should be fairly large, and I could see those guys as simply being lucky rather than actually that much *better*.
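The arithmetic behind the quoted threshold is easy to check: if only 6 of 2231 players beat the crowd average, self-trust is justified only above the 100 × (1 − 6/2231) ≈ 99.73rd percentile, just as the quote says.

```python
# Sanity check on the quoted figure: 6 outperformers out of 2231 players.
players = 2231
outperformers = 6
percentile = 100 * (1 - outperformers / players)
print(round(percentile, 2))  # 99.73
```

Of course, as noted above, this is a percentile of one observed outcome; it says nothing by itself about whether those 6 would repeat the feat in a new trial.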


Daniel and David, you only compare two options: giving yourself all the weight or giving yourself equal weight with a random person. The more interesting question is how much better than average you need to be to justify any particular intermediate weighting scheme, where you give yourself more weight than average but not all the weight.
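One way to make "intermediate weighting" concrete is a toy Gaussian error model (my assumption, not anything from the post): if your estimate has error variance v_self and each of n crowd members has error variance v_crowd, precision weighting gives the blend below. The function name and parameters are illustrative.

```python
# Hypothetical sketch: blend your own estimate with the crowd mean
# using inverse-variance (precision) weights.
def blended_estimate(own, crowd, v_self, v_crowd):
    n = len(crowd)
    w_self = 1 / v_self
    w_crowd = n / v_crowd  # the crowd mean has variance v_crowd / n
    crowd_mean = sum(crowd) / n
    w = w_self / (w_self + w_crowd)
    return w * own + (1 - w) * crowd_mean

# With equal skill (v_self == v_crowd) and nine others, your estimate
# gets weight 1/10 -- exactly "equal weight with a random person".
print(blended_estimate(0.9, [0.5] * 9, 1.0, 1.0))
```

In this model, giving yourself more than 1/n of the weight is justified exactly when you believe v_self is smaller than v_crowd, which recasts the "how good must I be?" question as an estimate of relative error variance.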


The narrow domain example self-selects those who are interested, motivated, and involved, which is a good thing for the sake of clarity; it eliminates (mostly) error from the ignorant, lazy, and apathetic. Like Doug S., I'm wondering how to apply this to fuzzier situations.

I, myself, am seriously trying to come up with a probability number for the following event: Islamic terrorists acquire a small number of nuclear devices (from Pakistan, from India, from North Korea, from Iran, from a former Soviet state) and explode them within the U.S. How can I select an interested, motivated, knowledgeable crowd and correct for intrinsic political biases and blind spots?

The post implies I should trust the wisdom of the crowd; my gut tells me most crowds wouldn't have sufficient motivation or knowledge, and if I select the crowd, I'd probably just select people who thought the same way I did, and I'd be back where I started.

What to do?

(Yes, my mother called me lpdbw. She just pronounced it "John").


That just leaves me with one question: how do you know when the crowds are likely to be wrong? There are some topics on which I believe I might very well be in the 99.97th percentile of predictors when compared with the population at large. For example, if I asked some random person a question about the game Magic: the Gathering, most people wouldn't have any idea what I was talking about. If I happened to be asking people in Turkey, I'd find that less than one third of them accepted the fact - not "theory" - of evolution. One Ph.D. geologist is more likely to give you a better estimate of the true age of the Earth than the average of a million religious fundamentalists. Heck, just consider the sheer numbers of people in the United States who believe in Santa Claus! Sure, they're generally young children, but a datum is a datum, right?
