The Wisdom of Crowds

James Surowiecki’s 2004 book The Wisdom of Crowds offers a somewhat contrarian view of what is typically seen as a widespread bias: the human tendency to follow the crowd and go along with what the majority says is true. Surowiecki argues that in many cases, this is actually a reasonable thing to do, as crowds and groups are often much smarter and more accurate than even their smartest members.

Of course, there are many well-known circumstances in which crowds have done poorly, and much of Surowiecki’s book attempts to tease out the various issues which affect the accuracy of the group consensus. But he begins by setting out examples which demonstrate his thesis. Many experiments were performed over the course of the 20th century in which individuals and groups were asked to make various estimations and predictions, and in which the averaged prediction turned out to be highly accurate, often more accurate than even the best individual prediction in the group. Classic examples include traditional "how many beans in the jar" guessing contests. On these kinds of straightforward factual questions there is ample evidence that groups can perform very well.
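The statistical intuition behind the bean-jar result is that individually noisy but unbiased guesses tend to cancel out when averaged. Here is a minimal simulation sketching that idea; the true count, number of guessers, and noise level are made-up parameters, and real guessers need not have Gaussian errors:

```python
import random
import statistics

random.seed(42)  # reproducible demo

TRUE_COUNT = 850   # hypothetical number of beans in the jar
N_GUESSERS = 100

# Each guesser is noisy but unbiased: errors are centered on the
# true value, so they tend to cancel when averaged together.
guesses = [TRUE_COUNT + random.gauss(0, 200) for _ in range(N_GUESSERS)]

group_error = abs(statistics.mean(guesses) - TRUE_COUNT)
individual_errors = [abs(g - TRUE_COUNT) for g in guesses]

print(f"group estimate error:    {group_error:.1f}")
print(f"median individual error: {statistics.median(individual_errors):.1f}")
print(f"guessers beaten by the group average: "
      f"{sum(e > group_error for e in individual_errors)} of {N_GUESSERS}")
```

The averaging trick fails exactly when the "unbiased" assumption fails: if everyone's error leans the same way, the average inherits that shared bias rather than cancelling it.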

(I should note that Surowiecki’s book is more broad than deep, and presents much of its information anecdotally. It wasn’t until I found the Notes at the back that I was able to see how well founded many of his claims are. In fact I might almost recommend reading just the 20 page Notes and only dipping into the main text if you want to enjoy the story that he builds around the results.)

Surowiecki also cites the accuracy of prediction markets, including sports betting as well as the newer election markets, where the consensus odds have proven to be highly accurate. Google’s PageRank is also invoked as an example of successfully exploiting the wisdom of crowds.

But as that last example illustrates, success depends crucially on how the collective information is acquired and evaluated. PageRank is a highly artificial construction, and one could imagine many alternative algorithms which would not have worked as well. Infrastructure and institutions make the difference between a crowd which can bring its diverse knowledge to bear accurately, and one which only manages to exaggerate its own biases.

This last process is described as an "information cascade" and it is one of the most common traps that crowds can fall into. The problem is that recognizing the wisdom of crowds involves a paradox. The crowd can only be wise if the information and insights from all its members are incorporated. But if each person believes that the crowd is wiser than he is (as would typically be correct), then they will only echo back what they think is the crowd consensus, leading to "groupthink" and runaway feedback. This is one way of explaining well-known mob behavior such as investment bubbles. Each person changes his own beliefs about prices when he sees the crowd consensus, producing positive feedback and driving prices to unsustainable levels.
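The echo-back dynamic can be sketched with a toy sequential-choice model (in the spirit of the classic cascade literature, not taken from Surowiecki's book): each agent sees a private signal plus everyone's earlier public choices, and defers to the crowd once its lead clearly outweighs one signal. The signal sequence below is contrived to show the trap:

```python
def choose(private_signal, public_choices):
    """Follow the crowd once its lead outweighs a single private
    signal; otherwise follow the private signal."""
    lead = public_choices.count("up") - public_choices.count("down")
    if lead >= 2:
        return "up"
    if lead <= -2:
        return "down"
    return private_signal

# Hypothetical private signals: only the first two point "up", yet
# those two are enough to start a cascade that the other eight
# "down" signals can never break.
signals = ["up", "up"] + ["down"] * 8

choices = []
for s in signals:
    choices.append(choose(s, choices))

print(choices)  # everyone after the second agent ignores their own signal
```

Once the cascade starts, later choices carry no new information: the crowd has stopped aggregating its members' knowledge, which is exactly why polling people before they see the consensus (as Surowiecki advises below) preserves information that sequential observation destroys.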

Surowiecki therefore advises that collective judgements are best made by at least initially acquiring estimates from each member before he has been exposed to the group consensus. In some circumstances it may then be appropriate to have group discussion or interaction, in order to attempt to reconcile conflicting views. But he also cites cases where these kinds of interactions have been shown to be harmful, causing the group to move to more extreme positions. In the end, aside from the clear evidence that some institutional arrangements work better than others, and the identification of a few common failure modes, it is still an open problem how best to benefit from collective judgement without falling prey to information cascades and other failures.

It’s also clear that the public has a number of false beliefs on specific issues, as discussed here earlier. Such errors appear to have a cause other than information cascades, although no doubt people are influenced by the fact that their self-selected peers share their opinions on these matters. It’s tempting to say that people are wrong about, say, the U.S. foreign aid budget, because it doesn’t matter to them whether they are right or wrong, since they have effectively no influence over policy. However, in many of the classic laboratory experiments people are quite successful at making collective judgements about matters where being wrong would be equally unimportant.

When to believe the crowd, when to believe experts, how to cut through the maze of differing opinions and achieve a realistic approximation to the truth: these are all questions that I struggle with. I know many people who are smart, and whose solution is to try to become an "instant expert" on every issue, determining the truth by thinking for themselves and weighing all the arguments on each side. To me, this is a spectacularly unlikely route to success! The mere fact that the process produces widely differing answers among different practitioners is a very bad sign. Further, if you delve deeply enough into most subjects of controversy, you find that the arguments and evidence turn out to be subtle and complex, and the superficial gloss which is the best a layman can hope to achieve is likely to miss the real issues.

Rather, I hope to find rules of thumb by which an extremely resource-constrained truth-seeker can judge which side is correct on various factual questions. Following the mob may well be the best thing to do in many circumstances. Surowiecki’s book offers tantalizing clues about when this can work, but there are still many open questions.

  • I’m glad Surowiecki wrote his book, in part because it called attention to these issues. My main complaint is that in many readers’ minds his book framed the choice of prediction markets as a choice of crowd wisdom over expert wisdom. In fact, prediction markets can and do work well in situations with lots of expert wisdom and little crowd wisdom. By choosing a prediction market you choose a mechanism which should do well regardless of who knows more; the people who know much should self-select to participate, and there is little harm (and often benefit) from allowing those who know little to participate.

  • radish

    Interesting. So a useful question might be “does the accuracy of a crowd’s ‘crowdsense’ correlate with the way that its individuals perceive and analyse collective opinion?” Does Surowiecki discuss that? If you’re trying to predict the accuracy with which a particular crowd will answer certain questions, and you want to find (and eventually tweak) the simplest possible parameters that will enhance that accuracy, then measuring things like the weight that individuals give to the collective opinion seems like a good bet.

    An anecdotal look at the extreme cases would seem to support it… Accurate: people rave about the Delphi Method, and that’s an expert-oriented prediction market, structured in a way that very much suppresses runaway positive feedback. Inaccurate: bubble markets, where the only significant market force is the credibility of collective opinion, but the collective opinion still fails to predict when the market will collapse.

  • Digg’s Fatal Flaw – Its Users

    Digg this, Digg that – crowdsourcing is the ultimate way to unearth what’s newsworthy. Or at least it would be – if its user-base actually did some real ‘digging’ and checked the facts behind a story (or at least the…

  • Earl Stevens

    Considering the opposite side of the issue, I highly recommend Cass Sunstein’s “Why Societies Need Dissent”. This book isn’t nearly as entertaining of a read as Surowiecki’s – it’s pretty much just a series of lectures strung together as chapters. Throughout, Sunstein presents and dissects the pros and cons of many group behaviors. Filled with lots of examples, and plenty of research citations. Check it out.

  • I found The Wisdom of Crowds a very stimulating book which raised a very important question: how does the WoC effect actually work, i.e., what are the mechanisms? However, the book made no attempt to address this obvious question, which I thought was strange.

    On the other hand, I have since utterly failed to come up with any plausible answer myself. Any ideas? How about the ‘beans in a jar’ example (assuming the evidence is correct): how is it that group average guesses are accurate?

  • TGGP

    Bruce, I think the reason the “beans in a jar” example is supposed to work is that people do not have a systematic bias in any direction. I don’t know if that requires an explanation, but it would seem to me that biases are what demand them. People would often be in error, but their errors are random and centered around the true value. Averaging everything together causes these random errors to cancel each other out.

    I haven’t read the book, so it’s possible that you already knew that and were asking something different.

  • The Wisdom of Crowds Paradox

    Donn Parker is a security professional who wrote an article for ISSA Magazine a few months ago that asserted that risk management should be replaced by due diligence, compliance, and enablement (whatever that is). Of course, ignoring risk is simply one…

  • Nick Walker

    Well you won’t fall behind if you follow the crowd. “Runaways” substitute short term reputability for longer term status, erroneously assuming the crowd will appreciate the value of insight.
