36 Comments

I said minimally informed people, which means, people who read the New York Times.

If you read Caplan's review, he points out that the "laymen" in Tetlock's study were students who would have been far better informed than the general public.

But their choices are often no better than flipping a coin in domains with larger uncertainties.

Caplan also states in his review that Tetlock "loaded the dice", as they say, in favor of the randomly guessing "chimps" and against experts.


Let me put the challenge this way. Either there are indicators of who is more accurate or there are not. If there are no indicators, then everyone is equally likely to be accurate, and a simple average of everyone's views would be the best we could do. If there are indicators of who is more accurate, then you should defer to the opinions of people who score better on those indicators, just as you expect those who score worse to defer to you.
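To make the two cases concrete, here is a minimal sketch in Python. All the numbers are made up for illustration: the forecasts are probabilities for a single yes/no event, and the scores are hypothetical track-record indicators, not anything from Tetlock's data.

```python
# Minimal sketch of the two aggregation cases described above.
# Forecasts are probabilities for one yes/no event; scores are
# hypothetical accuracy indicators (higher = historically more accurate).

forecasts = [0.9, 0.6, 0.3, 0.7]

# Case 1: no indicators of accuracy exist, so a simple average of
# everyone's views is the best we can do.
simple_average = sum(forecasts) / len(forecasts)

# Case 2: indicators exist, so weight each view by its accuracy score,
# i.e. defer more to those who score better.
scores = [2.0, 1.0, 0.5, 1.5]
weighted_average = sum(f * s for f, s in zip(forecasts, scores)) / sum(scores)

print(f"simple average:   {simple_average:.2f}")
print(f"weighted average: {weighted_average:.2f}")
```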


minimally informed people, which means, people who read the New York Times

If I might offer an alternative perspective: some say the Times is better at indoctrinating than informing.


TGGP, let me also add, I think Caplan is right when he says experts can help frame questions properly, give choices, etc. But their choices are often no better than flipping a coin in domains with larger uncertainties.


TGGP: I said minimally informed people, which means, people who read the New York Times. If you also reference the recent discussion on this blog and Gelman's blog about social science, you'll realize Caplan's larger point is wrong. Social science experts often have a tendency to forecast well outside their knowledge, which makes them look foolish and, as Tetlock points out, no more accurate than a New York Times reader. On the forecasts people actually care about (i.e., the ones the media actually asks experts about), experts often aren't very good. If anything, this point of Tetlock's is one of the main reasons I favor Robin's project, decision markets, over anything else.
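To illustrate what "no better than flipping a coin" means in scoring terms, here is a toy Brier-score comparison. Tetlock's actual scoring was more elaborate (calibration and discrimination), and every number below is invented purely for illustration.

```python
# Toy illustration of "no better than flipping a coin" in scoring terms.
# The Brier score is the mean squared error of probability forecasts
# (lower is better); a forecaster who always says 0.5 scores exactly 0.25.

def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 0, 0, 1, 1, 0]               # what actually happened (made up)
expert = [0.8, 0.6, 0.7, 0.4, 0.9, 0.5]     # confident but miscalibrated (made up)
coin = [0.5] * len(outcomes)                # the "chimp" baseline: always 50/50

print(f"expert Brier score: {brier(expert, outcomes):.3f}")   # ~0.252
print(f"coin   Brier score: {brier(coin, outcomes):.3f}")     # 0.250
```

In this made-up run the confident expert actually scores slightly worse than the 50/50 baseline, which is the shape of the result Tetlock reports for many political forecasts.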


if you took a short list of standard candidates for "smartest person ever" based on IQ or genius accomplishments you would do worse in terms of finding accurate sets of beliefs than if you took, say, an average elite university natural scientist.

I'm sure some (most?) of this is due to selection from different time periods. If you took the same candidates and compared them to a randomly selected elite academic from the last 2500 years, or alternatively compared the modern scientist to some candidate smartest people of the 20th century, the "smartest" groups would look a lot better, and the "smartest based on accomplishments" group might even win. Also, given the context of this discussion, you might want to exclude self-evidently crazy people like Gödel, since you would exclude them if looking for the most generally reliable people.


komponisto: Perhaps Sabbagh thinks that an expert is just like a version of himself with a few more bits of data at hand; whereas in fact experts have much more than that: they have familiarity with long chains of reasoning that lead far away from our common everyday starting-points.

Maybe *some* experts. I seriously doubt that most experts can explain all the logical steps in reasoning, starting from layman's knowledge, that lead to their current expertise (which, by the way, is my criterion for establishing whether you understand something). In my admittedly limited experience, getting an expert to explain the context of their knowledge, and what it is based on, is like pulling teeth. They seem to operate in a sort of "Chinese room", manipulating symbols to the satisfaction of others in the field, but not understanding them.

Eliezer_Yudkowsky made a post a while ago explaining how it's just not practical for an expert to walk a layman through all those steps. And I agree. But they should be *capable* of doing it; that's certainly the ideal, though I've just never seen it.


I don't think that any theories even claim to know how g-loaded the accuracy of general beliefs is, but I think I made it clear from my posts that we don't know how to measure g at the very top in any event. It's obvious in a practical sense that, as I stated, Ed Witten is smarter than Marilyn vos Savant regardless of what his IQ might be, and if you took a short list of standard candidates for "smartest person ever" based on IQ or genius accomplishments you would do worse in terms of finding accurate sets of beliefs than if you took, say, an average elite university natural scientist. Just look at the bios of the Mega Society and the like, or of a list of important mathematicians (Newton, Gödel...).


Jor, laymen are not in fact just as good as experts, as far as Tetlock shows. Bryan Caplan explained that in his review, which you can access here.

Michael Vassar, do you believe the psychometric theories of a single general intelligence g are wrong or only that our metrics for measuring it fall short?


And Robin, I have been pointing out, in response to Rolf's point, that we don't have any measures of domain-general extreme "intelligence" that come even remotely close to corresponding to domain-general extreme accuracy. As for experience and detailed knowledge, their relevance, too, always remains a judgment call.


Hmm, interesting.

What does the 'hedgehog' metaphor have to say about Eliezer's fixed ideas regarding Bayesian methods?


One guy says it all: Phil Tetlock.

Minimally informed people usually have as much expertise as "experts", in domains with large uncertainties.


Sabbagh: What experts have that I don't are knowledge and experience in some specialized area... I now view the judgments of others, however distinguished or expert they are, as no more valid than my own.

I suspect we have another case of underestimating the size of inferential distances. Perhaps Sabbagh thinks that an expert is just like a version of himself with a few more bits of data at hand; whereas in fact experts have much more than that: they have familiarity with long chains of reasoning that lead far away from our common everyday starting-points.

Thus, Sabbagh might believe he has "just enough" expertise because he's under the impression that more specialized training would only move a person at most one or two inferential steps away from where he already is (and thus would hardly be worth the trouble).


"Constant, in poker you know your opponents know their own cards. The fact that you can see your cards and you cannot see their cards is not a good basis for assuming you know more about this round of poker."

Funny you should use poker as an analogy. In poker, since you don't see the other guy's cards, you don't know whether he's bluffing about his hand. So you reasonably distrust the signals he sends out.


I'm surprised at how many people let this point go without comment:

a mixture of friendly posts with a confrontational style of interaction

What a stupid criterion for trollishness. Haven't you ever heard of the loyal opposition? Ignoring all of the other problems with the criterion, that alone is enough to sink it.


Michael, we have been talking about using clues such as experience, detailed knowledge, and intelligence to estimate each person's accuracy. Renaming accuracy as "reliability" doesn't get us very far. And simply deciding that people you agree with are "reliable" and therefore worth listening to isn't very useful either.
