Lubos, yes, a statement is not its own evidence, but the fact that a person asserts the statement is itself evidence, even if sometimes weak evidence.

On one hand, it is very inspiring reading. On the other hand, would you agree that all these statements about how much evidence a claim XY requires to make us certain at a level UV can be expressed much more accurately, e.g. in terms of the formulae behind Bayesian reasoning (which I don't particularly like, but OK)?

I didn't quite understand in what sense a statement itself is its own evidence.

A statement about children who died under some unusual circumstances needs some evidence. Of course, the evidence doesn't have to be directly related to the children and a good track record of someone saying true things can be strong enough.

Well, I can tell you a real story of this kind that happened to me, and what it means. One hour before my PhD defense, I woke up in my former office where I had spent the night ;-), took a shower, and found a projector. Half an hour before the defense started, I read an e-mail in Czech. It was 9 a.m., and the e-mail contained the extraordinary statement that one - and 5 minutes later, two - large airplanes had crashed into some rather well-known buildings 50 miles from the place where I defended.

It was extraordinary and a priori unlikely, but I - probably in agreement with your analysis - immediately knew that the message was almost certainly true. This belief was still based on some strong evidence, albeit indirect evidence: the message seemed to be copied from the Czech Press Agency. It seemed strange that the author would fake such a serious message, because this exact kind of grim fake message wasn't usual for him. And it seemed even more unlikely that the Czech Press Agency would create such a silly black joke. So I immediately decided that it was almost certainly true. And of course, it was.

On the other hand, on days other than 9/11/2001 I received a lot of similar extraordinary messages from other sources, which I didn't believe, simply because the messages were extraordinary while the evidence (of the phenomena themselves, or of the integrity of the messenger) was not. And I was right.

Well, I think we've located the disagreement, at least.

Of course (3) has to be tiny *enough* to compensate for a tiny (1). So saying that (3) is "too large" and (1) is "too small" is equivalent.

However, if we consider the absolute value, then I would hold that "Extraordinary claims require extraordinary evidence" is reflective of an unusually low (1), not an unusually high (3).

When we find that studies funded by industry sources tend to find conclusions which industry would prefer to be true (negative findings of detrimental effects, positive findings of beneficial effects), we say, "He who pays the piper calls the tune."

When someone publishes a paper claiming that human beings can predict an LED driven by thermal noise, we say, "Extraordinary claims require extraordinary evidence."

Eliezer, my claim is instead that appropriate invocations of the proverb occur primarily when your (3) is unusually high. (1) and (3) are correlated, to be sure, but it is (3) that drives the problem.

Robin, there are three factors contributing to the posterior probability:

1) The prior probability of the hypothesis.
2) The likelihood that someone would make the claim, if the hypothesis were true.
3) The likelihood that someone would make the claim, if the hypothesis were false.

By thinking about whether or not someone seems especially likely to lie or self-deceive in a particular case, you're attending to (3), which is naturally important. But one must also attend to (1), and it is this, more than (3), that leads us to reject e.g. claims of low-level ESP detectable only by statistical means.

In particular, the proverb "Extraordinary claims require extraordinary evidence" is meant to be invoked when (1) is unusually low, not when (3) is unusually high.
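
To see how these three factors interact, here is a minimal sketch in Python (the priors and witness reliabilities below are invented for illustration, not taken from anyone's comment):

```python
def posterior(prior, p_claim_if_true, p_claim_if_false):
    """Posterior probability of the hypothesis, given that the claim was made.

    prior            -- factor (1): prior probability of the hypothesis
    p_claim_if_true  -- factor (2): P(claim | hypothesis true)
    p_claim_if_false -- factor (3): P(claim | hypothesis false)
    """
    joint_true = prior * p_claim_if_true
    joint_false = (1 - prior) * p_claim_if_false
    return joint_true / (joint_true + joint_false)

# Even a reliable witness (low factor (3)) cannot rescue a hypothesis
# whose prior (factor (1)) is tiny:
print(posterior(0.01, 0.9, 0.01))   # ordinary claim:      ~0.48
print(posterior(1e-12, 0.9, 0.01))  # extraordinary claim: ~9e-11
```

With the same witness reliability in both cases, only the prior changes, and it alone drags the posterior from near-even odds down to negligible.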

Eliezer, yes, conscious lying is less of a problem than other mental tendencies to make dramatic claims without enough evidential support.

I think my view here can be summed up in one line:

"Extraordinary claims are always extraordinary evidence but sometimes they are not extraordinary enough."

I think people are *less* likely to lie about conservation-violating coffee than about dead relatives - and in fact, I've never even heard of such a lie being told. *Nonetheless*, the prior probability is *so much* lower that the claim is no longer evidence *enough*. As the alleged facts themselves become more extraordinary, the claim may become more extraordinary evidence, but it becomes more extraordinary at a rate much less than the facts themselves.
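
A toy odds calculation (all numbers invented for illustration) captures this: the likelihood ratio supplied by the claim can grow with the claim's extraordinariness, yet still fall far behind the shrinking prior odds:

```python
# posterior odds = prior odds * likelihood ratio supplied by the claim
cases = [
    # (claim, prior odds, likelihood ratio of someone asserting it)
    ("relative died",                1e-4,  1e5),
    ("coffee violated conservation", 1e-20, 1e7),
]
for claim, prior_odds, likelihood_ratio in cases:
    print(f"{claim}: posterior odds ~ {prior_odds * likelihood_ratio:.0e}")
# relative died: posterior odds ~ 1e+01
# coffee violated conservation: posterior odds ~ 1e-13
```

The claim becomes stronger evidence in the second case (1e7 versus 1e5), but the prior falls by sixteen orders of magnitude, so the posterior collapses anyway.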

pdfs2ds:

Let 'I' denote the prior information that I flipped a 2-sided fair coin 100 times, and let 'S' indicate some string of heads and tails. Then p(S | I) = 2^-100. Also, let's say you consider it about 50% likely that I will truthfully and correctly report the string that I saw. We'll let R indicate the event that I come to you and report, 'the coin-flips produced the string S.' Then p(R | SI) = 0.5. So after I make the report R, how certain are you that S was the actual string?

By Bayes' Theorem: p(S | RI) = p(S | I) * p(R | SI) / p(R | I). We've already given values for p(S | I) and p(R | SI), so p(S | RI) = 2^-101 / p(R | I). That thing in the denominator is what many people have been missing in this thread. A maxentropy distribution would give p(R | I) = 2^-100, since there are 2^100 possible reports you could give. Thus p(S | RI) = 2^-101 / 2^-100 = 0.5.

Notice that (as Eli mentioned earlier), this is exactly equal to p(R | SI). If we knew something about the sorts of lies people tell, then these would not be exactly equal. For instance, you may consider it a priori fairly likely that I would claim to have gotten 100 heads, so when I do in fact make this claim, you don't consider it very strong evidence. That's because the denominator would become relatively large.
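
For anyone who wants to check this, the arithmetic can be reproduced with exact rationals (this just restates the comment's own numbers in code):

```python
from fractions import Fraction

p_S_given_I  = Fraction(1, 2**100)  # prior: one specific string out of 2^100
p_R_given_SI = Fraction(1, 2)       # chance of a truthful, correct report
p_R_given_I  = Fraction(1, 2**100)  # maxentropy distribution over possible reports

# Bayes' Theorem: p(S | RI) = p(S | I) * p(R | SI) / p(R | I)
p_S_given_RI = p_S_given_I * p_R_given_SI / p_R_given_I
print(p_S_given_RI)  # 1/2 -- exactly p(R | SI), as noted above
```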

Robin writes: "I would be very unlikely to make such claims in situations where I did not have good reasons to think them true."

While that might be true of the examples you gave, and even of the coin-flipping example introduced later, in most real-life situations where the rule of thumb applies, the "good reasons" are often good only in the view of the person making the claim. The person who claims he was abducted by aliens, or was spoken to by God, often does so for the good reason that it will make him feel better about himself, or attract attention, or for any number of other reasons which are only good from his point of view. Some of these reasons may even be held unconsciously.

So, in general, unless you know the "good reasons" you can't assume they carry any weight whatsoever.

Doug, sure, the evidence would raise the probability of that particular coin-flip sequence higher than it would otherwise be. But would it raise it from 2^-100 to .5? Or to 2^-80?

Barkley, sure, whether I was grinning or seriously upset would be relevant information for whether you should believe a claim of mine. Bayesian analysis has little problem taking such things into account.

Robin,

Upon cogitating upon this further, it strikes me that while in principle Bayesian analysis can be applied here, there will be several further stages or modifiers that will affect the priors, many of which would fall into the context of "framing effects" notorious in behavioral econ and psych, some of which will boil down to details of the presentation of the extraordinary claim.

So, once in a while I show up at a GMU seminar. If you were to walk in and, with your usual pixieish grin, declare that you had just seen a black swan walking freely about outside, I might lower the prior, given your history of prankish delight in posing philosophical conundra and the notoriously widespread use of the existence or nonexistence of black swans in this or that location as a problem among philosophers.

However, if you were to walk in with blood streaming down your face and a very serious and upset look on your face and declared that you had just been attacked by a black swan that had been walking about on campus (perhaps after annoying it with asking it inane philosophical questions about its presence in front of you on the GMU campus), I might raise the prior, although I would be aware that you might be fully capable of really faking people out for something that you could post on this notorious blog by faking the blood and putting on an act, and so forth. No end to this one, I guess, maybe even an infinite regress into absurdity...

The way I see it, we can justifiably assume that the coin flip sequence is accurate to the extent that we can accurately report coin flip sequences. There would be some errors, but we don't know where they are; for any given coin flip in the sequence, the reported outcome is still far more likely to be correct than incorrect, so we would be justified in accepting the reported sequence as the best approximation to the real sequence that we can find.
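
A toy calculation makes this concrete; the 1% per-flip misreporting rate below is an invented assumption, not a figure from the thread:

```python
# Assume each of 100 flips is reported correctly with probability 0.99,
# independently. The full reported sequence is then probably wrong somewhere,
# yet each individual reported flip is still 99% likely to be right, so the
# report remains the single most probable candidate for the true sequence.
per_flip_correct = 0.99
n_flips = 100
print(per_flip_correct ** n_flips)  # ~0.366: P(report is exactly right)
```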

Robin, I think you're misunderstanding Eliezer.

"people are particularly likely to falsely claim violations of universal generalizations, compared to other false things they are likely to say."

This would imply that we should have higher prior probabilities of people lying about some things rather than other things. But Eliezer was saying that our prior probabilities in the things *themselves* should be different. Our belief in Newtonian mechanics should be so much stronger than our belief in a person's likely whereabouts or the conditions of a friend's children's death that verbal evidence (given the same strength in both cases) doesn't raise our probability estimation of the former enough to where it's even plausible, let alone probable.

I confess that I don't understand how this addresses the coin scenario. Given a coin-flip sequence long enough that the probability of a given sequence being true is roughly the same as the probability of Newtonian mechanics being wrong, how is it that verbal evidence is stronger in the one case than the other? I know you've tried to address this Eliezer, but I still don't get it.

James, now I understand. If they just claimed "here is evidence, take it into account" there would be nothing to be skeptical about. But if they claimed "based on all the evidence so far, this weird effect looks real" then to treat that skeptically you'd have to believe people in that situation tend to be biased to claim more than their evidence justifies.

Doug, yes, the key problem is that on certain topics people too easily believe strange things. The more clearly we could identify such topics, the better we could adjust the evidence level we require to the topic.

It's not so much that people who make "extraordinary" claims are lying as that they often believe things that are, well, wrong. Perhaps a person visits a "psychic medium" who performs a cold reading, and then tells you that his dead mother told him to follow his dreams. I simply conclude that the person was fooled by the "psychic medium" and was not deliberately being dishonest. It's not that hard to fool people, since, as you said, when someone says something happened, they usually are saying what they think actually did happen.

Example:

A: "I was abducted by aliens last night!"B: "No, you weren't. You were dreaming and your brain's mechanism for telling the difference between dreaming and waking failed."A: "But I saw them!"B: "Yes, you did, but what you saw wasn't real. If you read the literature on sleep paralysis and hallucinations, you'd agree with me."A: "What do scientists know, anyway? What happened to me was real!"B: "I could continue to argue with you, but it is clear that further discussion will be fruitless because we assign different weight to different kinds of evidence. Your method of weighing evidence is wrong, but I do not believe that I can convince you of that. Goodbye."A: "You're just one of those stupid scientists who refuses to have an open mind. Goodbye."

Sadly, this kind of conversation probably happens all the time...
