In January I claimed: An extraordinary claim is usually itself extraordinary evidence … I would be very unlikely to make such claims in situations where I did not have good reasons to think them true. The times to be more skeptical of unlikely claims are when there is a larger than usual chance that someone would make such a claim even if it were not true.
This discussion has many resonances with Eliezer's discussion in the next post of the "decay" of knowledge with transmission from generation to generation. In that case we are more or less forced into a linear chain, with no way (in the case of, e.g., God speaking to Moses) of going back and rechecking the phenomenon. So the decay is structurally inevitable.
Science of course promotes short chains by encouraging replication and black-boxing. Nobody has to "trust" that PCR works -- they get PCR materials and equipment and depend on it every day in their lab. All of molecular biology is a distributed daily replication of PCR. And this pattern holds for essentially all major scientific results, although less dramatically.
At the risk of getting boring, this is another piece of (what should be) a general theory of judgment aggregation. The community around this blog seems to be stepping smartly in that direction.
Dagon, no doubt it is easier for networks to communicate info that can be cheaply and repeatedly generated at will by many dispersed people. Unfortunately the most interesting extraordinary claims are rarely of this sort.
Thinking more about this, it may be worth distinguishing a couple of different types of reporting mechanisms for different types of claims.
I was trying to say above that it's possible to reduce the effective length of the chain, by cutting out middlemen for claims that are repeatable. I'd like to reformulate my thoughts on the matter.
The vast majority of information is not passed in a chain, but in a network. Each node gets signals from many other nodes, and can make further connections on demand (and at some expense). This network carries claims (X is true) and metaclaims (some node claims X to be true).
I can believe network partitioning could cause this to behave similarly to the chain model. This could happen due to secrecy or institutional trust barriers, or, for uninteresting claims, simple friction (it being more expensive to establish new connections than refreshing the claim or metaclaim is worth to a given node).
If a claim is about a repeatable experiment or observation, it gets even easier - there can be many nodes independently asserting the claim, and you only need a sufficiently good path to one of them to benefit from the knowledge.
I think this network WILL be susceptible to some types of noise over some types of claims. Selection biases, for instance, are hard to avoid if you have a cost to evaluating each claim, and so want to keep your number of connections down.
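As a toy illustration of why chain length matters here (my own sketch, not part of the model anyone above has specified): treat the claim as a single bit, and let each link in a chain garble it with some small probability. The receiver's accuracy decays geometrically toward chance as the chain lengthens, which is exactly the pressure that multiple independent paths in a network relieve.

```python
import random

def chain_accuracy(length, p_flip=0.1, trials=20000, seed=0):
    """Estimate how often a one-bit claim survives a chain of noisy links.

    Each link independently flips the bit with probability p_flip
    (a stand-in for all the garbling a relay node can introduce).
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        truth = rng.random() < 0.5
        signal = truth
        for _ in range(length):
            if rng.random() < p_flip:
                signal = not signal
        correct += (signal == truth)
    return correct / trials

# Analytically, accuracy = (1 + (1 - 2*p_flip)**length) / 2:
# a short chain preserves the claim, a long one drifts toward a coin flip.
```

With p_flip = 0.1, one link gives roughly 90% accuracy while twenty links give barely better than 50% - so a node that can reach even one short path to an original observer does far better than a node stuck at the end of a long chain.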
I think that a variety of other factors are going to come into play that are not definitively articulated in your setup. Saying people are "close to rational" does not cover all the ground.
Thus, it depends on what kind of bureaucratic (or hierarchical or chain) setup they are in. Do they get punished for reporting things that seem too outlandish? Do they get rewarded for saving everybody from something unlikely but very dangerous? These alternatives are just a few of the extra elements that can come in to muddy the picture and either muffle or magnify the signal coming from an initially extraordinary report.
Dagon, the newspaper you read today has noise in its signal; for what fraction of the articles you read will you "refresh the claim by confirming it" yourself?
Fraction of the articles I read? Close to 0. Fraction of the articles that make extraordinary claims and interest me enough to pass on? Maybe 60%, scaling inversely with ordinariness. We're only talking about extraordinary claims in which I participate in the reporting chain, right?
At the least, I tend to note the references of the story, determine what parallel sources may be available, and make it possible for links upstream of me to eliminate my added noise by seeking out the sources I used.
Barkley, it would be interesting to see what it would take for a model to predict magnification of a report. My model has everyone being close to rational.
Apologies. Focused on the part about people adding 10% at each stage. We certainly see that also, where an initial report gets magnified because it is dramatic, and then becomes more so with the retelling through the chains.
Hard to know which effect will dominate: the noisy swallowing, or the magnification.
Barkley, in my model the claim is downgraded and discounted as it moves through the chain - that is exactly the sense in which I meant it is swallowed.
OTOH, if a claim is really extraordinary, it may be disbelieved and get discounted as it moves through the chain. In that sense, the chain might well "swallow" extraordinary evidence in the sense of making it disappear, rather than magnifying it, as suggested here.
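That swallowing can be put in Bayesian terms with a toy calculation of my own (assuming each link in the chain garbles a one-bit report with some probability p): the likelihood ratio a distant hearer can extract from the report shrinks toward 1 as the chain lengthens, so the posterior gets discounted back toward the prior.

```python
def likelihood_ratio(length, p_flip=0.1):
    """Evidential weight of a positive report after a chain of noisy links.

    If each link flips the bit with probability p_flip, then
      P(report=1 | claim true)  = (1 + (1 - 2p)^n) / 2
      P(report=1 | claim false) = (1 - (1 - 2p)^n) / 2
    and their ratio is all the evidence the report carries.
    """
    r = (1 - 2 * p_flip) ** length
    return (1 + r) / (1 - r)

def posterior(prior, length, p_flip=0.1):
    """Update a prior probability on hearing the report at a given distance."""
    odds = prior / (1 - prior) * likelihood_ratio(length, p_flip)
    return odds / (1 + odds)
```

Starting from a skeptical prior of 1% with p = 0.1, a firsthand report lifts the posterior to about 8%, while the same report after ten links lifts it only to about 1.2% - the chain has swallowed nearly all of the evidence.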
Eliezer, I actually didn't notice that until someone else pointed it out. Apparently my subconscious has a sense of humor.
I liked the "distorting bias term b s', which reduces honesty."
If people in the reporting chain are aware of the noise added by the chain itself (and I think most are), they can refresh the claim by confirming it themselves.
Unless a single link in the chain is so noisy that it's not worth the cost to the next link to confirm it, or a link greatly underestimates the noise it adds itself, this should allow the claim to travel an arbitrary distance.
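A rough sketch of that refresh mechanism (my own toy model, assuming one-bit flip-noise links and that a confirming node can re-verify the claim against the original repeatable source): as long as some links periodically confirm, accuracy at the end of the chain stays bounded away from chance no matter how long the chain grows.

```python
import random

def accuracy_with_refresh(length, p_flip=0.1, confirm_every=None,
                          trials=20000, seed=1):
    """One-bit claim through a noisy chain, with optional periodic refresh.

    Every confirm_every links, a node pays the cost of re-checking the
    claim against the original source, wiping out accumulated noise.
    With confirm_every=None, nobody ever confirms.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        truth = rng.random() < 0.5
        signal = truth
        for i in range(1, length + 1):
            if rng.random() < p_flip:
                signal = not signal
            if confirm_every is not None and i % confirm_every == 0:
                signal = truth  # refresh: re-verify against the source
        correct += (signal == truth)
    return correct / trials
```

With p = 0.1 over fifty links, an unconfirmed chain is near chance while confirming every third link keeps end-of-chain accuracy high - which is also why the mechanism only works for claims cheap enough to re-verify.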
This seems like a good reason that non-confirmable personal evidence tends not to carry the weight of repeatable observation. It's much harder to correct for errors in the reporting.