One of the most well-worn examples in introductions to Bayesian reasoning is testing for rare diseases: if the prior probability that a patient has a disease is sufficiently low, the probability that the patient has the disease conditional on a positive diagnostic test result may also be low, even for very accurate tests. One might hope that every epidemiologist would be familiar with this textbook problem, but evidently that is not always the case.
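The arithmetic behind the textbook problem can be sketched directly with Bayes' theorem. The numbers below are illustrative assumptions, not figures from any particular test: a disease with a 1-in-1,000 prior, screened with 99% sensitivity and 99% specificity.

```python
def posterior_given_positive(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem.

    sensitivity  = P(positive | disease)
    specificity  = P(negative | no disease)
    """
    # Total probability of a positive result: true positives + false positives.
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Hypothetical rare disease: prior 0.1%, test 99% sensitive and 99% specific.
p = posterior_given_positive(prior=0.001, sensitivity=0.99, specificity=0.99)
print(round(p, 3))  # ~0.09: most positive results are still false positives
```

Even with a test that is wrong only 1% of the time in each direction, fewer than one in ten patients who test positive actually has the disease, because the false positives from the large healthy population swamp the true positives from the small sick one.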
Steve Sailer claims that he almost died because doctors thought whooping cough was extinct, when they were actually thinking of whooping cranes. He suggests the same problem of similar-sounding words arises with Iran/Iraq and Sunnis/Shiites.
Institutional incentives would certainly help to avoid pseudo-epidemics, both by motivating doctors to expend mental effort searching for reasons not to proceed, and by allocating decision-making power to better Bayesians.
However, the example remains troubling in a world where constructing such incentives is costly, and careful thinking is very difficult to incentivize across all domains. Doctors who cannot generalize from their (mandatory!) study of statistics to an almost identical real-life situation, like the scientist who is a mystic outside the laboratory (http://www.overcomingbias.c...), cannot be fully trusted in any but the most carefully structured and incentivized transactions, which are rare in medicine because of information asymmetries.
Their incentives, including liability, might also be a factor in biasing their conclusions; they may well suffer much more for failing to flag an outbreak than from falsely flagging one.