

It is tempting to react to unscientific methods of medical practice by rejecting any treatment that isn't supported by rigorous scientific evidence. Here's a parody that demonstrates the pitfalls of such naive implementations of evidence-based medicine:
Smith GCS, Pell JP. (2003). Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ, 327(7429), 1459-1461.
From the paper:
Results We were unable to identify any randomised controlled trials of parachute intervention.
Conclusions As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
There are some interesting comments on the paper here and here.
Randomised Controlled Trials of Parachutes
Richard,

Naive evidence-based medicine does show signs of overemphasizing statistical knowledge, partly in overreaction to the use of poor causal models. Rigorously acquired statistical knowledge is generally better than causal models which exist only in the mind of a doctor who has little incentive to avoid biases such as overconfidence. But "generally better" doesn't mean "always better", and the manner in which causal models are chosen can be improved.

Our knowledge about the effects of hitting the ground at a particular velocity hasn't been rigorously tested. But for almost all medical treatments, the effects are much weaker, and the weaker an effect is, the more rigorously we need to examine the evidence. I doubt that your causal model of how high-velocity impacts cause injury is particularly well thought out. I think your confidence in your causal model is based largely on the absence of people who have any doubts about the predictions you have made so far using that causal model.
I doubt that it's important to read any particular book before reading Causality, but it does require a good deal of comfort with basic statistical theory and with the way math is taught in college-level textbooks.
Judea Pearl makes the distinction between statistical knowledge and causal knowledge and argues that causal knowledge is much more useful. In fact, the main purpose of statistical knowledge in Pearl's way of looking at the world is to help a person or an intelligent agent acquire causal knowledge.
The evidence-based-medicine movement shows signs of overemphasizing statistical knowledge.
The parachute example is a good illustration of the superiority of causal knowledge because the reader knows that falling out of an airplane causes injury because of the velocity with which the person hits the ground. In the vocabulary developed by Pearl and his colleagues, the velocity at which the person hits the ground screens off the effect that falling out of the airplane has on the injury.
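Screening off can be made concrete with a toy calculation on the causal chain fall → velocity → injury. The sketch below is not from Pearl's book; the probabilities are made-up illustrative numbers, and the point is only that once velocity is fixed, learning whether the person fell changes nothing about the probability of injury:

```python
# Toy causal chain: Fall -> Velocity -> Injury.
# All numbers are invented for illustration, not empirical data.

p_fall = 0.5
p_v_given_f = {True: 0.99, False: 0.01}   # P(high velocity | fall)
p_i_given_v = {True: 0.95, False: 0.001}  # P(injury | high velocity)

def p_joint(f, v, i):
    """P(f, v, i) = P(f) * P(v | f) * P(i | v), the chain factorization."""
    pf = p_fall if f else 1 - p_fall
    pv = p_v_given_f[f] if v else 1 - p_v_given_f[f]
    pi = p_i_given_v[v] if i else 1 - p_i_given_v[v]
    return pf * pv * pi

def p_injury_given(v, f=None):
    """P(injury=True | velocity=v), optionally also conditioning on fall=f."""
    falls = [f] if f is not None else [True, False]
    num = sum(p_joint(f2, v, True) for f2 in falls)
    den = sum(p_joint(f2, v, i) for f2 in falls for i in (True, False))
    return num / den

# Velocity screens off fall: conditioned on velocity, the probability of
# injury is the same whether or not we also know the person fell.
for v in (True, False):
    assert abs(p_injury_given(v, f=True) - p_injury_given(v, f=False)) < 1e-12
    assert abs(p_injury_given(v, f=True) - p_injury_given(v)) < 1e-12
```

This is exactly the conditional-independence statement P(injury | velocity, fall) = P(injury | velocity) that the "screening off" vocabulary names.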
Eliezer has written that the reader should read Pearl's Probabilistic Reasoning in Intelligent Systems before attempting Pearl's Causality book, and has advised students of AI to read Tom Mitchell's Machine Learning before reading Probabilistic Reasoning in Intelligent Systems.