Last week’s Time Magazine article on Evidence-Based Medicine seems to me to damn it with faint praise:
Evidence-based medicine, which uses volumes of studies and show-me skepticism to answer such questions, is now being taught–with varying degrees of success–at every medical school in North America. … Advocates believe that evidence-based medicine can go much further, reducing the reliance on expert opinion and overturning the flawed assumptions and even financial incentives that underlie many decisions. … But is such certainty possible–or even desirable? Medicine, after all, is a personalized service, one built around the uniqueness of each patient and the skilled physician’s ability to design care accordingly. …
Consider the case of Dr. Daniel Merenstein, a family-medicine physician trained in evidence-based practice. In 1999 Merenstein examined a healthy 53-year-old man who showed no signs of prostate cancer. As he had been taught, Merenstein explained … there is little evidence that early detection makes a difference in whether treatment could save your life. As a result, the patient did not get a PSA test. Unfortunately, several years later, the patient was found to have a very aggressive and incurable prostate cancer. He sued Merenstein for not ordering a PSA test, and a jury agreed–despite the lack of evidence that it would have made a difference. Most doctors in the plaintiff’s state, the lawyers showed, would have ignored the debate and simply ordered the test. Although Merenstein was found not liable, the residency program that trained him in evidence-based practice was–to the tune of $1 million.
Even champions of evidence-based practice acknowledge that the approach has limits. … There have never been randomized trials to show that giving electrical shocks to a heart that has stopped beating saves more lives than doing nothing, for example. Similarly, giving antibiotics to treat pneumonia has never been rigorously tested from a scientific point of view. It’s clear to everyone, however, that if you want to survive a bout of bacterial pneumonia, antibiotics are your best bet, and nobody would want to go into cardiac arrest without a crash cart handy. … All patients would probably benefit if their doctors were abreast of the latest data, but none would benefit from being reduced to one of those statistical points.
How long will schools teach evidence-based medicine if they are fined for telling doctors to act differently from other doctors, and if media enthusiasm is this weak? A similarly depressing conclusion is suggested by Alan Gerber and Eric Patashnik’s "Sham Surgery: The Problem of Inadequate Medical Evidence" (in this book), which recounts how surgeons have recently, and successfully, ignored randomized trials showing knee surgery to be useless.
I predict doctors will keep a vague "evidence-based" association to help their "scientific" image, but won’t allow it to much constrain their hunch-based practice. I’d feel a lot better about this if we had clear evidence of the effectiveness of hunches.
"There is no proof beyond reasonable doubt for any approach to treating advanced cancer today. In life or death situations, one must make judgements based upon preponderance of available evidence as opposed to proof beyond reasonable doubt." Greg, you just stated the basis of evidence-based medicine....the "preponderance of available evidence" is exactly correct.
J, what you are describing is similar to what is known in the literature as an "N of 1" clinical trial, run in series. There is nothing wrong with it as a method, except that:

1. As you pointed out, it is tremendously susceptible to bias.

2. There needs to be consistency in determining outcomes and in selecting who gets which treatment (i.e., who would be eligible for either treatment, and what were their baseline characteristics?), and it would be best to determine these independently of the practitioner, again raising the possibility of bias.

3. The results may be reasonably generalizable to the experimenter's practice, but it is not at all clear they would be useful in anyone else's practice without fairly large numbers of patients.

If carotid endarterectomies prevent strokes in high-risk patients in a study done at the Mayo Clinic, does that mean that you should get a carotid endarterectomy at Little Sisters of the Poor Memorial Hospital in North Platte, Idaho? Nope, not quite, because the determination of benefit is based not only on the outcomes of the surgery but on surgical morbidity as well, and it's just possible that surgeons at Mayo, doing 200 procedures a year, have better outcomes than those doing 5 or 10 a year. You can't get this kind of information about an individual practice without large numbers, and that's why it's difficult to rely on individual "clinical experience" and make heads or tails of it. Grouped data is just more reliable. It has its limitations, but it's the single best alternative we have for reaching decisions.
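To put a rough number on that last point, here is a back-of-the-envelope sketch. The 3% complication rate and the volumes of 8 and 200 procedures a year are invented for illustration (as is the helper name `wilson_interval`); the sketch only shows how wide the statistical uncertainty about a single surgeon's complication rate is at small volumes.

```python
# Rough illustration (invented numbers): why 5-10 procedures a year can't tell
# you much about one surgeon's complication rate, while a few hundred can.
import math

def wilson_interval(events, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

true_rate = 0.03  # assumed complication rate, purely for the sketch
for yearly_volume in (8, 200):
    observed = round(true_rate * yearly_volume)  # a "typical" observed count
    lo, hi = wilson_interval(observed, yearly_volume)
    print(f"{yearly_volume:>3} procedures, {observed} complications observed: "
          f"true rate could plausibly be anywhere from {lo:.1%} to {hi:.1%}")
```

At 8 cases a year even a spotless record is still consistent with a complication rate above 30%, while at 200 cases the plausible range narrows to a few percentage points, which is essentially why grouped data beats unaided individual experience here.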
None....not one...of the critics of EBM has provided an alternative way of making clinical decisions.
I'm not a critic of EBM but I'll suggest an alternative.
First, "clinical judgement" in itself is not very useful. MDs can vary treatment based on subliminal cues that only their vast experience provides, but the benefit of the variation may be -- subliminal. And we need ways for experienced physicians to pass on their experience to younger ones or it dies with them. Showing individual victims to students and saying what they'd do isn't enough, the students may pick up on the wrong individual qualities. Etc. It might work by some sort of magic, but there's no particular reason for it to work. I knew a medtech who quit her job and studied chinese medicine, she learned acupuncture and diagnosis by putting a burning incense stick close to people's fingers and seeing which fingers the patient felt the most heat, and so on. She said that chinese herbs were better because they had multiple therapeutic compounds, sometimes hundreds, and when you purify individual compounds and test them you can't possibly know what the interactions will do. I asked her how she knew what the interactions did, and she said that ancient chinese wise men figured it all out and she didn't have to understand it, she just had to learn it. This isn't the kind of medicine I want.
If every individual case is different, how does experience with 500 previous individual cases help you with #501?
Now, here is a way an individual clinician can improve his methods. He starts out with a method he's learned, and one way or another he gets an alternative. (Until you have an alternative you think might be better, the idea doesn't work.) Once you have an alternative, you wait until you get one patient who does worse than you find acceptable with the standard approach. Then you switch to the new method. You continue using the new method until you get one patient who does worse than you'd expect from the old method. Then you switch back. You keep doing this, keeping records, and if you notice you're using one method considerably more than the other, that's the one to go with. Use it until you get another alternative to try.
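To make that rule concrete, here is a toy simulation of it. Everything numeric (the success rates, the patient count, the seed) and the function name `run_practice` are invented for the sketch; it only shows the bookkeeping of switching after a bad outcome and counting which method ends up used more.

```python
# Toy simulation of the switch-on-a-bad-outcome rule described above.
import random

def run_practice(p_old, p_new, n_patients, seed=None):
    """One clinician applying the rule: start with the old method, and after
    any patient who does worse than acceptable (modelled crudely as a
    treatment failure), switch to the other method. Return usage counts."""
    rng = random.Random(seed)
    success_rate = {"old": p_old, "new": p_new}
    current = "old"
    uses = {"old": 0, "new": 0}
    for _ in range(n_patients):
        uses[current] += 1
        if rng.random() >= success_rate[current]:   # a bad outcome
            current = "new" if current == "old" else "old"
    return uses

# Invented numbers: the new method succeeds 85% of the time vs 70% for the old.
print(run_practice(p_old=0.70, p_new=0.85, n_patients=300, seed=1))
# The method used on more patients is the one the clinician keeps.
```

This is essentially what trial designers call a "play-the-winner" rule: over enough patients, the treatment with the lower failure rate tends to accumulate more use.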
There's lots of room for bias here, but the better you avoid the biases, the more likely you are to end up with improved methods. When it's a big improvement you find out pretty quickly. When it's a small improvement you might miss it, and that's a small loss. So, when you're convinced the new way is better, you tell your fellow MDs about it. Some of them try it, and if most of them also say it's better, the word spreads. If there's a lot of disagreement about whether it's better, then it likely isn't a whole lot better, and people might as well keep trying other alternatives, looking for the big improvement.
Each physician still has to use his judgement about individual patients, to decide how much their special circumstances would improve their response to treatment. How well you do that determines part of your bias: you might switch treatments when you shouldn't, or vice versa. There's no alternative to making that judgement.
This way a bunch of individual MDs working with vague and erratic cooperation could tend to get the same results as careful clinical trials.
I don't know to what extent MDs actually do this, or how bad the flaws are if they do. But it's *possible* for them to improve quite well this way. Collectively they can try out many alternatives at once, quickly discarding those that are clearly worse and homing in on the ones that are much better.
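As a quick check on that collective claim, again with invented numbers: run the same switch-on-failure rule for many independent simulated clinicians and count how many end up favouring the genuinely better treatment, once for a big true difference and once for a small one. The function name `prefers_new` and all the rates and counts are assumptions made for the sketch.

```python
# Toy check of the "collective" claim: many clinicians independently applying
# the switch rule to their own patients, then voting with their usage counts.
import random

def prefers_new(p_old, p_new, n_patients, rng):
    """One clinician's switch-on-failure rule; True if 'new' ends up used more."""
    current, uses = "old", {"old": 0, "new": 0}
    for _ in range(n_patients):
        uses[current] += 1
        p_success = p_old if current == "old" else p_new
        if rng.random() >= p_success:                # a bad outcome
            current = "new" if current == "old" else "old"
    return uses["new"] > uses["old"]

rng = random.Random(0)
for p_new, label in ((0.90, "big improvement"), (0.72, "small improvement")):
    votes = sum(prefers_new(0.70, p_new, n_patients=100, rng=rng)
                for _ in range(500))
    print(f"{label}: {votes} of 500 simulated clinicians end up favouring 'new'")
```

In runs like this, a big true improvement wins over nearly every simulated clinician, while a small one comes out close to a coin flip, which matches the earlier point that small improvements are exactly what this informal process is liable to miss.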
If they're doing this intuitively, it makes sense to me that they should formalise the process and teach it coherently. Or, if they're already teaching it, then it would make sense for them to explain what they're doing when challenged, rather than pointing at individual differences to argue that the scientific method doesn't work.