Evidence-Based Medicine Backlash

Last week’s Time Magazine article on Evidence-Based Medicine seems to me to damn it with faint praise:

Evidence-based medicine, which uses volumes of studies and show-me skepticism to answer such questions, is now being taught–with varying degrees of success–at every medical school in North America. … Advocates believe that evidence-based medicine can go much further, reducing the reliance on expert opinion and overturning the flawed assumptions and even financial incentives that underlie many decisions. … But is such certainty possible–or even desirable? Medicine, after all, is a personalized service, one built around the uniqueness of each patient and the skilled physician’s ability to design care accordingly. …

Consider the case of Dr. Daniel Merenstein, a family-medicine physician trained in evidence-based practice. In 1999 Merenstein examined a healthy 53-year-old man who showed no signs of prostate cancer. As he had been taught, Merenstein explained … there is little evidence that early detection makes a difference in whether treatment could save your life. As a result, the patient did not get a PSA test. Unfortunately, several years later, the patient was found to have a very aggressive and incurable prostate cancer. He sued Merenstein for not ordering a PSA test, and a jury agreed–despite the lack of evidence that it would have made a difference. Most doctors in the plaintiff’s state, the lawyers showed, would have ignored the debate and simply ordered the test. Although Merenstein was found not liable, the residency program that trained him in evidence-based practice was–to the tune of $1 million.


Even champions of evidence-based practice acknowledge that the approach has limits. … There have never been randomized trials to show that giving electrical shocks to a heart that has stopped beating saves more lives than doing nothing, for example. Similarly, giving antibiotics to treat pneumonia has never been rigorously tested from a scientific point of view. It’s clear to everyone, however, that if you want to survive a bout of bacterial pneumonia, antibiotics are your best bet, and nobody would want to go into cardiac arrest without a crash cart handy. … All patients would probably benefit if their doctors were abreast of the latest data, but none would benefit from being reduced to one of those statistical points.

How long will schools teach evidence-based medicine if they are fined for telling doctors to act differently from other doctors, and if media enthusiasm is this weak? A similarly depressing conclusion is suggested by Alan Gerber and Eric Patashnik’s "Sham Surgery: The Problem of Inadequate Medical Evidence" (in this book), which recounts how surgeons recently and successfully ignored randomized trials showing a widely used knee surgery to be useless.

I predict doctors will keep a vague "evidence-based" association to help their "scientific" image, but won’t allow it to much constrain their hunch-based practice. I’d feel a lot better about this if we had clear evidence of the effectiveness of hunches.

  • http://www.hedweb.com/bgcharlton Bruce G Charlton

    I’m the author of numerous critiques of self-styled Evidence Based Medicine – the most notorious of which is probably “The rise and fall of EBM” -

    http://qjmed.oxfordjournals.org/cgi/reprint/91/5/371.pdf

    In a nutshell, Evidence Based Medicine is an un-tested strategy for practicing medicine, and it is based on a number of false assumptions. One is the ‘ecological fallacy’: the assumption that the individuals in a group study reflect the averages of their groups. A second is that randomized trials provide evidence that is applicable to individual subjects – this is only very rarely true:

    http://trialsjournal.com/content/2/1/2

    In sum, EBM is itself one of the most pervasive and dangerous sources of bias in medicine today.
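To make the ecological-fallacy worry concrete, here is a minimal simulation sketch (all numbers are invented, not drawn from any real trial): a treatment can show a clearly positive average effect in a randomized comparison while still harming a sizable non-responder subgroup, so the group average tells an individual patient little on its own.

```python
import random

random.seed(0)

# Hypothetical population: 80% "responders" gain from treatment,
# 20% "non-responders" are harmed by it. Effect sizes are invented.
def outcome(treated, responder):
    base = random.gauss(0, 1)  # baseline health outcome, noisy
    if treated:
        return base + (2.0 if responder else -1.5)
    return base

patients = [(random.random() < 0.8) for _ in range(10_000)]  # True = responder
treated = [outcome(True, r) for r in patients]
untreated = [outcome(False, r) for r in patients]

# Average effect over the whole group: looks clearly beneficial.
avg_gain = sum(treated) / len(treated) - sum(untreated) / len(untreated)

# Effect within the non-responder subgroup: clearly harmful.
n_nonresp = patients.count(False)
nonresp_gain = (
    sum(t for t, r in zip(treated, patients) if not r) / n_nonresp
    - sum(u for u, r in zip(untreated, patients) if not r) / n_nonresp
)

print(f"average treatment effect: {avg_gain:+.2f}")        # positive
print(f"effect among non-responders: {nonresp_gain:+.2f}")  # negative
```

Whether this gap matters in practice depends, of course, on whether the subgroup can be identified in advance, which is exactly what group-level trial reports often cannot say.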

  • http://homepage.mac.com/redbird/ Gordon Worley

    Aside from the obvious rhetorical weakness of this quote, I think it’s going to take a major change in the educational system for things like evidence-based practice to become popular. Humans are statistically naive, and no real attempt is made to correct this until the college level, where some students are required to take a real statistics course (and even then they may not internalize the practice). Even if statistical calculations are deemed too difficult for elementary students, there’s no reason why we can’t teach an intuitive understanding of statistics from a young age. It may be a little traumatic for a child to learn that they’re not as unique as parents, teachers, and television shows tell them, but in the long run I think they’ll benefit and, if you’re concerned about self-esteem, gain a more accurate picture of the few ways in which they really are unique.

  • Carl Shulman

    Here’s a gem from Fortune magazine:
    http://biz.yahoo.com/hftn/070207/020507_8400262.html?.v=1
    “Barry remains dubious. Zillow has Zestimates for 99 percent of all Phoenix homes and claims that 72 percent are accurate to within 10 percent. But Barry tells of a family who recently came to him believing their home was worth a lot more than it was. Zillow told them it would sell for $505,000; Barry and another agent each independently put the figure at $440,000.”

    Note the superfluous ‘but.’ Error calibration is simply not rewarded. http://www.overcomingbias.com/2006/12/bosses_prefer_o.html
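For what it’s worth, the anecdote doesn’t even contradict Zillow’s stated calibration; a quick check of the figures quoted above:

```python
# Figures taken from the Fortune quote above.
zestimate = 505_000
agent_estimate = 440_000  # treating the agents' figure as ground truth

# Relative error of the Zestimate against the agents' estimate.
rel_error = abs(zestimate - agent_estimate) / agent_estimate
print(f"relative error: {rel_error:.1%}")  # 14.8%, outside the 10% band

# Zillow claims only 72% of Zestimates fall within 10%, so a roughly
# one-in-four miss like this one is exactly what good calibration predicts.
```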

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Bruce, is ordinary hunch-based medicine untested as well? Why doesn’t anyone test it vs. evidence-based medicine? This topic would be worth its own post.

    Gordon, what if students don’t learn statistics because they don’t want to learn they are not so unique?

  • http://www.hedweb.com/bgcharlton Bruce G Charlton

    Robin – Ordinary medicine cannot legitimately be described as ‘hunch-based medicine’. It involves various types of reasoning, such as physiological and pharmacological reasoning (from basic biological science), pathology, and a kind of informal ‘Bayesian’ reasoning, pattern recognition and lots of other stuff.

    By contrast EBM is (in its original form) almost exclusively based on epidemiology – indeed, EBM was originally called Clinical Epidemiology until it was re-launched (with a mass of dishonest spin about the previous kind of medicine being nothing more than hunches) in about 1994.

    That was the time when EBM-ers should have done some formal comparisons of their new idea with existing practice. They never did – preferring to rely on cult-building techniques (including misrepresenting opponents). http://www.hedweb.com/bgcharlton/journalism/ebm

    EBM has very little to do with science. If you ask any real medical scientists (who have made a substantive contribution to medicine) they will almost all disdain EBM (at least in private).

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Bruce, whatever name we give it, my question is whether ordinary medicine better avoids the criticism you level at evidence based medicine, of being “an un-tested strategy for practicing medicine.”

  • JMG3Y

    Robin, I think you hit one of the keys. Track and publish the outcomes for individual physicians and hospitals. Because of the potential for selection and measurement bias to affect such assessments, doing this well is not a simple matter. Outcomes will always look better if patients are screened by age, concurrent risk factors, and SES, if those at higher risk of adverse outcomes are not admitted, and if follow-up is poor.

    Then we’ll see who is the best versus who is blowing smoke. Sans this outcome evidence, I’ll take EBM-based practice every time for my loved ones and myself when it really matters. I’ve got substantial credentials and experience on both ends of the stethoscope and as both RBCT designer and, less happily, enrollee. Yes, some of this backlash is because some of the initial EBM proponents were more than a little over the edge in their arrogance.

    IMO, a huge problem of medicine is the failure of the majority of physicians to understand the fundamental philosophy of science and the history of why it developed the way it has. Despite degrees with titles including the word science, many fail to understand that the current methods of science are the current best ways that we humans with our flawed brains have of understanding nature and how it works, including understanding the workings of our bodies in useful ways. This failure of understanding is the result of weaknesses in the training of physicians, both in their pre-med science classes and in their professional curriculum. The existence of most alternative/complementary medicine is a consequence of this failure as is much dogma in conventional medicine, which also has its warts. Another emerging problem is calling something “evidence-based” when it isn’t.

    For another perspective, take a look at “Friday Woo” on the academic surgeon’s blog “Respectful Insolence” over in ScienceBlogs that I mentioned in my comment to the post below that included this Time article.

  • Bruce G Charlton

    Robin -

    You are framing the question wrongly.

    EBM is a meta-theory of medical practice, derived from one of its sub-disciplines, epidemiology, which has expanded to claim that it is the _only_ legitimate form of practice.

    By analogy – suppose that macro-economic modelling suddenly claimed to be the only valid method of practicing economics, and that every other economic approach was based on hunches insofar as it did not conform to the evidential and reasoning practices of macro-economics.

    That is what EBM is doing – it is one type of valid medical science claiming to be the _only_ valid type of medical science.

    It is not intrinsically impossible that all of medical science can be jettisoned from clinical decision-making except epidemiology, but it is extremely implausible. It is up to the proponents of a new ultra-simplified version of medicine (or economics) to demonstrate that changing the basic paradigm (meta-theory) of the subject will offer some advantage (other than mere simplicity).

  • zzz

    Bruce G. Charlton:

    I suppose you meant “micro-economics” rather than “macro-economics”. This would be a fair description of current economic practice.

  • Jack

    Bruce– Great posts, but, bottom line: you argue that conclusions from aggregate studies of medical treatments (whatever one wants to call them) are not valid. Why not? Why would medicine be unique in that sense? You mention the ecological fallacy, but surely a meta-study need not focus on the mean; it can instead describe a distribution of results, with a loss function, say, to weight risk aversion, or something like that.

  • http://www.hedweb.com/bgcharlton Bruce G Charlton

    Jack – Sorry, but I can’t answer this question in a comment. This – http://trialsjournal.com/content/2/1/2 – including references, is about as brief as I can manage.

  • http://homepage.mac.com/redbird/ Gordon Worley

    Robin: I don’t think that students’ not wanting to learn statistics, so that they can continue to think they are unique, is an issue. If you told someone who believed they were unique that learning statistics would make them see they were not unique, they wouldn’t believe you, precisely because they believe they are unique, and they’d learn it anyway. But putting that rhetorical twist aside, I think the issue is not convincing people to learn it, since in general people aren’t given a choice in what they learn, especially in primary education, where I believe statistical training could begin. The greater problem will be teaching them so that they believe statistics is true and don’t act and think contrary to it. We already have enough PhDs who can write fine experimental research papers but still believe they need their doctor to run the test, even if it has less chance of revealing information than of hurting them.

  • Douglas Knight

    GW: you are making a testable assertion, opposite to one many people on this and other threads have made. Do statistics classes reduce bias?

  • Rick Davidson

    As a practicing physician trained in epidemiology, I have taught EBM in medical schools for over 25 years. I was involved in the authorship of one of the very first textbooks of clinical epidemiology (and yes, I absolutely agree that EBM developed from clinical epidemiology, and I consider that a good thing). I also teach clinical decision-making and problem solving and chair the Curriculum Committee at a major medical school. I was present at major conferences in the late ’70s and ’80s when the concept took form, and I’ve written critical reviews for a number of evidence-based journals. Criticism of EBM is something I’ve been dealing with for longer than most, and is largely based on a misunderstanding of what it represents. Regardless of the number of publications or air of authority of those who criticize it, there are two clear responses to those criticisms:

    1. What precisely is your alternative?
    2. Why do you not pay attention to the real definition?

    The first requires no added comment from me. As far as the second, EBM is based on the concept of best AVAILABLE evidence. It is not based solely on data from clinical trials; it does not automatically dismiss expert opinion if adequate evidence from grouped data is not available. It insists on critical determination of the validity of the application of the evidence to the individual patient.

    Critics don’t seem to like two things about EBM….first, it implies that practicing in other ways is not based on evidence, which belittles the interpretive skills of the clinician; and second, it relies to some extent on the application of grouped data to the individual, which seems to be the major bee in someone’s bonnet on this blog. A patient does not have a 50% chance of having colon cancer; they either do have it or they don’t. This uncertainty makes physicians uncomfortable, because unlike epidemiologists, they have primary responsibility for their patients, while epidemiologists report about large populations to whom they have no responsibility.

    Can grouped data be applied to patient care? Of course it can. It is every day; it simply requires attention to the application of the results to the individual patient or practice. Do the critics suggest that well designed studies with valid conclusions be ignored? Thus, I return to question 2 above. If you’re not going to use grouped data and try and apply it to your patient, then exactly what is your paradigm for practice?

  • Rick Davidson

    By the way, I can’t ignore the comment that “real medical scientists….have disdain for EBM.” If by “real medical scientists” you are talking about PhD bench researchers, I have no doubt you’re correct. And those are exactly the people I would rather NOT have making clinical decisions about my family. Why? One only has to look at where medical research is heading….away from the bench, and toward “translational research”. Why? Because great discoveries at the bench mean nothing unless they translate into improvements in patient outcomes, and that is beyond the realm of the bench scientist, and quite irritating to some that I know. However, it is clearly within the realm of the person with clinical epidemiology skills. The battlefield is littered with the remains of “great discoveries” on the bench that didn’t turn out so well in people. The “great medical scientists” I know, who also have responsibility for patients, embrace EBM as the best way to select treatments that are most likely to improve the health of their patients.

  • Gregory D. Pawelski

    Strong Evidence From Clinical Trials?

    In life or death situations, one must make judgements based upon preponderance of available evidence as opposed to proof beyond reasonable doubt. It seems obvious that “evidence-based medicine” proponents may fail to apply this common sense standard on a consistent basis.

    To cite an example in cancer medicine, a fraternal medical society establishes a policy recommending against the use of a diagnostic test as an aid to drug selection in cancer chemotherapy, based on reviews which specifically excluded from consideration studies reporting the predictive accuracy of the test, and including only studies relating to the efficacy of the test in improving treatment outcomes.

    This is especially curious, as predictive accuracy is the chief criterion traditionally used to validate all diagnostic laboratory tests currently in use in cancer medicine. Were proof of efficacy (particularly in prospective, randomized trials) to be the standard for evaluating laboratory tests, then clinical oncologists would have to abandon all the laboratory tests currently used in the management of cancer patients, as no tests would pass this standard.
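Predictive accuracy of the kind described above is the standard Bayes-rule calculation behind diagnostic test validation. A toy sketch (the sensitivity, specificity, and prevalence figures are invented for illustration, not taken from any real test):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(condition present | test positive), via Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Invented figures: a test that is 90% sensitive and 90% specific,
# applied where only 10% of patients actually have the condition.
ppv = positive_predictive_value(0.90, 0.90, 0.10)
print(f"PPV: {ppv:.0%}")  # 50%: half of all positive results are false alarms
```

Validating a test this way is a separate question from whether acting on its result improves outcomes, which is precisely the distinction the comment draws.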

    Clinical investigators have too often descended into an exhaustive study of hypotheses which are ultimately of limited importance. Many treatments are of such limited effectiveness that they do not deserve to be protected from the competition of other approaches which are well grounded in peer review science, but which have not yet met the most demanding standards of “evidence based medicine.”

    Evidence-based medicine is a trial-and-error process of clinical trials to see what might “appear” to improve survival. It is the mindset of rewarding academic achievement and publication over all else. There is an aura that organizations, government agencies, scientists, researchers and even practitioners work together, sharing information for the benefit of patients.

    Each group has its own priorities and its own agenda. Moreover, the image of cooperation between these different groups only gives the illusion that reform isn’t needed. The present system exists to serve academic achievement and publication, but not to serve the best interests of people.

    Also, whatever clinical response resulted for the average patient in a randomized trial is no indication of what will happen to an individual at any particular time. The trials try to identify the “best guess” treatment for the average patient. You cannot force notoriously heterogeneous diseases into “one-size-fits-all” treatments.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Gregory, do you think it is fair of me to claim that the RAND experiment was a reasonable test of the average effect of doctors to use their judgment to decide which medical treatments to use when?

  • Gregory D. Pawelski

    I feel that, in light of the precious little guidance that clinical trials offer with respect to best empiric treatment – guidance based on medical journal articles, epidemiology and economics – physicians’ decisions need to be based on personal experience, clinical insights, and medical training.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Gregory, that isn’t even close to answering my question.

  • Rick Davidson

    Sorry….as I said before, EBM is NOT restricted to clinical trials, but to best available evidence. Not sure why this is so hard for critics to grasp. Yes, it’s true that grouped data is problematic when applied to an individual patient, for the reasons I mentioned above. It’s just that using probabilities based on methodologically rigorous, well done studies makes more sense than relying on “personal experience, clinical insights and medical training”. Even the most experienced oncologists do not have direct personal experience with more than several hundred patients with any particular kind of malignancy, as compared to, for example, a meta-analysis of thousands of patients with breast cancer.

    Additionally, EBM does not mandate a particular treatment….that decision is up to the individual clinician and based on his or her knowledge of the individual patient. EBM provides an opportunity to reach a decision based on science, instead of a hunch….but the decision is still that of the individual clinician, who bears the responsibility for the patient.

    You suggest that there isn’t much evidence in many of these situations, especially in relation to oncology. That is true, and always in the situation of a lack of good evidence from grouped data, expert opinion is not excluded….it’s just the weakest form of evidence, according to most authorities….USPSTF, Centre for EBM, etc. If you want someone taking care of you or your family basing their treatment plans on “clinical experience” instead of grouped rigorous data, have at it.

  • Gregory D. Pawelski

    As the Brase Report states, evidence-based medicine (population-based evidence) has become a euphemism for managed care masquerading as science (or profit-maximizing in the guise of science). Control over medical decisions is being shifted from doctors to bureaucrats in big offices. Managed care organizations have used it to solidify their control over medical decisions and the practice of medicine. Instead of explaining their decisions by saying a service is not necessary or not cost-effective, they can say it is not scientifically sound.

    Individual patients are not the focus of evidence-based medicine and its standardized practice guidelines. The guidelines are created by accessing private medical record data, aggregating the data, and synthesizing it into population-based treatment algorithms for all physicians to use on all patients. In other words, bureaucratized medical practice.

    Evidence-based medicine results in overly rigid standards of care, restricting medical practitioners’ professional freedom and judgment. It imposes the personal agendas of those who choose which research to do, pick between the various studies and call the result evidence, and write all the guidelines, as well as the administrative bias of the administrators who interpret those guidelines.

    The guidelines often fail to make explicit how recommendations are devised and they rapidly become outdated. Even the “evidence” is suspect. Researcher bias, disagreement in defining best evidence, incomplete reporting of research results, and conflicting findings are some of the problems with research relied on for determining “best practices” or evidence-based medicine.

    There are gaps and inconsistencies in the medical literature supporting one practice versus another, as well as biases based on the perspective of the authors, who may be specialists, general practitioners, payers, marketers, or public health officials. Evidence-based medicine changes what it considers to be science in order to suit the goals of its proponents.

    Evidence-based medicine is not the objective, purely scientific tool its name suggests. Instead, it is an intrusive encroachment on the patient-doctor relationship and the practice of medicine – an encroachment that policy makers are turning into legal requirements.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Greg, you are just repeating your previous claims. I’ll ask again:

    Do you think it is fair of me to claim that the RAND experiment was a reasonable test of the average effect of doctors to use their judgment to decide which medical treatments to use when?

  • Hopefully Anonymous

    Greg,
    I think you raise potentially valid concerns, but I don’t think we should corrupt the useful term “evidence-based medicine”. It sounds like you are saying that HMOs are falsely portraying cost-cutting techniques as “evidence-based medicine”. I think that’s a better way to put things than “Evidence-based medicine changes what it considers to be science in order to suit the goals of its proponents”, which I think corrupts the important idea of medicine based on evidence, where the best techniques are empirically derived.

  • Gregory D. Pawelski

    If you’re saying that HMO bureaucrats have hijacked evidence-based medicine to suit their own goals, I could agree. However, clinical investigators have too often descended into an exhaustive study of hypotheses which are ultimately of limited importance. Many “cancer” treatments are of such limited effectiveness that they do not deserve to be protected from the competition of other approaches which are well grounded in peer review science, but which have not yet met (or need to) the most demanding standards of evidence-based medicine. I wouldn’t give judgment to the Rand experiment one way or the other. I’m sure you guys are familiar with “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomized controlled trials.” PMID: 15768730

  • Hopefully Anonymous

    Gregory, you write “clinical investigators have too often descended into an exhaustive study of hypotheses which are ultimately of limited importance. Many “cancer” treatments are of such limited effectiveness that they do not deserve to be protected from the competition of other approaches which are well grounded in peer review science, but which have not yet met (or need to) the most demanding standards of evidence-based medicine.”

    What you describe is not optimized evidence based medicine. How does one best determine that a “cancer” treatment is “of such limited effectiveness that they do not deserve to be protected from the competition of other approaches which are well grounded in peer review science, but which have not yet met (or need to) the most demanding standards of evidence-based medicine”? It seems to me that itself is an empirical question (and an important one) best answered through empiricism and logical, critical analysis of the resulting data.

  • anon

    “Do you think it is fair of me to claim that the RAND experiment was a reasonable test of the average effect of doctors to use their judgment to decide which medical treatments to use when?”

    No, I think this is a very unreasonable conclusion. I have been looking at some of the health measures from the study and I am having trouble figuring out how some of them might even be slightly related to most healthcare. For the most part, visits to the doctor aren’t motivated by general health problems. In fact, I think that most visits are motivated by a specific health problem which doctors then attempt to treat. In addition, I would also think that the treatment of most of these health problems wouldn’t have much of an impact on general health measures…. and that’s not because I think doctors are incompetent and poor judges of administering medicine as you seem to suggest.

    Here’s the conclusion I would make… for most of the health measures chosen, free insurance did not have a significant positive impact. However, most treatments administered were not designed to impact these measures.

    Also, much of EBM and of clinical trials is designed to examine the effect of a treatment on the particular measure it was actually designed to affect.

    In addition, you have to be really careful about the conclusions you make from a study, especially when patients play a role in their level of treatment received. Insurance level was randomized, actual medical care received was not, and thus you have to be very careful regarding conclusions made concerning the effect of increased medical care.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Anon, the main outcome measure was a “General Health Index”, which seems quite related to health. You always have to be careful when interpreting studies; I don’t see how this study is different.

  • anon

    Robin,

    I didn’t say that the measure wasn’t related to health.

    The point is that most treatments administered by doctors are not intended to have an effect on your general health index.

    In regards to interpreting studies, I didn’t say this study was different. The implication was that you aren’t being very careful with the conclusions you attempt to make from this study.

  • anon

    Robin,

    Why are your views of medicine so negative that you don’t even allow your mind for two seconds to explore the problems with the conclusions you are trying to make?

  • Hopefully Anonymous

    I read this as an argument against juries as finders of liability. What exactly did this jury make its decision based on?

    “Consider the case of Dr. Daniel Merenstein, a family-medicine physician trained in evidence-based practice. In 1999 Merenstein examined a healthy 53-year-old man who showed no signs of prostate cancer. As he had been taught, Merenstein explained … there is little evidence that early detection makes a difference in whether treatment could save your life. As a result, the patient did not get a PSA test. Unfortunately, several years later, the patient was found to have a very aggressive and incurable prostate cancer. He sued Merenstein for not ordering a PSA test, and a jury agreed–despite the lack of evidence that it would have made a difference. Most doctors in the plaintiff’s state, the lawyers showed, would have ignored the debate and simply ordered the test. Although Merenstein was found not liable, the residency program that trained him in evidence-based practice was–to the tune of $1 million.”

  • Gregory D. Pawelski

    What proof are so-called evidence-based clinical trials to the “individual?” What proof would a new gold standard be to Mary X and Sandy Y? Is Mary X part of the average in the clinical trial? Is Sandy Y? In other words, if a treatment would help Mary X, would it help Sandy Y?

    At present, clinical trials test drugs on general populations and then look for a clinical response and a treatment effect that is not likely to be a chance result. The side effect of this, however, is inflexibility: some patients may unnecessarily be exposed to inferior experimental therapies.

    A problem with the empirical approach is it yields information about how large populations are likely to respond to a treatment. Doctors don’t treat populations, they treat individual patients. Because of this, doctors give treatments knowing full well that only a certain percentage of patients will receive a benefit from any given medicine. The empirical approach doesn’t tell doctors how to personalize their care to individual patients.

    The number of possible treatment options supported by completed randomized clinical trials becomes increasingly vague as a guide for physicians. Even the National Cancer Institute’s December 7, 2006 official cancer information website states that no data support the superiority of any one of more than 20 different regimens in the case of metastatic breast cancer, a disease in which probably more clinical trials have been done than in any other type of cancer.

    More clinical trials have not produced more clear-cut guidance, but more confusion in this situation. It is more difficult to carry out clinical trials in early stage breast cancer, because larger numbers of patients are needed, as well as longer follow-up periods. But it is likely that more trials would lead to the identification of more equivalent chemotherapy choices for the average patient in early stage breast cancer and in virtually all forms of cancer as well.

    So, it would appear that published reports of clinical trials provide precious little in the way of “gold standard” guidance. Almost any combination therapy is acceptable in the treatment of cancer these days. Physicians are confronted on nearly a daily basis by decisions that have not been addressed by randomized clinical trial evaluation.

    My own personal preference for determining which cancer treatments have limited effectiveness and which would be most beneficial for the patient would be Cell Function Analysis. As increasing numbers and types of anti-cancer drugs are developed, oncologists become increasingly likely to misuse them in their practice. There is seldom a “standard” therapy which has been proven to be superior to any other therapy. When all studies are compared by meta-analysis, there is no difference. What may work for one patient may not work for another.

    Cancer chemotherapy could save more lives if pre-testing were incorporated into clinical medicine. The respected cancer journals are publishing articles that identify safer and more effective treatment regimens, yet few community oncologists are incorporating these synergistic methods into their clinical practice. Cancer patients suffer through chemotherapy sessions that do not integrate all possibilities.

  • anon

    “What proof are so-called evidence-based clinical trials to the ‘individual’?”

    “A problem with the empirical approach is that it yields information about how large populations are likely to respond to a treatment. Doctors don’t treat populations; they treat individual patients.”

    Gregory, a statistician might say that if a doctor knows that on average, treatment A is better than treatment B, what is wrong with treating all of his patients with treatment A? It sounds like you are saying that the statistician is answering the wrong question? If that is what you are saying, then in some cases you may have a valid criticism.

    I believe one criticism that you are bringing up is the idea that the treatment may have an effect on some while not on others. The problem with addressing this type of question is that it gets into subgroup analyses… anyone who knows anything about statistics knows the potential problem with subgroup analyses. Subgroups aside, the question you wish to address of treatment decisions for individual patients is even more difficult to address with statistics.
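    The "potential problem with subgroup analyses" alluded to above can be made concrete with a quick multiple-comparisons calculation. A minimal sketch, assuming independent tests and an illustrative significance level of 0.05:

```python
# With no true treatment effect at all, the chance that at least one of
# k independent subgroup tests comes up "significant" at level alpha is
# 1 - (1 - alpha)^k, which grows quickly with the number of subgroups.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:2d} subgroups -> P(at least one spurious finding) = "
          f"{p_any_false_positive:.2f}")
```

    With 20 subgroups the chance of at least one spurious "responder subgroup" is about 64%, which is why unplanned subgroup findings are treated with such suspicion.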

    I don’t know anything about these synergistic methods, but a well-designed, carefully conducted trial should be able to address the effectiveness of the method.

  • Gregory D. Pawelski

    I think the statistician/physician in a clinical trial does ask the wrong questions. Cancer medicine is a personalized service, one built around the uniqueness of each patient and the skilled physician’s ability to design treatment accordingly. Cancer specialists can read the scientific literature and understand the statistics, but they don’t understand how that should influence their treatment of the “individual” in front of them.

    The frequentist approach tends to be rather unforgiving of deviations from the original clinical design as the trial progresses. What this all comes down to is that, as in all real-world situations of reliability, you have to try to pick the course with the least probability of failure, knowing all the time that the estimates of probability themselves have a probability of being wrong (the confidence level).

    The technology behind these synergistic methods has been clinically validated for the selection of optimal chemotherapy regimens for individual patients. Individualized-directed therapy is based on the premise that each patient’s cancer cells are unique and therefore will respond differently to a given treatment. This is in stark contrast to standard or empiric therapy, in which chemotherapy for a specific patient is based on average population studies from prior clinical trials.

  • hchcec

    Pawelski:

    “The technology behind these synergistic methods has been clinically validated for the selection of optimal chemotherapy regimens for individual patients. Individualized-directed therapy is based on the premise that each patient’s cancer cells are unique and therefore will respond differently to a given treatment.”

    This is not true. Pawelski’s particular bias (“the statistician/physician in a clinical trial does ask the wrong questions”) comes from his desire to prove that cancer treatment must be individualized. It’s not like treating a UTI with a floxin.

    Unfortunately for Pawelski, there is no evidence at all that individualized testing of cancer cells via live tumor analysis shows any benefit whatsoever. What’s most interesting is that the developers of these tests agree with me. Mr. Pawelski is the only individual who would have you believe that throwing out clinical trials is required because, as he says, “the so-called respectable Journals (the ones that fail to adhere to guidelines on conflict of interest), won’t publish articles because they have a lock-up on information.” For Pawelski: “few drugs work the way we think and few physicians/scientists take the time to think through what it is they are using them for.”

    Sounds a bit conspiratorial to me (actually more than a bit :-)).

    “The growing number of treatment options supported by completed randomized clinical trials provides increasingly vague guidance for physicians. Even the National Cancer Institute’s December 7, 2006 official cancer information website states that no data support the superiority of any one of more than 20 different regimens in the case of metastatic breast cancer, a disease in which probably more clinical trials have been done than in any other type of cancer.”

    This is not an indictment of the clinical trial at all. In fact, it’s just the opposite. It’s data, good data, and it indicates that metastatic breast cancer is, whether we like it or not, difficult to treat; that some regimens are less toxic than others, so choices may be made based on this good data; and that this area of clinical evaluation remains ongoing.

  • Gregory D. Pawelski

    To say that the developers of these “synergistic methods” agree with Michelle H. (hchcec) has got to be the biggest fallacy ever heard. It is totally wrong to portray her statement as something on which there is unanimity of opinion. In point of fact, in every situation where a neutral panel of adjudicators had the opportunity to hear both sides of the story, the decision came down in favor of coverage of these methods as a reasonable and scientifically supportable medical service.

    The standard of retrospective correlations between treatment outcomes and laboratory results is accepted as sufficient in the case of “all” laboratory tests, yet papers of this nature were excluded from analysis in “closed” oncology organizations’ evaluations of these laboratory tests. There is no proof beyond reasonable doubt for any approach to treating cancer today. There is only the bias of clinical investigators, as a group and as individuals. The FDA regulates devices and not laboratory tests, but it does regulate test kits. And the criterion it uses for each and every test kit it approves is “accuracy,” not “efficacy.” But Michelle, in all her finite wisdom, has not been able to grasp reality.

  • hchcec

    First of all, on the internet, everyone thinks you’re a dog. Not sure why you feel a need to have that “gotcha” moment, but I signed my post “hchcec.” My name is Michele, like Taxol causes brain cancer.

    “in every situation where a neutral panel of adjudicators had the opportunity to hear both sides of the story, the decision came down in favor.”

    Well, of course there’s clear prejudice in this statement. “If they don’t see it my way and approve my chemotherapy cell assays (as yet totally unproven as the developers of these tests know), they are not neutral.” Talk about the need for “overcoming bias.”

    There is absolutely no support for these devices, as they have yet to undergo any controlled testing. Only now have such tests begun. If you do a scientific search, you will find not a single study showing any benefit.

    “There is no proof beyond reasonable doubt for ANY approach to treating cancer today. There is ONLY THE BIAS of clinical investigators as a group and as individuals.”

    This is an absurd statement, as is his following one about accuracy vs. efficacy.

    ——-
    A Hierarchical Model of Efficacy

    Level 1: Technical efficacy

    Level 2: Diagnostic accuracy efficacy

    Level 3: Diagnostic thinking efficacy

    Level 4: Therapeutic efficacy

    Level 5: Patient outcome efficacy

    Level 6: Societal efficacy

    Pawelski is happy at level 2 and anything beyond level 2 is meaningless.

    The definition of a device is set forth at section 201(h) of the Federal Food, Drug and Cosmetic Act (the act) (21 U.S.C. 321(h)). It provides in relevant part: ‘‘The term ‘device’ * * * means an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including any component, part, or accessory, which is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals.’’

    Evidence-based science, making use of the scientific method, despite Pawelski’s disdain, is required in the scientific community.

    The anti-science proponents and other snake-oil salesmen have not, and will not, win this one.

  • Gregory D. Pawelski

    I ask Michelle H. (hchcec): what data exist to prove that using the Estrogen Receptor or Her2/Neu assay improves therapeutic outcomes? What data exist to prove that bacterial culture and sensitivity testing improves therapeutic outcomes? What data exist to prove that doing panels of immunohistochemical stains improves therapeutic outcomes? What data exist to prove that following metastatic cancer with CT, MRI, and PET scans during treatment, to assess whether or not treatment should be switched to something else, improves therapeutic outcomes?

    I ask Michelle H. (hchcec) why more than half of the nation’s oncologists are now using the Oncotype DX test without the slightest shred of proof that it improves therapeutic outcomes. Why do oncologists order EGFR tests, and what data exist to prove that this improves outcomes? The FDA regulates devices and not laboratory tests, but it does regulate test kits. And the criterion it uses for each and every test kit it approves is “accuracy,” not “efficacy.” I believe she/he/it doesn’t know squat about it.

  • hchcec

    Perhaps a little education is in order for the internet’s most notorious purveyor of the “cancer therapy without an unproven cell culture assay means doom” inaccuracy.

    Estrogen-Receptor Status and Outcomes of Modern Chemotherapy for Patients
    With Node-Positive Breast Cancer
    http://jama.ama-assn.org/cgi/content/full/295/14/1658

    Also, he cites Oncotype DX as a model for his cell culture assay for chemotherapy choice. Apples and oranges here.

    Oncotype DX informs the patient whether chemotherapy will be of ANY benefit. It’s a fine test for early and late stage breast cancer, but not yet intermediate.

    The test Pawelski is pushing, to determine WHICH chemotherapy protocol to use, is unproven to be of any benefit at this time.

    Pawelski is simply mistaken in suggesting that diagnostic accuracy indices (sensitivity, specificity, and positive and negative predictive values) are sufficient for establishing a test’s utility. As outlined by the Institute of Medicine, tests are clinically useful only if the information they produce leads to patient management changes that improve outcomes, such as longer survival, better quality of life, or fewer adverse events. Clinical utility can be determined by mapping a causal chain from diagnostic accuracy through changes in management to impact on outcomes. This is evident in the numerous estrogen receptor and Her2 studies. And bacterial culture testing is approved for reasons of accuracy only? Ridiculous.

    A simple example:

    The U.S. Food and Drug Administration (FDA) has granted Diazyme 510(K) clearance to market its Enzymatic Total Bile Acids (TBA) Assay Kit for the quantitative determination of total bile acids in human blood samples.

    Must be because it’s “accurate,” his favorite, and only, buzzword.

    But, is THAT why we examine bile acids, just to say “hey, we have bile acids here!”

    Well, nope.

    Total bile acids are a well-known biomarker for the diagnosis of liver diseases. Serum total bile acids are elevated in patients with acute hepatitis, chronic hepatitis, liver sclerosis, and liver cancer. Total bile acid levels are found to be the most sensitive indicator for monitoring the effectiveness of interferon treatment of chronic hepatitis C patients. Moreover, total bile acids tests are also widely used to screen pregnant women for obstetric cholestasis, a condition caused by elevated total bile acids in the bloodstream of pregnant women. This poses risks to the unborn baby, including stillbirth, premature labor, and bleeding. The frequency of obstetric cholestasis is found to be 1 in 100 pregnant European women and 1 in 10 pregnant South American women. Cholestasis treatment includes the drug Urso.

    Well, look at that.

    -We have a disease process that can be identified by elevated bile acids.

    -We have an accurate tool to allow us to make decisions about treating diseases caused by elevated bile acids.

    -This can be monitored to look for resolution or worsening.

    -We have treatment for disease cause by elevated bile acids.

    So we have a test that monitors an established disease process that WE KNOW causes problems and WE KNOW has solutions.

    Unfortunately, we see nothing like this at all from chemotherapy cell culture assays. Why? Because there is absolutely no evidence that his assays do anything whatsoever to legitimately, unerringly diagnose a disease process, or that they can be of any help in shaping the treatment protocol. We have no data to back up his claims that the disease process will be altered by his tests. We know addressing bile acids will, because we know a lot about bile acids. We know it’s beneficial to have results WNL (within normal limits) and detrimental not to.

    We have no such evidence at all that his assays are of any benefit.

    He wants us to pay for his unproven tests. But it’s not going to happen. It may in due time, but that will occur when the scientists make the decision, as the real scientists who work quietly in the background, away from the blogosphere, well know.

    One of the many logical fallacies evident, time and time again, is the FALLACY OF PRESUMPTION.

    His posts can be categorized, in general, as classic Fallacies of Presumption, because they create the presumption that the true premises are complete.

    Examples:

    Most dogs (assays) are friendly and pose no threat to people who pet them. Therefore, it would be safe to pet the little dog that is approaching us now.

    That type of car (one-size-fits-all) is poorly made; a friend of mine has one, and it continually gives him trouble.

    We sometimes see this fallacy committed in scientific research whenever someone focuses on evidence which supports their hypothesis but ignores data which would tend to disconfirm it. This is why it is important that all experiments can be replicated by others and that the information about how the experiments were conducted be released. Other researchers might catch the data which was originally ignored.

    Mr. Pawelski offers no evidence that assays offer any benefit greater than one-size-fits-all. What he does do, and this must be made clear, is simply attack one-size-fits-all (“all those cars are bad”) and claim superiority for assays (“go ahead, pet my dog – he’s like all the others”).

    But, somehow, you probably knew that. The government knows it. Industry knows it. And the assay scientists know it.

    THAT is precisely why they won’t be approved nationally until they show some, any, benefit.

    Blogging it into existence ain’t gonna happen. (Like real scientists who do the hard work every day have time to read years of intellectually dishonest posts from a blogger with no scientific background.)

    For example, google: Gregory + Pawelski + chemotherapy. Guess how many hits you’ll get?
    http://www.google.com/search?q=gregory+pawelski+chemotherapy&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

  • Gregory D. Pawelski

    Thank you very much, Michelle, for your words of encouragement and for adding exposure to my internet endeavors. I’ve obviously researched cancer medicine and related issues extensively. My writing is sharp and intimidating, even to so-called experts like you. It is a no-nonsense, sometimes harsh but honest style.

    Cancer patients need informed opinion, good, bad, or indifferent. I believe in “measured” moral support and consider it important. But the overkill sugarfest usually professed by the powers that be in cancer medicine is useless and sometimes dangerous. Telling a cancer patient that chemotherapy and radiation treatment are the only hope of survival strikes total fear into them, telling them they would surely die in a short time without it. Fear is the greatest tool for snaring a fearful victim.

    There was a cartoon that showed a doctor with the initials AMA on his lapel, holding a syringe and standing next to a grave and a vulture with the initials FDA on it perched on the grave stone of a cancer patient that read, “Here lies Vic Tim, cured of cancer, died of side effects.”

    By the way, the same tortured syntax was evident when you stopped using the name Michelle and started using hchcec sometime in the fall of last year. Everybody caught on.

  • hchcec

    Internet endeavors? Obviously researched cancer medicine?
    What we need is truthful discourse. Sharing knowledgeable information requires dispassionate, objective truth. One can emote in a blog entry, but the information should stand by itself.

    Pawelski claims that his cell culture assays have not been approved because:

    1. Greed and certitude in the power of the dollar have once again clouded judgement.
    2. The persons making these decisions are smart and cynical enough to accept baksheesh from the very persons whose technology they are bent upon discrediting.
    3. It’s death by clinical trial, the academician’s weapon of choice, which they wield expertly.
    4. Resistance to change may be active, covert, or organized from vested interests.

    Blogging this thousands of times doesn’t make it any less false, as there is no evidence that it is true. It is a classic example of bias, the fallacy of repetition: repeating an opinion again and again in the hope of convincing people that it is true, perhaps because it simulates the effect of many people holding that opinion. We see this intellectual dishonesty from creationists as they attack the “theory” of evolution. They have no understanding of what scientists mean by “theory.”

    And, to blog it on sites where cancer patients look for support? Unconscionable.

    A clear, rational, concise, factual, thoughtful, fearless response to Pawelski:

    http://www.jco.org/cgi/content/full/23/15/3646

    (The focus on my name is getting a little creepy.)

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Gregory and hchcec, I’m calling an end to this conversation here. It has moved too far from the overall topic and its tone has drifted too far from the ideal. I’ll delete any more comments from you two on this post.

  • Gregory D. Pawelski

    That’s okay, Robin. Michelle H. (hchcec) has been a blog groupie for some time. Condemning the messenger instead of engaging in any rational thought on the message does drift too far from the ideal. You can take all of my comments off this board. She’s not worth it.

  • Rick Davidson

    The interesting thing about this thread, which I haven’t reviewed for a while, is that none….not one…of the critics of EBM has provided an alternative way of clinical decision making….with the exception of cell function analysis, which, while logical, has really never been adequately shown to provide better outcomes. If you don’t like EBM, how exactly would you like your physician to determine the best treatment for you or your family? No clinician likes dealing with percentages….patients don’t have a 50% chance of having colon cancer. They have either a 0% or a 100% chance….but I’ve yet to hear a reasonable scientific alternative to using carefully evaluated grouped data for clinical decision-making. Critics never seem to notice the word “available,” as in “best available evidence.” This means that clinical trials are by no means required. So tell us all….what is the algorithm that should be used instead of EBM?

  • Gregory D. Pawelski

    Rick Davidson’s comment about critics of EBM has brought me to think more critically about it. Evidence-based medicine has morphed into Pharma-based medicine and HMO-based medicine. Once you can eradicate the latter, you can renew credence in the former.

  • Rick Davidson

    Interesting comment, and I’d be much more impressed if you posted an answer to the questions asked. I am not even suggesting that decisions be made solely on the basis of clinical trials. I reject that totally. What I’m asking you to respond to is the simple question: how would you choose a therapy from among many? If the answer is personal experience, you must have a huge personal experience to have more data than grouped published information. The clinical epidemiology movement began in the mid-1970s with people like Al Feinstein at Yale and the Fletchers in Chapel Hill. This was long before managed care was widespread, and the concepts of using evidence to determine clinical efficacy are no more rooted in industry than they are in creationism. And just so you don’t get the wrong idea, none of this has anything to do with being a compassionate, caring physician….but that kind of physician wants the best available care for their patient. Sometimes that means no care…and that’s the last step in EBM: determining if the data are applicable to your patient. Just give me a reasonable alternative, and it might make for an interesting discussion.

  • Gregory D. Pawelski

    In regards to choosing a cancer therapy among many, with the absence of effective laboratory tests to guide physicians, many patients do not even get a second chance at treatment when their disease progresses. Spending six to eight weeks to diagnose treatment failure often consumes a substantial portion of a patient’s remaining survival, not to mention toxicities and mutagenic effects.

    There are molecular and cellular tests available to weed out those cancer patients for whom chemotherapy wouldn’t have any benefit, to determine which chemotherapy works best for those who would benefit, and to further monitor treatment success or disease progression.

    No matter how reliable a drug appears to be, there’s simply little hard evidence that it will make a long-term difference in a person’s survival. Drugs are tested to show they are safe and effective before being approved by the FDA. But a clinical study is not the real world, and just because a drug leads to a statistically significant improvement doesn’t guarantee that the desired effect will follow. The physician is still left to make a decision based at least in part on faith, bias, or educated guess.

    There is no proof beyond reasonable doubt for any approach to treating advanced cancer today. In life or death situations, one must make judgements based upon preponderance of available evidence as opposed to proof beyond reasonable doubt.

  • J Thomas

    none….not one…of the critics of EBM have provided an alternative way for clinical decision making.

    I’m not a critic of EBM but I’ll suggest an alternative.

    First, “clinical judgement” in itself is not very useful. MDs can vary treatment based on subliminal cues that only their vast experience provides, but the benefit of the variation may be — subliminal. And we need ways for experienced physicians to pass on their experience to younger ones, or it dies with them. Showing individual cases to students and saying what they’d do isn’t enough; the students may pick up on the wrong individual qualities. Etc. It might work by some sort of magic, but there’s no particular reason for it to work.

    I knew a medtech who quit her job and studied Chinese medicine. She learned acupuncture and diagnosis by putting a burning incense stick close to people’s fingers and seeing in which fingers the patient felt the most heat, and so on. She said that Chinese herbs were better because they had multiple therapeutic compounds, sometimes hundreds, and when you purify individual compounds and test them you can’t possibly know what the interactions will do. I asked her how she knew what the interactions did, and she said that ancient Chinese wise men figured it all out and she didn’t have to understand it; she just had to learn it. This isn’t the kind of medicine I want.

    If every individual case is different, how does experience with 500 previous individual cases help you with #501?

    Now, here is a way an individual clinician can improve his methods. He starts out with a method he’s learned, and one way or another he gets an alternative. (Until you have an alternative that you think might be better, the idea doesn’t work.) Once you have an alternative, you wait until you get one patient who does worse than you find acceptable with the standard approach. Then you switch to the new method. You continue using the new method until you get one patient who does worse than you’d expect from the old method. Then you switch back. You keep doing this, keeping records, and if you notice that you’re using one method considerably more than the other, then that’s the one to go with. Use it until you get another alternative to try.

    There’s lots of room for bias here, but the better you avoid the biases, the more likely you are to get improved methods. When it’s a big improvement, you find out pretty quickly. When it’s a small improvement, you might miss it, and that’s a small loss. So, when you’re convinced the new way is better, you tell your fellow MDs about it. Some of them try it, and if most of them also say it’s better, the word spreads. If there’s a lot of disagreement about whether it’s better, then it likely isn’t a whole lot better, and people might as well keep trying other alternatives, looking for the big improvement.

    Each physician has to use his judgement about individual patients to decide how much their special circumstances would result in improved response to treatment. How well you do that determines part of your bias, you might switch treatment when you shouldn’t or vice versa. There’s no alternative to making that judgement.

    This way a bunch of individual MDs working with vague and erratic cooperation could tend to get the same results as careful clinical trials.

    I don’t know to what extent MDs actually do this, or how bad the flaws are if they do. But it’s *possible* for them to improve quite well this way. Collectively they can try out many alternatives at once, quickly discarding those that are clearly worse, focusing quickly on the ones that are much better.
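    The switching rule described above resembles the classic "stay with a winner, switch on a loser" rule from adaptive trial design. A minimal simulation sketch (the function name and success rates are hypothetical, assuming simple binary outcomes) shows how the usage record alone tends to single out the better treatment:

```python
import random

def switch_on_failure(p_a, p_b, n_patients, seed=0):
    """Simulate the rule: stay with the current method after an
    acceptable outcome, switch to the alternative after a failure,
    and record how often each method ends up being used."""
    rng = random.Random(seed)
    probs = {"A": p_a, "B": p_b}
    current = "A"
    used = {"A": 0, "B": 0}
    for _ in range(n_patients):
        used[current] += 1
        if rng.random() >= probs[current]:  # unacceptable outcome
            current = "B" if current == "A" else "A"  # switch methods
    return used

# Hypothetical success rates: A works 80% of the time, B only 50%.
# Expected run lengths are 5 patients on A vs. 2 on B, so the record
# should show A used on substantially more patients.
counts = switch_on_failure(0.8, 0.5, 10_000)
```

    Note the rule never proves which treatment is better; it only biases usage toward the one that fails less often, which is why the comment's caveats about record-keeping and bias matter.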

    If they’re doing this intuitively, it makes sense to me that they might formalise the process and teach it coherently. Or if they’re already teaching it, then it would make sense for them to explain what they’re doing when challenged, rather than point at individual differences to say why the scientific method doesn’t work.

  • Rick Davidson

    “There is no proof beyond reasonable doubt for any approach to treating advanced cancer today. In life or death situations, one must make judgements based upon preponderance of available evidence as opposed to proof beyond reasonable doubt.” Greg, you just stated the basis of evidence-based medicine….the “preponderance of available evidence” is exactly correct.

    J, what you are describing is similar to what is known in the literature as an “N of 1” clinical trial, run in series. There is nothing wrong with it as a method, except that: 1. as you pointed out, it is tremendously susceptible to bias; 2. there needs to be consistency in determining outcomes and in selecting who would get which treatment (i.e., who would be eligible to get either treatment, and what were their baseline characteristics?), and it would be best to determine these independently of the practitioner, again raising the possibility of bias; 3. the results may be reasonably generalizable to the experimenter’s practice….but it is not at all clear they would be useful in anyone else’s practice without fairly large numbers of patients. If carotid endarterectomies prevent strokes in high-risk patients in a study done at the Mayo Clinic, does that mean that you should get a carotid endarterectomy at Little Sisters of the Poor Memorial Hospital in North Platte, Idaho? Nope, not quite, because the determination of benefit is based not only on outcomes of the surgery but on surgical morbidity as well, and it’s just possible that surgeons at Mayo, doing 200 procedures a year, might have better outcomes than those doing 5 or 10 procedures a year. You can’t get this kind of information about an individual practice without large numbers, and that’s why it’s difficult to rely on individual “clinical experience” and make heads or tails out of it. Grouped data is just more reliable. It has its limitations, but it’s the single best alternative for reaching decisions.
