"There is no proof beyond reasonable doubt for any approach to treating advanced cancer today. In life or death situations, one must make judgements based upon preponderance of available evidence as opposed to proof beyond reasonable doubt." Greg, you just stated the basis of evidence-based medicine....the "preponderance of available evidence" is exactly correct.

J, what you are describing is similar to what is known in the literature as an "N of 1" clinical trial, run in series. There is nothing wrong with it as a method, except that: 1. As you pointed out, it is tremendously susceptible to bias. 2. There needs to be consistency in determining outcomes and in selecting who gets which treatment (i.e., who would be eligible for either treatment, and what were their baseline characteristics?), and these would best be determined independently of the practitioner, again raising the possibility of bias. 3. The results may be reasonably generalizable to the experimenter's practice, but it is not at all clear they would be useful in anyone else's practice without fairly large numbers of patients. If carotid endarterectomies prevent strokes in high-risk patients in a study done at the Mayo Clinic, does that mean that you should get a carotid endarterectomy at Little Sisters of the Poor Memorial Hospital in North Platte, Idaho? Nope, not quite, because the determination of benefit is based not only on the outcomes of the surgery but on surgical morbidity as well, and it's just possible that surgeons at Mayo, doing 200 procedures a year, have better outcomes than those doing 5 or 10 a year. You can't get this kind of information about an individual practice without large numbers, and that's why it's difficult to rely on individual "clinical experience" and make heads or tails of it. Grouped data is just more reliable. It has its limitations, but it's the single best alternative for reaching decisions.
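The surgical-volume point is easy to demonstrate with a quick simulation. This is a hypothetical sketch: the 5% true complication rate and the 5-vs-200 annual volumes are assumed numbers for illustration, not data from any study.

```python
import random

random.seed(0)
TRUE_RATE = 0.05  # assumed true complication rate (hypothetical)

def observed_rate(n_procedures):
    """Complication rate a practice would actually observe over n procedures."""
    complications = sum(random.random() < TRUE_RATE for _ in range(n_procedures))
    return complications / n_procedures

# One surgeon does 5 procedures a year, another does 200; simulate
# 1000 such practices each and compare the spread of observed rates.
small = [observed_rate(5) for _ in range(1000)]
large = [observed_rate(200) for _ in range(1000)]

# With n=5 the only possible observed rates are 0%, 20%, 40%, ...,
# so individual experience is wildly misleading; with n=200 the
# estimates cluster near the true 5%.
print("n=5:   min", min(small), "max", max(small))
print("n=200: min", min(large), "max", max(large))
```

With n=5, most simulated practices observe a 0% rate and some observe 20% or more, while none can ever observe the true 5%; only grouped data over large numbers pins the rate down.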

"none....not one...of the critics of EBM have provided an alternative way for clinical decision making."

I'm not a critic of EBM but I'll suggest an alternative.

First, "clinical judgement" in itself is not very useful. MDs can vary treatment based on subliminal cues that only their vast experience provides, but the benefit of the variation may be -- subliminal. And we need ways for experienced physicians to pass on their experience to younger ones, or it dies with them. Showing individual patients to students and saying what they'd do isn't enough; the students may pick up on the wrong individual qualities. Etc. It might work by some sort of magic, but there's no particular reason for it to work. I knew a medtech who quit her job and studied Chinese medicine; she learned acupuncture, and diagnosis by putting a burning incense stick close to people's fingers and seeing in which fingers the patient felt the most heat, and so on. She said that Chinese herbs were better because they had multiple therapeutic compounds, sometimes hundreds, and when you purify individual compounds and test them you can't possibly know what the interactions will do. I asked her how she knew what the interactions did, and she said that ancient Chinese wise men figured it all out and she didn't have to understand it, she just had to learn it. This isn't the kind of medicine I want.

If every individual case is different, how does experience with 500 previous individual cases help you with #501?

Now, here is a way that an individual clinician can improve his methods. He starts out with a method he's learned, and one way or another he gets an alternative. (Until you have an alternative that you think might be better, the idea doesn't work.) Once you have an alternative, you wait until you get a patient who does worse than you find acceptable with the standard approach. Then you switch to the new method. You continue using the new method until you get a patient who does worse than you'd expect from the old method. Then you switch back. You keep doing this, keeping records, and if you notice that you're using one method considerably more than the other, then that's the one to go with. Use it until you get another alternative to try.
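That switching rule can be sketched as a small simulation, assuming (my assumption, purely for illustration) that each patient's outcome is simply "good" or "bad" with a fixed probability under each method:

```python
import random

random.seed(1)

def simulate_switching(p_good_old, p_good_new, n_patients):
    """Use one method until a patient does unacceptably badly, then
    switch to the other; count how often each method ends up used."""
    p_good = {"old": p_good_old, "new": p_good_new}
    current = "old"            # start with the method already learned
    used = {"old": 0, "new": 0}
    for _ in range(n_patients):
        used[current] += 1
        if random.random() >= p_good[current]:
            # a bad outcome: switch to the other method
            current = "new" if current == "old" else "old"
    return used

# If the new method is genuinely better, runs under it last longer,
# so it accumulates most of the use -- the record-keeping signal.
counts = simulate_switching(p_good_old=0.6, p_good_new=0.9, n_patients=1000)
print(counts)
```

With these invented numbers, the expected run length is 1/(1-0.6) = 2.5 patients under the old method and 1/(1-0.9) = 10 under the new, so roughly 80% of patients end up on the better method even though no formal trial was run.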

There's lots of room for bias here, but the better you avoid the biases, the more likely you are to get improved methods. When it's a big improvement you find out pretty quickly. When it's a small improvement you might miss it, and that's a small loss. So, when you're convinced the new way is better, you tell your fellow MDs about it. Some of them try it, and if most of them also say it's better, the word spreads. If there's a lot of disagreement about whether it's better, then it likely isn't a whole lot better, and people might as well keep trying other alternatives looking for the big improvement.

Each physician has to use his judgement about individual patients to decide how much their special circumstances would change their response to treatment. How well you do that determines part of your bias; you might switch treatments when you shouldn't, or vice versa. There's no alternative to making that judgement.

This way a bunch of individual MDs, working with vague and erratic cooperation, could tend to get the same results as careful clinical trials.

I don't know to what extent MDs actually do this, or how bad the flaws are if they do. But it's *possible* for them to improve quite well this way. Collectively they can try out many alternatives at once, quickly discarding those that are clearly worse, focusing quickly on the ones that are much better.

If they're doing this intuitively, it makes sense to me that they might formalise the process and teach it coherently. Or if they're already teaching it, then it would make sense for them to explain what they're doing when challenged, rather than point at individual differences to say why the scientific method doesn't work.

With regard to choosing a cancer therapy from among many, in the absence of effective laboratory tests to guide physicians, many patients do not even get a second chance at treatment when their disease progresses. Spending six to eight weeks to diagnose treatment failure often consumes a substantial portion of a patient's remaining survival, to say nothing of the toxicities and mutagenic effects.

There are molecular and cellular tests available to weed out those cancer patients for whom chemotherapy would have no benefit, to identify which chemotherapy works best for those who would benefit, and to monitor treatment success or disease progression.

No matter how reliable a drug appears to be, there's simply little hard evidence it would make a long-term difference in a person's prolonged survival. Drugs are tested to show they are safe and effective before being approved by the FDA. But a clinical study is not the real world, and just because a drug leads to a statistically significant improvement doesn't guarantee that the desired effect will follow. The physician is still left to make a decision based at least in part on faith, bias or educated guess.

There is no proof beyond reasonable doubt for any approach to treating advanced cancer today. In life or death situations, one must make judgements based upon preponderance of available evidence as opposed to proof beyond reasonable doubt.

Interesting comment, and I'd be much more impressed if you posted an answer to the questions asked. I am not even suggesting that decisions be made solely on the basis of clinical trials. I reject that totally. What I'm asking you to respond to is the simple question: how would you choose a therapy from among many? If the answer is personal experience, you must have a huge personal experience to have more data than grouped published information. The clinical epidemiology movement began in the mid-70's with people like Al Feinstein at Yale and the Fletchers in Chapel Hill. This was long before managed care was widespread, and the concepts of using evidence to determine clinical efficacy are no more rooted in industry than they are in creationism. And just so you don't get the wrong idea, none of this has anything to do with being a compassionate caring physician....but that kind of physician wants the best available care for their patient. Sometimes that means no care...and that's the last step in EBM, determining if the data are applicable to your patient. Just give me a reasonable alternative, and it might make for an interesting discussion.

Rick Davidson's comment about critics of EBM has brought me to think more critically of it. Evidence-based medicine has morphed into Pharma-based medicine and HMO-based medicine. Once you can eradicate the latter, you can renew credence in the former.

The interesting thing about this thread, which I haven't reviewed for a while, is that none....not one...of the critics of EBM have provided an alternative way for clinical decision making....with the exception of cell function analysis, which while logical, has really never been adequately shown to provide better outcomes. If you don't like EBM, how exactly would you like your physician to determine the best treatment for you or your family? No clinician likes dealing with percentages....patients don't have a 50% chance of having colon cancer. They have either a 0% or 100%....but I've yet to hear a reasonable scientific alternative to using carefully evaluated grouped data for clinical decision-making. Critics never seem to notice the word "available", as in "best available evidence". This means that clinical trials are by no means required. So tell us all....what is the algorithm that should be used instead of EBM?

That's okay Robin. Michelle H. (hchcec) has been a blog groupie for some time. Condemning the messenger instead of having any rational thought on the message does drift too far from the ideal. You can take all of my comments off this board. She's not worth it.

Gregory and hchec, I'm calling an end to this conversation here. It has moved too far from the overall topic and its tone has drifted too far from the ideal. I'll delete any more comments from you two on this post.

Internet endeavors? Obviously researched cancer medicine? What we need is truthful discourse. Sharing knowledgeable information requires dispassionate, objective truth. One can emote in a blog entry, but the information should stand by itself.

Pawelski claims that his cell culture assays have not been approved because:

1. Greed and certitude in the power of the dollar have once again clouded judgement.

2. The persons making these decisions are smart and cynical enough to accept baksheesh from the very persons whose technology they are bent upon discrediting.

3. It's death by clinical trial, the academician's weapon of choice, which they wield expertly.

4. Resistance to change may be active, covert, or organized from vested interests.

Blogging this thousands of times doesn't make it any less false; there is no evidence that it is true, and it is a classic example of bias, the fallacy of repetition: repeating an opinion again and again in the hope that repetition convinces people it is true, maybe because it simulates the effect of many people holding that opinion. We see this in the intellectual dishonesty of creationists as they attack the "theory" of evolution; they have no understanding of what a scientist means by theory.

And, to blog it on sites where cancer patients look for support? Unconscionable.

A clear, rational, concise, factual, thoughtful, fearless response to Pawelski:

http://www.jco.org/cgi/cont...

(The focus on my name is getting a little creepy.)

Thank you very much, Michelle, for your words of encouragement and your approval by adding exposure to my internet endeavors. I've obviously researched cancer medicine and related issues extensively. My writing is very sharp and intimidating, even to so-called experts like you. It is a no-nonsense, sometimes harsh and honest style.

Cancer patients need informed opinion, good, bad, or indifferent. I believe in "measured" moral support and consider it important. But the overkill sugarfest that is usually professed by the powers that be in cancer medicine is useless and sometimes dangerous. Telling a cancer patient that chemotherapy and radiation treatment are the only hope of survival strikes total fear into them, by telling them they would surely die in a short time without it. Fear is the greatest tool to snare a fearful victim.

There was a cartoon that showed a doctor with the initials AMA on his lapel, holding a syringe and standing next to a grave and a vulture with the initials FDA on it perched on the grave stone of a cancer patient that read, "Here lies Vic Tim, cured of cancer, died of side effects."

By the way, the same tortured syntax was evident when you stopped using the name Michelle and started using hchcec, sometime in the fall of last year. Everybody caught on.

Perhaps a little education is in order for the internet's most notorious purveyor of the "cancer therapy without unproven cell culture assays means doom" inaccuracy.

Estrogen-Receptor Status and Outcomes of Modern Chemotherapy for Patients With Node-Positive Breast Cancer: http://jama.ama-assn.org/cg...

Also, he cites Oncotype DX as a model for his cell culture assay for chemotherapy choice. Apples and oranges here.

Oncotype DX informs the patient whether chemotherapy will be of ANY benefit. It's a fine test for early and late stage breast cancer, but not yet intermediate.

The test Pawelski is pushing, to determine WHICH chemotherapy protocol to use, is unproven to be of any benefit at this time.

Pawelski is simply mistaken in suggesting that diagnostic accuracy indices (sensitivity, specificity, and positive and negative predictive values) are sufficient for establishing a test's utility. As outlined by the Institute of Medicine, tests are clinically useful only if the information they produce leads to patient management changes that improve outcomes, such as longer survival, better quality of life, or fewer adverse events. Clinical utility can be determined by mapping a causal chain from diagnostic accuracy through changes in management to impact on outcomes. This is evident in the numerous estrogen receptor and Her2 studies. And bacterial culture testing is approved for reasons of accuracy only? Ridiculous.
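As an aside, the gap between those accuracy indices and actual utility is easy to see numerically: a test can keep the same sensitivity and specificity while its positive predictive value collapses at low prevalence. The counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
def accuracy_indices(tp, fp, fn, tn):
    """Standard diagnostic accuracy indices from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # P(test positive | disease)
        "specificity": tn / (tn + fp),  # P(test negative | no disease)
        "ppv": tp / (tp + fp),          # P(disease | test positive)
        "npv": tn / (tn + fn),          # P(no disease | test negative)
    }

# A hypothetical test that is 90% sensitive and 90% specific.
# At 50% prevalence: 500 diseased and 500 healthy patients.
high_prev = accuracy_indices(tp=450, fp=50, fn=50, tn=450)
# At 1% prevalence: 10 diseased and 990 healthy, same accuracy.
low_prev = accuracy_indices(tp=9, fp=99, fn=1, tn=891)

print(high_prev["ppv"])  # 0.9   -- most positives are real disease
print(low_prev["ppv"])   # ~0.08 -- most positives are false alarms
```

Same "accuracy" in both rows, very different clinical meaning, which is exactly why the causal chain through management changes to outcomes matters.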

A simple example:

The U.S. Food and Drug Administration (FDA) has granted Diazyme 510(K) clearance to market its Enzymatic Total Bile Acids (TBA) Assay Kit for the quantitative determination of total bile acids in human blood samples.

Must be because it's "accurate," his favorite, and only buzzword.

But, is THAT why we examine bile acids, just to say "hey, we have bile acids here!"

Well, nope.

Total bile acids is a well-known biomarker for the diagnosis of liver diseases. Serum total bile acids are elevated in patients with acute hepatitis, chronic hepatitis, liver sclerosis, and liver cancer. Total bile acids levels are found to be the most sensitive indicator for monitoring the effectiveness of interferon treatment in chronic hepatitis C patients. Moreover, total bile acids tests are also widely used to screen pregnant women for obstetric cholestasis, a condition caused by elevated total bile acids in the bloodstream of pregnant women. This poses risks to the unborn baby, including stillbirth, premature labor, and bleeding. The frequency of obstetric cholestasis is found to be 1 in 100 pregnant European women, and 1 in 10 pregnant South American women. Cholestasis treatment includes the drug Urso.

Well, look at that.

-We have a disease process that can be identified by elevated bile acids.

-We have an accurate tool to allow us to make decisions about treating diseases caused by elevated bile acids.

-This can be monitored to look for resolution or worsening.

-We have treatment for disease caused by elevated bile acids.

So we have a test that monitors an established disease process that WE KNOW causes problems and WE KNOW has solutions.

Unfortunately, we see nothing like this at all from chemotherapy cell culture assays. Why? Because there is absolutely no evidence that his assays do anything whatsoever to legitimately, unerringly diagnose a disease process, or that they can be of any help in shaping the treatment protocol. We have no data to back up his claims that the disease process will be altered by his tests. We know addressing bile acids will, because we know a lot about bile acids. We know it's beneficial to have results WNL (within normal limits) and detrimental not to.

We have no such evidence at all that his assays are of any benefit.

He wants us to pay for his unproven tests. But it's not going to happen. It may in due time, but only when the real scientists, the ones who work quietly in the background away from the blogosphere, make that decision.

One of the many logical fallacies evident, time and time again, is the FALLACY OF PRESUMPTION.

His posts can be categorized, in general, as classic fallacies of presumption, because they create the presumption that the stated premises are complete.

Examples:

Most dogs (assays) are friendly and pose no threat to people who pet them. Therefore, it would be safe to pet the little dog that is approaching us now.

That type of car (one-size-fits-all) is poorly made; a friend of mine has one, and it continually gives him trouble.

We sometimes see this fallacy committed in scientific research whenever someone focuses on evidence which supports their hypothesis but ignores data which would tend to disconfirm it. This is why it is important that all experiments can be replicated by others and that the information about how the experiments were conducted be released. Other researchers might catch the data which was originally ignored.

Mr. Pawelski offers no evidence that assays offer any benefit greater than one-size-fits-all. What he does do, and this must be made clear, is simply attack one-size-fits-all (all those cars are bad) and claim superiority for assays ("go ahead, pet my dog - he's like all the others").

But, somehow, you probably knew that. The government knows it. Industry knows it. And the assay scientists know it.

THAT is precisely why they won't be approved nationally until they show some, any, benefit.

Blogging it into existence ain't gonna happen. (As if real scientists, who do the hard work every day, have time to read years of intellectually dishonest posts from a blogger with no scientific background.)

For example, google: Gregory + Pawelski + chemotherapy. Guess how many hits you'll get? http://www.google.com/searc...

I ask Michelle H. (hchcec): what data exist to prove that using the Estrogen Receptor or Her2/Neu assay improves therapeutic outcomes? What data exist to prove that Bacterial Culture and Sensitivity Testing improves therapeutic outcomes? What data exist to prove that doing panels of immunohistochemical stains improves therapeutic outcomes? What data exist to prove that following metastatic cancer with CT, MRI, and PET scans during treatment, to assess whether or not treatment should be switched to something else, improves therapeutic outcomes?

I ask Michelle H. (hchcec): why are more than half of the nation's oncologists now using the Oncotype DX test without the slightest shred of proof that it improves therapeutic outcomes? Why do oncologists order EGFR tests, and what data exist to prove that this improves outcomes? The FDA regulates devices and not laboratory tests, but it does regulate test kits. And the criterion they use for each and every test kit they approve is "accuracy," not "efficacy." I believe she/he/it doesn't know squat about it.

First of all, on the internet, everyone thinks you're a dog. Not sure why you feel the need to have that "gotcha" moment, but I signed my post "hchcec." My name is Michele, like Taxol causes brain cancer.

"in every situation where a neutral panel of adjudicators had the opportunity to hear both sides of the story, the decision came down in favor."

Well, of course there's clear prejudice in this statement: "If they don't see it my way and approve my chemotherapy cell assays (as yet totally unproven, as the developers of these tests know), they are not neutral." Talk about the need for "overcoming bias."

There is absolutely no support for these devices, as they have yet to undergo any controlled testing. Only now have those tests begun. If you do a scientific literature search you will find not a single study showing any benefit.

"There is no proof beyond reasonable doubt for ANY approach to treating cancer today. There is ONLY THE BIAS of clinical investigators as a group and as individuals."

This is an absurd statement, as is the one that follows about accuracy vs. efficacy.

A Hierarchical Model of Efficacy:

Level 1: Technical efficacy

Level 2: Diagnostic accuracy efficacy

Level 3: Diagnostic thinking efficacy

Level 4: Therapeutic efficacy

Level 5: Patient outcome efficacy

Level 6: Societal efficacy

Pawelski is happy at level 2; to him, anything beyond level 2 is meaningless.

The definition of a device is set forth at section 201(h) of the Federal Food, Drug and Cosmetic Act (the act) (21 U.S.C. 321(h)). It provides in relevant part: ‘‘The term ‘device’ * * * means an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including any component, part, or accessory, which is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals.

Evidence-based science, making use of the scientific method, despite Pawelski's disdain, is required in the scientific community.

The anti-science proponents, and other snake oil salesmen have not, and will not, win this one.

To say that the developers of these "synergistic methods" agree with Michelle H. (hchcec) has got to be the biggest fallacy ever heard. It is totally wrong to portray her statement as one on which there is unanimity of opinion. In point of fact, in every situation where a neutral panel of adjudicators had the opportunity to hear both sides of the story, the decision came down in favor of coverage of these methods as a reasonable and scientifically supportable medical service.

The standard of retrospective correlations between treatment outcomes and laboratory results is sufficient in the case of "all" laboratory tests, yet papers of this nature were excluded from analysis in "closed" oncology organizations' evaluation of these laboratory tests. There is no proof beyond reasonable doubt for any approach to treating cancer today. There is only the bias of clinical investigators as a group and as individuals. The FDA regulates devices and not laboratory tests, but it does regulate test kits. And the criterion they use for each and every test kit they approve is "accuracy," not "efficacy." But Michelle, in all her finite wisdom, has not been able to grasp reality.

Pawelski:

"The technology behind these synergistic methods has been clinically validated for the selection of optimal chemotherapy regimens for individual patients. Individualized-directed therapy is based on the premise that each patient's cancer cells are unique and therefore will respond differently to a given treatment."

This is not true. Pawelski's particular bias ("the statistician/physician in a clinical trial do ask the wrong questions") comes from his desire to prove that cancer treatment must be individualized. It's not like treating a UTI with Floxin.

Unfortunately, for Pawelski, there is no evidence at all that individualized testing of cancer cells via live tumor analysis shows any benefit whatsoever. What's most interesting is that the developers of these tests agree with me. Mr. Pawelski is the only individual that would have you believe that throwing out clinical trials is required because as he says, "the so-called respectable Journals (the ones that fail to adhere to guidelines on conflict of interest), won't publish articles because they have a lock-up on information." For Pawelski: "few drugs work the way we think and few physicians/scientists take the time to think through what it is they are using them for."

Sounds a bit conspiratorial to me (actually more than a bit :-)).

"The number of possible treatment options supported by completed randomized clinical trials becomes increasingly vague for guiding physicians. Even the National Cancer Institute's December 7, 2006 official cancer information website states that no data support the superiority of more than 20 different regimens in the case of metastatic breast cancer, a disease in which probably more clinical trials have been done than any other type of cancer."

This is not an indictment of the clinical trial at all. In fact, it's just the opposite. It's data, good data, and it indicates that metastatic breast cancer is, whether we like it or not, difficult to treat; that some regimens are less toxic than others, so choices may be made based on this good data; and that this area of clinical evaluation remains ongoing.

I think the statistician/physician in a clinical trial do ask the wrong questions. Cancer medicine is a personalized service, one built around the uniqueness of each patient and the skilled physician's ability to design treatment accordingly. Cancer specialists can read the scientific literature and understand the statistics, but they don't understand how that should influence their treatment of the "individual" in front of them.

The frequentist approach tends to be rather unforgiving of deviations from the original clinical design as a trial progresses. What this all comes down to is that, as in all real-world situations of reliability, you have to try to pick the course with the least probability of failure, knowing all the time that the estimates of probability themselves have a probability of being wrong (the confidence level).
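That last point, that the estimates themselves carry uncertainty, can be made concrete with an ordinary confidence-interval calculation; the two trial arms below are invented numbers for illustration only:

```python
import math

def response_ci(responders, n, z=1.96):
    """Normal-approximation 95% confidence interval for a response rate."""
    p = responders / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Hypothetical trial arms: 24/60 responders on regimen A, 30/60 on B.
lo_a, hi_a = response_ci(24, 60)
lo_b, hi_b = response_ci(30, 60)
print(f"A: 40% observed, 95% CI {lo_a:.2f} to {hi_a:.2f}")
print(f"B: 50% observed, 95% CI {lo_b:.2f} to {hi_b:.2f}")

# The intervals overlap heavily: the observed 10-point difference
# could easily be chance, and the interval itself misses the truth
# about 1 time in 20 -- the "probability of being wrong" above.
```

Picking the course with the least probability of failure means reading not just the point estimates but the width of intervals like these.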

The technology behind these synergistic methods has been clinically validated for the selection of optimal chemotherapy regimens for individual patients. Individualized-directed therapy is based on the premise that each patient's cancer cells are unique and therefore will respond differently to a given treatment. This is in stark contrast to standard or empiric therapy, in which chemotherapy for a specific patient is based on average population studies from prior clinical trials.
