
Nope, I know it doesn't happen on a large scale. It doesn't even happen in the academic medical center where I practice, at least much of the time. Just read the article quoted in the other EBM thread about Dan Merenstein if you have any doubts...but since my job is to convince medical students that they need to practice that way, all I can do is point them in the right direction: introduce them to evidence-based clinical guidelines, teach them to read critically and be skeptical. The fact is, right or wrong, that variability in practice may be disappearing for the wrong reason...through managed care and cost control. But decreasing the variability of practice patterns, using the Dartmouth index effectively, and paying attention to the current literature will improve the quality of medical care. It's the "one day" I'm aiming for...but I do believe the situation has improved a great deal in the 25 years I've been teaching.

In my course we use small-group teaching for half the content, reading current articles. Among my small-group teachers are the head of the lung transplant program, the division chief of GI, a former dean of the medical school, the department chair of family medicine, and assorted hematologists, rheumatologists, a urologist, and a medicine chief resident. Not one of these teachers has had formal training in EBM...they are all committed to critical reading of the literature and to implementing it in their practice. That shows students the relevance of the content. I frequently have graduates come up and tell me that they have continued reading in their practices. I think things are changing.


I'm sorry, were you laboring under the impression that the actual practice of medicine by physicians is based on science, evidence, and empirical thinking? That would be a revolution. In practice, what a doc does to determine how to treat a patient does not involve a hardcore look into the data or results of the clinical studies. I mean, there's some communication pipeline that tells docs how they should diagnose things and treat things under certain circumstances, but it is not a critical lens around the clinical trials. Did you really think that that happens on a large scale? Are you joking? Maybe one day.


I suppose that if you could find a large enough number of non-randomized trials, or unblinded trials, you could construct some kind of summary "error"...and in fact, just like the financial-support issue, the results would likely show a systematic error, as opposed to a random one. I remember studies done at least twenty years ago that showed these errors for controlled vs. uncontrolled studies, and they ALWAYS showed a greater benefit without controls. Interesting concept....not sure how feasible, but it would be interesting to try.
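A minimal sketch of what such a summary "error" tally might look like, with invented counts standing in for a real literature search:

```python
# Hypothetical illustration: compare the rate of positive findings in
# controlled vs. uncontrolled studies of the same treatments.
# All counts below are invented for the sake of the sketch.

uncontrolled = {"positive": 80, "total": 100}   # made-up counts
controlled   = {"positive": 45, "total": 100}   # made-up counts

rate_unc = uncontrolled["positive"] / uncontrolled["total"]
rate_con = controlled["positive"] / controlled["total"]

# Crude "systematic error" estimate: the excess positive rate that
# appears when controls are absent.
excess = rate_unc - rate_con
print(f"Positive rate without controls: {rate_unc:.0%}")
print(f"Positive rate with controls:    {rate_con:.0%}")
print(f"Estimated systematic excess:    {excess:+.0%}")
```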


Rick, your comments better address correlations of results with funding source than correlations of results with room for fudging, or the high rate of failure to replicate.


To the best of my knowledge, I published the first study to look at the association of industry funding with the outcome of published studies (http://www.ncbi.nlm.nih.gov.... I have spoken on this issue for years at workshops and academic health centers. From the practical standpoint, the best I can do is alert students to the significant risk of bias. The literature has improved since the mid-'80s in terms of required acknowledgments and more careful attention to the problem.

There are alternative explanations for the associations noted, other than purposeful (or unconscious) bias. Many drug studies involve drugs that have already been investigated in other countries and have been found to be effective. Studies that appear to be heading in the wrong direction may be discontinued by the company before the sample size is adequate to make the results publishable. There is always the possibility of publication bias keeping negative studies out of the literature. Many investigators who undertake drug studies for companies have no particular interest in publishing the results. And there are documented examples of investigators who were pressured by companies not to publish negative results.

Trying to put a corrective factor on the amount of bias involved is an interesting suggestion...and could probably be determined for a large number of studies. The results, just like all grouped results, could not determine whether an individual study is biased in this manner, but could provide an estimate for general use.
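For what it's worth, here is a toy sketch of how such a group-level correction factor might be computed. The effect sizes are fabricated; a real version would need to pool comparable studies of the same interventions:

```python
# Hypothetical sketch: estimate a crude group-level correction for
# industry funding by comparing mean reported effect sizes.
# All effect sizes below are invented for illustration.
import statistics

industry_funded = [0.52, 0.61, 0.48, 0.70, 0.55]   # fabricated effect sizes
independent     = [0.35, 0.42, 0.30, 0.50, 0.38]   # fabricated effect sizes

gap = statistics.mean(industry_funded) - statistics.mean(independent)
print(f"Average industry-vs-independent gap: {gap:.2f}")

# As noted above, this is only a group-level estimate: it cannot say
# whether any individual study is biased, but it could serve as a rough
# prior adjustment for a new industry-funded result.
adjusted = industry_funded[0] - gap
print(f"Example adjusted estimate: {adjusted:.2f}")
```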


"moved the significance level to 0.001"

I think a suggestion like that is pretty much meaningless in isolation. The result of such a change would be heavily dependent on how it came about. Two failure modes are increased fraud and the cessation of medical research publication (and/or FDA approval). Done right, it could improve things. But simply convincing people to demand that level of significance for things they think they already know would not be an improvement, without touching the bad studies.


"Bruce, for any biased estimate of anything, if you know something about the sign of the bias you should be able to correct it to produce a less biased estimate."

You also have to have some idea about the magnitude of the bias. Otherwise you have no idea whether you're overcorrecting or not.
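A toy illustration of the point, with invented numbers: knowing only that the bias is upward, different assumed magnitudes leave you either undercorrected or overcorrected:

```python
# Toy illustration, with invented numbers: knowing the sign of the bias
# is not enough; you also need its magnitude to correct without
# over- or under-shooting.
true_effect = 0.30
reported = 0.50            # biased upward; suppose only the sign is known

for assumed_bias in (0.05, 0.20, 0.40):
    corrected = reported - assumed_bias
    if corrected > true_effect:
        verdict = "still biased upward"
    elif corrected < true_effect:
        verdict = "overcorrected"
    else:
        verdict = "spot on"
    print(f"assumed bias {assumed_bias:.2f} -> corrected {corrected:.2f} ({verdict})")
```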


Yeah, at the conventional 0.05 threshold, one in twenty studies of a treatment with no real effect will produce statistically significant results purely by chance.
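A quick simulation of that arithmetic, assuming a two-arm study of a treatment with no real effect and a two-sided z-test at the 0.05 level:

```python
# Simulate many trials of a treatment with no real effect and count
# how often the result comes out "statistically significant".
import random
import statistics
from math import sqrt

random.seed(0)
n_studies, n_per_arm = 2000, 50
false_positives = 0
for _ in range(n_studies):
    treat   = [random.gauss(0, 1) for _ in range(n_per_arm)]
    control = [random.gauss(0, 1) for _ in range(n_per_arm)]
    diff = statistics.mean(treat) - statistics.mean(control)
    se = sqrt(statistics.variance(treat) / n_per_arm
              + statistics.variance(control) / n_per_arm)
    if abs(diff / se) > 1.96:        # roughly p < 0.05, two-sided
        false_positives += 1

print(f"False positive rate: {false_positives / n_studies:.1%}")  # ~5%
```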


A friend who spent most of his career in science at Cambridge advocates the rule of thumb "Medical research is rubbish".


One gets the impression that an awful lot of sociological problems in science would go away if we moved the significance level to 0.001. Yes, I know it'd be just as arbitrary, but still.
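A rough sketch of what that proposal trades off, using the standard normal approximation: far fewer false positives per 1,000 true-null studies, at the cost of a roughly doubled sample size to hold power constant (the 80% power target here is an assumption for illustration):

```python
# Sketch: what alpha = 0.001 buys and costs, via the normal approximation.
# The 80% power target and per-1,000 framing are assumptions for illustration.
from statistics import NormalDist

z = NormalDist().inv_cdf
z_power = z(0.80)                        # z for 80% power
for alpha in (0.05, 0.001):
    z_alpha = z(1 - alpha / 2)           # two-sided critical value
    n_factor = (z_alpha + z_power) ** 2  # required n scales with this
    print(f"alpha={alpha}: ~{1000 * alpha:.0f} false positives per 1,000 "
          f"null studies; relative sample-size factor {n_factor:.1f}")
```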


Is it reasonable to think that industry-funded studies are just as valid, and merely less likely to be published (or even funded in the first place) if the results are unfavorable (or likely to be unfavorable)? That would have implications for a meta-analysis, but there wouldn't be anything to adjust for in any particular study.
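A small simulation of that scenario, with made-up numbers: every individual study is unbiased, but if only "significant" results reach print, a naive pooled average of the published literature is inflated anyway:

```python
# Every study here is run honestly, but only favorable results get
# published. The pooled average of the published subset is then biased
# even though no single study is. All parameters are invented.
import random
import statistics

random.seed(1)
true_effect, se = 0.10, 0.15
all_studies = [random.gauss(true_effect, se) for _ in range(500)]
published = [x for x in all_studies if x / se > 1.96]   # only "significant" results

print(f"True effect:              {true_effect:.2f}")
print(f"Mean of all studies:      {statistics.mean(all_studies):.2f}")
print(f"Mean of published subset: {statistics.mean(published):.2f}")  # inflated
```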


Robin said: "Bruce, for any biased estimate of anything, if you know something about the sign of the bias you should be able to correct it to produce a less biased estimate."

A corporation-funded effect-size study would be expected to exaggerate the benefits and minimize the disadvantages of a drug. But if the study is examining a question such as "what is the effect of drug X on the human body," then the answer doesn't really have a sign.


David, yes of course if you read the study carefully you may come to a detailed judgment. But there is a great need and demand for summary estimates available to a wider audience, to inform their medical choices.


So how many apples, oranges, and cumquats are being compared here? I have the same guess as William Newman that the issues studied in beverage studies funded by the industry are probably not the same issues as in other studies.

The rest remind me of papers I read in the seventies about how much higher the rate of positive results was in uncontrolled studies than in controlled studies. There is a temptation to take the rate in controlled studies as correct, subtract it from the rate in uncontrolled studies, and call the difference the fraction of uncontrolled studies that were wrong. But that's not valid. Surely there's a bias against publishing negative uncontrolled studies, and even then there was some poorly defined limit on what you could get published as an uncontrolled study. It makes sense to say it's better to do a controlled study, but putting some sort of correction rate onto uncontrolled studies, for example saying you can expect 40% of positive uncontrolled pilot studies to still be positive when controlled, is just a guess, not a broadly applicable estimate.

Likewise of limited use would be any estimate of how closely subsequent studies will match some big, highly publicized initial study of some treatment. Regression to the mean suggests a trend for later studies to do less well, but I'm sure that would be quite variable. And that ignores whatever important differences in study design might exist between studies. Just because a study fails to be replicated once doesn't mean the second study is right. Who's being studied? What's the exact treatment? How is the outcome being measured? It may be that some of those 49 earlier studies being cited are generally believed to be superior to later ones that had different results. There are many such reasons why few would take a correction estimate seriously. Until I read through the actual study myself, why should I believe anything? And once I have read the details, I make a lot of judgments that go beyond any global summary number about bias. I don't think there's a good way around making such informed, individual judgments.


Only 37% of soft drink, juice, and milk studies untainted by industry funding had negative findings? Garçon, top me off!


Hanson Gets Empirical

Robin Hanson describes four interesting meta-studies on medical research that ought to make you less confident about the latest study...
