Medical Study Biases

Medical studies are seriously biased by interested funders and by tolerance for sloppy methods.  Here are four examples.

1.  A recent PLoS Medicine article looked at 111 studies of soft drinks, juice, and milk that cited their funding sources.

22% had all industry funding, 47% had no industry funding, and 32% had mixed funding. … the proportion with unfavorable [to industry] conclusions was 0% for all industry funding versus 37% for no industry funding.

2.  Last February the Canadian Medical Association Journal reported that, among 487 studies, those whose methods left more room for fudging "found" higher accuracy of diagnostic tests:

The quality of reporting was poor in most of the studies.  We found significantly higher estimates of diagnostic accuracy in studies with nonconsecutive inclusion of patients … and retrospective data collection … Studies that selected patients based on whether they had been referred for the index test, rather than on clinical symptoms, produced significantly lower estimates

3.  In 1995, the Journal of the American Medical Association reported that, of 250 studies of treatments, those that allowed easier fudging similarly "found" stronger effects:

Compared with trials in which authors reported adequately concealed treatment allocation, … Odds ratios were exaggerated by 41% for inadequately concealed trials and by 30% for unclearly concealed trials … Trials that were not double-blind also yielded … odds ratios being exaggerated by 17%

4.  In 2005, the Journal of the American Medical Association found that, of medical studies since 1990 cited 1000 times or more, about 1/3 were contradicted or weakened by later studies, and about 1/4 went largely unchallenged by replication attempts:

Of 49 highly cited original clinical research studies, 45 claimed that the intervention was effective. Of these, 7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged. Five of 6 highly-cited nonrandomized studies had been contradicted or had found stronger effects vs 9 of 39 randomized controlled trials (P = .008).

The obvious question is:  how can we produce medical estimates that correct for such biases?  And why don’t we? 
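
One crude way to start answering that question (my sketch, not anything from the post or the cited papers) is to turn the counts quoted in example 4 into rough base rates for how often a highly cited positive claim survives re-examination, and then use those rates to discount the next such claim:

```python
# A minimal sketch using only the counts quoted above from the 2005 JAMA paper.
# The "discount" it produces is a base rate, not a per-study correction.

claimed_effective = 45   # highly cited studies claiming an effective intervention
contradicted = 7         # later contradicted by subsequent studies
weaker = 7               # later found weaker effects than subsequent studies
replicated = 20          # replicated
unchallenged = 11        # remained largely unchallenged

reexamined = claimed_effective - unchallenged   # 34 claims actually re-tested

print(f"re-examined claims that held up: {replicated / reexamined:.0%}")                          # ~59%
print(f"re-examined claims contradicted or weakened: {(contradicted + weaker) / reexamined:.0%}")  # ~41%
```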

  • http://www.nationalgeographic.com/traveler/extras/blog/blog.html travelina

    Wow! Zero percent negative conclusions about soft drinks when the studies were funded by industry! Who would have guessed?

  • dnfrd_chrs

    how can we produce medical estimates that correct for such biases?

    Wait, what is a “medical estimate”? Are you asking how the authors of those biased studies could have done their work so that the observed fudging didn’t happen?

    Or are you asking how readers could estimate the amount of fudging that is in the paper?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Dnfrd (did your mommy really call you that?), I have in mind better institutions for rewarding or summarizing medical research.

  • William Newman

    travelina, note the interpretation isn’t necessarily quite as horrible as you make it sound. I’d guess that among cases where industry thinks it’s worth funding the study, you’d find a concentration of cases where knowledgeable people are confident the honest conclusion is in favor of industry.

    E.g., I (ex-biologist with no particular conflict of interest) was extremely skeptical during the scares over weak low-frequency EM fields causing cancer (power lines, cell phones). Had I been in a position to advise a cell phone company or a power company, and had someone come in with a proposed study which would massively reduce the uncertainty, I’d’ve been able to tell the company that they’d be very likely to find a result they liked, because barring new physics or extremely weird biology, the marginal statistical results which had justified the scares must have been random flukes, not signs of a stable pattern. Given advice like that, you could get nuclear power companies studying “does 60Hz EM radiation cause cancer?” more often than “do gamma rays cause cancer?” And you can get pretty consistently favorable-to-industry results, and still advance knowledge about issues relevant to policy.

    Of course, you can get all sorts of industry-funded dishonest garbage. But you can also get industry reacting to dishonest garbage — remember _Supersize Me_? To the extent that the industry soft drink studies are on things like “no, in point of fact diet soda doesn’t cause so much cancer that the customers’ life expectancy falls to pre-industrial-era levels,” the favorable-to-industry results of the studies that industry chose to fund may not be worth being cynical about.

  • Bruce G Charlton

    Robin –

    Well, I don’t think talking in terms of correcting estimates makes much sense from a scientific perspective – truth does not have a distribution. There could, in theory, be corrections for effect size estimates – however, the data would need to be retrospective, and the field is changing fast, and for the worse.

    For instance, the problem of ghost authoring of medical research papers by professional agencies employed by (for example) the pharmaceutical industry has been highlighted by David Healy of the University of Wales in relation to psychiatric drugs (Br J Psychiatry 2003;183:22-7).

    The highly prestigious author from a famous institution whose name appears on the paper (which may be, and often is, published in the highest-status medical journals) may never have seen the primary data, only the statistical selections and summaries provided by the ghostwriter (who naturally has a marketing agenda rather than a scientific one).

    In such a messy and changing situation, the idea of a statistical correction for bias seems remote. We have to rely on replication, I feel.

  • http://www.subsolo.org/gustibus/archives/2007/02/index.html#005008 De Gustibus Non Est Disputandum

    Topics

    I talked about this in one of my classes the other day. I always hope to get across the clear message that research also involves ethics. Obviously, ethics isn’t everything. But it helps. As a friend of mine used to say, “at least I can sleep soundly…

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Bruce, for any biased estimate of anything, if you know something about the sign of the bias you should be able to correct it to produce a less biased estimate.

  • http://econlog.econlib.org/archives/2007/02/hanson_gets_emp.html EconLog

    Hanson Gets Empirical

    Robin Hanson describes four interesting meta-studies on medical research that ought to make you less confident about the latest study…

  • mobile

    Only 37% of soft drink, juice, and milk studies untainted by industry funding had negative findings? Garcon, top me off!

  • http://openandwilling.blogspot.com DavidD

    So how many apples, oranges, and cumquats are being compared here? I have the same guess as William Newman that the issues studied in beverage studies funded by the industry are probably not the same issues as in other studies.

    The rest remind me of papers I read in the seventies about how much higher the rate of positive results was in uncontrolled studies than in controlled studies. There is a temptation to say that the rate in controlled studies is right, subtract it from the rate in uncontrolled studies, and call the difference the fraction of uncontrolled studies that were wrong. But that’s not valid. Surely there’s a bias against publishing negative uncontrolled studies, and even then there was some poorly defined limit on what you could get published as an uncontrolled study. It makes sense to say it’s better to do a controlled study, but putting some sort of correction rate onto uncontrolled studies, for example saying you can expect 40% of uncontrolled pilot studies to still be positive when controlled, is just a guess, not a broadly applicable estimate.

    Likewise of limited use would be any estimate of how closely subsequent studies will match some big, highly publicized initial study of a treatment. Regression to the mean might mean there is a trend for later studies to do less well, but I’m sure that would be quite variable. And that ignores whatever important differences in study design might exist between studies. Just because a study fails to be replicated once doesn’t mean the second study is right. Who’s being studied? What’s the exact treatment? How is the outcome being measured? It may be that some of those 49 earlier studies being cited are generally believed to be superior to the later ones that had different results. There are many such reasons why few would take a correction estimate seriously. Until I read through the actual study myself, why should I believe anything? And once I have read the details, I make a lot of judgments that go beyond any global summary number about bias. I don’t think there’s a good way around making such informed, individual judgments.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    David, yes of course if you read the study carefully you may come to a detailed judgment. But there is a great need and demand for summary estimates available to a wider audience, to inform their medical choices.

  • http://www.hedweb.com/bgcharlton Bruce G Charlton

    Robin said: “Bruce, for any biased estimate of anything, if you know something about the sign of the bias you should be able to correct it to produce a less biased estimate.”

    A corporation-funded effect-size study would be expected to exaggerate the benefits and minimize the disadvantages of a drug. But if the study is examining a question such as “what is the effect of drug X on the human body?” then the answer doesn’t really have a sign.

  • Dan Luu

    Is it reasonable to think that industry-funded studies are just as valid, and merely less likely to be published (or even funded in the first place) if the results are unfavorable (or likely to be unfavorable)? That would have implications for a meta-analysis, but there wouldn’t be anything to adjust for in any particular study.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    One gets the impression that an awful lot of sociological problems in science would go away if we moved the significance level to 0.001. Yes, I know it’d be just as arbitrary, but still.

  • dearieme

    A friend who spent most of his career in science at Cambridge advocates the rule of thumb “Medical research is rubbish”.

  • Doug S.

    Yeah, at the usual 0.05 threshold, one in twenty studies of a nonexistent effect will produce statistically significant results purely by chance.
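
A small arithmetic sketch of Doug’s point and the 0.001 suggestion above (assuming, purely for illustration, a batch of studies where the true effect is zero and each is tested independently at the stated threshold):

```python
# Expected false positives among studies of nonexistent effects,
# at the conventional 0.05 level versus the 0.001 level suggested above.
n_null_studies = 1000

for alpha in (0.05, 0.001):
    expected_false_positives = n_null_studies * alpha
    print(f"alpha = {alpha}: ~{expected_false_positives:.0f} 'significant' results by chance alone")
# alpha = 0.05:  ~50 by chance alone
# alpha = 0.001: ~1 by chance alone
```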

  • http://pdf23ds.net pdf23ds

    “Bruce, for any biased estimate of anything, if you know something about the sign of the bias you should be able to correct it to produce a less biased estimate.”

    You also have to have some idea about the magnitude of the bias. Otherwise you have no idea whether you’re overcorrecting or not.
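
A minimal sketch of this sign-versus-magnitude point, with made-up numbers (nothing here comes from the studies cited in the post):

```python
# Assume a reported odds ratio that overstates a drug's benefit (OR < 1 = benefit).
# Knowing the sign of the bias says the corrected value should sit closer to 1;
# how much closer depends entirely on an assumed magnitude of exaggeration.
reported_or = 0.50

for assumed_exaggeration in (1.2, 1.4, 1.6):   # hypothetical bias magnitudes
    corrected_or = reported_or * assumed_exaggeration
    print(f"assuming {assumed_exaggeration:.1f}x exaggeration -> corrected OR ~ {corrected_or:.2f}")
# Prints 0.60, 0.70, 0.80; the spread shows how much the correction depends on the assumed magnitude.
```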

  • Douglas Knight

    moved the significance level to 0.001

    I think a suggestion like that is pretty much meaningless in isolation. The result of such a change would be heavily dependent on how it came about. Two failure modes are increased fraud and the cessation of medical research publication (and/or FDA approval). Done right, it could improve things. But simply convincing people to run such significant studies on things people think they know would be an improvement, without touching the bad studies.

  • Rick Davidson

    To the best of my knowledge, I published the first study to look at the association of industry funding with the outcome of published studies (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&dopt=AbstractPlus&list_uids=3772583&query_hl=1&itool=pubmed_DocSum). I have spoken on this issue for years at workshops and academic health centers. From the practical standpoint, the best I can do is alert students to the significant risk of bias. The literature has improved since the mid-’80s in terms of required acknowledgments and more careful attention to the problem. There are alternative explanations for the associations noted, other than purposeful (or unconscious) bias. Many drug studies involve drugs that have already been investigated in other countries and have been found to be effective. Studies that appear to be heading in the wrong direction may be discontinued by the company before the sample size is adequate to make the results publishable. There is always the possibility of publication bias that keeps negative studies out of the literature. Many investigators who undertake drug studies for companies have no particular interest in publishing the results. And there are documented examples of investigators who were pressured by companies not to publish negative results. Trying to put a corrective factor on the amount of bias involved is an interesting suggestion…and could probably be determined for a large number of studies. The results, just like all grouped results, could not determine whether an individual study is biased in this manner, but could provide an estimate for general use.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Rick, your comments better address correlations of results with funding source than correlations of results with room for fudging, or the high rate of failure to replicate.

  • Rick Davidson

    I suppose that if you could find a large enough number of non-randomized trials, or unblinded trials, you could construct some kind of summary “error”…and in fact, just like the financial support issue, the results would likely show a systematic error, as opposed to a random one. I remember studies done at least twenty years ago that showed these errors with controlled vs uncontrolled studies, and they ALWAYS show a greater benefit without controls. Interesting concept….not sure how feasible, but it would be interesting to try.

  • david

    I’m sorry, were you laboring under the impression that the actual practice of medicine by physicians is based on science, evidence, and empirical thinking? That would be a revolution. In practice, what a doc does to determine how to treat a patient does not involve a hardcore look into the data or results of the clinical studies. I mean, there’s some communication pipeline that tells docs how they should diagnose things and treat things under certain circumstances, but it is not a critical lens around the clinical trials. Did you really think that that happens on a large scale? Are you joking? Maybe one day.

  • Rick Davidson

    Nope, I know it doesn’t happen on a large scale. It doesn’t even happen in the academic medical center where I practice, at least much of the time. Just read the article quoted in the other EBM thread about Dan Merenstein if you have any doubts…but since my job is to convince medical students that they need to practice that way, all I can do is point them in the right direction: introduce them to evidence-based clinical guidelines, teach them how to critically read and be skeptical. The fact is, right or wrong, that variability in practice may be disappearing for the wrong reason…through issues of managed care and cost control. But decreasing the variability of practice patterns, using the Dartmouth index effectively, and paying attention to the current literature will improve the quality of medical care. It’s the “one day” I’m aiming for…but I do believe the situation has improved a great deal in the 25 years I’ve been teaching. In my course we use small-group teaching for half the content, reading current articles. Among my small-group teachers are the head of the lung transplant program, the division chief of GI, a former dean of the medical school, the department chair of family medicine, and assorted hematologists, rheumatologists, a urologist, and a medicine chief resident. Not one of these teachers has had formal training in EBM…they all are committed to critical reading of the literature and implementing it in their practice. That has an effect on students regarding the relevance of the content. I frequently have graduates come up and tell me that they have continued reading in their practices. I think things are changing.