7 Comments

Causality likely sometimes goes either way, but given the costs of medical studies, or of anything involving the larger surveys one needs, funding is likely needed before the study. Running a tiny pilot might be possible, and might indeed lead to selecting a favorable source of funding, but here it might be just as much researcher bias: if the researcher "knows" what the outcome is likely to be, he might seek funding in the appropriate direction, hinting at the expected result.

Calling for more rigorous, verified research is a nice idea, but in practice it may be hard to do in many fields. What is a rigorous social survey?

In a clear case of bias, I'm now noticing lots of new papers on publication bias. PLoS Medicine has two interesting ones (and a commentary):

http://medicine.plosjournal... "Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles"

http://medicine.plosjournal... "Ghost Authorship in Industry-Initiated Randomised Trials"

http://medicine.plosjournal... "Authors, Ghosts, Damned Lies, and Statisticians"

The first paper demonstrates funding bias in nutrition. The odds ratio was 7.61, and no unfavorable findings at all were reported from the industry-funded studies (vs. 37% for the rest). The second shows that a lot of ghost authors, mostly statisticians, appear in industry-sponsored papers.
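To make a figure like 7.61 concrete, here is a minimal sketch of how an odds ratio is computed from a 2x2 table of study counts. The counts below are hypothetical, not the paper's data; the zero cell also shows why a raw table needs a continuity correction, and why published estimates like this one typically come from a regression model instead.

```python
# Minimal sketch: odds ratio from a 2x2 table of study conclusions.
# All counts are hypothetical, for illustration only.

def odds_ratio(a, b, c, d):
    """Odds ratio for the table:

                    favorable   unfavorable
    industry            a            b
    non-industry        c            d

    A 0.5 continuity correction is applied when any cell is zero,
    which matters here because, as noted above, the industry-funded
    studies reported no unfavorable findings at all.
    """
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return (a / b) / (c / d)

# Hypothetical counts: zero unfavorable industry-funded conclusions,
# ~37% unfavorable among the rest.
print(odds_ratio(24, 0, 63, 37))
```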


"seeing that a paper was funded by Acme Corporation tells us to adjust for funding bias, but if it was funded by the NIH we don't know if there is a bias or not."

Maybe a historical study is the way to go here? Take issues that are dead and settled (such as "radioactivity is bad for you", "asbestos causes cancer" and even "smoking causes lung cancer"), and see what biases were induced by various funding bodies when these issues were "hot"?

But I don't see how we can easily generalise from history, given the myriad research policies and governments that have changed over the years since then. It's very possible that public funding is biased in different ways, for different issues, at different times.

Maybe the best approach is to accept that there will be a bias, that we can't overcome it a priori, and just call for more rigorous, verified research and let the scientific method work its magic? The question of bias is most useful for highlighting those areas where this extra research will be needed.


Could the causality run from findings to funding rather than funding to findings? Same end result, but a slightly different mechanism. For instance, I muck about with some experiments, getting some preliminary results. If they come out pro-smoking, I write up a proposal and pitch my preliminary results to the grantor. If they come out negative I drop the matter, as there is no point going on without funding if I have a lab to support. Even if I get positive results and no support is forthcoming, I continue to publication in the hope of future funding.
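A toy simulation can show how this mechanism alone produces an apparent funding effect. Everything below is an illustrative assumption (the favorable-result probability, the share of labs needing outside money), not data from any study:

```python
import random

# Toy simulation of the mechanism above: findings drive funding, not
# the reverse. Every study is honest; only favorable pilots get pitched
# for (and receive) industry funding, while labs with their own support
# publish regardless of outcome.

random.seed(0)
P_FAVORABLE = 0.3  # true chance an honest study comes out favorable

industry_pubs, independent_pubs = [], []
for _ in range(10_000):
    favorable = random.random() < P_FAVORABLE
    if random.random() < 0.5:
        # lab needs outside money: only favorable pilots become
        # funded, published studies; unfavorable ones are dropped
        if favorable:
            industry_pubs.append(favorable)
    else:
        # lab already has support and publishes either way
        independent_pubs.append(favorable)

print("industry-funded favorable rate:", sum(industry_pubs) / len(industry_pubs))
print("independent favorable rate:   ", sum(independent_pubs) / len(independent_pubs))
# Prints 1.0 vs ~0.3 -- a large apparent "funding bias" with no skewed
# findings at all, purely from selection into funding.
```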


I just encountered another report on funding bias, this time on cell-phone safety: http://www.ehponline.org/me... Anke Huss, Matthias Egger, Kerstin Hug, Karin Huwiler-Müntener and Martin Röösli, "Source of Funding and Results of Studies of Health Effects of Mobile Phone Use: Systematic Review of Experimental Studies", Environ Health Perspect 115:1–4 (2007).

This time the odds ratio of finding some problem was 0.11 for industry-funded studies compared to studies without industry funding, although the confidence interval was fairly broad. However, they recognize that some of the non-industry studies might be biased the other way, since they might be done by researchers with environmentalist agendas funded by public sources. Mixed-funding studies appeared to be of the highest quality.

This observation points out an interesting problem with journal policies requiring funding sources to be stated: seeing that a paper was funded by Acme Corporation tells us to adjust for funding bias, but if it was funded by the NIH we don't know whether there is a bias or not. Maybe mixed funding is the way to go?
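For what it's worth, the "fairly broad" interval mentioned above is easy to see on the log scale, where the standard error of a log odds ratio is dominated by the smallest cell of the table. A sketch with hypothetical counts, chosen only to land near an odds ratio of 0.11 and not taken from the paper:

```python
import math

# Standard (Woolf) confidence interval for an odds ratio: work on the
# log scale, where the standard error is the square root of the summed
# reciprocal cell counts. Counts are hypothetical.

def or_with_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# 2x2 table: rows = industry / other funding, cols = effect found / not.
# Small cells give a wide interval: roughly OR 0.11, CI (0.02, 0.61).
print(or_with_ci(2, 18, 10, 10))
```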


There is also the distinction to bear in mind between (a) picking research topics that are likely to yield desired outcomes to fill the headlines with (e.g. investigating possible nootropic or weight-loss effects of nicotine rather than effects on cancer risk); and (b) skewing the findings on a given topic to yield a desired result. The latter seems more vile. That a given group of researchers finds many positive results is not necessarily a bad sign - it might just mean that their intuitions about which experiments to do and which effects to test for were unusually good.


As the saying goes, whose bread I eat, his song I sing.


Obviously it is not a simple matter to combine many possibly biased studies into an aggregate estimate. So the question is what institution promotes publicizing the best aggregates. If industry can fund biased studies, industry can fund a biased meta-analysis to get an aggregate it likes, too.
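To illustrate the point, here is a minimal sketch of fixed-effect inverse-variance pooling, the textbook way to build such an aggregate; all effect sizes and standard errors are made up. Because weights scale with 1/SE², a sponsor only needs to add a couple of large, low-variance studies to drag the pooled estimate:

```python
import math

# Fixed-effect inverse-variance pooling: each study is weighted by the
# reciprocal of its variance. Effects and standard errors are made up.

def pooled(effects, ses):
    weights = [1 / se**2 for se in ses]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, se

honest = ([0.05, -0.02, 0.01, 0.03], [0.10, 0.12, 0.09, 0.11])
print("honest pool:", pooled(*honest))          # near zero

# The same pool after a sponsor adds two large (low-SE, hence
# high-weight) studies skewed toward the result it wants:
biased = (honest[0] + [0.40, 0.35], honest[1] + [0.05, 0.05])
print("with sponsored studies:", pooled(*biased))  # pulled well upward
```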
