

Supping with the Devil
Funding bias occurs when the conclusions of a study are biased towards the outcome the funding agency wants. A typical example from my own field is Turner & Spilich, "Research into smoking or nicotine and human cognitive performance: does the source of funding make a difference?" Researchers who declared tobacco industry funding more often detected neutral or positive cognitive-enhancement effects of nicotine than non-funded researchers, who were more evenly split between negative, neutral and positive effects.
There have been some surveys of funding bias. Bekelman, Li & Gross find that 25% of investigators in their material had industry funding sources. In a meta-analysis of 8 articles that themselves evaluated 1140 original studies, they found an odds ratio of 3.6 for industry-favourable outcomes with industry sponsorship compared to no sponsorship. There are also problems with data sharing and publication bias. A 2004 AMA Council Report likewise points out that sponsored findings are less likely to be published and more likely to be delayed.
A case study by E. Yano of co-authoring a study with the tobacco industry describes both how the industry tried to fudge the results (probably more overtly than in most cases of funding bias) and how the equally fierce anti-tobacco campaigners then misrepresented them; the poor researcher was in a no-win scenario.
Looking at these studies it seems that there is a general tendency even for unsponsored researchers to get industry-positive findings. My guess is that this is a form of publication bias: positive results are easier to publish, and in many fields there might be a correlation between results being positive and being favourable to the industry (e.g. when testing whether drugs work on various conditions).
Maybe there is also an anti-industry bias here? Leaving aside obvious cases of non-industry bias, such as government-sponsored agencies seldom finding fault with government policies, there could be signalling going on. Do non-corporate funding agencies favour researchers who do not accept corporate funding? Such researchers would be less likely to leave for corporate funding, making it more likely that a long-term relationship could be built up. They would also be more interested in researching areas the agency finds relevant and in general be more "loyal". This might matter if there is competition between funding agencies for "good" research projects (those that will bring publicity, status, relevance and possibly advance political goals). It might hence be rational for a researcher to distance himself from corporate funding in order to secure more non-corporate funding.
I wonder whether the funding outcome bias is worse than the funding publication bias, and whether the extra research bought by that 25% of industry funding might not make up for some of the bias.
Warning: a back-of-envelope calculation, done before going to bed, follows (this really should be modelled more properly, for example by simulating independent trials and summing them with Fisher's method).
Unbiased studies get positive findings with probability f, which then get published with probability P. Negative findings get published with probability p (p < P). Biased studies have a slightly higher positive-finding probability kf (1 < k < 1/f), a higher publication probability lP (l > 1) for positive findings, and a lower publication probability mp (m < 1) for negative findings. With 25% of studies sponsored, the total amount of reported positive findings will be [0.75 + 0.25kl]Pf, and of negative findings [0.75(1-f) + 0.25(1-kf)m]p.
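As a sanity check on this bookkeeping, here is a minimal Monte Carlo sketch in Python; the parameter values are purely illustrative, not estimates:

```python
import random

# Illustrative parameter values (not estimates): f, P, p, k, l, m as defined above.
f, P, p = 0.5, 0.8, 0.4      # positive-finding rate; publication probs for +/- findings
k, l, m = 1.56, 1.0, 1.0     # sponsored-study bias factors (pure finding bias here)
SPONSORED = 0.25             # fraction of studies with industry sponsorship

def simulate(n=500_000, seed=1):
    """Fractions of all studies that end up as published positive/negative findings."""
    rng = random.Random(seed)
    pos = neg = 0
    for _ in range(n):
        sponsored = rng.random() < SPONSORED
        positive = rng.random() < (k * f if sponsored else f)
        publish = ((l if sponsored else 1) * P if positive
                   else (m if sponsored else 1) * p)
        if rng.random() < publish:
            pos, neg = pos + positive, neg + (not positive)
    return pos / n, neg / n

sim_pos, sim_neg = simulate()
print(f"positive: simulated {sim_pos:.4f} vs formula {(0.75 + 0.25*k*l)*P*f:.4f}")
print(f"negative: simulated {sim_neg:.4f} vs formula {(0.75*(1-f) + 0.25*(1-k*f)*m)*p:.4f}")
```

Summing the simulated studies' evidence with Fisher's method, as suggested above, would be the natural next step beyond this sketch.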
The odds ratio between sponsored and unsponsored studies would be
OR = [kflP/((1-kf)mp)] / [fP/((1-f)p)] = (kl/m)(1-f)/(1-kf), where the publication probabilities P and p cancel.
If f=0.5, a finding bias of k=1.56 would explain the JAMA odds ratio of 3.6 (assuming no other biases). If f=0.1 (an area where positive results are very hard to get), it would have to be 2.8. Pumping the odds ratio up equally with l and m is harder (a very low m is probably the best bet, and the hardest for outsiders to notice). Differentiating, I get the following sensitivities to changes in k, l and m: dOR/dk = OR/k + fOR/(1-kf), dOR/dl = OR/l and dOR/dm = -OR/m. So for f=0.5 and OR=3.6 (i.e. k=1.56 with l=m=1), the sensitivity to k is about 10.6, while the others are just 3.6. A small change in finding bias can produce a sizeable change in the OR, so that may be the best bet for where the main bias source lies.
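These numbers can be checked symbolically; a small sympy sketch that reproduces the fitted k values and the sensitivities at the fitted point, again assuming l = m = 1:

```python
from sympy import Rational, diff, solve, symbols

k, l, m, f = symbols('k l m f', positive=True)
OR = (k * l / m) * (1 - f) / (1 - k * f)

# Finding bias k needed to reach OR = 3.6 with no publication bias (l = m = 1)
for fv in (Rational(1, 2), Rational(1, 10)):
    kv = solve(OR.subs({l: 1, m: 1, f: fv}) - Rational(18, 5), k)[0]
    print(f"f = {fv}: k = {float(kv):.3f}")   # 1.565 and 2.857, i.e. the 1.56 and 2.8 above

# Sensitivities at the fitted point f = 0.5, l = m = 1, k = 36/23 (where OR = 3.6)
point = {f: Rational(1, 2), l: 1, m: 1, k: Rational(36, 23)}
for v in (k, l, m):
    print(f"dOR/d{v} = {float(diff(OR, v).subs(point)):.1f}")   # 10.6, 3.6 and -3.6
```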
This also implies that funding bias might be hard to get rid of without cutting industry funding totally. Experimenter bias is hard to eliminate, and just the emotion of thankfulness might be enough to induce a slight k > 1. Encouraging the publication of negative results (increasing p) or forcing the publication of trials (increasing m) would have relatively little effect.
The variance of the Bernoulli-distributed trials is f(1-f), so an estimate of f based on N studies has variance f(1-f)/N. If N is reduced by 25% by removing all sponsored trials, the variance increases by 33%. This could actually outweigh the benefit of removing the funding bias if the number of studies in a field is modest or the bias is not too large. So maybe just having researchers declare their competing interests, and then taking those into account when evaluating the research field, is the best way of getting a truth estimate?
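A rough sketch of this trade-off, assuming pure finding bias (l = m = 1) so that keeping the sponsored studies shifts the pooled estimate of f upward by 0.25(k-1)f:

```python
# Bias-variance trade-off for estimating f: keep all N studies (biased)
# versus drop the sponsored 25% (unbiased but noisier). Pure finding bias assumed.
f, k, share = 0.5, 1.56, 0.25   # true positive rate, finding bias, sponsored share

def mse_keep(n):
    bias = share * (k - 1) * f           # pooled mean is (0.75 + 0.25k)f rather than f
    return f * (1 - f) / n + bias ** 2   # variance + squared bias

def mse_drop(n):
    return f * (1 - f) / ((1 - share) * n)   # unbiased, but variance up by a third

for n in (10, 30, 100):
    print(f"N = {n:3d}: keep = {mse_keep(n):.4f}, drop = {mse_drop(n):.4f}")
# With these numbers keeping everything wins below N of roughly 17; milder bias
# (smaller k) pushes that crossover much higher.
```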
Causality likely sometimes goes either way, but given the costs of doing medical studies, or anything involving the bigger surveys one needs, it is likely that funding has to come before the study. Running a tiny pilot might be possible, and might indeed lead to selecting a favourable source of funding, but here it might be just as much researcher bias - if the researcher "knows" what the outcome is likely to be, he might seek funding in the appropriate direction, hinting at it.
Calling for more rigorous, verified research is a nice idea, but in practice it may be hard to do in many fields. What is a rigorous social survey?
In a clear case of bias, I'm now noticing lots of new papers on publication bias. PLoS Medicine has two interesting ones (and a commentary):
http://medicine.plosjournal... "Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles"
http://medicine.plosjournal... "Ghost Authorship in Industry-Initiated Randomised Trials"
http://medicine.plosjournal... "Authors, Ghosts, Damned Lies, and Statisticians"
The first paper demonstrates funding bias in nutrition: the odds ratio was 7.61, and no unfavorable findings at all were reported from the industry-funded studies (vs. 37% for the rest). The second shows that a lot of ghost authors, mostly statisticians, appear in industry-sponsored papers.
Seeing that a paper was funded by Acme Corporation tells us to adjust for funding bias, but if it was funded by the NIH we don't know whether there is a bias or not.
Maybe a historical study is the way to go here? Take issues that are dead and settled (such as "radioactivity is bad for you", "asbestos causes cancer" and even "smoking causes lung cancer"), and see what biases were induced by various funding bodies when these issues were "hot"?
But I don't see how we can easily generalise from history, given the myriad research policies and governments that have changed over the years since then. It's very possible that public funding is biased in different ways, for different issues, at different times.
Maybe the best approach is to accept that there will be a bias, that we can't overcome it a priori, and just call for more rigorous, verified research and let the scientific method work its magic? The question of bias is most useful for highlighting those areas where this extra research will be needed.