Funding bias occurs when the conclusions of a study are biased towards the outcome the funding agency wants. A typical example from my own field is Turner & Spilich, Research into smoking or nicotine and human cognitive performance: does the source of funding make a difference? Researchers who declared tobacco industry funding more often found neutral or positive effects of nicotine on cognitive performance than non-funded researchers, who were more evenly split between negative, neutral and positive effects.
There have been some surveys of funding bias. Bekelman, Li & Gross found that 25% of the investigators in their material had industry funding sources. In a meta-analysis of 8 articles, which themselves evaluated 1140 original studies, they found an odds ratio of 3.6 for industry-favourable outcomes when there was industry sponsorship compared to no sponsorship. There are also problems with data sharing and publication bias: a 2004 AMA Council Report points out that sponsored findings are less likely to be published and more likely to be delayed.
A case study by E. Yano of co-authoring a study with the tobacco industry describes both how the industry tried to fudge the results (probably more overtly than in most cases of funding bias) and how the equally fierce anti-tobacco campaigners then misrepresented the results; the poor researcher was in a no-win scenario.
Looking at these studies, it seems that there is a general tendency even for unsponsored researchers to get industry-positive findings. My guess is that this is a form of publication bias: positive results are easier to publish, and in many fields positive results may also tend to be positive for the industry (e.g. when testing whether drugs work on various conditions).
Maybe there is also an anti-industry bias here? Leaving aside obvious cases of non-industry bias, such as government-sponsored agencies seldom finding fault with government policies, there could be signalling going on. Do non-corporate funding agencies favour researchers who do not accept corporate funding? Such researchers would be less likely to leave for corporate funding, making it more likely that a long-term relationship could be built up. They would also be more interested in researching areas the agency finds relevant and in general be more “loyal”. This might matter if there is competition between funding agencies for “good” research projects (ones that bring publicity, status and relevance, and possibly advance political goals). For a researcher it might hence be rational to distance oneself from corporate funding in order to secure more non-corporate funding.
I wonder whether the funding outcome bias is worse than the funding publication bias, and whether the extra studies enabled by that 25% of research funding might not make up for some of the bias?
Warning: a back-of-envelope calculation done before going to bed follows (this really should be modelled more properly, for example by simulating independent trials and summing them with Fisher’s method).
Unbiased studies get positive findings with probability f, which then get published with probability P; negative findings get published with probability p (p < P). Biased studies have a slightly higher positive finding probability kf (with 1 < k < 1/f), a higher positive-publishing probability lP (l > 1) and a lower negative-finding publishing probability mp (m < 1). With 25% of studies sponsored, the total proportion of reported positive findings will be [0.75 + 0.25kl]fP, and of negative findings [0.75(1-f) + 0.25(1-kf)m]p.
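As a minimal sketch, these expressions translate directly into Python (the function name and the particular P and p values below are my own illustrative choices):

```python
def reported_findings(f, P, p, k, l, m, sponsored=0.25):
    """Expected proportions of published positive and negative findings,
    with a `sponsored` fraction of studies subject to the k, l, m biases."""
    unsponsored = 1.0 - sponsored
    positive = (unsponsored + sponsored * k * l) * f * P
    negative = (unsponsored * (1 - f) + sponsored * (1 - k * f) * m) * p
    return positive, negative

print(reported_findings(f=0.5, P=0.8, p=0.4, k=1.56, l=1.0, m=1.0))
# -> roughly (0.456, 0.172)
```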
The odds ratio between sponsored and unsponsored studies would then be

OR = [kflP/((1-kf)mp)] / [fP/((1-f)p)] = (kl/m)(1-f)/(1-kf)

(note that P and p cancel out of the ratio).
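To sanity-check the algebra, here is the kind of simulation of independent trials mentioned above (minus the Fisher’s method summing; all parameter values are illustrative), recomputing the odds ratio from the simulated published studies:

```python
import random

def simulated_or(f, P, p, k, l, m, n=200_000, share=0.25):
    """Empirical odds ratio of published positive:negative findings,
    sponsored versus unsponsored studies."""
    counts = [[0, 0], [0, 0]]                 # counts[sponsored][positive]
    for _ in range(n):
        s = random.random() < share           # is this study sponsored?
        pos = random.random() < (k * f if s else f)
        pub = (l * P if s else P) if pos else (m * p if s else p)
        if random.random() < pub:             # did it get published?
            counts[s][pos] += 1
    return (counts[1][1] / counts[1][0]) / (counts[0][1] / counts[0][0])

# With f=0.5, k=1.565 and l=m=1 this should come out near 3.6,
# matching (kl/m)(1-f)/(1-kf).
print(simulated_or(f=0.5, P=0.8, p=0.4, k=1.565, l=1.0, m=1.0))
```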
If f=0.5, a finding bias of k=1.56 would explain the JAMA findings (assuming no other biases). If f=0.1 (an area where positive results are very hard to get), it would have to be about 2.86. Pumping the odds ratio up equally with l and m is harder (a very low m is probably the best bet, and the hardest for outsiders to notice). Differentiating, I get the following sensitivities to changes in k, l and m: OR/k + fOR/(1-kf), OR/l and -OR/m. So for f=0.5, OR=3.6, k=1.56 and l,m at unity, the sensitivity to k becomes about 10.5, while the others are just 3.6. A small change in finding bias can produce a sizeable change in the OR, so that may be the best bet for where the main bias source lies.
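The same arithmetic as a sketch, solving OR = (kl/m)(1-f)/(1-kf) for k and evaluating the sensitivities (function names are mine):

```python
def k_for_or(target_or, f, l=1.0, m=1.0):
    """Solve OR = (k*l/m)*(1-f)/(1-k*f) for k, with l and m held fixed."""
    r = target_or * m / l
    return r / (1 - f + r * f)

def sensitivities(f, k, l, m):
    """Partial derivatives of OR with respect to k, l and m."""
    OR = (k * l / m) * (1 - f) / (1 - k * f)
    return OR / (k * (1 - k * f)), OR / l, -OR / m

print(k_for_or(3.6, f=0.5))   # ~1.56
print(k_for_or(3.6, f=0.1))   # ~2.86
print(sensitivities(f=0.5, k=k_for_or(3.6, f=0.5), l=1.0, m=1.0))
# -> roughly (10.5, 3.6, -3.6)
```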
This also implies that funding bias might be hard to get rid of without cutting the funding entirely. Experimenter bias is hard to eliminate, and just the emotion of thankfulness might be enough to induce a slight k. Encouraging the publication of negative results (increasing p) or forcing the publication of trials (increasing m) has relatively little effect.
The variance of the Bernoulli-distributed trials is f(1-f), so an estimate of f from the studies would have variance f(1-f)/N, where N is the number of trials. If N is reduced by 25% by removing all sponsored trials, the variance increases by 33%. This could actually outweigh the benefit of removing the funding bias if the number of studies in a field is modest or the bias is not too large. So maybe just having researchers declare their competing interests, and then taking these into account when evaluating the research field, is the best way of getting a truth estimate?
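A rough sketch of that trade-off, comparing the mean squared error of estimating f from all studies (accepting the bias from the sponsored quarter) with using only the unsponsored 75% (the k and N values are illustrative, and the pooled variance is approximated as if all studies were unbiased):

```python
def mse_pooled(f, k, N):
    """MSE of estimating f from all N studies, sponsored quarter biased by k."""
    bias = 0.25 * (k - 1) * f          # mean positive rate is f*(0.75 + 0.25*k)
    return bias**2 + f * (1 - f) / N   # variance approximated as for unbiased f

def mse_unsponsored_only(f, N):
    """MSE when the sponsored 25% of studies are simply thrown away."""
    return f * (1 - f) / (0.75 * N)    # unbiased, but 33% higher variance

for N in (10, 40, 160):
    print(N, mse_pooled(0.5, 1.1, N), mse_unsponsored_only(0.5, N))
```

For a mild bias like k=1.1, keeping the sponsored studies gives the lower MSE for any realistic field size here (the crossover is at several hundred studies), which is the point: discarding a quarter of the evidence costs more than a small bias does.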