Is most research a waste?

Over at 80,000 Hours we have been looking into which research questions are most important or most neglected. As part of that, I was recently lucky enough to have dinner with Iain Chalmers, one of the founders of the Cochrane Collaboration. He pointed me to this helpful summary of reasons to think most clinical research is predictably wasteful:

“Worldwide, over US$100 billion is invested every year in supporting biomedical research, which results in an estimated 1 million research publications per year

a recently updated systematic review of 79 follow-up studies of research reported in abstracts estimated the rate of publication of full reports after 9 years to be only 53%.

An efficient system of research should address health problems of importance to populations and the interventions and outcomes considered important by patients and clinicians. However, public funding of research is correlated only modestly with disease burden, if at all.6–8 Within specific health problems there is little research on the extent to which questions addressed by researchers match questions of relevance to patients and clinicians. In an analysis of 334 studies, only nine compared researchers’ priorities with those of patients or clinicians.9 The findings of these studies have revealed some dramatic mismatches. For example, the research priorities of patients with osteoarthritis of the knee and the clinicians looking after them favoured more rigorous evaluation of physiotherapy and surgery, and assessment of educational and coping strategies. Only 9% of patients wanted more research on drugs, yet over 80% of randomised controlled trials in patients with osteoarthritis of the knee were drug evaluations.10 This interest in non-drug interventions in users of research results is reflected in the fact that the vast majority of the most frequently consulted Cochrane reviews are about non-drug forms of treatment.

New research should not be done unless, at the time it is initiated, the questions it proposes to address cannot be answered satisfactorily with existing evidence. Many researchers do not do this—for example, Cooper and colleagues 13 found that only 11 of 24 responding authors of trial reports that had been added to existing systematic reviews were even aware of the relevant reviews when they designed their new studies.

New research is also too often wasteful because of inadequate attention to other important elements of study design or conduct. For example, in a sample of 234 clinical trials reported in the major general medical journals, concealment of treatment allocation was often inadequate (18%) or unclear (26%).16 In an assessment of 487 primary studies of diagnostic accuracy, 20% used different reference standards for positive and negative tests, thus overestimating accuracy, and only 17% used double-blind reading of tests.17

More generally, studies with results that are disappointing are less likely to be published promptly,19 more likely to be published in grey literature, and less likely to proceed from abstracts to full reports.2 The problem of biased under-reporting of research results mainly from decisions taken by research sponsors and researchers, not from journal editors rejecting submitted reports.20 Over the past decade, biased under-reporting and over-reporting of research have been increasingly acknowledged as unacceptable, both on scientific and on ethical grounds.

Although their quality has improved, reports of research remain much less useful than they should be. Sometimes this is because of frankly biased reporting—eg, adverse effects of treatments are suppressed, the choice of primary outcomes is changed between trial protocol and trial reports,21 and the way data are presented does not allow comparisons with other, related studies. But even when trial reports are free of such biases, there are many respects in which reports could be made more useful to clinicians, patients, and researchers. We select here just two of these. First, if clinicians are to be expected to implement treatments that have been shown in research to be useful, they need adequate descriptions of the interventions assessed, especially when these are non-drug interventions, such as setting up a stroke unit, offering a low fat diet, or giving smoking cessation advice. Adequate information on interventions is available in around 60% of reports of clinical trials;22 yet, by checking references, contacting authors, and doing additional searches, it is possible to increase to 90% the proportion of trials for which adequate information could be made available.22

Although some waste in the production and reporting of research evidence is inevitable and bearable, we were surprised by the levels of waste suggested in the evidence we have pieced together. Since research must pass through all four stages shown in the figure, the waste is cumulative. If the losses estimated in the figure apply more generally, then the roughly 50% loss at stages 2, 3, and 4 would lead to a greater than 85% loss, which implies that the dividends from tens of billions of dollars of investment in research are lost every year because of correctable problems.”
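To make the compounding explicit, here is a minimal sketch of the arithmetic behind that "greater than 85%" figure. The 50% per-stage loss is from the quote; the stage numbering follows Chalmers and Glasziou's figure, which I have not reproduced here:

```python
# If roughly half of research output is lost at each of three successive
# stages, the losses compound multiplicatively rather than adding up.
per_stage_loss = 0.5
stages = 3  # stages 2, 3 and 4 in Chalmers and Glasziou's figure

surviving = (1 - per_stage_loss) ** stages  # 0.5 ** 3 = 0.125
cumulative_loss = 1 - surviving             # 0.875

print(f"surviving fraction: {surviving:.3f}")
print(f"cumulative loss: {cumulative_loss:.1%}")
```

Because each stage filters only what survived the previous one, three 50% losses give an 87.5% cumulative loss — comfortably above the 85% the authors cite.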

His assessment was that the research profession could not be expected to fix these problems internally: it had not done so already despite widespread knowledge of them, and has no additional incentive to do so now. External intervention is needed, and some options are proposed in the paper.

There is a precedent for this. The US recently joined a growing list of countries that have helped their researchers coordinate to weaken the academic publishing racket, by insisting that publicly-funded research be free and openly available within a year. So long as academics are permitted to publish publicly-funded research in pay-for-access journals, established and prestigious journals can earn big rents by selling their prestige to researchers – to help them advance their careers – in exchange for copyright on their publicly-funded research. Now that researchers aren’t permitted to sell that copyright, an individual who would refuse to do so on principle won’t be outcompeted by less scrupulous colleagues.

Likewise, rules that require everyone receiving public money to do the public-spirited thing, for instance by checking for systematic reviews, publishing null results, pre-registering their approach to data analysis, opening their data to scrutiny by colleagues, and so on, would make it harder for unscrupulous researchers to get ahead with corner-cutting or worse chicanery.

  • Anonymous

    If most research really is a waste, the blame should fall mostly on the perverse incentives created by the modern research funding model.

    Most research money is locked up in grants that fund you for one to five years. To win a grant in the first place, and to make sure you will win another five years from now, you are incentivized to do two things. The first is to choose your research so that the grant committee will like it, i.e. pick some fashionable branch of your field that already has too many people working on it. The second is to publish as much as you can without caring too much about the quality or importance of your research. It’s better to publish self-evident results on a fashionable subject than to attempt something truly original (and potentially more important) that has a large chance of failure. Your research methodology doesn’t matter much either, since everyone who could potentially spot the faults is playing the same game.

    In a nutshell: for a risk-averse, rational scientist, doing incremental, self-evident and sloppy research is a better strategy in today’s funding system than doing big, original research and checking your results carefully.

  • John Goulden

    Speaking as a STEM professor at a private liberal-arts college: yes, much published research is crap. These publications exist because publication is required for tenure. For proof, just look at citation indices: much published research is never cited by anyone other than its own authors. It’s actually not so bad in STEM but, God, the humanities…

  • Robert Koslover

    Is most research exploring whether most research is a waste a waste?
    Let’s fund some research to answer that!

  • http://juridicalcoherence.blogspot.com/ srdiamond

    Two kinds of obviously wasteful research: “research” on the “Singularity” and on “Ems.”

  • nate

    I see the other commenters’ points about bad incentives; I have another idea. 

    Isn’t making predictions extremely difficult?  Wouldn’t it make sense that predicting the efficacy/usefulness of your research could be difficult? 

    I know people can point to poetry analysis or whatever to poke fun at research papers, but why should research be exempt from the difficulty in making predictions? 

    How can people anticipate the effects of basic research?  Think of the people who said that no one would watch television, etc.

  • Douglas Knight

    This seems a bit abstract. What does “waste” mean? From whose perspective? Compared to what alternative? At times, you write as if you are talking about publicly funded research, while at others, you (or Chalmers and Glasziou) write as if you controlled all research. 

    Chalmers and Glasziou say that there is too much drug research. Presumably this is funded by drug companies, not public agencies. In what sense is this a waste? From the point of view of funding agencies, it isn’t under their control, so it isn’t a waste. Presumably it is economically efficient; companies do this because they can monetize the research. This seems like a problem with the other types of research, not with the drug companies or their research. The revenue on the drugs pays not only for the drug research, but also for promulgating information about the treatment. Drug companies run large studies, produce meta-analyses, and make sure doctors know about them, counteracting other kinds of research waste Chalmers and Glasziou mention. Drug companies make money by changing medical practice; this clear goal allows them to avoid many forms of waste, unlike specialists trying to add a line to a CV or, at most, brag that they have made a great discovery.

    There is often an insinuation that corporate research is more corrupt. It certainly is corrupt, and all forms of research corruption should be fought. If it is more corrupt, this might inflate (or deflate!) the amount of drug research. Solving this problem might reduce the ratio of drug to non-drug research, but it will remain high because drug companies can make money off of drugs. In principle, it could be a problem that drug research drowns out the non-drug research, but Chalmers and Glasziou say that patients want non-drug treatments. If they know that they want non-drug research, the existence of drug research should not interfere with their ability to find the non-drug literature. Yes, the drug reps have the ear of their physicians, but it’s not as if the drug reps are competing with anyone trying to tell those physicians about non-drug treatments.

    (I don’t mean to say that monetizing non-drug treatments is the only solution, but someone has to pay not just for basic research, but for synthesizing it and communicating it to doctors. Eliminating drug research won’t solve that problem.)

    As to your proposals, the devil is in the details. Pre-registration of trials is fairly simple and also fairly widely adopted. Requiring people to publish their data is something that many journals claim to do, but in practice they don’t enforce it. Demanding that people read the literature seems very hard to formalize or enforce.

    I am curious about Chalmers and Glasziou’s claim “The problem of biased under-reporting of research results mainly from decisions taken by research sponsors and researchers, not from journal editors rejecting submitted reports.” Source 20 appears to be this. I find the insinuation about research sponsors not justified by the source. It does say that journal editorial decisions are not creating more publication bias, although that is a pretty weak claim, because the authors have already created publication bias, perhaps because they have taken into account editorial bias. Indeed, since the authors submit negative results to inferior journals, the apparent lack of bias by the editors is really evidence that authors correctly assess said bias. (The correct conclusion depends on a lot of details, but I think that’s it.) The lower rate of bias on larger studies is also mildly suggestive.

    • Michael Vassar

      I’m so very annoyed by serious and engaging comments like this never getting actual engagement on this site.

    • robertwiblin

      Agree with most of this.

      Waste is research which does less to improve welfare than it otherwise might.

      The pro-drug bias would occur if it were easier to get patents and revenue from drugs than from other equally useful inventions or information. That would require a different intervention from a situation where the government was promoting the research directly, but intellectual property law is still government policy, and it can set priorities well or badly.

      • Douglas Knight

        I think it is a bit confused to talk about a drug bias: to look at the ratio of drug research to non-drug research and compare that to patient interest. Yes, comparing those ratios and trying to explain the discrepancy is a useful simple heuristic. But emphasis on the ratio distracts from the two quantities, which should be optimized separately.
        If we change the IP regime to make drugs not viable or to make other medical interventions viable, then the ratio would change. But I think it would mostly change because the one component increased or the other decreased, not because there is a trade-off between them. At some level of research (especially in the short term) there is a trade-off because there is a limited pool of researchers, but I don’t think that we’re there.

        Since I don’t think that there is a trade-off between types of medical research, I don’t think one should talk in terms of rebalancing them. Instead, they must be compared to things outside of medicine. Do we want more medical research or less? One way to achieve the former is to ease the monetizing of non-drug treatments; one way to achieve the latter is to weaken drug patents. Also, while the current IP regime is not perfect, it isn’t arbitrary, either. Drugs are easier to isolate and treat as IP. Also, they are easier to isolate and treat as not the practice of medicine, allowing venture capital.

        Also, writing in medical journals might result in journals requiring the publication of data, but it is not going to change IP laws.

  • http://juridicalcoherence.blogspot.com/ srdiamond

    So, per a recent posting, you maintain that U.S. laissez-faire capitalism is great stuff because the research it encourages (particularly in pharmaceuticals) benefits the Third World—but it turns out, said research is mostly waste.

    • Elithrion

      Research elsewhere is not any better, and it’s not so wasteful that it isn’t, in aggregate, worth doing.

    • robertwiblin

      I noted in that post that there are good reasons for governments to direct science research. Unfortunately, it’s not clear that publicly funded research generally does much better (though it’s an open question). Neither profit nor political incentives focus the mind on helping the poor overseas.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        In that post, you had written:

        While tempering the ravages of the market may on balance improve the welfare of current Americans, doing so is likely to lead to less experimentation in science, equipment, software, art, business models and so on.

        The point isn’t about public versus private research but rather about your conclusion that current research (in whatever form) is waste. How is laissez-faire capitalism benefiting the Third World by means of wasteful experimentation and research?

        And while not all “innovation” involves “research,” the discovery that research is wasteful should, perhaps, lead you to wonder whether other touted forms of “innovation” aren’t also wasteful.

      • robertwiblin

        It would suggest research is even more important, if a small minority of existing research is generating all the benefits (e.g. new products, higher productivity) we observe.

  • MPS

    I don’t have time to read your post at the moment but I’ve always thought it was quite obvious and acceptable that most “research” is a “waste.”  

    This is, indeed, why we create intellectual property to protect positive outcomes of research investments, and why we fund research publicly.  

    The basic dynamic with research is that you spend a lot of time pursuing unproductive ends and occasionally find a productive outcome.  In a free capitalist market, the agent performing the research invests in mostly unproductive activities and occasional productive activities, while other agents can simply wait and copy the productive activities.  In this environment it is not worthwhile to invest in research.  And so we create intellectual property, which gives agents performing research temporary monopoly protections on the productive outcomes of their work, and we fund public research, which socializes the costs and benefits.

    Your post seems to approach a more detailed question about medical research, but I can’t help but think it all fits into this context.  There will always be a compartmentalization of a general program of research whereby most of it is wasteful.  The important question is whether the *entire* program of research — “entire” meaning that which can be pursued in isolation of any outside inputs — is net positive or negative.


  • Stephanie Wykstra

    Interesting post!
    On the topic of preregistration: I recently noticed some disappointing results of the ICMJE commitment to preregister (less than half of studies were preregistered, though the journals committed to requiring preregistration of all studies). I noted the details here http://www.highqualityevidence.org/2013/02/half-broken-preregistration-commitment.html.

    One thing that worried me: in 1/3 of even the preregistered studies, the primary outcome was different in some way in the published results (suggesting possible cherry-picking) – it makes me wonder how often that’s a problem generally. Obviously, these solutions are only as good as their implementation!
