Popular Fields Less Accurate

From a recent PLOS paper:

It has been suggested that the reliability of findings published in the scientific literature decreases with the popularity of a research field. Here we provide empirical support for this prediction. We evaluate published statements on protein interactions with data from high-throughput experiments. We find evidence for two distinctive effects. First, with increasing popularity of the interaction partners, individual statements in the literature become more erroneous. Second, the overall evidence on an interaction becomes increasingly distorted by multiple independent testing.
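The "multiple independent testing" distortion the abstract describes has a simple statistical core: if many independent groups test the same (true null) hypothesis at significance level α, the chance that at least someone obtains a publishable false positive grows quickly with the number of tests. A minimal illustrative sketch (mine, not from the paper):

```python
# Chance of at least one false positive among n independent tests of a
# true null hypothesis at significance level alpha. With publication
# biased toward positive results, popular hypotheses accumulate
# spurious support as more groups test them.

def prob_false_positive(n_tests: int, alpha: float = 0.05) -> float:
    """Probability of >= 1 false positive across n_tests independent tests."""
    return 1.0 - (1.0 - alpha) ** n_tests

for n in (1, 5, 20, 100):
    print(f"{n:3d} tests -> {prob_false_positive(n):.2f}")
```

At α = 0.05, twenty independent tests already give a roughly 64% chance that at least one spurious positive exists to be published.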

This is an important point: typical academic processes tend to produce more reliable results when no one cares or pays much attention; do not assume they give the same reliability on high-profile topics.  I’ve seen this trend clearly in economics.

This trend cuts both ways.  Just because you are part of a field that seems to produce reliable results off in your largely unnoticed corner, don’t assume the high-profile bigshots in your field who get more outside attention are as reliable.  And just because the public bigshots in another field that you notice seem to you sloppy and sleazy, don’t assume that those laboring in the shadows of that field know nothing.

This phenomenon helps explain why we need prediction markets for academic topics, and why most academics may not perceive that need.

  • Constant

    we need prediction markets

    I just thought of a subtitle for this blog.

    • http://lesswrong.com/ CannibalSmith

      +1

  • Mike

    This is an important point: typical academic processes tend to produce more reliable results when no one cares or pays much attention

    My initial reaction was the opposite. I thought to explain this by supposing peer review was more lax in “popular” fields because, to put it one way, affiliation with the popular field acted as a sort of credential. Meanwhile, peer review would be more stringent in non-mainstream fields because more novel ideas invite more critical analysis (they don’t have a history of surviving criticism as does the popular field). And authors respond to these trends, too, in the care they bring to their work. In some sense, I suspect the cost in status among a group of researchers of being wrong about an idea that fits intuitively into mainstream ideas, is lower than the cost of being wrong about an idea that is novel or goes against the grain.

    • http://hanson.gmu.edu Robin Hanson

      This result has nothing to do with ideas going against the grain, nor is it about weaker peer review; the popular results are more wrong.

      • Mike

        Looking at the paper, I find two suggested explanations for the results: (1) in more “popular” fields there might be stronger incentive to seek schemes to minimize error bars, thus making the statistical significance of results seem stronger than it is, and (2) if a hypothesis is tested many times, it becomes more probable that someone will obtain a false positive and publish it (presumably people are more likely to publish positive results than negative ones).

        These seem to fit into what I said, though I agree I am extrapolating. But, analogous to (1), when the result matches expectations (and expectations are clearer in more developed, more popular fields) one might be more prone to minimize the uncertainties of that result. Related to (2), when a result goes “against the grain,” one is perhaps more careful to perform more tests and to verify it is not a false positive.

        That being said, I admit my original post was made under a misunderstanding of what this was all about — I was thinking more in terms of technical errors of analysis (like a calculation that relies on an assumption that is in fact inconsistent or inappropriate), not in terms of statistical errors due to experimental limitations.

  • q

    why would prediction markets be more reliable rather than less reliable in this case?

    are you seeing prediction markets as a kind of filter or censor — to focus on “correct” beliefs?

    why would the bettors be more reliable than the researchers, and who would decide what the bets are and who wins the bets? since truth and new information are often very very illiquid in academic research, why would betting markets function any better than what we have now? (i am taking popularity = liquidity of the field beyond what can be cleared by the underlying market in truth/information.)

    nb another possible explanation is that interest follows genuine controversy — ie where there are actual different views.

    • http://hanson.gmu.edu Robin Hanson

      The usual problem is that popular topics give stronger distorting incentives, and prediction markets do very well at resisting trader incentives to manipulate prices.

      • q

        truth and scientific progress are illiquid and answers often come after a long time, sometimes decades or centuries. how would you separate duration and outcome uncertainties in financing the market?

  • Requia

    By popular do they mean more papers published in the field or more people outside the field paying attention?

    • Douglas Knight

      “field” is a misleading description of the article, though it may be a reasonable inference. The range of the paper is proteins. The measure is number of papers published, but that doesn’t explain why some proteins are popular. I imagine that it’s because they’re expected to be medically relevant, that is, outside attention, or the expectation thereof. But I don’t think the paper addresses the issue of outsiders wanting particular answers, as if the protein were already relevant to a drug. That would produce bias and not just small error bars, but would probably be hard to test in such a large automated study.

      • Douglas Knight

        I should have said something more like “‘field’ is a misleading description of the experiment”; the article is already using the word “field,” although more for the general theory than for the experiment. It seems likely that proteins cut across the organization of biology, but it’s not obvious that popular proteins are the same as popular fields. It seems plausible that popular fields also have unpopular proteins. Do people who do sloppy work on popular proteins do sloppy work on unpopular proteins? (if the problem is raw attention leading to publication bias, then no; if competition creates fraud, then maybe)

  • mike

    Couldn’t there be a reverse causation, with false results more likely to be interesting and thus to attract more studies (or even possibly, bad results spurring studies to disprove them)?

  • George Weinberg

    This phenomenon helps explain why we need prediction markets for academic topics, and why most academics may not perceive that need.

    I think it’s fair to say that any action which has significant consequences will have unanticipated consequences, and that in general the unintended consequences of a restriction on human action will usually overall be negative.

    But for prediction markets to work, what is being predicted must be well defined. It’s easy for me to say something like “your idea may well accomplish what it intends, but it will have negative results overall”, but even if I am quite confident that this is true, given that I don’t know what the negative results will be beforehand and can’t prove that the results followed from the policy after, nor can we agree how bad they are, how can we come up with a reasonable bet?

  • George

    Very interesting article. Perhaps this recent article in BMJ is related:
    How citation distortions create unfounded authority

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Hypothesis: Popular fields attract people who are relatively more interested in status and relatively more inclined to conformity.

    Testable prediction: Run some standard conformity experiments on scientists in fields that were more popular or less popular at the time the scientist joined them.