Who Wants Unbiased Journals?

Five years ago I proposed result-blind peer review, which I later revised. Brendan Nyhan just posted a nice long review of many such proposals, including a recent test at the journal Archives of Internal Medicine:

The … alternate review process was applied to the editorial review that occurred prior to outside peer review. … Of the 46 articles examined, 28 were positive, and 18 were negative. … Ultimately, 36 of the 46 articles (>77%) were rejected. … Editors were consistent in their assessment of a manuscript in both steps of the review process in over 77% of cases. … Over 7% of positive articles benefited from editors changing their minds between steps 1 and 2 of the alternate review process, deciding to push forward with peer review after reading the results. By contrast, … this never occurred with the negative studies. Indeed, 1 negative study, which was originally queued for peer review after an editor’s examination of the introduction and “Methods” section, was removed from such consideration after the results were made available. (more)
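
A quick check of the quoted arithmetic (the 2-of-28 count behind “over 7%” is inferred from the excerpt, not stated there):

```python
# Counts reported in the Archives of Internal Medicine test.
total_articles = 46
positive, negative = 28, 18
rejected = 36

print(rejected / total_articles)  # 0.7826... -> the ">77%" rejection rate

# "Over 7% of positive articles" is consistent with 2 of the 28
# positive papers (an inferred count, not given in the excerpt):
print(2 / positive)               # 0.0714... -> "over 7%"
```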

So even with two-stage review, journal editors are tempted to publish papers with weak methods but positive results. And why not? Unless important customers insisted, why would a journal handicap itself by committing not to publish such papers, which bring it more fame and prestige?

Journal customers include universities, which tenure professors who publish in prestigious journals, and grant givers, who prefer grantees who publish similarly. But why should these customers handicap themselves? They also win by affiliating with those who publish papers with weak methods but positive results.

I’ve suggested that academia functions primarily to credential people as impressive and interesting in certain ways, so that outsiders, like students and patrons, can gain prestige by affiliating with them. If so, and if those who publish weak-method positive results are in fact more impressive and interesting than those who publish stronger-method negative results, there is little prospect of getting rid of this publication bias.

What is possible is to augment publications with betting market prices estimating the chance each result will be upheld by future research. This would let readers get unbiased estimates on the reliability of research results. Alas, it seems there is no customer willing to pay extra to get such reliability estimates. Most everyone involved in the process mainly cares about signals of impressiveness; few care much about which research results are actually true.
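
For concreteness, here is a minimal sketch (in Python) of how such a market could work, using a logarithmic market scoring rule, a mechanism I have described elsewhere; the class and numbers below are illustrative, not a finished design:

```python
import math

class ReplicationMarket:
    """Binary betting market on "will this result be upheld by future
    research?", run by a logarithmic market scoring rule (LMSR).
    Illustrative sketch only: prices double as probability estimates,
    and the sponsor's worst-case loss (the subsidy) is b * ln(2)."""

    def __init__(self, b=100.0):
        self.b = b            # liquidity parameter: larger b = deeper market
        self.q = [0.0, 0.0]   # shares outstanding: [upheld, not upheld]

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        """Current probability the market assigns to `outcome` (0 or 1)."""
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns the cost charged to the trader."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = ReplicationMarket(b=100.0)
print(market.price(0))       # 0.5: no information yet
paid = market.buy(0, 50.0)   # someone bets the result will be upheld
print(market.price(0), paid) # price rises to ~0.62; trader paid ~28.1
```

Traders who think a paper’s methods are weak profit by pushing its price down; readers need only read the price.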

  • http://www.mccaughan.org.uk/g/ g

    Robin, are you aware at all that there is a distinction between each pair of the following propositions? (1) “No one has so far organized a prediction market for X.” (2) “No one in a position to set one up considers a prediction market for X worthwhile.” (3) “No one in a position to set one up considers a prediction market for X to have any value.” (4) “Scarcely anyone cares about accurate estimates for X.” (5) “Just about everyone involved in X is merely signalling.”

    I ask because this is by no means the first time you’ve mentioned some topic, observed #1, and leaped immediately to #4 and thence to #5 without showing any sign that the leap is anything other than a trivial straightforward inference.

    It is not, or at least shouldn’t be, a trivial straightforward inference. 1->2 is wrong because it’s possible that the people concerned simply have other higher priorities. 2->3 is wrong because a prediction market might have value but not enough value to justify the time and expense of making one, promoting it sufficiently to get it used enough to be useful, etc. 3->4 is wrong because someone might not know about prediction markets, or might not agree with you about their merits, or might think they’re good in general but not in case X. 4->5 is probably right in some cases, but in general people might be doing X just for fun, or because they’re compelled to by others, or something. The inference from 1->5 seems to me more in the category of “absurdly wrong” than “obviously right”.

    In the present case, the value of a betting market seems conditional on having a substantial number of people who (1) know a lot about the technical area in question and (2) are interested in betting on it, and on not having them swamped by others who (3) have a vested interest in distorting perceptions. #1 and #2 are problematic in many cases (for most academic questions, experts are rare and so are people who care much). For some medical questions #2, at least, may be less of an issue, but those are exactly the ones for which #3 is more worrying; for instance, pharmaceutical companies might well find that betting to distort prediction market prices is a good investment, and a better investment than betting to find out the truth.

    • http://hanson.gmu.edu Robin Hanson

      I can’t put all my evidence for a belief in every post that mentions that belief. I’ve posted on this subject many times. These sorts of markets need at most one person per market willing to express an opinion once, plus a wider pool who browse for errors.

      • http://www.mccaughan.org.uk/g/ g

        I wasn’t suggesting that you put all your evidence in every post (nor even that you put *any* evidence in every post, which might actually be an improvement); my complaint isn’t that the inference 1->2->3->4->5 is *insufficiently supported by the evidence you’ve given* but that it’s *wrong*. Unless it is universally agreed that prediction markets are valuable — which it isn’t — the fact that any given set of people don’t make one is not good evidence that they’re about something other than truth.

        “At most one person plus a wider pool”: the point, of course, is the wider pool and the question of who’s going to make it up.

      • http://hanson.gmu.edu Robin Hanson

        g, if someone would pay to subsidize markets on journal article reliability, lots of folks would gladly play the role of browsing for errors. The key point is that no one offers to pay such subsidies.
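
        (For scale: under the illustrative LMSR sketch in the post, the sponsor’s worst-case loss on a binary market is b · ln 2, about 69 at b = 100, so such a subsidy would be a capped, known cost per article rather than an open-ended liability.)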

  • http://www.johnicholas.com Johnicholas

    Is it feasible to take one person’s or a few people’s enthusiasm for this project and just start doing it? Prestige can be altered by determined, substantive criticism.

    People wouldn’t submit to a “journal” that doesn’t have any (initial) reputation, but perhaps it could be a “moot journal” that reviews arXiv preprints, or papers already published in one or several open access journals.

    • http://hanson.gmu.edu Robin Hanson

      No, prestige usually can’t be altered by such criticism.

  • http://filedrawer.wordpress.com Chris Said

    If academics are more interested in signaling impressiveness than in discovering truth, then we need to better align their incentives with truth discovery. As I have written about here, the granting agencies should give grant preferences (and hence prestige) to researchers who submit to good-practice journals.

    • http://hanson.gmu.edu Robin Hanson

      Granting agencies are players in a game just like the rest. Why expect them any more than the rest to sacrifice personal gain for some larger social benefit?

      • http://daedalus2u.blogspot.com/ daedalus2u

        Very well stated. It is the prisoner’s dilemma: no one wants to defect first.

      • http://filedrawer.wordpress.com Chris Said

        Robin – How is it against the agencies’ interests to better incentivize good-practice journals? It seems like the biggest impediments are just lack of awareness and institutional inertia. Call me crazy, but it is possible to change bureaucracies for the better, even if only on the margins. Raising awareness is the first step. Much better than throwing one’s hands up in the air.

      • http://daedalus2u.blogspot.com/ daedalus2u

        It isn’t against the agencies’ interests; it is against the interests of the agencies’ employees, because it is against the interests of the congressmen who fund the agencies, because it is against the interests of the lobbyists who fund the congressmen.

        Look at the NCCAM.

        https://en.wikipedia.org/wiki/NCCAM

        If NCCAM didn’t have hundreds of millions to spend on nonsense, the congressmen who support it wouldn’t get millions in campaign contributions.

        With NCCAM it is pretty easy to see, because it is all nonsense, even more so than what the Military-Industrial Complex spends money on, or farm subsidies, oil industry subsidies, and so on.

        The problem isn’t the bureaucrats, the problem is who incentivizes the bureaucrats, and to do what. The surest way to get fired is to blow the whistle on malfeasance of higher-ups.

  • http://aguanomics.com David Zetland

    Robin — I, like you, am depressed by the emphasis on headlines instead of impact, but your arbitrage suggestion will not take off until it’s more profitable ($) to publish accuracy instead of attention (power). The problem, indeed, is that “leaders” want positive vibes to transmit to voters, no matter how irrelevant (most environmental topics, my area…)

  • Don

    This problem has been a persistent issue within the scientific community for hundreds of years. As an example, a famous biologist decided that humans had the same number of chromosomes as monkeys because obviously we had descended from that species.

    In 1955, two scientists in Lund, Sweden rocked the world by proving that we had only 46. The technology to prove this had been available for over two decades, but one of the world’s most prominent biologists was adamant that it was 48. In fact, people had seen this before, but nobody would publish it.

    Scientists are just like everybody else… despite the fact that many feel they are different.

    • http://daedalus2u.blogspot.com/ daedalus2u

      Some people-scientists are like non-scientists and are motivated by the normal non-science things that ordinary-people are motivated by. These are the people-scientists who strive to be leaders of large organizations and to amass power and prestige and authority over underlings.

      This type of scientist does best working at the interface between real-scientists and ordinary-people. They are able to channel either way, depending on what is needed. When the funders of their science want certain results, the people-scientists can produce whatever the non-scientists want, for a fee.

      Much of the problem in science relates to how non-scientists mandate how science research is funded. Non-scientists don’t understand science or how science is done, so they apply human metrics to try to make it more efficient by inducing competition: competition for prestige, competition for funding, competition for authority, competition to determine who is right and who is wrong.

      The problem is that reality doesn’t care about human competition metrics. Competition gets in the way of understanding science. Working together in cooperation would make doing science easier, but ordinary-people don’t understand that. Making funding contingent on competition doesn’t select for the best scientists, it selects for the best competitors.

      The easiest way to compete is by having monopoly power. Whoever controls the monopoly power will win every competition. But then you haven’t decided who is right; you have only decided who has won the competition.

      Humans really like to have a top-down power hierarchy. That is how essentially all human institutions are arranged. Reality isn’t top-down, but humans have a compulsion to imagine that it is by postulating a supreme entity at the top (i.e. God), or by making one (Pope, president, king, fAI) and attributing God-like powers to it.

  • Dave

    In some ways it is worse than you think. There is no reason to invoke signaling. If research is supported by money-making outfits such as drug or device manufacturers, few negative studies will be published. Also, promoter/entrepreneur-style doctors are at times little better than quacks.
    So the most powerful signals are dollars. You are an economist. What’s all this signaling stuff?

    However, it is not that simple. In medical academia there are competitive groups, just as in regular universities. It is entirely wrong to say that medical academicians are uncritical simpletons. There is an atmosphere of acerbic criticism here. Visit a departmental journal club some time and you will see one paper after another being napalmed by the attending professors.
    There are also vigorous debates between specialties, such as cardiologists versus cardiovascular surgeons.

    The real simpletons are the people who consume the offerings of the media, where the quacks and promoters, including academics, feed the public baloney. This is not to say that there are no biases, including publication bias.

  • arch1

    Thanks for posting on this topic, from an interested citizen who indirectly consumes this stuff.

    1) When skimming the summary I found myself wondering how much of the 77 percent agreement between the stages was artificial, that is, reflecting editors’ reluctance to change decisions precisely because that would lay bare their results bias.

    Lurching from cynicism toward idealism,

    2) Betting markets or otherwise, it’s hard for me to understand (or perhaps to accept) why a grassroots movement among secure, principled researchers in at least one academic discipline could not get the ball rolling in a good direction. Such people exist or I’m the Queen of England. Idealistic funders exist. Quality research measurably stands up over time. And, the problem being addressed cuts to the core of science.

  • http://un-thought.blogspot.com/ Floccina

    I do not care much if they are biased; I just wish that they knew more about the subjects they speak about.

  • Michael Wengler

    Regardless of the logic of the study, I will typically use my scarce time to learn of a somewhat plausible cure for what ails me rather than spending it admiring the details of an intelligently designed study that has nothing to add to my life.  

    Ultimately, the entire purpose of publications of finite abstractions of complex studies is to save me the time of becoming an expert myself in order to possibly benefit from somebody else’s conclusions.  I think THIS is why the logical positivists will never win the quality vs utility argument in what earns the attention of readers.  

  • Pingback: Overcoming Bias : Fixing Academia Via Prediction Markets