Five years ago I proposed result-blind peer review, and I revised it later. Brendan Nyhan just posted a nice long review of many such proposals, including a recent test at the journal Archives of Internal Medicine.
Regardless of the logic of the study, I will typically use my scarce time to learn of a somewhat plausible cure for what ails me rather than spending it admiring the details of an intelligently designed study that has nothing to add to my life.
Ultimately, the entire purpose of publishing finite abstractions of complex studies is to save me the time of becoming an expert myself, so that I might benefit from somebody else's conclusions. I think THIS is why the logical positivists will never win the quality vs utility argument over what earns the attention of readers.
I do not care much if they are biased; I just wish that they knew more about that of which they speak.
Thanks for posting on this topic, from an interested citizen who indirectly consumes this stuff.
1) When skimming the summary I found myself wondering how much of the 77 percent agreement between the stages was artificial - that is, reflecting reviewer reluctance to change decisions precisely because that would lay bare their results bias.
Lurching from cynicism toward idealism,
2) Betting markets or otherwise, it's hard for me to understand (or perhaps to accept) why a grassroots movement among secure, principled researchers in at least one academic discipline could not get the ball rolling in a good direction. Such people exist or I'm the Queen of England. Idealistic funders exist. Quality research measurably stands up over time. And, the problem being addressed cuts to the core of science.
In some ways it is worse than you think. There is no reason to invoke signaling. If research is supported by money-making outfits such as drug or device manufacturers, few negative studies will be published. Also, promoter/entrepreneur-style doctors are at times little better than quacks. So the most powerful signals are dollars. You are an economist. What's all this signal stuff?
However, it is not that simple. In medical academia there are competitive groups, just as in regular universities. It is entirely wrong to say that medical academicians are uncritical simpletons. There is an atmosphere of acerbic criticism here. Visit a departmental journal club some time and you will see one paper after another being napalmed by the attending professors. There are also vigorous debates between specialties, such as cardiologists vs cardiovascular surgeons.
The real simpletons are the people who consume the offerings of the media, where the quacks and promoters, including academia, feed the public baloney. This is not to say that there are not biases, including publication bias.
It isn't against the agencies' interests; it is against the interests of the agencies' employees, because it is against the interests of the congressmen who fund the agencies, because it is against the interests of the lobbyists who fund the congressmen.
Look at the NCCAM.
If NCCAM didn't have hundreds of millions to spend on nonsense, the congressmen who support it wouldn't get millions in campaign contributions.
With NCCAM it is pretty easy to see because it is all nonsense. Even more so than what the Military Industrial Complex spends money on, or farm subsidies, oil industry subsidies, and so on.
The problem isn't the bureaucrats, the problem is who incentivizes the bureaucrats, and to do what. The surest way to get fired is to blow the whistle on malfeasance of higher-ups.
g, if someone would pay to subsidize markets on journal article reliability, lots of folks would gladly play the role of browsing for errors. The key point is that no one offers to pay such subsidies.
Robin - How is it against the agencies' interests to better incentivize good-practice journals? It seems like the biggest impediments are just lack of awareness and institutional inertia. Call me crazy, but it is possible to change bureaucracies for the better, even if only on the margins. Raising awareness is the first step. Much better than throwing one's hands up in the air.
I wasn't suggesting that you put all your evidence in every post (nor even that you put *any* evidence in every post, which might actually be an improvement); my complaint isn't that the inference 1->2->3->4->5 is *insufficiently supported by the evidence you've given* but that it's *wrong*. Unless it is universally agreed that prediction markets are valuable -- which it isn't -- the fact that any given set of people don't make one is not good evidence that they're about something other than truth.
"At most one person plus a wider pool": the point, of course, is the wider pool and the question of who's going to make it up.
Very well stated. It is the prisoner's dilemma, no one wants to defect first.
Granting agencies are players in a game just like the rest. Why expect them any more than the rest to sacrifice personal gain for some larger social benefit?
No, prestige usually can't be altered by such criticism.
I can't put all my evidence for a belief in every post that mentions that belief. I've posted on this subject many times. These sort of markets need at most one person per market willing to express an opinion once, plus a wider pool who browse for errors.
Some people-scientists are like non-scientists and are motivated by the normal non-science things that ordinary-people are motivated by. These are the people-scientists who strive to be leaders of large organizations and to amass power and prestige and authority over underlings.
This type of scientist does best working at the interface between real-scientists and ordinary-people. They are able to channel either way, depending on what is needed. When the funders of their science want certain results, the people-scientists can produce whatever the non-scientists want, for a fee.
Much of the problem in science relates to how non-scientists mandate that science research be funded. Non-scientists don't understand science or how science is done, so they apply human metrics to try and make it more efficient by inducing competition; competition for prestige, competition for funding, competition for authority, competition to determine who is right and who is wrong.
The problem is that reality doesn't care about human competition metrics. Competition gets in the way of understanding science. Working together in cooperation would make doing science easier, but ordinary-people don't understand that. Making funding contingent on competition doesn't select for the best scientists, it selects for the best competitors.
The easiest way to compete is by having monopoly power. Whoever controls the monopoly power will win every competition. But then you haven't decided who is right; you have decided who has won the competition.
Humans really like to have a top-down power hierarchy. That is how essentially all human institutions are arranged. Reality isn't top-down, but humans have a compulsion to imagine that it is by postulating a supreme entity at the top (i.e. God), or by making one (Pope, president, king, fAI) and attributing God-like powers to it.
This problem has been a persistent issue within the scientific community for hundreds of years. As an example a famous biologist decided that humans had the same number of chromosomes as monkeys because obviously we had descended from that species.
In 1955 two scientists in Lund, Sweden rocked the world by proving that we had only 46. The technology to prove this had been available for over two decades, but one of the world's most prominent biologists was adamant that it was 48. In fact, people had seen this before, but nobody would publish it.
Scientists are just like everybody else... despite the fact that many feel they are different.
If academics are more interested in signaling impressiveness than in discovering truth, then we need to better align their incentives with truth discovery. As I have written about here, the granting agencies should give grant preferences (and hence prestige) to researchers who submit to good-practice journals.
Is it feasible to take one person's or a few persons' enthusiasm for this project and just start doing it? Prestige can be altered by determined, substantive criticism.
People wouldn't submit to a "journal" that doesn't have any (initial) reputation, but perhaps it could be a "moot journal" that reviews arXiv preprints, or papers published in one or several open access journals.