
Nice thinking, but partial. Especially for the benefit of economists, of all academics, I recommend the following funny (even though bitter and cynical) reading as a start of a cure for economists' now-famous autism: http://www.labspaces.net/vi...


Are peer-reviewed journals relevant anymore?

They were invented in the days before cheap copiers, fuhgeddaboud web pages, WYSIWYG, Matlab, and Excel. The peer-reviewed journal system is unchanged from the day I would write a paper longhand on a pad of paper, redline it (with a red pen), cut and paste (with scissors and a glue stick), hand it to a typist, and hand-draw some figures, which would be sent to a graphic artist who would do them up with black ink and the other tricks of his trade on vellum. The typist's copy of the paper might get a second edit, but as you can infer, the cost of editing cut the rate of revisions down. The final paper would be sent to a journal, where it would be carefully reviewed before the scarce space in the only "broadband" medium available to the profession was used on this paper.

At this point, any paper can be published more effectively by putting it on the web than a journal paper ever was 50 years ago.

Now publicizing it, drawing in readers, is a different thing. We all have methods for choosing what we spend our time reading. This blog is an important one for me, for instance. And indeed, in my experience as an electrical engineer, journals are replaced by IEEE Xplore, which is the web presence of all IEEE journals. All papers published are on that site. So the reviewers still matter, somewhat. It's not clear what would be lost if IEEE opened it up so that all properly formatted papers were "published" in this way.

I think the problem with making the reviewing process HARDER is that it is simply ineffective. I'd imagine that in most small science, we all change the details of our studies as we go along. And "randomly" chosen reviewers never have the background to fully appreciate what we are trying to say, while our fans will read our papers irrespective of, and indeed before, any determination of their review status.


Jeffrey, hamilton, on further reflection, "nothing of interest" was some sloppy phrasing on my part. As Jeffrey points out, it's not unusual for unpublished results to be of interest to someone, just maybe not to the researcher, who is more concerned with advancing his career, or to the journal looking to develop a reputation for importance. And hamilton, you're right that the way I put it implied that results give the questions importance, when the other way around is more accurate. So perhaps I should revise this to: results where the benefits of publishing are outweighed by the costs of publishing, to the researcher or the journal. And I agree that this system isn't always well suited to actually advancing knowledge.

To make sure I'm being clear, I'll give a quick example I'm familiar with from my own research. Some, but not all, laser materials work better at cryogenic temperatures than at room temperature. There are many papers showing how various materials improve when cooled to cryogenic temperatures, but very few showing that a material that's terrible at 300K is equally terrible at 70K. And I'd guess this is because:

- The researcher doesn't want to advertise wasting time on a material with little use
- The researcher doesn't want to tip off competitors not to waste time on the material
- No journal wants to publish this result.


Uninteresting results are only uninteresting if the question itself isn't interesting. If the question is of importance, and the design identifies the question, the result is of importance *irrespective of the acceptance or rejection of the null*. You've framed everything in terms of the results, as if results are interesting or uninteresting irrespective of the question. That's just completely incorrect.

The important questions:

1. Is the research question important and interesting?

2. Is the design for answering the research question well done (free of confounds, identifying causation, etc.)?

Journal editors likely claim to select on these, but often select on:

3. Did you reject a null hypothesis that everyone believes should be rejected?

and seem to reject papers even when the answers to 1 and 2 are yes, so long as the answer to 3 is no (and to accept papers even when the answers to 1 and 2 are no, so long as the answer to 3 is yes).
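To make the cost of selecting on 3 concrete, here's a quick simulation sketch. The numbers are purely illustrative assumptions of mine (a small true effect, modest two-arm studies, a p < 0.05 filter), not anyone's real data:

```python
# Illustrative sketch: what a "publish only if the null is rejected"
# filter does to the published record, under assumed parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n_per_arm, n_studies = 0.2, 30, 5000  # assumptions, for illustration

published = []
for _ in range(n_studies):
    # A well-designed two-arm study of a small but real effect.
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:  # the journal's de facto criterion 3
        published.append(treated.mean() - control.mean())

print(f"true effect:           {true_effect:.2f}")
print(f"share published:       {len(published) / n_studies:.2%}")
print(f"mean published effect: {np.mean(published):.2f}")
```

With these assumed numbers, only about one study in ten clears the bar, and the published estimates average roughly three times the true effect: even when the question and design are fine, a literature filtered on 3 systematically exaggerates.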


There are many ways in which journals could be improved. Indeed, there are many ways in which "lesser" journals are better run than "top" journals. But as others have noted, many improvements that would raise truth value would lower a journal's status, absent some kind of overall coercion scheme. Hence, neither this suggestion nor many simpler suggestions that improve journals' truth-generating capacity but not their status will be adopted by strong, influential journals. Indeed, there is no evidence that most innovations that work "better" do much to improve a journal's ranking, short of attracting the top papers and top editors and being used as the gold standard by international departments when granting tenure.


What do you put in the category of "nothing of interest"? One large problem with meta-analyses of drug effects is that studies finding the drug under study wasn't better than a placebo get published less often than positive outcomes do. An absence of effectiveness may not be what the investigator is happy to report, but for those of us who ultimately need to decide whether to take the drug or not, having part of the body of evidence on it vanish into unpublished obscurity is no help.
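Here's a little simulation sketch of just how bad this gets, with numbers I made up for illustration: a drug with exactly zero true effect, many small placebo-controlled trials, and a literature that only sees the trials that happened to come out positive:

```python
# Illustrative sketch of the file-drawer problem in meta-analysis,
# under assumed trial sizes and a "significant and positive" filter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_per_arm = 500, 40  # assumptions, for illustration

all_diffs, published_diffs = [], []
for _ in range(n_trials):
    drug = rng.normal(0.0, 1.0, n_per_arm)     # true effect: exactly zero
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    diff = drug.mean() - placebo.mean()
    _, p = stats.ttest_ind(drug, placebo)
    all_diffs.append(diff)
    if p < 0.05 and diff > 0:  # only "the drug works" gets published
        published_diffs.append(diff)

print(f"pooled estimate, all trials:     {np.mean(all_diffs):+.3f}")
print(f"pooled estimate, published only: {np.mean(published_diffs):+.3f}")
```

The full evidence base averages out near zero, while the published subset suggests a comfortably positive effect. A meta-analysis that can only see the published trials is pooling a biased sample, which is exactly why the vanished null results matter to anyone deciding whether to take the drug.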


A few possibilities:

- Some experiments will produce nothing of interest, and uninteresting results do little to help advance one's career. Once the experiment has been done, and nothing of interest has been found, there's very little benefit to expending further time writing and publishing the result.
- If an important result is found, the researcher may prefer to send it to a more prestigious journal instead of the original one that accepted it before the big result was found. Journals that adopt this strategy may find their status suffering, as they are filled with results only marginally worth publishing. Prestigious journals would have little reason to adopt this strategy.

A journal could try to alleviate the above by, as a condition of acceptance, requiring the researcher to publish the result regardless of importance, and not send it to any other journal. The problems I'd expect:

- The number of uninteresting results published would increase, and the journal's status would suffer.
- Prominent researchers would avoid the journal, since publishing in it would send a signal that the researcher doesn't expect any important results.
- Along those lines, many researchers don't want their colleagues and competitors to know how many of their experiments produce nothing of interest.
- Publishing results of little interest may not help one's career, but it does help a researcher's competitors by letting them know where not to look for important results.


Rather than sending papers once completed, with or without results, why not get *designs* of experiments or data analysis peer-reviewed? Why not a journal that says "you send us your design. We peer review the design, and accept or reject it. If we accept it, we will publish your paper provided you use exactly this design in conducting the research, irrespective of the outcome."

Why does nobody do this?


The OrgTheorists have proposed extra layers of peer-review, from triple-blind to sextuple-blind.


Peer review is a more laughable standard than the auditing work done by "independent" CPAs on public company financial statements. This approach is a band-aid on a gaping chest wound.

We already read everything pre-publication. Why not just complete the loop and build in the replicability step as part of peer review? It won't slow down progress because the pre-publication papers will still be out there. At the same time, peer review can also move forward from a high-level review of the paper to an examination of the integrity of the data-gathering protocols, programming methods, and related areas.


Grant proposals are often basically the first half of a paper (intro, lit review, and methods, but not findings and conclusion), and grants are usually peer reviewed. Thus, to a small extent, we already have an approximation of this.

Also, I totally understand what you're getting at, as I've personally experienced a sort of natural experiment along these lines. My first paper had one interpretation consistent with the priors in my field, and people loved it; then I fixed a coding error and collected more data, the interpretation reversed, and the paper was repeatedly rejected on grounds of generalizability.
