Peer Review Is Random

Robin Hanson
Dec 21, 2010
Which academic articles get published in the more prestigious journals is a pretty random process. When referees review an academic paper, less than 20% of the variability in referee ratings is explained by a tendency to agree:

This paper presents the first meta-analysis for the inter-rater reliability (IRR) of journal peer reviews [using] … 70 reliability coefficients … from 48 studies … [covering] 19,443 manuscripts; on average, each study had a sample size of 311 manuscripts (minimum: 28, maximum: 1983). … The more manuscripts that a study is based on, the smaller the reported IRR coefficients are. … If the information of the rating system for reviewers was reported in a study, then this was associated with a smaller IRR coefficient. … An ICC of .23 indicates that only 23% of the variability in the reviewers’ rating of a manuscript could be explained by the agreement of reviewers. (more: HT Tyler)

[Figure: inter-rater reliability estimates and confidence intervals, one per study, ordered by estimated reliability]

The above is from their key figure, showing reliability estimates and confidence intervals for studies ordered by estimated reliability. The most accurate studies found the lowest reliabilities, clear evidence of a bias toward publishing studies that find high reliability. I recommend trusting only the most solid studies, which give the most pessimistic (<20%) estimates.
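To make the 23% figure concrete, here is a minimal simulation sketch (my own illustration with assumed Gaussian ratings, not anything from the paper): if two referees' ratings of the same paper share a common quality component that accounts for 23% of rating variance, then their ratings correlate at about 0.23.

```python
import numpy as np

rng = np.random.default_rng(0)

n_papers = 2000
icc_true = 0.23  # target intraclass correlation, as in the meta-analysis

# Each rating = shared paper quality + referee-specific noise,
# with var(quality) / total variance equal to the ICC.
quality = rng.normal(0.0, np.sqrt(icc_true), n_papers)
rating1 = quality + rng.normal(0.0, np.sqrt(1.0 - icc_true), n_papers)
rating2 = quality + rng.normal(0.0, np.sqrt(1.0 - icc_true), n_papers)

# Under this variance decomposition the ICC equals the expected
# correlation between two referees' ratings of the same papers.
print("estimated ICC:", np.corrcoef(rating1, rating2)[0, 1])
```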

Seems a model would be useful here. Model the optimal number of referees per paper, given referee reliability, the value of identifying the best papers, and the relative cost of writing vs. refereeing a paper. Such a model could estimate the losses from having many journals, each with separate referees, evaluate the same article, vs. an integrated system.
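As a hypothetical starting point for such a model (my own sketch, not anything proposed in the post or the paper), one can simulate how the average true quality of accepted papers improves as a journal averages more noisy referee scores, taking the ~0.23 reliability above as given. The function name and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_accepted_quality(n_referees, icc=0.23, n_papers=5000, accept_frac=0.1):
    """Average true quality of accepted papers when the top accept_frac of
    papers are chosen by the mean of n_referees noisy referee scores."""
    # Shared-quality model: score = true quality + referee-specific noise,
    # with var(quality) / total variance equal to the ICC.
    quality = rng.normal(0.0, np.sqrt(icc), n_papers)
    noise = rng.normal(0.0, np.sqrt(1.0 - icc), (n_referees, n_papers))
    scores = (quality + noise).mean(axis=0)
    cutoff = np.quantile(scores, 1.0 - accept_frac)
    return quality[scores >= cutoff].mean()

for k in (1, 2, 3, 5, 10):
    print(k, "referees:", round(mean_accepted_quality(k), 3))
```

In this setup the marginal gain from each extra referee shrinks quickly, so the optimal number of referees turns on how the value of a better-selected article compares with the cost of one more report relative to the cost of writing the paper in the first place.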

4 Comments
Overcoming Bias Commenter
May 15

@Tony, I also couldn't help but think of the random walk in understanding the role of reviews.

Overcoming Bias Commenter
May 15

There may be many criteria on which peer reviewers do agree, but which don't show up in this study because authors already know those criteria and have satisfied them before the paper is even submitted.

For example, most reviewers agree that a P-value of greater than 0.05 is not acceptable, so papers that don't meet that standard don't get written in the first place. This actually indicates that peer review works very well; it exerts its influence through the foreknowledge of review, not the review itself.

Maybe it's sort of like predicting stock prices - if most investors agree that a stock is underpriced, the price goes up immediately, erasing their agreement. All that remains is the residual disagreement, making it appear that they can't agree on anything. Maybe this study points to a kind of EMH for scientific publication.
