9 Comments
Overcoming Bias Commenter:

Are you planning on publishing a response to Dezhbakhsh and Rubin's response to your paper?

Overcoming Bias Commenter:

Just to add another signal: I liked Justin's death penalty paper a lot.

Overcoming Bias Commenter:

I'll counter your two words with one of my own: p-values. It's not a tongue-in-cheek argument to say that what seems to be the default value of 0.05 is chosen precisely so that papers can be published in the softer sciences. But by having a minimal discussion of statistical technique, a researcher can submit a fifteen-page article to a peer-reviewed outlet and appear to be generating significant results.

What really needs to happen is that every paper should include a discussion of why a particular p-value threshold was chosen (perhaps even a discussion of the counternull value), and, more fundamentally, researchers need a much more rigorous education in statistics. Way too many people interpret 'statistically significant at p=0.05' as 'there is a 95% chance the hypothesis has been confirmed'.
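A minimal simulation of the misreading described above (an invented setup, not taken from any of the papers discussed): when the null hypothesis is exactly true in every experiment, roughly 5% of experiments still come out "significant at p=0.05" — so a significant result is a statement about the false-positive rate, not a 95% probability that the hypothesis is true.

```python
import random

random.seed(42)

n_experiments = 10_000
n = 30  # observations per experiment
false_positives = 0

for _ in range(n_experiments):
    # Draw a sample from a null distribution: the true mean is exactly 0.
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    sd = (sum((x - mean) ** 2 for x in sample) / (n - 1)) ** 0.5
    se = sd / n ** 0.5
    z = mean / se
    if abs(z) > 1.96:  # the usual two-sided "p < 0.05" cutoff
        false_positives += 1

rate = false_positives / n_experiments
# The null is true in every single experiment, yet about 5% of them
# are declared "significant". p < 0.05 caps the false-positive rate;
# it does not mean a 95% chance the hypothesis has been confirmed.
print(round(rate, 3))
```

(Using the normal cutoff 1.96 rather than the exact t critical value for n=30 makes the rate land slightly above 5%, which only sharpens the point.)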

Overcoming Bias Commenter:

Liptak's NY Times article contains the following quote:

"The economics studies are, moreover, typically published in peer-reviewed journals, while critiques tend to appear in law reviews edited by students."

Without any slight to your and Professor Donohue's article, I wonder whether this characterization isn't part of the critique of the evidence against the death penalty?

Overcoming Bias Commenter:

Department of Stating the Obvious:

I'd naively have thought that, scientifically or politically speaking, a "negative" result on this question (say, a coefficient between -2 and +2 with 95% confidence) should be exactly as interesting as a "positive" one (say, a coefficient between 8 and 12 with 95% confidence). So, assuming that there isn't a consistent political bias in favour of the death penalty among researchers and publishers in this field (which there might be, I guess, but it's not obvious why there should be), it's clearly the magical words "statistically significant" that are biasing the results.

Advice to researchers in the field: Exploit prior publication bias! If you get a "negative" result, write it up as "Our results differ significantly (p<0.05 in each case) from those of prior publications such as those of Dezhbakhsh and Shepherd [1], Dezhbakhsh, Rubin and Shepherd [2], and Mocan and Gittings [3]."

(But alas, alas, for the Cult Of Statistical Significance. How much better the world would be if conclusions were expressed as "Our symmetrical 95% confidence interval for the coefficient is ..." or "The likelihood curve for the parameter is shown in Fig. 1". There'd still be publication bias of a sort, in favour of research yielding narrow intervals and sharply peaked curves -- and, I guess, in favour of research where those intervals and peaks are in unexpected places. But it would be much less serious, and somewhat self-correcting.)
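A minimal sketch of the reporting style the commenter is advocating, with invented numbers purely for illustration: state the interval itself rather than a significant/not-significant verdict.

```python
# Hypothetical regression output (values invented for illustration):
# a deterrence coefficient estimate and its standard error.
beta_hat = 3.0
se = 4.0

z = 1.96  # two-sided 95% normal critical value
lo, hi = beta_hat - z * se, beta_hat + z * se
print(f"95% CI for the coefficient: [{lo:.2f}, {hi:.2f}]")
# A wide interval spanning zero says "the data can't tell" far more
# honestly than the binary verdict "not statistically significant".
```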

Overcoming Bias Commenter:

Nice technique! But as a purely editorial note, everything under "You can probably guess what we find" should go under the fold (the "Post Continuation" rather than "Post Introduction").

Overcoming Bias Commenter:

Still need convincing? Download my death penalty data, and run your own regressions.

Now that is a strong signal.

Robin Hanson:

So why doesn't this sort of check become a standard feature of such publications? Seems easy enough to do.

Overcoming Bias Commenter:

How often do people do this kind of meta-analysis? I've only ever heard of Card-Krueger doing it, but that paper is invisible because of the better-known Card-Krueger paper. Does the word "intuition" signal that people in medicine are aware of the problem, but incapable of doing anything about it?
