The 0.05 significance standard biases results in top polisci journals:
We examine the APSR and the AJPS for the presence of publication bias due to reliance on the 0.05 significance level. Our analysis employs a broad interpretation of publication bias, which we define as the outcome that occurs when, for whatever reason, publication practices lead to bias in the published parameter estimates. We examine the effect of the 0.05 significance level on the pattern of published findings using a "caliper" test, a novel method for comparing studies with heterogeneous effects, and find that we can reject the hypothesis of no publication bias at the 1 in 32 billion level. Our findings therefore raise the possibility that the results reported in the leading political science journals may be misleading due to publication bias. We also discuss some of the reasons for publication bias and propose reforms to reduce its impact on research.
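The idea behind the caliper test is simple: take a narrow band around the critical value (z = 1.96 for the 0.05 level) and count how many published test statistics fall just over versus just under it. Absent publication bias, the two counts should be about equal, so their split can be checked against a fair-coin binomial. Here is a minimal sketch of that logic, using only the standard library and invented counts, not the authors' data:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed one."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(pr for pr in probs if pr <= probs[k] + 1e-12)

def caliper_test(over, under):
    """Test whether z-statistics in a narrow caliper around the critical
    value split evenly above and below it (null: p = 0.5)."""
    return binom_two_sided_p(over, over + under)

# Hypothetical counts: 30 results just over the threshold, 10 just under.
# A lopsided split like this is evidence that marginal results are being
# pushed (or selected) past the significance line.
p = caliper_test(30, 10)
```

With an even split the test returns a p-value near 1; the more lopsided the caliper, the smaller the p-value, which is how the paper's analysis can reject the no-bias null so decisively.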
This type of analysis isn't new at all, yet it remains relevant. The question is: why don't people learn and get this stuff right?
I would like to see medical journals make an exception to the 5% rule for splitting up subject populations. Some medical journal articles pool all subjects together in order to reach that magic 5% level, even if the results are very different for males and females. So they end up giving a slightly more confident answer to a much less useful question, e.g., what is the proper drug dosage for a half-male, half-female patient?
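The pooling problem can be made concrete with a toy calculation. The numbers below are invented for illustration: if the drug's effect differs sharply by sex, the sample-size-weighted pooled estimate lands between the two subgroup effects and describes no actual patient.

```python
def pooled_estimate(groups):
    """Sample-size-weighted average of subgroup effect estimates,
    where each group is a (n, effect) pair."""
    total_n = sum(n for n, _ in groups)
    return sum(n * effect for n, effect in groups) / total_n

# Invented subgroup results: the effective dose differs by a factor of 4.
males = (100, 2.0)     # (n, estimated effect)
females = (100, 8.0)

pooled = pooled_estimate([males, females])
# The pooled value of 5.0 is more precisely estimated (n = 200), but it
# answers the "half-male, half-female patient" question: it is wrong by
# a wide margin for every individual in either subgroup.
```

Reporting the subgroup estimates separately gives wider confidence intervals but answers the question a clinician actually faces.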