The 0.05 significance standard biases results in top polisci journals: We examine the APSR and the AJPS for the presence of publication bias due to reliance on the 0.05 significance level. Our analysis employs a broad interpretation of publication bias, which we define as the outcome that occurs when, for whatever reason, publication practices lead to bias in the published parameter estimates. We examine the effect of the 0.05 significance level on the pattern of published findings using a "caliper" test, a novel method for comparing studies with heterogeneous effects, and find that we can reject the hypothesis of no publication bias at the 1 in 32 billion level. Our findings therefore raise the possibility that the results reported in the leading political science journals may be misleading due to publication bias. We also discuss some of the reasons for publication bias and propose reforms to reduce its impact on research.
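The idea behind a caliper test can be sketched in a few lines: if there were no publication bias, published z-statistics should fall just above and just below the 1.96 critical value roughly equally often, so a surplus just above it can be tested against a 50/50 binomial null. This is a minimal, hypothetical sketch of that logic, not the authors' actual procedure; the data, caliper width, and helper names are all made up for illustration.

```python
# Caliper test sketch (hypothetical data): count z-statistics in a
# narrow band just over vs. just under the critical value, then test
# the split against a fair-coin (p = 0.5) binomial null.
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    # Exact two-sided binomial test: sum the probabilities of all
    # outcomes no more likely than the observed count.
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    obs = probs[k]
    return sum(pr for pr in probs if pr <= obs + 1e-12)

def caliper_test(z_stats, critical=1.96, width=0.10):
    over = sum(critical < z <= critical + width for z in z_stats)
    under = sum(critical - width < z <= critical for z in z_stats)
    return over, under, binom_two_sided_p(over, over + under)

# Toy example: 28 z-stats just over the threshold, only 8 just under.
zs = [1.97] * 28 + [1.90] * 8
over, under, p = caliper_test(zs)
print(over, under, round(p, 6))  # a lopsided split gives a small p-value
```

With a genuinely unbiased literature the over/under counts should hover near 50/50, and the p-value would be unremarkable; the extreme "1 in 32 billion" figure in the abstract reflects how lopsided the real counts were.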
I would like to see medical journals make an exception to the 5% rule for splitting up subject populations. Some medical journal articles pool all subjects together in order to reach that magic 5% level - even if the results are very different for males and females. So they end up giving a slightly-more-confident answer to a much less useful question - e.g., what is the proper drug dosage for a half-male, half-female patient?
"We can reject the hypothesis of no publication bias at the 1 in 32 billion level"

Likely an irrelevant observation in this case, but interesting in general: some "nil hypotheses" are obviously false, like the frequency of a real coin landing on one side being exactly equal to the frequency of it landing on the other. The only thing that rejecting such a hypothesis can show is that there was enough data (not "enough data to reject the hypothesis", just "enough data", since the hypothesis is false anyway). See "The Earth Is Round (p < .05)" by J. Cohen (pdf).
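Cohen's point is easy to demonstrate numerically: a point null that is false by even a hair will be rejected once the sample is large enough, so the rejection mostly certifies sample size. A small sketch (made-up numbers, deterministic expected counts rather than simulated data, standard normal-approximation z-test):

```python
# A coin whose true heads rate is 0.501 versus the point null p = 0.5.
# Using the expected head count (no sampling noise) to show how the
# z-statistic grows mechanically with n: z = 0.002 * sqrt(n) here.
from math import sqrt

true_p, null_p = 0.501, 0.5
for n in (10_000, 1_000_000, 100_000_000):
    heads = true_p * n  # expected count under the true rate
    z = (heads - null_p * n) / sqrt(null_p * (1 - null_p) * n)
    print(n, round(z, 2), "reject" if z > 1.96 else "fail to reject")
```

At n = 10,000 the null survives (z = 0.2); at n = 1,000,000 it is just rejected (z = 2.0); at n = 100,000,000 the rejection is overwhelming (z = 20.0), even though the effect itself never changed.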
This type of analysis isn't at all new, yet it keeps being relevant. The question is, why don't people learn and get this stuff right?
The same guys have done sociology too... http://smr.sagepub.com/cgi/... The abstract is almost copied word for word!
Thanks guys; fixed it.
Here's the correct link:
http://www.qjps.com/prod.as...
Link doesn't seem to work.