I would like to see medical journals make an exception to the 5% rule for splitting up subject populations. Some medical journal articles pool all subjects together in order to reach that magic 5% level - even if the results are very different for males and females. So they end up giving a slightly-more-confident answer to a much less useful question - e.g., what is the proper drug dosage for a half-male half-female patient.
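To make that concrete, here is a toy simulation (made-up effect sizes, not from any actual trial): the drug helps men a lot and women hardly at all, and pooling buys a slightly tighter confidence interval around an average that describes neither group.

```python
# Toy illustration with hypothetical numbers: pooling sexes gives a slightly
# tighter confidence interval, but only around an average effect that
# describes neither men nor women.
import numpy as np

rng = np.random.default_rng(0)
n = 100  # subjects per sex (assumed)

# Assume the true drug effect differs sharply by sex.
male = rng.normal(loc=10.0, scale=10.0, size=n)    # clear benefit
female = rng.normal(loc=0.0, scale=10.0, size=n)   # essentially no benefit

def mean_ci(x):
    """Sample mean and ~95% normal-approximation CI half-width."""
    return x.mean(), 1.96 * x.std(ddof=1) / np.sqrt(len(x))

pooled = np.concatenate([male, female])
for label, data in [("males", male), ("females", female), ("pooled", pooled)]:
    m, hw = mean_ci(data)
    print(f"{label:>7}: estimated effect = {m:6.2f} +/- {hw:.2f}")

# Typical run: the pooled interval is the narrowest and comfortably excludes
# zero, yet its midpoint (~5) answers the dosage question for a half-male
# half-female patient; it fits neither subgroup.
```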
"We can reject the hypothesis of no publication bias at the 1 in 32 billion level"Likely an irrelevant observation in this case, but interesting in general: some "nil-hypotheses" are obviously false, like frequency of a real coin falling on one side being exactly equal to frequency of it landing on the other side. The only thing that rejecting of such hypothesis can show is that there was enough data (not "enough data to reject the hypothesis", just "enough data", since hypothesis is false anyway). See "The Earth Is Round (p < .05)" by J. Cohen (pdf).
This type of analysis isn't at all new, yet it keeps being relevant. The question is, why don't people learn and get this stuff right?
The same guys have done sociology too: http://smr.sagepub.com/cgi/...
The abstract is almost copied word for word!
"We can reject the hypothesis of no publication bias at the 1 in 32 billion level"Likely an irrelevant observation in this case, but interesting in general: some "nil-hypotheses" are obviously false, like frequency of a real coin falling on one side being exactly equal to frequency of it landing on the other side. The only thing that rejecting of such hypothesis can show is that there was enough data (not "enough data to reject the hypothesis", just "enough data", since hypothesis is false anyway). See "The Earth Is Round (p < .05)" by J. Cohen (pdf).
Thanks guys; fixed it.
Here's the correct link:
http://www.qjps.com/prod.as...
Link doesn't seem to work.