6 Comments

Gustavo,

I think scientists have an erroneous (self-servingly biased) view of their collective intellectual integrity, so that they underweight the probability of massaged data, and see only a small credibility benefit in being early adopters while most respected scientists are not using RR certification. If more scientists start using the label, then not doing so will become suspicious, in the same way that legal cash payments for big-ticket items have become suspicious.

Robin,

I see a few paths to wider adoption of image-checkers, RR certification, etc:

1. A few more high-profile cases like the Korean faux-cloning lab, with harsh punishments, could raise the level of suspicion among scientists, increasing the credibility benefits of the procedures.

2. One large government grant agency or private foundation could condition funds on use of the procedures (similar to requiring the advance registration of clinical trials), and set off a chain reaction. China's issues with faked research might lead it to implement such a ruleset.

3. Continued publication of results like the Journal of Cell Biology's could persuade a few opinion leaders (Nobel laureates, the most elite departments) to publicly adopt the procedures, creating a new norm that lower-status research entities would mimic.


Gustavo, authors can already make their results exactly reproducible, and advertise that fact in their abstract and introduction. And other authors can attempt to reproduce them. Authors clearly do not now think that they will be rewarded for such efforts enough to cover their costs. Sure, "someone" could change things by offering more rewards for such behavior. But who? Either we find a new equilibrium in the relatively decentralized academic system we have, or we introduce a new "center" with enough resources to promote such things.


A solution to make enforcement cheaper would be for journals to promote "reproducible research" labels, which authors would be forced to use if they want their paper to be credible: http://www.andrew.cmu.edu/u...

Every paper/section with an RR label would reference a URL explaining how to reproduce the entire paper/section (including images). This way, authors would be more reluctant to lie or fudge results. To boost this, another obvious idea is to reward people who repeat the "reproducible" procedure and report that it gives different results (although this is only a proof of dishonesty to the extent that the system is deterministic).
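
A minimal sketch of what such a check might look like, assuming a deterministic pipeline (the manifest format, file names, and rebuild command below are hypothetical, not part of any existing RR standard): re-run the documented procedure and compare cryptographic hashes of the regenerated artifacts against those the authors published.

```python
import hashlib
import json
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file's contents so outputs can be compared byte-for-byte."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_rr_manifest(manifest_file: str) -> bool:
    """Re-run the authors' documented procedure and check that every
    declared artifact (figure, table, dataset) is reproduced exactly.

    Hypothetical manifest format:
    {"build": "./rebuild.sh", "artifacts": {"fig1.png": "<sha256>", ...}}
    """
    manifest = json.loads(Path(manifest_file).read_text())
    subprocess.run(manifest["build"], shell=True, check=True)  # regenerate all outputs
    ok = True
    for artifact, expected in manifest["artifacts"].items():
        if sha256(Path(artifact)) != expected:
            print(f"MISMATCH: {artifact}")  # grounds for scrutiny, not by itself proof of fraud
            ok = False
    return ok

if __name__ == "__main__":
    print("reproduced exactly" if verify_rr_manifest("rr_manifest.json") else "differences found")
```

As the caveat above notes, a mismatch only indicates dishonesty to the extent that the procedure is deterministic; stochastic analyses (unseeded simulations, nondeterministic numerics) would need tolerance-based comparisons instead of exact hashes.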


RH said: 'It would be interesting to better understand why some fields do emphasize critiques more, and whether such fields benefit overall from that.'

Robin, what would be needed is the kind of study done by David L Hull (for evolutionary theory) in Science as a Process (Chicago U Press, 1988).

I have worked in several areas of science, albeit only over a span of about 25 years, and my hunch is that people only respond to critique when they 'have to' - in other words, when scientist X's failure to satisfactorily answer a cogent critique results in the rest of the workers in the field ignoring scientist X's work, on the assumption that the critique must have been correct.

I think these circumstances prevail where there are *not* large differentials in power between workers in the field, where successful scientists also do not wield massive patronage, and where they cannot crush rival careers.

For example, my department has neuroscientists and animal behaviourists - and scientific critique seems much more powerful among the animal behaviourists than the neuroscientists. I think this is because the field of animal behaviour lacks power differentials: it has a lot of individuals and small groups, whereas neuroscience is dominated by relatively few huge teams with vast research incomes, capital and personnel.

And I think that animal behavior _probably_ advances faster than neuroscience, in the sense that faulty paradigms of neuroscience take longer to change. But this is exactly the kind of thing we need to know empirically.


Bruce, most areas of academia give few rewards or attention to critiques. It would be interesting to better understand why some fields do emphasize critiques more, and whether such fields benefit overall from that.


I find this summary very interesting - but I haven't read the linked article (gated).

However, I don't think that greater regulation of publishing is the real answer. My feeling is that the root of the problem is the career structure of Big Science in which prestige comes mainly from research income and capital (such as the expensive machines that do brain imaging).

I commented on the problems of brain imaging on EconLog yesterday: http://econlog.econlib.org/...

I think the problem is a lack of effective critique of brain imaging studies, and the failure of these studies to make significant scientific contributions. It is not that potential critique was lacking, but that publishing such critiques in high-status journals was difficult, and when they were published they could be ignored: the status of the scientists who administered the rare and expensive big brain imaging machines made them virtually immune.

So it is particularly interesting to hear that even the modest achievements of brain imaging are often accomplished by image manipulation. Indeed, for many years (up to about 5 years ago) even the standard brain image was a 'pseudo-coloured' image created by averaging several scans and applying subtraction techniques - this was then spun as areas 'lighting up' during scanning.
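
For readers unfamiliar with the technique just described, here is a minimal sketch of how such a subtraction image is produced, with invented array shapes, threshold, and function name for illustration only (not taken from any actual imaging pipeline): scans from the task condition are averaged, scans from the rest condition are averaged, the means are subtracted, and the thresholded difference is what gets pseudo-coloured.

```python
import numpy as np

def subtraction_map(task_scans: np.ndarray, rest_scans: np.ndarray,
                    threshold: float) -> np.ndarray:
    """Average the scans in each condition, subtract, and threshold.

    task_scans, rest_scans: shape (n_scans, height, width), repeated
    acquisitions under each experimental condition.
    Returns a 2-D map that is zero except where the task-minus-rest
    difference exceeds the chosen threshold.
    """
    diff = task_scans.mean(axis=0) - rest_scans.mean(axis=0)
    return np.where(diff > threshold, diff, 0.0)

# Toy data: random arrays stand in for real acquisitions.
rng = np.random.default_rng(0)
task = rng.normal(loc=1.0, scale=0.3, size=(20, 64, 64))
rest = rng.normal(loc=0.8, scale=0.3, size=(20, 64, 64))
activation = subtraction_map(task, rest, threshold=0.1)

# The surviving pixels would then be mapped through a colour lookup
# table and overlaid on an anatomical image; those are the regions
# described as "lighting up".
print(f"{(activation > 0).mean():.0%} of pixels 'light up' at this threshold")
```

Note that the threshold and the colour map are free parameters, which is part of why the same underlying data can be presented as more or less dramatic 'activation'.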

My point on EconLog is that the new specialty of Neuroeconomics is now becoming implicated in the failures, dishonesty and Big Science distortions of brain imaging. Beware economists!
