See, But Don’t Believe

Friday’s Science reported that, at least at the one journal that checks carefully, one in four published articles contains misleadingly manipulated images.

Some biologists become so excited by a weak signal suggesting the presence of a particular molecule that "they’ll take a picture of it, they’ll boost the contrast, and they’ll make it look positive" … scientific journals, concerned about a growing number of cases of image manipulation, are cracking down on such practices with varying degrees of aggressiveness.  At one end of the spectrum is the biweekly Journal of Cell Biology, which for the past 4 years has scrutinized images in every paper accepted for publication — and reports that a staggering 25% contain at least one image that violated the journal’s guidelines.   That number has held steady over time …

Most journals are reluctant to devote much staff time and money to hunting for images that have been inappropriately modified.  Vanishingly few are emulating the Journal of Cell Biology … and its two sister journals, which have a dedicated staffer who reviews the roughly 800 papers accepted by all three each year.   Science‘s screening is principally designed to pick up selective changes in contrast and images that are cut and pasted. … Since initiating image analysis earlier this year, Science has seen "some number less than 10," or a few percent at most.  … the difference might be due to … the fact that [the Journal of Cell Biology‘s] staffer … is now unusually experienced at hunting for modifications.
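A crude version of the cut-and-paste check described above can be automated. The sketch below is a hypothetical illustration, not any journal's actual screening pipeline: it hashes fixed-size tiles of a grayscale image and flags tiles whose pixel content is exactly duplicated, which is the simplest signature of a copied region (real tools must also handle rotated, rescaled, or re-contrasted copies).

```python
import hashlib
import numpy as np

def find_duplicate_blocks(img, block=8):
    """Hash non-overlapping block x block tiles and report pairs of
    tile positions with identical pixel content (a crude copy-paste check)."""
    h, w = img.shape
    seen = {}
    dupes = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = img[y:y + block, x:x + block]
            key = hashlib.sha1(tile.tobytes()).hexdigest()
            if key in seen:
                dupes.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return dupes

# Synthetic demo: a random-noise "micrograph" with one tile copied elsewhere.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img[32:40, 32:40] = img[0:8, 0:8]        # simulate a cut-and-paste
print(find_duplicate_blocks(img))        # → [((0, 0), (32, 32))]
```

Exact-duplicate hashing is cheap enough to run on every accepted figure; the hard (and labor-intensive) part the dedicated staffer supplies is judging whether a flagged or adjusted image is actually misleading.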

The cost-effectiveness of this one staffer in disciplining an entire field of research seems enormous.  We could clearly increase research progress overall by replacing a few more researchers with such staffers.  The fact that no other journals do anything close suggests either that we have a serious coordination failure, or that research progress is not a high priority.

  • Bruce G Charlton

    I find this summary very interesting – but I haven’t read the linked article (gated).

    However, I don’t think that greater regulation of publishing is the real answer. My feeling is that the root of the problem is the career structure of Big Science in which prestige comes mainly from research income and capital (such as the expensive machines that do brain imaging).

    I commented on the problems of brain imaging on EconLog yesterday.

    I think the problem is a lack of effective critique of brain imaging studies, and the failure of these studies to make significant scientific contributions. It is not that potential critique was lacking, but that publishing such critiques in high-status journals was difficult, and when they were published they could be ignored, because the status of the scientists who administered the rare and expensive big brain-imaging machines made them virtually immune.

    So it is particularly interesting to hear that even the modest achievements of brain imaging are often accomplished by image manipulation. However, even the standard brain image for many years (up to about 5 years ago) was a ‘pseudo-coloured’ image created by averaging several scans and applying subtraction techniques – this was then spun as areas ‘lighting up’ during scanning.

    My point on EconLog is that the new specialty of Neuroeconomics is now becoming implicated in the failures, dishonesty and Big Science distortions of brain imaging. Beware economists!

  • Bruce, most areas of academia give few rewards or attention to critiques. It would be interesting to better understand why some fields emphasize critiques more than others, and whether such fields benefit overall from that.

  • Bruce G Charlton

    RH said: ‘It would be interesting to better understand why some fields emphasize critiques more than others, and whether such fields benefit overall from that.’

    Robin, what would be needed is the kind of study done by David L Hull (for evolutionary theory) in Science as a Process (Chicago U Press, 1988).

    I have worked in several areas of science, albeit only over a span of about 25 years, and my hunch is that people only respond to critique when they ‘have to’ – in other words, when scientist X’s failure to answer a cogent critique satisfactorily results in the rest of the workers in the field ignoring scientist X’s work, on the assumption that the critique must have been correct.

    I think these circumstances prevail where there are *not* large differentials in power between workers in the field – where successful scientists also do not wield massive patronage and cannot crush rival careers.

    For example, my department has neuroscientists and animal behaviourists – and scientific critique seems much more powerful among the animal behaviourists. I think this is because the field of animal behaviour lacks power differentials: it has many individuals and small groups, whereas neuroscience is dominated by relatively few huge teams with vast research incomes, capital and personnel.

    And I think that animal behavior _probably_ advances faster than neuroscience, in the sense that faulty paradigms of neuroscience take longer to change. But this is exactly the kind of thing we need to know empirically.

  • A solution to make enforcement cheaper would be for journals to promote “reproducible research” labels, which the authors would be forced to use if they want their paper to be credible:

    Every paper/section with an RR label would reference a URL explaining how to reproduce the entire paper/section (including images). This way, authors would be more reluctant to lie or fudge results. To boost this, another obvious idea is to reward people who repeat the “reproducible” procedure and report that it gives different results (although this is only a proof of dishonesty to the extent that the system is deterministic).
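    Gustavo's RR-label idea can be made concrete: the paper's referenced URL would hold a manifest recording, for each figure or result, the command that regenerates it and a checksum of the expected output, so that anyone can re-run the commands and compare. The sketch below is a hypothetical illustration (the manifest format, the `verify_rr` function, and the toy command are all invented for this example), and as Gustavo notes it only proves anything to the extent the pipeline is deterministic.

```python
import hashlib
import subprocess
import sys

def verify_rr(manifest):
    """Re-run each command in an 'RR' manifest and check its stdout
    against the recorded checksum. Deterministic pipelines only."""
    results = {}
    for name, entry in manifest["artifacts"].items():
        out = subprocess.run(entry["command"], shell=True,
                             capture_output=True, check=True).stdout
        digest = hashlib.sha256(out).hexdigest()
        results[name] = (digest == entry["sha256"])
    return results

# Toy manifest: the "analysis" is just a deterministic computation.
cmd = f'{sys.executable} -c "print(sum(range(100)))"'
expected = hashlib.sha256(b"4950\n").hexdigest()
manifest = {"artifacts": {"figure1": {"command": cmd, "sha256": expected}}}
print(verify_rr(manifest))   # → {'figure1': True}
```

    A mismatch would not by itself prove dishonesty – environments, library versions, and random seeds all break bit-level reproducibility – but it would pinpoint exactly which artifact an auditor should examine.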

  • Gustavo, authors can already make their results exactly reproducible, and advertise that fact in their abstract and introduction. And other authors can attempt to so reproduce. Authors clearly do not now think that they will be rewarded for such efforts enough to cover their costs. Sure, “someone” could change things by offering more rewards for such behavior. But who? Either we find a new equilibrium in the relatively decentralized academic system we have, or we introduce a new “center” with enough resources to promote such things.

  • Carl Shulman

    I think scientists have an erroneous (self-servingly biased) view of their collective intellectual integrity, so that they underweight the probability of massaged data, and see only a small credibility benefit as an early adopter, when most respected scientists are not using RR certifications. If more scientists start using the label, then not doing so will become suspicious, in the same way that legal cash payments for big-ticket items have become suspicious.

    I see a few paths to wider adoption of image-checkers, RR certification, etc:

    1. A few more high-profile cases like the Korean faux-cloning lab, with harsh punishments, could raise the level of suspicion among scientists, increasing the credibility benefits of the procedures.

    2. One large government grant agency or private foundation could condition funds on use of the procedures (similar to requiring the advance registration of clinical trials), and set off a chain reaction. China’s issues with faked research might lead it to implement such a ruleset.

    3. Continued publication of results like the Journal of Cell Biology’s could persuade a few opinion leaders (Nobel laureates, the most elite departments) to publicly adopt the procedures and create a new norm that lower-status research entities would mimic.