Needed: Cognitive forensics?

Perhaps we need a new field of "cognitive forensics" for analyzing and investigating motivated scientific error, bias, and intellectual misconduct. The goal would be to develop a comprehensive toolkit of diagnostic indicators and statistical checks that could be used to detect acts of irrationality and to make it easier to apprehend the culprits. (Robin’s recent post gives an example of one study that could be done.) Another goal would be to create a specialization, a community of scholars who had expertise in this subfield, who could apply it to various sciences, and who could train students taking advanced methodology classes.

Of what components would cognitive forensics be built? I’d think it would have a big chunk of applied statistics, but also contributions from cognitive and social psychology, epistemology, history and philosophy of science, sociology of science, maybe some economics, data mining, network analysis, etc.

Compared to this blog, the field could have somewhat narrower scope, focusing primarily on empirical scientific research rather than on rationality in general. It might also focus primarily on statistical tests rather than on wider issues such as institution design (although ideas for institutional reform might emerge as a side product). It might be driven more by statistical analysis of particular data sets than by big theories of common human cognitive biases (although the latter would serve as a source of inspiration for hypotheses to test).

The time might be ripe for this sort of endeavor. I have the impression that scattered articles on the problems of peer review and on possible statistical biases in scientific research (e.g. by funding source, file drawer effect etc.) are now appearing fairly regularly in Science and Nature.
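To make this concrete, consider the file drawer effect: if studies with unimpressive results tend to stay unpublished, then published effect sizes from small studies will be systematically inflated, and that asymmetry is statistically detectable. Here is a minimal illustrative sketch on synthetic data (the regression approach is loosely in the spirit of Egger's asymmetry test; the numbers and the selection rule are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 studies of a true null effect with varying precision.
n = 500
se = rng.uniform(0.05, 0.5, n)       # per-study standard error
effect = rng.normal(0.0, se)         # observed effect sizes (true effect = 0)

# File drawer effect: suppose only "promising" results (z > 1) or very
# precise studies make it into print.
published = (effect / se > 1.0) | (se < 0.15)

def asymmetry_intercept(eff, se):
    """Egger-style check: regress z = effect/SE on precision = 1/SE.
    With no selection the intercept is near zero; selective publication
    of positive small-study results pushes it upward."""
    z, precision = eff / se, 1.0 / se
    X = np.column_stack([np.ones_like(precision), precision])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return coef[0]

biased = asymmetry_intercept(effect[published], se[published])
unbiased = asymmetry_intercept(effect, se)
print(f"intercept, published studies only: {biased:.2f}")
print(f"intercept, all studies:            {unbiased:.2f}")
```

The point is not this particular test but that selection pressures leave statistical fingerprints in the published record — exactly the kind of signature a cognitive-forensics toolkit would catalogue.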

Three questions I have are: (1) to what extent would it make sense to study *motivated* scientific error semi-separately (as a sub-discipline) rather than as part of the course of statistics and scientific methodology in general? (2) to what extent does such a sub-discipline already exist today? (3) if there is a need for a new sub-discipline, should it be as envisaged here or should it be constructed in a different way?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    I do think the issues and techniques for dealing with motivated error are distinct enough to justify a specialty separate from generic statistics. One way to focus the attention and effort of such a new specialty might be to concentrate on doing meta-analysis that takes such issues into account. The field would then be successful when it was accepted as the best place to go for methods of meta-analysis.

  • David J. Balan

    I think this is in principle a good idea, though I am concerned that it might stifle some novel or speculative research that would ultimately have proven valuable. The practical problem, however, is that the people who are talented enough to do this job well are exactly the people who want to be players, not referees. So the referees would have to be pretty well paid to attract anybody decent to the job, but of course the players are in charge of the money.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    David, the way I’m picturing things is that the people doing this work would be “players”. For example, a bright person in applied statistics could make a name for herself developing new ways of checking for motivated error in meta-analysis. And the people who currently do meta-analyses could achieve more impressive results if they managed to discover and correct for some systematic bias in previous work on their chosen topic. I’d think there could be a fair amount of fame and glory in this.

  • Paul Gowder

    Do you think there are currently inadequate incentives to do this within disciplines? My sense (although this might be salience bias) is that there are regular exposures of lousy research (motivated or otherwise) in the traditional disciplines. What reason is there to believe that there is a suboptimal level of this exposure?

  • critic

    What about the biases of these enforcers — those anointed to “detect acts of irrationality and to make it easier to apprehend the culprits”? It sounds like an open invitation for the ultimate in PC: to enforce the political orthodoxy of the arbiters of bias at the expense of the “biased” opinions out of favor with the practitioners of this proposed field.

  • Doug S.

    A couple of notorious cases of research fraud were discovered because the perpetrator presented the same (fake) picture as coming from two separate experiments. Note to everyone who wants to commit research fraud: don’t reuse your own bullshit!

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Well, sure, these Science Enforcers would be a wonderful thing for science, if they were all perfect rationalists. Quis custodiet etc.

    It seems to me that the real question here is, is there some combination of training and knowledge that will produce specialists in detecting scientific biases? Sufficient that a Science Enforcer would provide a service above and beyond the opinions of professionals in that particular field?

    Now this is not utterly implausible, because there are all sorts of obvious courses to take in statistics, probability theory, social psychology, history of science, heuristics and biases, evolutionary psychology, et cetera, which would all be specialized training that an ordinary scientist doesn’t usually get. But it still seems to me that the big issue is: can you outperform conventional science? Can you improve on the existing professionals? Can you demonstrate that you have done so? Will the demonstration method that you set in place sustain the field of cognitive forensics and keep it clean of the next politically correct ideological fad to come along?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Those of you who are worried about the biases of these specialists, compare them to statisticians. Unless you think scientists are better off ignoring statisticians, or that the bias problem is worse for motivation-bias specialists than for statisticians, you should expect these specialists to also add value.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    For those of you who worry about the biases of the enforcers…

    I think I might have given the wrong impression when I wrote about “apprehending the culprits”. I wasn’t actually thinking about some sort of academic police force tasked with combating scientific fraud. Maybe such a thing would be useful; I’m not sure. Yes, scientific fraud does happen, and it’s a serious offense, but my guess is that it’s a small fraction of the problem. The great bulk of the problem is the vast grey zone between fraud and inadvertent error.

    I was thinking more about this grey zone, and even there I was thinking primarily about detecting biases on a topic rather than biases of particular individuals or research groups. And the “enforcement” I had in mind is merely the publication of academic critique.

    Let me distinguish three things that a field of cognitive forensics could be or include:

    A. Detecting scientific fraud. Developing better tools and methods for this might be useful for journal editors and referees, and could help scientific communities to better police themselves. I’m not proposing any kind of new police force for this, just better tools.

    B. Detecting motivated error on some topic, primarily through data mining and statistical analysis. This is what I had in mind. I see a need for better tools for this, and for specialists who are skilled at developing and using these tools. The only enforcement power these specialists would have would be to publish meta-analyses and critiques.

    C. Combating bias in general. This is a much more open-ended project, what we are trying to do on this blog. A great many disciplines are needed to do this well. It’s also more of an art than a science. I’m not sure this is yet ready to become a recognized “field” in its own right, although something like that might one day emerge if we can build up a community of people who are interested in these issues. But one could start with something more narrow, such as (A) and (B).

  • Matthew

    I wonder if some of this stuff is not better addressed by “opening up” academia to more daylight.

    For example, in my own field of software development, the most interesting techniques, ideas, and even implementations are researched, discussed, critiqued, and posted to the web, on blogs, email lists, and bulletin boards. Of course not every line of code of every software product is available on the internet, but much of it is (Linux, the Apache group, many frameworks and components), along with tons of “here’s how you do this” sample code and lots of troubleshooting, all more-or-less easily navigable via Google.

    I would suspect that subjecting academic research to the same kinds of open environments and discussions would lead to the same kind of iterative, evolutionary progress in methods and rigor. The biggest barrier to this is probably the stranglehold that the academic press has on journal articles.

  • Douglas Knight

    > Unless you think scientists are better off ignoring statisticians

    My impression of the status quo is that scientists have off-loaded understanding of the use and abuse of statistical methods (eg, overfitting) to the statisticians, and then proceeded to ignore them, the worst of both worlds.

    Nick Bostrom: The few examples I know of meta-analyses making accusations of bias are of type (B). They are also politicized. Mainly there are papers about funding bias. Would RH’s meta-analysis contrasting control variables to focal variables be as well-received? Would people bother to write such papers without a preidentified bad guy?

  • Paul Gowder

    Why the focus on “motivated” error, anyway? How do you determine the intent behind bad research, and who cares?

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Douglas, would people bother writing such papers without a preidentified bad guy? I think so. Many academics would like to publish papers that show that the received wisdom in some area was wrong, or ones that improved on the current best estimates.

    Paul, I think that the pattern of error may be different depending on how it originated. Motivated errors might have distinct signatures that one could detect (and they might also lack some signatures by which one can identify inadvertent error). Thus, they pose a special challenge which we (or somebody) should think about.

  • commentor

    > Mainly there are papers about funding bias

    These often themselves reflect an extreme bias. For example, they trash almost any kind of private funding (e.g. by oil companies on subjects that affect oil companies) but blithely ignore the extreme and endemic conflicts of interest among government-funded scholars who promote the virtues of government generally and government by their kind of expertise in particular.

    It’s a great example of bias being more extreme due to it being more centralized and authoritative and thereby even less accountable. A “bias police” is the perfect way to create more extreme and unfixable bias. Extreme bias that we must all accept as normal.

  • michael vassar

    Nick: I think Douglas is actually right here, on both of his points.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Michael, I think several of us have had a worry from the beginning that giving more prominence to bias-talk could make the bias problem worse, because it might be easier to obfuscate the truth in a cloud of bias-allegation than to hide it under smoke screens of selective citation of first-level data etc. Also, bias-allegations might be more likely to trigger tribal feelings than is the dry discussion of first-level data. There is a reason why the use of ad hominem arguments is tightly circumscribed in academic discourse.

    Yet my instinct is to charge ahead and to expand the number of interesting, important questions that academics are encouraged to think systematically about. In particular, if the methodological tools that can be developed in cognitive forensics turn out to be so weak that they become misused on a massive scale, then (I’d expect) norms will develop that discount arguments constructed with these tools, so not much damage will be done.

  • Jack

    Two thoughts: (1) Research is too specialized; at best we’d need a subgroup in each field (e.g. econ, soc, polit sci…).
    (2) I see more promise in simply raising the value of refereeing. If refereeing is highly valued, especially in top journals, the supply of high-quality refereeing will increase, and that includes detecting fraud.