17 Comments

Two thoughts: (1) Research is too specialized; at best we'd need a subgroup in each field (e.g. econ, soc, polit sci...). (2) I see more promise in simply raising the value of refereeing. If refereeing is highly valued, especially in top journals, the supply of high-quality refereeing will increase, and that includes detecting fraud.


Michael, I think several of us have had a worry from the beginning that giving more prominence to bias-talk could make the bias problem worse, because it might be easier to obfuscate the truth in a cloud of bias-allegations than to hide it under smokescreens of selective citation of first-level data, etc. Also, bias-allegations might be more likely to trigger tribal feelings than is the dry discussion of first-level data. There is a reason why the use of ad hominem arguments is tightly circumscribed in academic discourse.

Yet my instinct is to charge ahead and to expand the number of interesting, important questions that academics are encouraged to think systematically about. In particular, if the methodological tools that can be developed in cognitive forensics turn out to be so weak that they become misused on a massive scale, then (I'd expect) norms will develop that discount arguments constructed with these tools, so not much damage will be done.


Nick: I think Douglas is actually right here, on both of his points.


> Mainly there are papers about funding bias

These often themselves reflect an extreme bias. For example, they trash almost any kind of private funding (e.g. by oil companies on subjects that affect oil companies) but blithely ignore the extreme and endemic conflicts of interest among government-funded scholars who promote the virtues of government generally, and of government by their kind of expertise in particular.

It's a great example of bias becoming more extreme as it becomes more centralized and authoritative, and thereby even less accountable. A "bias police" is the perfect way to create more extreme and unfixable bias: extreme bias that we must all accept as normal.


Douglas, would people bother writing such papers without a preidentified bad guy? I think so. Many academics would like to publish papers that show that the received wisdom in some area was wrong, or ones that improved on the current best estimates.

Paul, I think that the pattern of error may be different depending on how it originated. Motivated errors might have distinct signatures that one could detect (and they might also lack some signatures by which one can identify inadvertent error). Thus, they pose a special challenge which we (or somebody) should think about.


Why the focus on "motivated" error, anyway? How do you determine the intent behind bad research, and who cares?


> Unless you think scientists are better off ignoring statisticians

My impression of the status quo is that scientists have off-loaded understanding of the use and abuse of statistical methods (e.g., overfitting) to the statisticians, and then proceeded to ignore them: the worst of both worlds.

Nick Bostrom: The few examples I know of meta-analyses making accusations of bias are of type (B). They are also politicized. Mainly there are papers about funding bias. Would RH's meta-analysis contrasting control variables to focal variables be as well-received? Would people bother to write such papers without a preidentified bad guy?


I wonder if some of this stuff wouldn't be better addressed by "opening up" academia to more daylight.

For example, in my own field of software development, the most interesting techniques, ideas, and even implementations are researched, discussed, critiqued, and posted to the web, on blogs, email lists, and bulletin boards. Of course not every line of code of every software product is available on the internet, but a great deal is (Linux, the Apache projects, many frameworks and components), along with tons of "here's how you do this" sample code and lots of troubleshooting, all more-or-less easily navigable via Google.

I suspect that subjecting academic research to the same kinds of open environments and discussions would lead to the same sort of iterative, evolutionary progress in methods and rigor. The biggest barrier to this is probably the stranglehold that academic publishers have over journal articles.


For those of you who worry about the biases of the enforcers...

I think I might have given the wrong impression when I wrote about "apprehending the culprits". I wasn't actually thinking about some sort of academic police force tasked with combating scientific fraud. Maybe such a thing would be useful; I'm not sure. Yes, scientific fraud does happen, and it's a serious offense, but my guess is that it's a small fraction of the problem. The great bulk of the problem is the vast grey zone between fraud and inadvertent error.

I was thinking more about this grey zone, and even there I was thinking primarily about detecting biases on a topic rather than biases of particular individuals or research groups. And the "enforcement" I had in mind is merely the publication of academic critique.

Let me distinguish three things that a cognitive forensics could be or include:

A. Detecting scientific fraud. Developing better tools and methods for this might be useful for journal editors and referees, and could help scientific communities to better police themselves. I'm not proposing any kind of new police force for this, just better tools.

B. Detecting motivated error on some topic, primarily through data mining and statistical analysis. This is what I had in mind. I see a need for better tools for this, and for specialists who are skilled at developing and using these tools. The only enforcement power these specialists would have would be to publish meta-analyses and critiques. (A minimal sketch of one such tool appears after this list.)

C. Combating bias in general. This is a much more open-ended project, what we are trying to do on this blog. A great many disciplines are needed to do this well. It's also more of an art than a science. I'm not sure this is yet ready to become a recognized "field" in its own right, although something like that might one day emerge if we can build up a community of people who are interested in these issues. But one could start with something more narrow, such as (A) and (B).
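
For concreteness, here is a minimal sketch of one established tool of the kind (B) calls for: Egger's regression test for funnel-plot asymmetry. The choice of test, the Python implementation, and the study data are all illustrative assumptions on my part, not anything specifically proposed above.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry,
# a standard statistical check for publication or funding bias in a
# meta-analysis. All study data below are hypothetical.
import numpy as np
from scipy import stats

def egger_test(effects, std_errors):
    """Regress standardized effects on precision. An intercept far from
    zero suggests funnel-plot asymmetry: small (imprecise) studies
    reporting inflated effects, consistent with selective reporting."""
    effects = np.asarray(effects, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    z = effects / std_errors           # standardized effect per study
    precision = 1.0 / std_errors       # bigger study -> higher precision
    fit = stats.linregress(precision, z)
    return fit.intercept, fit.intercept_stderr

# Ten hypothetical studies where the effect inflates with imprecision,
# the classic signature of selective reporting.
rng = np.random.default_rng(0)
se = np.linspace(0.05, 0.5, 10)
eff = 0.1 + 0.8 * se + rng.normal(0, 0.02, se.size)
intercept, ise = egger_test(eff, se)
print(f"Egger intercept = {intercept:.2f} (SE {ise:.2f})")
# An intercept large relative to its SE flags asymmetry; an unbiased
# literature gives an intercept near zero.
```

The point of the sketch is just that such tools already exist and are mechanical to apply; the specialists I have in mind would refine them and develop new ones.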


Those of you who are worried about the biases of these specialists, compare them to statisticians. Unless you think scientists are better off ignoring statisticians, or that the bias problem is worse for motivation-bias specialists than for statisticians, you should expect these specialists to also add value.


Well, sure, these Science Enforcers would be a wonderful thing for science, if they were all perfect rationalists. Quis custodiet etc.

It seems to me that the real question here is: is there some combination of training and knowledge that will produce specialists in detecting scientific biases, sufficient that a Science Enforcer would provide a service above and beyond the opinions of professionals in that particular field?

Now this is not utterly implausible, because there are all sorts of obvious courses to take in statistics, probability theory, social psychology, history of science, heuristics and biases, evolutionary psychology, et cetera, which would all be specialized training that an ordinary scientist doesn't usually get. But it still seems to me that the big issue is: can you outperform conventional science? Can you improve on the existing professionals? Can you demonstrate that you have done so? Will the demonstration method that you set in place sustain the field of cognitive forensics and keep it clean from the next politically correct ideological fad to come along?


A couple of notorious cases of research fraud were discovered because the perpetrator presented the same (fake) picture as coming from two separate experiments. Note to everyone who wants to commit research fraud: don't reuse your own bullshit!
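
(As an aside, automating that kind of detection is straightforward today. Here is a minimal sketch, assuming the third-party Pillow and ImageHash libraries; the file names are hypothetical.)

```python
# Minimal sketch: flag figures that are perceptually near-identical even
# after resizing or recompression, where byte-for-byte comparison fails.
# Assumes the third-party Pillow and ImageHash libraries.
from PIL import Image
import imagehash

def looks_reused(path_a, path_b, threshold=5):
    """True if two figure files are perceptually near-duplicates."""
    h_a = imagehash.phash(Image.open(path_a))
    h_b = imagehash.phash(Image.open(path_b))
    return h_a - h_b <= threshold      # Hamming distance between hashes

# Hypothetical usage: compare a figure across two "separate" experiments.
# looks_reused("fig2_experiment1.png", "fig4_experiment2.png")
```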


What about the biases of these enforcers -- those anointed to "detect acts of irrationality and to make it easier to apprehend the culprits"? It sounds like an open invitation for the ultimate in PC: to enforce the political orthodoxy of the arbiters of bias at the expense of the "biased" opinions out of favor with the practitioners of this proposed field.


Do you think there are currently inadequate incentives to do this within disciplines? My sense (although this might be salience bias) is that there are regular exposures of lousy research (motivated or otherwise) in the traditional disciplines. What reason is there to believe that there is a suboptimal level of this exposure?


David, the way I'm picturing things is that the people doing this work would be "players". For example, a bright person in applied statistics could make a name for herself developing new ways of checking for motivated error in meta-analysis. And the people who currently do meta-analyses could achieve more impressive results if they managed to discover and correct for some systematic bias in previous work on their chosen topic. I'd think there could be a fair amount of fame and glory in this.
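
For illustration only, here is a minimal sketch of one such check, in the spirit of the control-versus-focal-variable comparison mentioned earlier in the thread; the harvested t-statistics are hypothetical and the method is my gloss, not anyone's committed proposal.

```python
# Minimal sketch of one check for motivated error in a literature:
# compare reported t-statistics of focal variables (the ones authors
# care about) with those of control variables from the same regressions.
# If focal t's pile up just past the significance cutoff while control
# t's do not, that is a signature of motivated error. Data hypothetical.
import numpy as np

def just_significant_share(t_stats, lo=1.96, hi=2.58):
    """Fraction of |t| values in the barely-significant band."""
    t = np.abs(np.asarray(t_stats, dtype=float))
    return float(np.mean((t >= lo) & (t < hi)))

focal_t   = [2.01, 2.10, 2.04, 3.20, 1.98, 2.15, 2.30, 2.05]  # hypothetical
control_t = [0.40, 1.10, 2.70, 0.90, 1.50, 3.40, 0.20, 1.80]  # hypothetical

print("focal   just-significant share:", just_significant_share(focal_t))
print("control just-significant share:", just_significant_share(control_t))
# A serious version would test the gap formally, e.g. with a permutation
# test or scipy.stats.mannwhitneyu on the two samples.
```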


I think this is in principle a good idea, though I am concerned that it might stifle some novel or speculative research that would have ultimately proven itself valuable. The practical problem, however, is that the people who are talented enough to do this job well are exactly the people who want to be players, not referees. So the referees would have to be pretty well paid to attract anybody decent to the job, but of course the players are in charge of the money.
