Literature Research

In my hand I have a hefty article on a canonical English poet, published 10 years ago in a distinguished journal. … The argument is dense, the analysis acute, on its face a worthy illustration of academic study deserving broad notice and integration into subsequent research in the field. That reception doesn’t seem to have happened. … After 5,000 studies of Melville since 1960, what can the 5,001st say that will have anything but a microscopic audience of interested readers? …

I devised a [small] study of literary research. … Of 13 research articles … in 2004, 11 of them received zero to two citations, one had five, one 12. Of 23 articles … 16 received zero to two citations, four of them three to six, one eight, one 11, and one 16. … The unfortunate conclusion is that the overall impact of literary research doesn’t come close to justifying the money and effort that goes into it. …

The research identity is a powerful allure, flattering people that they have cutting-edge brilliance. Few of them readily trade the graduate seminar for the composition classroom. But we have reached the point at which the commitment to research at the current level actually damages the humanities, turning the human capital of the discipline toward ineffectual toil. More books and articles don’t expand the audience for literary studies. A spurt of publications in a department does not attract more sophomores to the major, nor does it make the dean add another tenure-track line, nor does it urge a curriculum committee to add another English course to the general requirements. All it does is “author-ize” the producers. Deep down, everybody knows this. (more)

This is pretty much the standard situation in academia; English is not much different. Academics talk as if academia is all about research progress, but in fact it is more about “authorizing” the academics. That is, it is about credentialing their impressiveness, so that others can affiliate with credentialed impressive folks.

  • Phil

    I don’t disagree with you. But I’ve always wondered: is the research *correct*? That is, is it the right answer to the question asked?

    That is, if you intercepted a paper on its way to peer review, and replaced an occasional sentence with its opposite:

    (a) would it make the argument wrong?
    (b) would the referees notice?
    (c) if you innocently pointed it out to the referees, would they say, “oh, yeah, you’re right!” or would they think the opposite was *also* valid research?

  • Vaniver

    Phil: The opposite of a correct statement is a false statement. But the opposite of a profound truth may well be another profound truth. — Niels Bohr

  • Ed

    I’m a newcomer to this blog, so sorry if the following have already been discussed. But

    (1) I am not sure how to use these citation statistics. If academics did not care about credentializing, would we see more citations of the few papers being published, or fewer citations since fewer papers overall means fewer citations? It might be good to compare citation counts in a system where papers are published secretly or anonymously. Maybe the NSA’s internal cryptography papers?

    In particular it might be bad to compare English papers to those from any experimental subject, where each minor procedure of each experiment typically gets another citation or three.

    Finally, the Best English Paper Ever, which single-handedly lays to rest all problems of English literature, would receive no citations since it would put an end to the discipline.

    (2) Just a nit-pick, but it is one thing to say that academics are about authorizing themselves, and another to say that academia is about authorizing academics. One can imagine a situation where military officers only care about being promoted, but the military might still be about fighting wars.

    • http://disputedissues.blogspot.com Stephen R Diamond

      “If academics did not care about credentializing, would we see more citations of the few papers being published, or fewer citations since fewer papers overall means fewer citations?”

      That, it would seem, is the key question. Stated differently: does the large number of useless publications represent waste, or is trivial research the inevitable counterpart of important research? Or, restated again: does the publisher of work that won’t be cited know this in advance, or can anyone reliably know it in advance (beyond, that is, the predictive power already expressed in acceptance-for-publication decisions)?

      • Jeffrey Soreff

        “does the publisher of work that won’t be cited know this in advance”

        Good point!

        My suspicion is that the answer to your “can anyone reliably know it in advance” question is likely to be “not at an acceptable cost”. If nothing else, the fact that the citation numbers look something like a smooth power curve suggests that there is a fairly uniform filtering mechanism already in operation. If the citation numbers had been sharply bimodal (3 papers heavily cited, and all the rest cited once or never), I’d believe that there was something one could do to throw out the bad ones, because there would be a sharply defined group of bad ones. With a smooth power law, this seems less probable.

        So does anyone want to start a prediction market for anticipated future citations to papers? :-)
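        A minimal sketch of the bimodal-versus-power-law point above, with every number and distribution invented for illustration: under a sharply bimodal distribution of citation counts, a simple threshold cleanly separates the heavily cited papers from the rest, while under a smooth power law any cutoff is arbitrary.

        ```python
        import random

        random.seed(0)  # reproducible toy numbers

        def power_law_citations(n, alpha=2.5, max_c=200):
            """Draw n citation counts from a discrete power law, P(c) ~ (c + 1)**-alpha."""
            support = range(max_c + 1)
            weights = [(c + 1) ** -alpha for c in support]
            return random.choices(support, weights=weights, k=n)

        def bimodal_citations(n, hits=3):
            """Exactly `hits` heavily cited papers; the rest cited once or never."""
            heavy = [random.randint(40, 60) for _ in range(hits)]
            rest = [random.randint(0, 1) for _ in range(n - hits)]
            return heavy + rest

        def gap_below(counts, top_k=3):
            """Citation gap between the top_k-th paper and the next one down.

            A large gap means a threshold cleanly separates a "good" group;
            a small gap means any cutoff would be arbitrary.
            """
            ranked = sorted(counts, reverse=True)
            return ranked[top_k - 1] - ranked[top_k]

        for name, sample in [("power law", power_law_citations(36)),
                             ("bimodal", bimodal_citations(36))]:
            top = sorted(sample, reverse=True)[:6]
            print(f"{name:9}  top six: {top}  gap below top 3: {gap_below(sample)}")
        ```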

  • Phil

    Vaniver: Nice!!!

  • Robert Koslover

    I have to admit I was surprised at the number of citations reported for those examples; I would have expected fewer.

  • Matt

    It never hurts to double check…

    http://www.youtube.com/watch?v=tj7RlQdF25A

  • Mark M

    Publish or die, right?

    When publishing is the goal, and the number of articles published is the measure of success, it shouldn’t be a surprise that many (most?) academic publications aren’t new, profound, enlightening, or worth citing. Unfortunately, the ones that might be are lost in the noise.

    • DK

      “Most” for sure. Like 90% at least.

  • komponisto

    If literature academics want a “research identity”, they should be producing original literature, rather than merely commentaries on existing literature.

    (Some may argue that we don’t need more literature, but I’d rather have more new literature than more commentary on old literature.)

    Similarly (and closer to home), I think the increasing separation in academia between music theory and music composition is a disaster for both, with theorists spoiling their field by going off on various postmodernist tangents and (some argue) threatening to push composers out of the academy altogether. One fears that the days of the “composer-theorist” are fast receding, if not already over.

    • Alex

      Yeah, and historians should be out trying to ‘make’ history rather than merely study it. You’re a twit.

  • http://www.gwern.net gwern

    Citation counts are low in a *lot* of fields; literature isn’t actually that bad, at least as far as this small sample proves. See http://www.gwern.net/Culture%20is%20not%20about%20esthetics#fn41

  • http://jeromyanglim.blogspot.com Jeromy Anglim

    Citation counts are always a zero-sum game. Every article gives zero or more citations through its references and receives zero or more citations over time. Some fields have more references per article on average; some fields have shorter citation half-lives; some fields cite outside their field a little more than others. Beyond those effects, though, it is strange to single out a discipline as having low citation numbers: given enough time, articles in a discipline should tend to attain an average number of citations equal to the average number of references per article in that field. The distribution of citations is generally highly skewed, but that’s a different issue.

    Thus, I wouldn’t use citation counts to assess the impact of a field. It’s always a zero-sum game. You can use citation counts to assess the impact of an article or a journal, but applying them to a field as a whole seems problematic.
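    The accounting identity behind that claim is easy to check on a toy example. Below is a hypothetical closed field of four articles (entirely made up): since every reference one article gives is a citation another article receives, the two field-wide averages must coincide.

    ```python
    # Toy closed field: four hypothetical articles and what each one cites.
    references = {
        "A": [],
        "B": ["A"],
        "C": ["A", "B"],
        "D": ["A", "C"],
    }

    citations_given = sum(len(refs) for refs in references.values())
    citations_received = sum(
        refs.count(article)
        for article in references
        for refs in references.values()
    )

    # Every reference given is a citation received, so the totals
    # (and hence the per-article averages) must be equal.
    assert citations_given == citations_received
    n = len(references)
    print(f"average references per article: {citations_given / n:.2f}")
    print(f"average citations per article:  {citations_received / n:.2f}")
    ```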

  • http://shagbark.livejournal.com Phil Goetz

    “Of 13 research articles … in 2004, 11 of them received zero to two citations, one had five, one 12. Of 23 articles … 16 received zero to two citations, four of them three to six, one eight, one 11, and one 16. … The unfortunate conclusion is that the overall impact of literary research doesn’t come close to justifying the money and effort that goes into it. …”

    How do you get a conclusion without a hypothesis? All you have is a set of numbers, without any expectation of what those numbers should look like. What were the impact factors of the journals? What would the numbers be if the impact justified the money going into it?
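    To make that concrete, here is a back-of-envelope tally of the quoted counts (the midpoints chosen for the binned ranges are my own assumption). It produces a mean, but by itself says nothing about whether that mean justifies the investment; that would require exactly the baseline the quoted study never states.

    ```python
    # The two samples quoted above, with binned ranges replaced by rough
    # midpoints (assumptions: "zero to two" ~ 1, "three to six" ~ 4.5).
    sample_2004  = [1] * 11 + [5, 12]                   # 13 articles
    sample_other = [1] * 16 + [4.5] * 4 + [8, 11, 16]   # 23 articles

    for name, counts in [("first sample", sample_2004),
                         ("second sample", sample_other)]:
        mean = sum(counts) / len(counts)
        print(f"{name}: n = {len(counts)}, mean citations ~ {mean:.1f}")

    # Whether a mean of roughly 2-3 citations is damning is precisely the
    # open question: it depends on unstated baselines (journal impact
    # factors, comparable fields, cost per article).
    ```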