Function of Stat Academia

Imagine an academic arguing:

Some say academics are lost in their “ivory tower,” trying to impress each other, and so aren’t very useful to the wider world.  But this is ridiculous.  Every academic paper cites previous papers the author found useful in writing it, and academics are very eager to be cited.  The most cited papers are the most celebrated papers.  So of course academics try to be useful.  If huge areas of academia seem pretty useless, that is just because those academics happen to be quite ignorant about how to be useful – it doesn’t mean they aren’t trying to be useful.

See the flaw in that argument?  Right – being useful to other academics as they try to impress each other isn’t at all the same as being useful to the wider world.  Now consider a recent exchange between Seth Roberts and Andrew Gelman (with whom I discussed this in July).  Seth:

Graphs and transformations are low-status. They are low-status because graphs are common and transformations are easy.  Anyone can make a graph or transform their data. I believe they were neglected for that reason.  To show their high status, statistics professors focused their research and teaching on more difficult and esoteric stuff — like complicated regression.  That the new stuff wasn’t terribly useful (compared to graphs and transformations) mattered little.  Like all academics — like everyone — they cared enormously about showing high status.  It was far more important to be impressive than to be useful.


This is, in my experience, ridiculous. Seth … says that useful statistical research work is generally low status. No, no, no, no! It’s hard to be useful! Just about everybody in statistics tries to do work that is useful.

OK, I know what Seth is talking about. I used to teach at Berkeley (as did Seth), and indeed the statistics department back then was chock-full of high-status professors (the department was generally considered #1 or #2 in the world) who did little if anything useful in applied statistics. But they were trying to be useful! They were just so clueless that they didn’t know better. … It’s certainly true that they didn’t appreciate graphical methods or the challenges of getting down and dirty with data. (They might have dismissed such work as being insufficiently general and enduring.) …

And they were also dismissive of applied research areas such as survey research that are fascinating and important but did not happen to be “hot” at the time. This is consistent with Seth’s hypothesis of status-seeking, but I’m inclined to give the more charitable interpretation that my Berkeley colleagues wanted to work on what they viewed as the most important and challenging problems. …

It’s much easier to run a plausible regression or Anova than to make a clear and informative graph.  I’ve published thousands of graphs but created tens of thousands more that didn’t make the cut.


In every department they look down on being useful. In some more than others, sure, and it isn’t constant over time, sure, but the general preference for useless over useful is blindingly clear. This is why academia is called an “ivory tower.” … So to me you seem to be arguing that stat professors are somehow different from all other professors.


I’m not suggesting that professors are different from everybody else … Whether or not status-seeking is the universal behavioral solvent you seem to feel it is, I don’t think it explains much. It seems to serve for you the same tautological purpose that for others is served by explanations such as “self-interest” or “unconscious drives.” Basically, if someone does something you don’t like, you’re attributing it to status-seeking. In my experience in statistics departments, it is applied work, not theoretical work, that has the highest status. Not always, and not everywhere, but most places.

But the “status” theory, that academics are mainly trying to be credentialed as impressive, often gives quite different predictions from the theory that academics are mainly trying to be useful to the wider world.  And Andrew seems to accept this when he talks about academics focusing on what seems “hot” rather than important – that behavior seems less likely if academics were mainly trying to be useful to a wider world.

Also, this status theory suggests not so much that academics try to do things that are hard, but that they try to do things that can be reliably credentialed as hard.  Yes it might be hard to make a good graph, but can journal referees reliably evaluate that difficulty, or is that mostly a subjective judgment?  The status theory expects graph-only stat papers only if referees have a reliable way to evaluate their difficulty, while the useful-to-society theory predicts frequent publication of insightful graph-only papers even when it is hard to objectively evaluate their difficulty.   So what does our data say – can you really get published in a top stat journal with an article containing only hard-to-make but also hard-to-evaluate graphs?

Rather than wave his hands, pretending that impressing-each-other theories of academia make no predictions different from useful-to-society theories, Andrew would do better to confront these theories with our actual data.

  • Regarding your last sentence: I wasn’t “pretending” anything; I was saying that I didn’t get much out of Seth’s arguments about status-seeking. Consider that, in that same blog entry, Seth argued that the work of Steven Levitt (the so-called rogue economist) is low-status. Maybe so, maybe not, but at this point I think Seth is using “low status” more as a synonym for “something Seth likes” than anything else.

    I have no problem if someone wants to study the motivations of statistics professors (or professional workers in general). Neither Seth nor I was giving systematic data: we were giving our personal impressions, and Seth was citing Veblen (which I’d hardly count as “data” on what’s going on today). On one hand, yes, I agree that Seth and I would do better to gather some data on all this, but, hey, we have our day jobs to think about!

    I was sharing my own experience and my observations of others, which is that running a reasonably-good regression turns out to be quite a bit easier than making a useful graph. Even simple regressions can reveal interesting things, whereas the graphs that social scientists make with their data typically don’t show anything interesting at all. Graphics has been less systematized than other areas of statistics, and, yes, methods that are hard to do get done less, I think.

    The point about the difficulty of making good graphs was very relevant to Seth’s argument. Seth claimed that graphs are easy, graphs are low-status, and people don’t do enough of them. I’m saying that graphs are hard and people don’t do enough of them. The difference in our positions is that Seth needs the status argument, otherwise he is left with a mystery (why don’t social scientists do the easy, more effective data-analytic option?), whereas for me the story is much more straightforward (and fits in better with my own experience).

    P.S. You ask about “hard-to-make but also hard-to-evaluate graphs.” It’s too bad that graphs are hard to make–that’s a bug, not a feature!–and it’s really too bad if graphs are hard to evaluate! The idea is for them to be easy to understand. In any case, my two most influential political science papers were full of innovative graphs.

    P.P.S. I agree with Seth (and, I think with you) that there is a large subset of academic statisticians that worship technical difficulty. Overall, though, I still think this stuff is low-status. These people might want to publish useless, super-theoretical work, but nobody else cares about it.

    • Andrew, yes or no: could you publish an article in a top stat journal that only contained insightful but hard to objectively evaluate graphs?

      • I’m not quite sure what you’re asking. Any article will have more than graphs: it will have words, numbers, maybe some formulas, …

        But perhaps this graph-heavy article in the journal Applied Statistics would count.

        Or this one in Annals of Applied Statistics.

        Or this one in the Journal of the American Statistical Association.

        And so on.

      • Your first paper is a method paper, the second very clearly has an explicit fitted model, and the third link is the same as the second. The ideal here would be a paper that just displays data in a way that helps you understand some phenomena, without an intervening model, something like what DR describes below.

      • Robin: Sorry about the repeated link. I was taking articles from this page.

        You ask for “a paper that just displays data in a way that helps you understand some phenomena, without an intervening model.” The closest example of this, I think, is this article which appeared in a top poli sci journal. I imagine we could’ve published it in a top stat journal, but that wouldn’t have reached the best audience–we were doing political science, after all.

        More generally, though, I think your query doesn’t make so much sense. To say that my paper in Applied Statistics doesn’t count because it’s a “methods paper” . . . that misses the point, I think. Statistics is methods. I’m claiming that academic statisticians generally give high status to useful methods, I think Seth is claiming that statisticians give high status to useless methods.

        Also, I disagree with what I think is your implication when you write, “without an intervening model.” I believe strongly–and I think this has happened in my own applied work–that graphical methods are most effective when used alongside a model, and that models are most effective when used with graphics. (See this paper on exploratory data analysis for complex models.) That was one of the big points of my blog entry that you linked to. I think it’s a major mistake–but an understandable one, given the history of the field–to associate statistical graphics with the display of data without a model.

      • OK, it sounds like you are saying that stat journals don’t publish just-graph-the-data papers, and furthermore that you don’t approve of such papers in stat journals. But while the status theory predicts this behavior, you suggest instead that such graph papers are objectively bad and so unworthy of publication.

      • Huh? Statistics journals publish all sorts of things that I don’t approve of. But a graph of raw data with no methodological content–sure, that’s something that would be more appropriate in an applications journal. It’s not about status, it’s about finding the right audience. But if a stat journal wanted to publish the occasional raw-data graph, that would be fine with me. I think if Gary and I had submitted our 1993 paper–which was structured around graphs with very little mathematical modeling–to a top stat journal, it would’ve been accepted for publication.

  • tom

    This is a little like Tyler Cowen’s post the other day on Olympic figure skating, and a Steve Sailer post and comment on the same subject.

    The rules of figure skating have evolved to measure the things that are consistently measurable, even if those may not otherwise be the things that have been considered the most artistic/important/interesting in the past.

    Both of these are like the old joke about the guy looking for his keys under the lamppost. Large mature systems will measure things because they are consistently measurable even if they aren’t the things that would be most closely tied to the trait they are supposed to be tracking.

    Another argument against big companies, big universities, big governments….

  • Most of the useful stuff that we humans know is pretty simple, and so cannot be taught in school because it is too easy to learn; school is about rigorous testing (grading people) far more than it is about teaching useful stuff.

  • DR

    In comparison I do quant research in high frequency finance where there’s a lot more emphasis on being right rather than being impressive (if not just for the reason that your models get tested/invalidated over a period of hours or days).

    Even though the place tends to be stocked with PhDs who could easily be academics (some of whom were), you very rarely see anything more complicated than ordinary linear regressions or basic ARMA time series used. Histograms and scatterplots are heavily used as are sample transformations.
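        The toolkit DR describes can be sketched in a few lines. This toy example (my own construction in numpy – the data and variable names are invented for illustration, not taken from DR’s work) fits an ordinary linear regression and tabulates a histogram, the whole extent of the machinery mentioned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "signal vs. return" data: a noisy linear relationship.
signal = rng.normal(size=500)
ret = 0.3 * signal + rng.normal(scale=1.0, size=500)

# Ordinary least squares fit: slope and intercept of the line.
slope, intercept = np.polyfit(signal, ret, deg=1)

# Histogram summary of the returns (the numbers behind a plotted histogram).
counts, edges = np.histogram(ret, bins=20)
```

        A scatterplot of `signal` against `ret` with the fitted line overlaid would complete the picture; the point is that none of this requires anything beyond least squares.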

    • michael vassar

      Might more complex methods be appropriate where data is more expensive to collect, time horizons are longer, testing more difficult, etc?

      • fburnaby

        Yup. My research uses geospatial and oceanographic data. My entire research project could be replaced by a few sampling programs over 5 years, instead of having me do 2 years worth of analysis on existing data. The difference is that my time costs maybe $60,000 (inc. overheads) over those 2 years (I’m a student), and the sampling program would cost O($100,000,000).

        Obviously it’s rational for me (and the O(5) others) to milk this data for all it’s worth.

        This is probably a niche thing that no-one outside of academia would do, but there is a practical need for some more complicated (what might get accused of “masturbatory”) approaches. But it’s my understanding that this is what university research departments are *for* — solving the non-standard problems. Academia is where you go to do work that, while important, no-one else would bother with.

      • fburnaby

        Typo above. I have one too many zeros: should be $10,000,000.

  • economics master’s student

    I feel the pressure every time I write a paper. Don’t bother with the simple stat tools, even if they are the best way to get a point across. The way to get high marks is to flex those stat muscles!

    Reminds me of an econometrics professor telling me how “hot” marginal effects were when he was a grad student. They were hard to compute then. Now they’re easy as hell, and if you include them in a paper it better not be the main dish.

  • Brenton

    I don’t know anything, but I had heard that graduate schools of economics are basically applied mathematics fields. They don’t actually study economics… instead they just try to do the most ‘impressive’ mathematics formulas for supply and demand calculations that they can handle.

    • economics master’s student

      Yep. Math, math, stats, and math with a bit of economics sprinkled on top.

      • David Jinkins

        Econ graduate school is certainly math heavy, but this background knowledge is necessary for understanding and adding to the frontier of economics research. For example, everyone knows how to open Stata and run an OLS regression with robust standard errors. However, to develop a new statistical test or to create a good estimator of some new phenomenon you need to understand the nuts and bolts of what is going on in OLS, like the different kinds of probabilistic convergence and a whole slew of theorems relating to them. This takes a fair bit of real analysis and probability theory.

        Having a loose understanding of the meaning of some important bits of economic theory (say the Lucas critique, or the asset premium paradox or whatever) is good up to a point, but if you want to add something useful to the discussion, you have to understand the nitty gritty mathematical details.
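        The “nuts and bolts” hiding behind that one-line Stata command can be made concrete. Here is a sketch (my own construction, not from the comment; the data are invented) of what a robust-standard-error regression actually computes: OLS via the normal equations, then the HC1 sandwich estimator, which is what Stata’s `robust` option reports:

```python
import numpy as np

rng = np.random.default_rng(1)

# Heteroskedastic data: the noise variance grows with x.
n = 1000
x = rng.uniform(0, 2, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + x, size=n)

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n), x])

# OLS coefficients via the normal equations: beta = (X'X)^{-1} X'y.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Classical (homoskedastic) standard errors.
k = X.shape[1]
sigma2 = resid @ resid / (n - k)
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# HC1 "robust" standard errors: the sandwich estimator
# (X'X)^{-1} [X' diag(e^2) X] (X'X)^{-1}, with a small-sample factor.
meat = (X * resid[:, None] ** 2).T @ X
V_robust = n / (n - k) * XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(np.diag(V_robust))
```

        Knowing why the sandwich form stays consistent under heteroskedasticity, while the classical formula does not, is exactly the kind of asymptotic argument the coursework is preparing students for.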

      • economics master’s student

        Right, you need the math if you want to play with the big boys. And the fun train never stops if you signed up because you wanted to study math and statistics. Economics is what you learn on your own time.

        I’m happy, but my brain was never poisoned by the thought that I might learn something useful.

      • valter

        No, you need the math (or at least some kinds of math) even if you want to do applied work. Real life seldom throws you problems that you can solve simply by pressing a button in Stata; you have to know what lies behind the buttons both to decide which buttons to press and to write up your own models and algorithms when no button would work.

        (nothing in the above statement should be construed as an endorsement of the typical standard curriculum of undergraduate and graduate economics education; that would require a separate lengthy discussion, but my personal belief is that the total weight of math in the ideal curriculum is not going to be any lower than it is)

      • economics master’s student

        Yeah, I’m obviously talking about high-powered math, not mere math. Perfect for PhD prep and signalling that you’re a powerful human being, but overkill for nearly all practical applications.

        But, hey, if it’s a status-mongering world then play it as it lies.

  • The goal of an academic journal is to publish papers that incrementally advance knowledge, and the goal of peer review is to make sure that the paper actually includes an incremental advance — it isn’t enough for the author to claim so.

    This naturally leads to an ‘auditability bias’: papers that cannot definitively demonstrate their incremental advance, as interesting as they may be, generally fall to lower-tier journals.

    I’m not sure whether this is a bug or a feature, but I find it hard to believe that any journal editor or reviewer has ever said “this is an interesting paper, but the topic is just too darn useful to publish.” Instead, it turns out that useful papers are often not easily audited.

    As one of my mentors said long ago, “if someone hasn’t written a paper on an obviously important issue yet, either you are the first to think of it or it is really hard to do well.” A key part of an academic researcher’s growth is to realize, humbly, that there are probably a lot of smart people who tried to address obviously important issues before, and failed.

    A secondary observation: an incremental contribution to knowledge is typically not all that useful to present day problems — if it were, someone in the private sector would be doing the work and profiting from it. Why would that be the best use of researchers employed by non-profit organizations and governments?

  • anon

    My guess is that the “impressiveness” and “status” effects are real, but they are unintended artifacts of the journal-publishing process.

    Editors of top stats journals would like to publish articles which will be heavily cited and boost their journal’s impact factor. Using impressive, esoteric techniques is a credible signal that the paper is more likely to be highly cited, because: (1) some esoteric techniques turn out to be actually useful, even though most are not, and (2) esoteric techniques are related to effort, and expending lots of effort on a single paper signals that the author’s hidden info about that particular paper is very good.

    Note that the most celebrated papers tend to be cited across disciplines and/or reported in the science news. It would be hard to argue that the authors of these papers are simply trying to impress their fellow academics.

    I’m not sure about the graphs vs. math issue, but since information visualization is a well-established field, I find Seth’s claims about the “low” status of graphics to be unconvincing.

  • Guest

    I read Andrew’s point as this: the difficulty of doing useful work is a utility cost that often outweighs the utility gains of the status associated with such work. Different marginal returns to work, in the form of status, may not be predictive of behavior, because those marginal returns and the marginal costs of doing the work are so highly correlated that net motivation is only weakly correlated with status outcomes.
    He suggests that academics like hot fields – those where the ratio of marginal return to marginal cost is high – which attract many investigators, as one would expect in an efficient market that has just discovered an intellectual arbitrage to exploit before it disappears. This does not contradict the claim that high status vs. low status is not a predictive variable. It can support the claim that status per unit of work is the predictive variable, if those assumptions about what makes fields hot are right. Aggregation behavior is as indicative of a feeding frenzy as of self-reinforcing status games.
    Finally, graph-only papers seem like an absurd edge case and not especially useful in proving your point. First, you have not made the case for why all-graph papers should be written more than they are, especially given Andrew’s description of how much harder it is to make good graphs than to run regressions. Second, I suspect that good all-graph papers can usually be improved by adding other analysis, and I would expect reviewers to recommend revision for papers that could easily be improved by adding more analysis even if they thought the paper were already publishable.
    Without refinement, the status theory does not make predictions as clearly as you suggest in this post.

  • Don’t neglect the issue of intrinsic interestingness. Things which require novel thoughtful solutions can provide pleasure that more mundane work does not. This, frankly, is what drives a great deal of mathematics though fads do play a part.

    • ChristianK

      Preferring to work on intrinsically interesting problems instead of useful ones is a way to signal status.

      From an evolutionary perspective, low-status people can’t afford to waste their precious time on intrinsically interesting things.
      Those with high status, who don’t have to spend their time on practical stuff, can spend it on what interests them.

      • Popeye

        And preferring to work on useful stuff is also a way to signal status.

        And you know what’s a really great way to signal status? To talk about how everything boils down to signalling status.

        And an even better way to signal status? Talk about how status theory predicts that even if status theory is true, people will not believe in status theory because it’s true, but only because it’s a good way of signalling status.

        I’m king of the world!

  • valter

    I agree with Peter Gardes. I would push his argument even further: fads may also be generated by intrinsic interestingness independently of any status seeking (though I am not denying the importance of the latter here). If you hear a lot of colleagues talking about a given class of problems, you may simply get more curious about that class; and, when you start digging, almost any area of research will present interesting tidbits. I guess it is a kind of rational herding.