Why so little model checking done in statistics?

One thing that bugs me is that there seems to be so little model checking done in statistics.  Data-based model checking is a powerful tool for overcoming bias, and it’s frustrating to see this tool used so rarely.  As I wrote in this referee report,

I’d like to see some graphs of the raw data, along with replicated datasets from the model. The paper admirably connects the underlying problem to the statistical model; however, the Bayesian approach requires a lot of modeling assumptions, and I’d be a lot more convinced if I could (a) see some of the data and (b) see that the fitted model would produce simulations that look somewhat like the actual data. Otherwise we’re taking it all on faith.

But why, if this is such a good idea, do people not do it?

I don’t buy the cynical answer that people don’t want to falsify their own models. My preferred explanation might be called sociological and goes as follows: We’re often told to check model fit. But suppose we fit a model, write a paper, and check the model fit with a graph. If the fit is OK, then why bother with the graph: the model is OK, right? If the fit shows problems (which, realistically, it should, if you think hard enough about how to make your model-checking graph), then you’d better not include the graph in the paper, or the reviewers will reject it, saying that you should fix your model. And once you’ve fit the better model, there’s no need for the graph.

The result is: (a) a bloodless view of statistics in which only the good models appear, leaving readers in the dark about all the steps needed to get there; or, worse, (b) statisticians (and, in general, researchers) not checking the fit of their model in the first place, so that neither the original researchers nor the readers of the journal learn about the problems with the model.

One more thing . . .

You might say that there’s no reason to bother with model checking since all models are false anyway. I do believe that all models are false, but for me the purpose of model checking is not to accept or reject a model, but to reveal aspects of the data that are not captured by the fitted model. (See chapter 6 of Bayesian Data Analysis for some examples.)
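The kind of check described above can be sketched in a few lines. This is a hypothetical minimal example, not code from the paper under review: it fits a simple normal model to data with a heavy tail (both the data and the choice of test statistic are my own invented illustration), draws replicated datasets from the fitted model, and summarizes the comparison with a tail-sensitive test statistic. In practice you would also plot the replicated datasets next to the real data.

```python
# A minimal sketch of a posterior predictive check, assuming a normal
# model fit by maximum likelihood (a stand-in for a full Bayesian fit).
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data -- simulated here, with a heavy-tailed component
# that the normal model will fail to capture.
y = np.concatenate([rng.normal(0, 1, 95), rng.normal(0, 10, 5)])

# Fit the model to the data.
mu_hat, sigma_hat = y.mean(), y.std()

# Draw replicated datasets from the fitted model.
n_rep = 1000
y_rep = rng.normal(mu_hat, sigma_hat, size=(n_rep, y.size))

# Test statistic chosen to be sensitive to tails: max |y|.
T_obs = np.abs(y).max()
T_rep = np.abs(y_rep).max(axis=1)

# Fraction of replications at least as extreme as the observed data.
# Values near 0 or 1 flag an aspect of the data the model misses.
p_value = (T_rep >= T_obs).mean()
print(f"T(y) = {T_obs:.2f}, tail probability = {p_value:.3f}")
```

The point of the exercise is not the number at the end but the comparison itself: if simulations from the fitted model look nothing like the data in some respect you care about, you’ve learned something about the model.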

  • Great post, Andrew. I’d like to see more of this from you, and less moralizing or positive self-positioning (particularly of the variety that’s not carefully derived from empiricism).

    I don’t think comments of yours such as “I don’t hold meetings to signal my status” add value the way your OP here does.

  • Sounds like a putdown to me, HA. Sure you’re not seeking status?

    Andrew, the sense I know of model checking is the theorem-proving one, but that’s obviously not what you mean here. Googling “data-based model checking” didn’t turn up much. The impression I get is that you want the authors to write a simulation which uses the model fitted to the data to output new data, and then graph the new data and the real data side-by-side. Is this correct?

  • Eliezer,

    Yes, that’s what I’m talking about.

  • Eliezer,
    My internal sense is that I’m seeking persistence, and that rationally status-seeking behavior on my part should be completely subordinate to that goal.

  • g

    HA, you might be seeking status without consciously seeking status. I think Eliezer’s question meant “are you sure you aren’t fooling yourself?” rather than “are you sure you aren’t lying to us?”.

  • g, I think my post reflects awareness of that possibility. My request to Andrew in my September 22, 2007 at 01:27 PM post stands. Readers (Andrew included) can make their own judgments as to the merit of my request.
