Author Archives: Andrew

The what-pisses-you-off heuristic

This post by Robin, in which he is annoyed that an organization of interior designers has persuaded state legislatures to license their profession (such as it is), and in which he is also annoyed that his fellow economists don’t make more of a fuss about such regulations, reminds me of a principle I heard once (I don’t remember where): you can really understand someone’s deeper ideology by looking at what pisses him or her off.

As Robin himself notes, the licensing of florists, funeral directors, and interior designers is not a big deal–certainly nothing on the order of the problems caused by overfishing, say, or by various trade and migration restrictions, or even the (arguably) large problems caused by policies, such as the mortgage tax deduction, that reduce people’s ability to move.

Nonetheless, Robin writes of economists’ disinclination to fight the licensing battle that it “saddens me more than I can say.” I don’t doubt his sincerity, but what’s most interesting to me here is to think about why this bothers him so much.

P.S. I certainly don’t mean this as a personal criticism of Robin in any way. I have my own things that piss me off for no particular reason, ranging from socks lying around on the floor (they’re not a practical obstacle, so why does the messiness bother me so much?) to misinterpretations (as I see them) of Bayesian statistics–things that are probably lower on the scale of importance than the net welfare loss caused by economists not fighting the licensing of florists.

There are so many things to be pissed off about that the choice of what we let bother us can perhaps be revealing.

The intervention and the checklist: two paradigms for improvement

I’m working on a project involving the evaluation of social service innovations, and the other day one of my colleagues remarked that in many cases we really know what works; the issue is getting it done. This reminded me of a fascinating article by Atul Gawande on the use of checklists for medical treatments, which in turn made me think about two different paradigms for improving a system, whether it be health, education, services, or whatever.

The first paradigm–the one we’re taught in statistics classes–is of progress via “interventions” or “treatments.” The story is that people come up with ideas (perhaps from fundamental science, as we non-biologists imagine is happening in medical research, or from exploratory analysis of existing data, or just from somebody’s brilliant insight). These ideas then get studied–possibly through randomized clinical trials, but that’s not really my point here; my real focus is on the concept of the discrete “intervention”–and some are revealed to be successful and some are not (with allowances made for multiple testing or hierarchical structure in the studies). The successful ideas then get dispersed and used widely, and there’s a secondary phase in which interventions can be tested and modified in the wild.

The second paradigm, alluded to by my colleague above, is that of the checklist. Here the story is that everyone knows what works, but for logistical or other reasons, not all of these things always get done. Improvement occurs when people are required (or encouraged, or bribed, or whatever) to do the 10 or 12 things that, together, are known to improve effectiveness. This “checklist” paradigm seems very different from the “intervention” approach that is standard in statistics and econometrics.

The two paradigms are not mutually exclusive. For example, the items on a checklist might have had their effectiveness individually demonstrated via earlier clinical trials–in fact, maybe that’s what got them on the checklist in the first place. Conversely, the procedure of “following a checklist” can itself be seen as an intervention and be evaluated as such.

And there are other paradigms out there, such as the self-experimentation paradigm (in which the generation and testing of new ideas go together) and the “marketplace of ideas” paradigm (in which more efficient systems are believed to evolve and survive through competitive pressures).

I just think it’s interesting that the intervention paradigm, which is so central to our thinking in statistics and econometrics (not to mention NIH funding), is not the only way to think about process improvement. A point that is obvious to nonstatisticians, perhaps.

Different meanings of Bayesian statistics

I had a discussion with Christian Robert about the mystical feelings that sometimes seem to be inspired by Bayesian statistics.  The discussion originated with an article by Eliezer, so it seemed appropriate to put it here on Eliezer's blog.  As background, both Christian and I have done a lot of research on Bayesian methods and computation, and we've both written books on the topic, so in some ways we're perhaps too close to the subject to be the best judges of how a newcomer will think about Bayes.

Christian began by describing Eliezer's article about constructing Bayes’ theorem for simple binomial outcomes with two possible causes as "indeed funny and entertaining (at least at the beginning) but, as a mathematician, I [Christian] do not see how these many pages build more intuition than looking at the mere definition of a conditional probability and at the inversion that is the essence of Bayes’ theorem. The author agrees to some level about this . . . there is however a whole crowd on the blogs that seems to see more in Bayes’s theorem than a mere probability inversion . . . a focus that actually confuses—to some extent—the theorem [two-line proof, no problem, Bayes' theorem being indeed tautological] with the construction of prior probabilities or densities [a forever-debatable issue]."
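(For concreteness, here is that "mere probability inversion" spelled out in a few lines of Python: a minimal sketch using the mammography numbers from Eliezer's article, namely 1% prevalence, 80% sensitivity, and a 9.6% false-positive rate. The function name is mine, for illustration only.)

```python
# Bayes' theorem as a probability inversion, for a binary hypothesis.
# Numbers follow the mammography example in Eliezer's article:
# 1% prevalence, 80% sensitivity, 9.6% false-positive rate.

def posterior(prior, p_data_given_h, p_data_given_not_h):
    """P(H | data) = P(data | H) P(H) / P(data)."""
    numerator = prior * p_data_given_h
    evidence = numerator + (1 - prior) * p_data_given_not_h
    return numerator / evidence

print(posterior(0.01, 0.80, 0.096))  # ~0.078: most positive tests are false alarms
```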

I replied that there are several different points of fascination about Bayes:

Continue reading "Different meanings of Bayesian statistics" »

Rationality of voting etc.

I was going to respond to this post by Philip Goetz (who writes that "voting kills"), but I thought it would make more sense to summarize in a post of my own.  Even if you don’t care about voting, these issues–how to compute the probabilities of extremely rare events–are relevant in other decision settings.

Goetz reports an estimate by Donald Redelmeier that there is an 18% increase in motor vehicle deaths on election day, corresponding to an average of 24 extra deaths per year, and compares it to the 1 in 60 million probability of casting a decisive vote that Aaron Edlin, Nate Silver, and I estimated a few days before the election.  (If anyone is interested in the details of our calculations, they are in this article.)

So the quick calculation goes like this:  24 out of 300 million is about five times 1 in 60 million.  So, according to these numbers, the chance of being killed in a car accident on the way to the polls is, on average, about five times the chance of your vote making a difference.  On the other hand, people notoriously underestimate the risk of car crashes, so it’s not quite clear what to make of this.

Some other quick calculations might help make sense of this.  The probability that your vote will swing the election is essentially equal to 1/10,000 times the probability that a change of 10,000 votes will swing the outcome.  The latter has an average probability of about 1 in 6,000, which is a little easier to grasp; multiplying the two factors together recovers the 1 in 60 million figure.
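(These back-of-the-envelope numbers are easy to check; here is the arithmetic in a few lines of Python, using only the figures quoted above.)

```python
# Back-of-the-envelope checks of the figures quoted above.

p_decisive = 1 / 60e6      # Edlin/Silver/Gelman pre-election estimate
p_car_death = 24 / 300e6   # Redelmeier's 24 extra deaths over ~300M people

print(p_car_death / p_decisive)    # ~4.8: crash risk is ~5x the decisive-vote chance

# Decomposition: P(decisive) = (1/10,000) * P(a 10,000-vote shift swings it)
print((1 / 10_000) * (1 / 6_000))  # ~1.7e-8, i.e., about 1 in 60 million
```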

Continue reading "Rationality of voting etc." »

Yes, it can be rational to vote in presidential elections

With less than a year to the next election, and with the publicity starting up already, now is a good time to ask, is it rational for you to vote? And, by extension, is it worth your while to pay attention to what Hillary, Rudy, and all the others will be saying for the next year or so? With a chance of casting a decisive vote that is comparable to the chance of winning the lottery, what is the gain from being a good citizen and casting your vote?

The short answer is: quite a lot. First, the bad news. With 100 million voters, the chance that your vote will be decisive–even if the national election is predicted to be reasonably close–is, at best, 1 in a million in a battleground state such as Ohio, and less than 1 in 10 million in a less closely fought state such as New York. (The calculation is based on the chance that your state’s vote will be exactly tied, along with the chance that your state’s electoral votes are necessary for one candidate or the other to win the Electoral College. Both conditions are necessary for your vote to be decisive.) So voting doesn’t seem like such a good investment.

But here’s the good news. If your vote is decisive, it will make a difference for 300 million people. If you think your preferred candidate could bring the equivalent of a $50 improvement in the quality of life to the average American–not an implausible hope, given the size of the federal budget and the impact of decisions in foreign policy, health, the courts, and other areas–you’re now buying a $15 billion lottery ticket. With this payoff, even a 1 in 10 million chance of being decisive isn’t bad odds.
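(Here is that expected-value arithmetic spelled out; the $50-per-person figure is the assumption from the paragraph above, not an estimate.)

```python
# Expected social value of a vote, under the assumptions above.

benefit_per_person = 50                  # assumed $50 improvement per American
n_people = 300e6
payoff = benefit_per_person * n_people   # the $15 billion "lottery ticket"

for p_decisive in (1 / 1e6, 1 / 10e6):   # battleground vs. safe-state odds
    print(f"p = {p_decisive:.0e}: expected social benefit = ${p_decisive * payoff:,.0f}")
```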

And many people do see it that way. Surveys show that voters choose based on who they think will do better for the country as a whole, rather than for their own personal betterment. Indeed, when it comes to voting, it is irrational to be selfish; but if you care how others are affected, it’s a smart calculation to cast your ballot, because the returns to voting are so high for everyone if you are decisive. Voting and vote choice (including related actions such as the decision to gather information in order to cast an informed vote) are rational in large elections only to the extent that voters are not selfish.

Continue reading "Yes, it can be rational to vote in presidential elections" »

Some structural biases of the political system

In this recent entry, Eliezer discussed what might be called the "pork-barrel paradox" in politics: even when there is public support for reducing the size of government, the political constituency for individual programs can be strong enough to keep them all going.  He also points out that the occupations represented in Congress don’t match the country at large, and maybe don’t match what we really need.  (To briefly quote myself: I’m willing to believe that the country’s 890,000 lawyers are being overrepresented, but what about the 114,000 biologists? A few of these in Congress might advance the understanding of public health. And then there are the 290,000 civil engineers–I’d like to have a few of them around also. I’d also like some of the 280,000 child care workers and 620,000 pre-K and kindergarten teachers to give their insight in deliberations on family policy. And the 1.1 million police officers and 340,000 prison guards will have their own perspectives on justice issues.)

Anyway, this reminded me of some other biases that are inherent in our political system:

Continue reading "Some structural biases of the political system" »

Battle of the election forecasters

Douglas Hibbs is a political scientist whose "bread and peace" model forecasts presidential election votes pretty well from the economy alone, with corrections for wartime.  (I don’t know how to upload graphs to this blog, so I’ll point you to some pretty pictures of how the model works for the elections from 1952 through 2004.)
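(For readers who want the flavor of such a model, here is a toy sketch: incumbent-party vote share as a linear function of term-average real income growth, with a penalty for war fatalities. This follows my loose reading of the bread-and-peace setup; the functional form is simplified and the coefficients below are invented for illustration, not Hibbs's estimates.)

```python
# A toy economy-plus-war forecast in the spirit of Hibbs's "bread and peace"
# model. Coefficients are made up for illustration, not Hibbs's estimates.

def toy_vote_share(avg_income_growth, war_fatalities_per_million,
                   intercept=46.0, b_growth=3.5, b_war=-0.1):
    """Hypothetical: incumbent two-party vote % from the economy and war."""
    return (intercept
            + b_growth * avg_income_growth           # % real income growth per year
            + b_war * war_fatalities_per_million)    # cumulative wartime deaths

print(toy_vote_share(2.0, 0))    # strong economy, peacetime: 53.0
print(toy_vote_share(1.0, 40))   # weaker economy, costly war: 45.5
```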

This is interesting in its own right–if elections can be predicted, how do we make sense of fluctuations in the polls?–but what I thought would particularly interest the Overcoming Bias community is Hibbs’s discussion, in his recent article, of an article by William Nordhaus claiming that economic forecasts did not actually work well in 2004.  Nordhaus writes, "the Republican incumbent candidate in 2004 did significantly worse than would be predicted based on economic and political variables such as incumbency and economic performance."  Hibbs, however, makes a convincing case that Nordhaus just looked at some bad models.  Here’s Hibbs’s paper; the discussion of different forecasting models begins on page 5.

As a bonus, here’s an article by Bob Erikson and Chris Wlezien on why the political markets have been inferior to the polls as election predictors.  Erikson and Wlezien write,

Continue reading "Battle of the election forecasters" »

The fallacy of the one-sided bet (for example, risk, God, torture, and lottery tickets)

This entry by Eliezer struck me as an example of what I call the fallacy of the one-sided bet.  As a researcher and teacher in decision analysis, I’ve noticed that this form of argument has a lot of appeal as a source of paradoxes.  The key error is to frame a situation as a no-lose (or no-win) scenario, formulating the problem in such a way that tradeoffs are not apparent.  Some examples:

Continue reading "The fallacy of the one-sided bet (for example, risk, God, torture, and lottery tickets)" »

Why so little model checking done in statistics?

One thing that bugs me is that there seems to be so little model checking done in statistics.  Data-based model checking is a powerful tool for overcoming bias, and it’s frustrating to see this tool used so rarely.  As I wrote in this referee report,

I’d like to see some graphs of the raw data, along with replicated datasets from the model. The paper admirably connects the underlying problem to the statistical model; however, the Bayesian approach requires a lot of modeling assumptions, and I’d be a lot more convinced if I could (a) see some of the data and (b) see that the fitted model would produce simulations that look somewhat like the actual data. Otherwise we’re taking it all on faith.
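(In code, the kind of check I'm asking for is not hard. Here is a minimal sketch with made-up data and a deliberately crude fitted model, using plug-in estimates rather than full posterior draws for brevity, just to show the mechanics of comparing replicated datasets to the raw data.)

```python
# A minimal sketch of the check described above: simulate replicated
# datasets from the fitted model and compare them to the raw data by eye.
# Made-up data and a plug-in "fit" stand in for a real Bayesian analysis.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=2.0, size=100)   # stand-in for the real data

# Deliberately crude fitted model: normal with the sample mean and sd.
mu_hat, sigma_hat = y.mean(), y.std()

fig, axes = plt.subplots(1, 4, figsize=(12, 3), sharex=True)
axes[0].hist(y, bins=20)
axes[0].set_title("raw data")
for ax in axes[1:]:
    y_rep = rng.normal(mu_hat, sigma_hat, size=len(y))  # replicated dataset
    ax.hist(y_rep, bins=20)
    ax.set_title("replication")
plt.show()
```

If the model were reasonable, the replications would look like the first panel; here the skewness of the data gives the mismatch away, which is exactly the sort of thing these plots are for.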

But why, if this is such a good idea, do people not do it?

Continue reading "Why so little model checking done in statistics?" »

Bias-awareness bias, or was 9/11/01 a “black swan”?

The bias I’m talking about here–I’m not quite sure what to call it–is the readiness to assume bias where possibly none exists.  Or, more generally, the overestimation of the magnitude of a bias, or the attribution to bias of a phenomenon that can be explained more directly.  I’m thinking specifically of Eliezer’s entry on hindsight bias where he wrote:

Hindsight bias is when people who know the answer vastly overestimate its predictability or obviousness, compared to the estimates of subjects who must guess without advance knowledge. . . . Shortly after September 11th 2001, I [Eliezer] thought to myself, and now someone will turn up minor intelligence warnings of something-or-other, and then the hindsight will begin. Yes, I’m sure they had some minor warnings of an al Qaeda plot, but they probably also had minor warnings of mafia activity, nuclear material for sale, and an invasion from Mars.

This doesn’t seem quite right to me:  I’d think the FBI and CIA would have the resources to investigate warnings of an al Qaeda plot, mafia activity, and nuclear material for sale (and I think they know enough to ignore warnings of invasions from Mars).  As Alex puts it here,

What about this specific threat, Osama Bin Laden? Well, he did have a past prior for trying to blow up the World Trade Center, didn’t he? I don’t think his past failure would have made it less likely for him to try again, do you?

The comments at that link are also relevant to this discussion.  Anyway, my key point here is that people do make mistakes–people even make mistakes that could have been caught ahead of time if proper procedures had been followed.  In these cases, the concept of "hindsight bias" can be used inappropriately as a blanket to cover up all failures.
