Monthly Archives: February 2007

Bias Toward Certainty

Before the Iraq invasion, President Bush did not say, "I think that there is a 60 percent chance that Saddam has an active WMD program."

Al Gore does not say, "I think there is a 2 percent chance that if we do nothing there will be an environmental catastrophe that will end life as we know it."

Instead, they speak in the language of certainty.  I assume that as political leaders they know a lot better than I do how to speak to the general population.  So I infer that, relative to me, the public has a bias toward certainty.

Another piece of evidence of that is an anecdote I cite about my (former) doctor.  Several years ago, he said I needed a test.  I did some research and some Bayes’ Theorem calculations, and I faxed him a note saying that I did not think the test was worth it.  He became irate.
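
For concreteness, here is a minimal sketch of the kind of Bayes' Theorem calculation involved; the prevalence, sensitivity, and specificity below are made-up illustration values, not the actual figures from my case.

```python
# Sketch of a Bayes' Theorem check on whether a diagnostic test is worth it.
# All numbers are hypothetical, chosen only to illustrate the reasoning.

prior = 0.01        # assumed prior probability of having the condition
sensitivity = 0.90  # P(positive test | condition present)
specificity = 0.95  # P(negative test | condition absent)

# Overall probability of a positive result
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)

# Posterior probability of the condition given a positive test
posterior = sensitivity * prior / p_positive
print(f"P(condition | positive test) = {posterior:.2f}")  # about 0.15 with these numbers

# With a low prior, most positive results are false positives, which is the sort
# of result that can make a test look not worth its cost, risk, and anxiety.
```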

I think that one reason that our health care system works the way it does is that it does not occur to anyone to say, "OK, I can live with that level of uncertainty."  Instead, we must have the MRI, or the CT scan, or whatever.  Even, as in my case, when the patient is willing to live with uncertainty, the doctor has a problem with it.

Another way that bias toward certainty shows up is in the way we handle disagreement.  People don’t say that there were differences within the intelligence community about the probability distribution for Saddam having WMD.  They say that Bush manipulated the intelligence.  And they are right, in the sense that he tried to make it sound certain. 

My point is that what comes naturally to a lot of people on this blog–thinking in Bayesian terms–is in fact very unnatural in general.  It seems as though outside of the realm of sports betting, people don’t like to think in terms of chance.  Maybe there are realms where even those of us who are more Bayesian than most are victims of bias toward certainty.


Selection Bias in Economic Theory

A while ago I surprised David Balan by suggesting that selection bias is so strong that, when estimating the health effects of something like alcohol, we should prefer the sloppier, noisier control-variable estimates from papers that focus on other topics to the estimates from papers where alcohol is the main focus. 

Selection bias is also very strong in economic theory.   Here also, "authors, funders, and referees have answers they expect and want to see," and authors can search among possible assumptions to find models that give expected answers.  Unfortunately, there is no useful theory analog of bias-avoiding control variable estimates.  So how can we avoid selection bias in policy advice from economic theorists?

The clearest way I can see is to limit attention to a small range of standard theoretical models, a range that is still capable of giving clear policy advice covering the full ideological range.  And the obvious choice here is:  economic efficiency evaluations of supply and demand with externalities and transaction costs.   This is the main framework taught in introductory economics courses, and it is fully capable of recommending high levels of regulation and intervention, depending on what empirical findings one applies.
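
As a reminder of what that framework looks like in its simplest form, here is a small sketch of a textbook efficiency evaluation with a linear market and a constant per-unit external cost; all the numbers are arbitrary illustration values.

```python
# Textbook efficiency evaluation: linear demand and supply with a per-unit
# external cost. All parameter values are arbitrary, for illustration only.

a, b = 100.0, 1.0    # demand: willingness to pay P = a - b*Q
c, d = 10.0, 1.0     # supply: private marginal cost P = c + d*Q
e = 20.0             # external cost per unit (e.g., pollution)

q_market = (a - c) / (b + d)          # market outcome ignores the externality
q_efficient = (a - c - e) / (b + d)   # efficient output counts the external cost

# Deadweight loss from overproduction: triangle between marginal social cost
# and willingness to pay, over the excess units.
dwl = 0.5 * e * (q_market - q_efficient)

print(f"market quantity:    {q_market:.1f}")
print(f"efficient quantity: {q_efficient:.1f}")
print(f"deadweight loss:    {dwl:.1f}")
# Whether intervention (e.g., a per-unit tax equal to e) is warranted depends
# entirely on the empirical size of e, which is the point about empirical findings above.
```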

Of course professional economic theorists such as myself know a lot more than this basic theory, and it seems a terrible shame to ignore all this further insight.  But I’ll have to admit we are so capable of choosing further assumptions to get the answers we want that outsiders can’t gain much policy advantage from our further insight.  I have a distant hope that betting markets can someday help us overcome this serious limitation. 


Induce Presidential Candidates to Take IQ Tests

Many U.S. voters, I suspect, give significant weight to their estimate of a candidate’s intelligence when deciding whom to vote for. We currently gauge candidates’ intelligence by evaluating their past actions and judging debate performances. But surely a better way would be to have all the candidates take IQ tests, or perhaps some standardized test such as the SAT. True, most candidates took the SAT when they were much younger, but their intellectual capacity might have deteriorated since then. We could induce candidates to take IQ tests by giving federal election funds only to candidates who take them in the year prior to the first presidential primary election.


crackpot people and crackpot ideas

I listened to Brian Doherty talk about his book on the history of libertarianism.  One point he makes is that in the forties and the fifties, libertarians were mostly crackpots.  He suggests that this is likely to be the case for any dissident idea.

This suggests that different ideas are going to occupy different niches.  For example, suppose that there is a large niche for anti-capitalist ideas.  The actual ideas occupying that niche may be different in different time periods, but something always fills that niche.

There may be different niches for pessimistic ideas and optimistic ideas.

When there is a popular idea and a crackpot idea, which is more likely to be right?  Instead of thinking about this problem by thinking in terms of a probability distribution, it may be useful to think of an ecological model.   What sort of false ideas are likely to occupy particular niches, including the niche of popular opinion?  What sort of false ideas are likely to survive by finding crackpots to host them?

Belief in anthropogenic global warming is becoming popular.  Skepticism is becoming crackpot.  What is the probability that the global warming partisans will turn out to be the crackpots?  How does that probability depend on the niche that the global warming idea occupies?

I know that the ecological metaphor has been used in this context, with the term "meme," but I admit I have never read the literature, so I don’t know if the connection between bias and survival of memes has been addressed there.


Press Confirms Your Health Fears

There is a huge disconnect between health factors that research suggests are most important, and health factors that get the most media and policy attention.  A new RWJF working paper suggests that the press overemphasizes obesity to satisfy reader demands:

News reports on the "obesity epidemic" have exploded in recent years, eclipsing coverage of other health issues including smoking. … Anyone with a Body Mass Index (BMI, weight in kilos divided by height in meters squared) over 25 is deemed "overweight." … Almost 2/3 of the U.S. population today weighs "too much" by these standards. Recently, several researchers have argued that, for the overwhelming majority of people, weight is a poor predictor of health and should be less of a public health focus. A recent study by scientists at the Centers for Disease Control and Prevention (CDC) suggests that it is only after BMI reaches 35 that there is a meaningful increase in mortality, and that people in the "overweight" category actually had the lowest rate of mortality. Still, such skeptical voices remain a minority perspective in public discussion of obesity. …

This paper exploited a unique sample of: 1) scientific articles on weight and health; 2) press releases on those studies; and 3) news reports on those same studies … We found that … the news media’s tendency to report more heavily on the most alarmist and individual-blaming scientific studies, and not simply how they frame individual stories, partly explains how the news media dramatize and individualize science. … These findings support the contention that scientists work as "parajournalists," writing their stories, and especially the abstract, with journalists in mind. They then frame their research via press releases and interviews with journalists. A reward structure in which, all things being equal, alarmist studies are more likely to be covered in the media may make scientists even more prone to presenting their findings in the most dramatic light possible.
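
As an aside, the BMI formula and the cutoffs mentioned in the excerpt are easy to make concrete; the example weight and height below are arbitrary illustration values.

```python
# BMI = weight in kilograms divided by height in meters squared.
# Cutoffs from the excerpt: 25 ("overweight") and 35 (where the cited CDC study
# suggests mortality meaningfully increases).

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

example = bmi(weight_kg=80.0, height_m=1.75)   # arbitrary example values
print(f"BMI = {example:.1f}")                  # about 26.1
print("over the 25 'overweight' cutoff:", example > 25)
print("over the 35 cutoff:", example > 35)
```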

The press/policy overemphasis of obesity is probably small compared to the overemphasis of medical care.  In general it is very hard for the press and academic system to tell the public anything much different from what the public expects and wants to hear. 


Too Many Loner Theorists?

The basic job of an economic theorist is to write papers, usually with at most one or two co-authors, that develop new models of some phenomenon of interest. A reasonably successful theorist writes about one such paper per year. Each paper contains a brand-new model, which, while often similar to other models that have come before, has to be built up from scratch. The fact that you have to build up a new model with each paper, combined with the fact that you have to write lots of papers, means that the models can’t be too complicated, or at least can’t be complicated in ways other than the specific ways that you want them to be. They have to be tricked out in just such a way as to allow you to get at the question of interest, while leaving a whole bunch of other (important) stuff out.

This is not the only way that economic modelling could be done. You could imagine an alternative in which teams of modellers work for a long time on developing much more complicated models, running different versions of them (using different assumptions or different values for the various parameters of the model), seeing what pops out, and then writing a string of papers describing the ways in which the model has been tweaked and reporting the results that have been obtained. There are models like this floating around (I think the Federal Reserve has a big one), but they are rare and almost completely absent from the academic literature. Years ago they were somewhat common in macroeconomics, but they seem to have fallen out of fashion.

There might be good reasons not to use these kinds of models. One theorist friend suggested that working through mathematical proofs of relatively simple models provides more intuition and insight than just cranking away on a huge model that’s too complicated for anyone to really understand. I think he has a good point, and there may be others. But I suspect that these kinds of models are rarer than they should be, and I think the reason is that they are not fun for theorists to build. Building a nice model from scratch feels like a creative act, one with a lot of aesthetic appeal, almost more art than science. Building a huge model with a bunch of other geeks and sticking ugly numbers into it feels much different, and the kind of people who like that feeling are the kind who would have become empirical or experimental economists, and not theorists, in the first place.


Words for Love and Sex

Do our words bias our thoughts?   Consider how differently we treat words for love and sex.

Words related to "love" tend to refer to usefully distinct concepts.  Words like "affection, devotion, fondness, and infatuation" describe identifiably different relationships and feelings.  But when we want to describe our affections for each other, we tend to gravitate to the common word "love." 

Words related to "sex," in contrast, tend to refer to pretty much the same concept.  Words like "intercourse, copulation, coitus, congress, relations" have a very similar connotation.  Some other words that don’t go in a family blog give connotations that vary along a spectrum of shock value, and sometimes identify alternative physical positions.   But while we can construct concepts that describe differing sex details or context, we don’t seem much interested in communicating those details.  Nevertheless, we go out of our way to use a wide variety of words for "sex." 

Perhaps those who use different words instead of "love" tend to be less focused on an exclusive relation with a single person, and so we gravitate to "love" to avoid this appearance.  Perhaps we use different words for "sex" in order to signal that we don’t consider our sex partners to be easily interchangeable with others.   

Whatever the reasons, it seems that using a common word can distract us from useful distinctions, while using differing words can distract us from commonalities.  Thanks to Colleen Berndt for suggesting the topic. 


It’s Sad When Bad Ideas Drive Out Good Ones

Recently there was a piece by William Pfaff in the New York Review of Books. It starts off by pointing out that deeply rooted in American political culture is the idea that the United States is not a country like other countries, but rather has a unique (or nearly unique) world historical moral mission, and then makes a more-or-less standard lefty case that this idea has been and continues to be the source of a great deal of misguided and evil U.S. policy.

Pfaff may or may not be right that this widely held belief in a special American moral mission has been a major cause of the many terrible things that we have done in our history. The interesting thing about the piece is that he doesn’t even consider the possibility that there really is, in some meaningful sense that a liberal could get behind, something morally special about the United States. But the United States was explicitly and self-consciously created on the basis of Enlightenment principles, and has a national identity based on a progressive political creed rather than on tribal ties or obedience to kings and priests. This is a remarkable thing, and you would think that it would merit some discussion in a piece on this topic.

But there is none. Why not? One likely reason is that the great majority of the people who talk about the unique moral mission of the United States are illiberal jingoists whose “moral clarity” on subjects related to the use of U.S. power does not stem from a belief that there is an objective moral truth that can be apprehended and should be acted upon, but rather from a belief that whatever the U.S. does is axiomatically right and moral simply because we did it, no matter how stupid or corrupt or homicidal. So Pfaff and others like him are unlikely to pay much attention to anyone who wants to sell them a story about the great moral mission of America. And this is not crazy (though it is very sad); we are stuck in an equilibrium where anyone who makes noises about America’s moral mission is almost certainly a jingoist, so no non-jingoist will take any such idea seriously, so no non-jingoist will have any reason to offer a non-jingoist strain of the idea, and so such a strain never gets a chance to develop or spread. I think a lot of good ideas get frozen out this way.


Truth is stranger than fiction

Robin asks the following question here:

How does the distribution of truth compare to the distribution of opinion?  That is, consider some spectrum of possible answers, like the point difference in a game, or the sea level rise in the next century. On each such spectrum we could get a distribution of (point-estimate) opinions, and in the end a truth.  So in each such case we could ask for truth’s opinion-rank: what fraction of opinions were less than the truth?  For example, if 30% of estimates were below the truth (and 70% above), the opinion-rank of truth was 30%.

If we look at lots of cases in some topic area, we should be able to collect a distribution for truth’s opinion-rank, and so answer the interesting question: in this topic area, does the truth tend to be in the middle or the tails of the opinion distribution?  That is, if truth usually has an opinion rank between 40% and 60%, then in a sense the middle conformist people are usually right.  But if the opinion-rank of truth is usually below 10% or above 90%, then in a sense the extremists are usually right.
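
For concreteness, the opinion-rank defined in the quote takes only a few lines to compute; the estimates and true value below are invented for illustration.

```python
# Opinion-rank of truth: the fraction of point-estimate opinions below the true value.

def opinion_rank(estimates, truth):
    return sum(e < truth for e in estimates) / len(estimates)

# Invented example: ten point estimates of some quantity, and its eventual true value.
estimates = [2.1, 2.4, 2.5, 2.8, 3.0, 3.1, 3.3, 3.6, 4.0, 4.2]
truth = 2.9
print(opinion_rank(estimates, truth))  # 0.4 -> 40% of the opinions were below the truth
```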

My response:

1.  As Robin notes, this is ultimately an empirical question which could be answered by collecting a lot of data on forecasts/estimates and true values.

2.  However, there is a simple theoretical argument suggesting that the truth will generally be more extreme than the point estimates; that is, the opinion-rank (as defined above) will have a distribution that is more concentrated at the extremes than a uniform distribution would be.  (A small simulation sketch illustrating this appears below.)

The argument goes as follows:

Continue reading "Truth is stranger than fiction" »
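
To make point 2 concrete, here is a minimal simulation sketch under assumptions I am supplying for illustration, not necessarily those of the full argument: a normal prior, forecasters who each see an independent signal that is noisy relative to the prior spread, and who report posterior means. Because posterior means are shrunk toward the prior mean, the truth tends to land in the tails of the opinion distribution.

```python
import random

# Illustrative simulation (all modeling choices here are mine, for simplicity):
# - the true value is drawn from a Normal(0, tau^2) prior
# - each forecaster sees the truth plus independent Normal(0, sigma^2) noise
# - each forecaster reports the posterior mean, which shrinks the signal toward 0
# The signal noise is taken larger than the prior spread (sigma > tau), so the
# shrinkage is strong; that choice is what drives the effect in this sketch.

random.seed(0)
tau, sigma = 1.0, 2.0
shrink = tau**2 / (tau**2 + sigma**2)     # posterior-mean shrinkage factor
n_forecasters, n_cases = 20, 2000

def opinion_rank(estimates, truth):
    return sum(e < truth for e in estimates) / len(estimates)

ranks = []
for _ in range(n_cases):
    truth = random.gauss(0, tau)
    estimates = [shrink * (truth + random.gauss(0, sigma)) for _ in range(n_forecasters)]
    ranks.append(opinion_rank(estimates, truth))

extreme_share = sum(r <= 0.1 or r >= 0.9 for r in ranks) / n_cases
print(f"share of cases with opinion-rank in the outer deciles: {extreme_share:.2f}")
# A uniform opinion-rank distribution would put about 0.20 of the cases in the
# outer deciles; with these assumptions the share comes out well above that.
```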


Posterity Review Comes Cheap

Compared to most people, academics care more about what posterity will think of them.  And academics tend to be overconfident, believing posterity will remember them better than most.  Also, the magic of compound interest makes the price today of getting posterity to review current academic work remarkably low.  For example, at a real interest rate of five percent, one hour of work today will buy 130 hours of equally productive work in a century, and over 17,000 hours of such work in two centuries.
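
The arithmetic behind those figures is straightforward compound growth; here is a minimal sketch checking the numbers quoted above (a 5% real rate over 100 and 200 years, and one minute compounded for two centuries).

```python
# Compound growth of work invested now at a constant real interest rate.
RATE = 0.05  # 5% real interest rate, as in the text

def future_amount(amount_now: float, years: int, rate: float = RATE) -> float:
    return amount_now * (1 + rate) ** years

print(f"1 hour in 100 years: {future_amount(1, 100):,.0f} hours")    # about 130
print(f"1 hour in 200 years: {future_amount(1, 200):,.0f} hours")    # over 17,000

# One minute invested now, two centuries out, expressed in hours:
print(f"1 minute in 200 years: {future_amount(1/60, 200):,.0f} hours")  # roughly 290
```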

Combining these observations, a relatively cheap way to improve the incentives of academics today could be to pay to have posterity publish a careful historical review of today’s research.  Imagine that for each academic paper written today, we paid one minute of time now to buy 300 hours of evaluation at a future date (e.g., two centuries later at five percent interest).  Looking at all available records, including web, email, and voice records, and looking together at groups of related papers, this future evaluation would estimate the relative accuracy and value added of each contribution relative to resources used, carefully tracing out where these insights came from and where they led.

Posterity review would seem to have a much better chance than peer review of figuring out who stole what ideas from whom, who contributed genuinely useful insight instead of showing impressive ability with words or math, and so on.  And once we knew that such review would take place, we could create many interesting forecasting mechanisms and reward schemes today, tied to those future evaluations.

Of course there are crucial problems to work out regarding how to organize these future historians and give them the proper incentives.  But these seem like problems well worth thinking about. 
