Monthly Archives: November 2006

Asymmetric Paternalism

An article titled Regulation for Conservatives: Behavioral Economics and the Case for Asymmetric Paternalism provides a fairly good perspective on how paternalistic laws should be evaluated, but is a bit weak on the public choice considerations that should make us skeptical of laws. The authors give good arguments showing that cognitive biases imply that a government run by angels ought to be sometimes paternalistic, because we can imagine a wide variety of laws which provide significant benefits to people who are acting irrationally while having much less effect on people who are acting rationally. The examples in the paper show that it's not hard to imagine laws that appear to do this, by mandating defaults that rational people can override, by requiring better disclosure, and by requiring delays for certain purchases. But the examples also show that it's hard to tell whether a significant fraction of such laws are beneficial in practice.

Continue reading "Asymmetric Paternalism" »


To the barricades! Against … what exactly?


Within us are powerful tendencies to distort our beliefs, tendencies which hinder us from solving many other problems.  My hope is that we can build a community as serious about this problem as these folks are about theirs (minus the blood):

[Image: Delacroix painting] But to be effective we need not only passion but also precision.  Therefore, let us try to define our goal as clearly as possible, while avoiding tangential haggling over detail or verbal gymnastics.  Nick just posted on this issue, and several comments have previously raised it.  So, here goes.

Continue reading "To the barricades! Against … what exactly?" »


Foxes vs Hedgehogs: Predictive Success

I want to follow up on my earlier post on Philip E. Tetlock’s book, Expert Political Judgment, and in particular his discovery of differential predictive accuracy between individuals with the cognitive styles corresponding to "Foxes" vs "Hedgehogs".  As several commenters guessed, his study found that Foxes (who have a flexible, adaptive, tentative cognitive style) significantly outperformed Hedgehogs (who are said to "know one thing and know it well" and to focus on a single, coherent theoretical framework in their analyses and predictions).

I first want to emphasize that this is a wide-ranging book with a variety of points of view and directions of analysis.  I hope my focus on one aspect in these blog postings doesn’t give readers too narrow a view of Tetlock’s work.  Here is an excellent review of the book from The New Yorker that goes into more detail on the range of material covered.

However, from the point of view of predictive accuracy and bias, Tetlock does organize much of his presentation around the Fox/Hedgehog distinction.  This is not because of some prejudice that this aspect of cognitive style is of supreme importance, but rather that it came out of the data.  More on this below the fold.
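Tetlock grades forecasters by comparing their probability estimates to what actually happened. As a rough sketch of the kind of scoring involved (a plain Brier score; Tetlock's actual calibration and discrimination measures are more elaborate, and the forecasts below are invented for illustration):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better: 0.0 is perfect, 0.25 is what always saying 50% earns."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical forecasts: a hedging "fox" vs. an always-certain "hedgehog"
# on the same five events (1 = event happened, 0 = it did not).
outcomes = [1, 0, 1, 1, 0]
fox      = [0.7, 0.3, 0.6, 0.8, 0.4]   # tentative, adjusts case by case
hedgehog = [1.0, 0.0, 0.0, 1.0, 1.0]   # one big theory, total confidence

print(brier_score(fox, outcomes))       # 0.108
print(brier_score(hedgehog, outcomes))  # 0.4
```

Note that the hedgehog is right on three of five events yet scores far worse, because extreme probabilities are punished heavily when wrong.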

Continue reading "Foxes vs Hedgehogs: Predictive Success" »


What exactly is bias?

Bias is something bad from an epistemological point of view, but what exactly is it and how is it distinct from other kinds of epistemological error?

Let’s start with one remark Robin made in passing: "Bias is just avoidable systematic error." One big question here is what makes a systematic error avoidable. For example, suppose somebody has never heard about overconfidence bias. When such a person (erroneously) rates herself above average on most dimensions without strong specific evidence, is she making an avoidable error?

It seems to be avoidable only in the sense that there is information which she doesn't have but which she could get, and which would make it possible for her to correct her beliefs about her own abilities. But in this sense, we would be biased whenever we lack some piece of information that bears on some broad domain. Socrates would be biased merely by being ignorant of evolutionary theory, neuroscience, physics, etc. This seems too broad.

Conversely, if she is systematically overestimating her own abilities, it seems she is biased even if these errors are unavoidable. Suppose she does learn about overconfidence bias, but for some deep psychological reasons simply cannot shake off the illusion. The error is systematic and unavoidable, yet I’d say it is a bias.

Here is an alternative explication that I’d like to put forward tentatively for discussion: A bias is a non-rational factor that systematically pushes one’s beliefs in some domain in one direction.

Continue reading "What exactly is bias?" »


Moral Overconfidence

A Washington Post article from last Saturday says:

In the 2006 survey of more than 36,000 high school students, 60 percent said they cheated on a test, 82 percent said they lied to their parents about something significant and 28 percent said they stole something from a store. … 92 percent said they were "satisfied with my own ethics and character." About 74 percent said that "when it comes to doing right, I am better than most people I know." … the percentage of students who lie, cheat or steal could be higher than the survey found.   When asked, 27 percent of the students admitted that they lied on at least one survey question.

It is hard to see how 3/4 of people could be better than most people they know about anything; moral overconfidence is a far more plausible explanation.
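"Better than most people I know" is judged relative to each person's own acquaintances, so 74% is not logically impossible, but a quick simulation shows how far it is from what unbiased self-ratings would produce. A sketch under made-up assumptions (random "morality" scores, 20 random acquaintances each):

```python
import random

random.seed(0)
n = 10_000
scores = [random.random() for _ in range(n)]  # stand-in "morality" ratings

above = 0
for i in range(n):
    # each person compares herself to 20 randomly chosen others
    # (ignoring the rare case where the sample includes herself)
    known = random.sample(range(n), 20)
    median_known = sorted(scores[j] for j in known)[10]
    if scores[i] > median_known:
        above += 1

print(above / n)  # close to 0.5, nowhere near the reported 0.74
```

The fraction hovers around one half for any continuous score distribution; getting to 74% requires either very unrepresentative acquaintance networks or, more plausibly, overconfidence.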

On a similar topic, a clever set of experiments, described in "Exploiting Moral Wiggle Room" (and discussed at Marginal Revolution) shows that people act to appear fair, but are less fair when given even feeble excuses for their unfair behavior.

Are you willing to admit that you are about as moral as most people you know?  I am. 


Why Are Academics Liberal?

In the US at least, academics are more liberal and Democratic than ordinary people.   While among ordinary people the ratio of Democrats to Republicans is about 1:1, academia as a whole has a ratio of 5:1, and the humanities and social sciences have a ratio of 8:1.   These ratios have roughly doubled over the last forty years.  See this 2005 Critical Review paper by my colleague Dan Klein, but also this 2006 Public Opinion Quarterly reply, and this further response.   

Does this difference between academia and the public produce or reflect a bias in the beliefs expressed in academic articles?  

Continue reading "Why Are Academics Liberal?" »


A 1990 Corporate Prediction Market

Betting markets have been around for a long time, and as far as I know, until recently they were all created to help traders achieve goals such as hedging, gambling, proving themselves, and so on.  What appears to be new is that non-traders are now creating and subsidizing some markets, in order to gain information by believing the market prices.  Such price estimates are remarkably robust against biases, making this a promising approach to reducing bias.
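One way a sponsor can subsidize a market in order to gain information is to fund an automated market maker whose prices read directly as probability estimates. As an illustrative sketch only (this uses a logarithmic market scoring rule, a design published years later, and not necessarily how any particular early market worked):

```python
import math

B = 100.0  # liquidity parameter: larger B means more subsidy, stabler prices

def cost(q):
    """Market maker's cost function C(q) = B * log(sum_i exp(q_i / B))."""
    return B * math.log(sum(math.exp(qi / B) for qi in q))

def price(q, i):
    """Current price of outcome i; prices sum to 1, so they read as probabilities."""
    z = sum(math.exp(qj / B) for qj in q)
    return math.exp(q[i] / B) / z

def buy(q, i, shares):
    """What a trader pays to buy `shares` of outcome i; returns (payment, new state)."""
    q2 = list(q)
    q2[i] += shares
    return cost(q2) - cost(q), q2

q = [0.0, 0.0]              # two-outcome market, no trades yet
print(price(q, 0))          # 0.5
paid, q = buy(q, 0, 50)     # a trader who thinks outcome 0 is likely buys in
print(price(q, 0))          # now above 0.5; the price is the market's estimate
```

The sponsor's subsidy is bounded: with this rule the market maker can lose at most B·ln(n) for n outcomes, which is the price paid for the information the final prices carry.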

The earliest such market that I know of was one I helped create at Xanadu, Inc. in 1990. 

Continue reading "A 1990 Corporate Prediction Market" »


The Martial Art of Rationality

I often use the metaphor that rationality is the martial art of mind.  You don’t need huge, bulging muscles to learn martial arts – there’s a tendency toward more athletic people being more likely to learn martial arts, but that may be a matter of enjoyment as much as anything else.  Some human beings are faster or stronger than others; but a martial art does not train the variance between humans.  A martial art just trains your muscles – if you have the human-universal complex machinery of a hand, with tendons and muscles in the appropriate places, then you can learn to make a fist.  How can you train a variance?  What does it mean to train +2 standard deviations of muscle?  It’s equally unclear what it would mean to train an IQ of 132.  But if you have a brain, with cortical and subcortical areas in the appropriate places, you might be able to learn to use it properly.  If you’re a fast learner, you might learn faster – but the art of rationality isn’t about that; it’s about training brain machinery we all have in common.

Continue reading "The Martial Art of Rationality" »


Beware Heritable Beliefs

Some of the differences in our beliefs seem to be heritable.  "The Heritability of Attitudes: A Study of Twins," published in the Journal of Personality and Social Psychology in 2001, asked 339 twin pairs for their attitudes on 30 topics.  These attitudes clustered into seven common factors, four of which group beliefs into broad categories:

  • Life: voluntary euthanasia, abortion on demand, birth control, and organized religion
  • Intellect: books, chess, education, and capitalism
  • Equality: open-door immigration, distinct gender roles, racial discrimination, and getting along with others
  • Punishment: death penalty for murder, and castration for sex crimes

Genetic differences explained most of the differences in attitudes to life and equality (66% and 55% of the variance respectively), but none (0%) of the attitudes to intellect and punishment.
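Figures like "66% of the variance" come from comparing how similar identical twins are to how similar fraternal twins are. As a back-of-envelope illustration (Falconer's classic formula, not necessarily the exact model the paper fits, and the correlations below are invented):

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability from twin correlations:
    h^2 = 2 * (r_MZ - r_DZ), since identical twins share ~100% of
    their genes while fraternal twins share ~50%."""
    return 2.0 * (r_mz - r_dz)

# Invented correlations for illustration only:
print(falconer_h2(0.66, 0.33))  # ~0.66: genes would explain about 2/3 of variance
```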

Continue reading "Beware Heritable Beliefs" »


The wisdom of bromides

This is apropos of Robin's recent remark ("'No one on their deathbed ever wished they had spent more time in the office,' the saying goes") and his wondering whether we are really biased toward spending too much time in the office.

This makes me wonder about the function of bromides. Consider:

"Look before you leap!"

"He who hesitates is lost!"

Continue reading "The wisdom of bromides" »
