
Bias in Political Conversation

University of Pennsylvania professor Diana Mutz’s book "Hearing the Other Side: Deliberative versus Participatory Democracy" looked at survey evidence on how often people have conversations with others of differing political viewpoints.  In her words:

One logical conjecture would be to expect this form of political behavior to be much like any other.  In other words, it would be disproportionately the province of well-educated, high-income populations.  Indeed, the frequency of general political discussion tracks closely with these characteristics of high socioeconomic status.  But the correlates of cross-cutting conversation are strikingly different.  As shown in Figure 2.3, there are clear patterns of difference with respect to race, income, and education, but they are not in the usual directions.  Nonwhites are significantly more likely to engage in cross-cutting political conversation than whites.  And as income increases, the frequency of disagreeable conversations declines.   Exposure to disagreement is highest among those who have completed less than a high school degree and lowest among those who have attended graduate school.

As sociologist William Weston notes in discussing Mutz’s findings:

I can testify to how easy it is for conversation among academics, the most educated group of people, to turn into a one-position echo chamber. Liberalism is taken to be an IQ test, and the rare conservative is encouraged to be quiet or go elsewhere. For political disagreement I go to the coffee house, which in our town draws a broader range of people than the faculty club contains.

Of course, one explanation would be that what looks like herd behavior and social conformity is really just what happens when a bunch of superior intellects independently settle on the objectively correct viewpoint.  But that’s rather a self-serving explanation, isn’t it?


Beware of Brain Images

Via the British Psychological Society’s excellent blog comes news of this study: McCabe, D., & Castel, A. (2008). "Seeing is believing: The effect of brain images on judgments of scientific reasoning." Cognition, 107(1), 343-352.

From the abstract:

Brain images are believed to have a particularly persuasive influence on the public perception of research on cognition. Three experiments are reported showing that presenting brain images with articles summarizing cognitive neuroscience research resulted in higher ratings of scientific reasoning for arguments made in those articles, as compared to articles accompanied by bar graphs, a topographical map of brain activation, or no image. These data lend support to the notion that part of the fascination, and the credibility, of brain imaging research lies in the persuasive power of the actual brain images themselves. We argue that brain images are influential because they provide a physical basis for abstract cognitive processes, appealing to people’s affinity for reductionistic explanations of cognitive phenomena.

As the BPS blog elaborates:

David McCabe and Alan Castel presented university students with 300-word news stories about fictional cognitive research findings that were based on flawed scientific reasoning. For example, one story claimed that watching TV was linked to maths ability, based on the fact that both TV viewing and maths activate the temporal lobe. Crucially, students rated these stories to be more scientifically sound when they were accompanied by a brain image, compared with when the equivalent data were presented in a bar chart, or when there was no graphical illustration at all.

This fits in with the theme of how people tend to overvalue something that is dressed up in the attire of science. 


A Few Quick Links

1.  Via The Situationist, here is a page exploring seven biases of human memory, including the ways in which eyewitness testimony can be biased, how false memories can be implanted in people, the way that consistency bias causes us to misremember our own past beliefs or actions, and more. 

2.  Tyler Cowen has an article in The New Republic that is rather cynical about the value of most published research:

The sad truth is that "non-fiction" has been unreliable from the beginning, no matter how finely grained a section of human knowledge we wish to consider. For instance, in my own field, critics have tried to replicate the findings in academic journal articles by economists using the initial data sets. Usually, it is impossible to replicate the results of the article even half of the time. Note that the journals publishing these articles often use two or three referees–experts in the area–and typically they might accept only 10 percent of submitted papers. By the way, economics is often considered the most rigorous and the most demanding of the social sciences.

3.  Seth Roberts points out that the value of data is not binary, i.e., either convincing or worthless:

A vast number of scientists have managed to convince themselves that skepticism means, or at least includes, the opposite of valuing data. They tell themselves that they are being “skeptical” — properly, of course — when they ignore data. They ignore it in all sorts of familiar ways. They claim “correlation does not equal causation” — and act as if the correlation is meaningless. They claim that “the plural of anecdote is not data” — apparently believing that observations not collected as part of a study are worthless. Those are the low-rent expressions of this attitude. The high-rent version is when a high-level commission delegated to decide some question ignores data that does not come from a placebo-controlled double-blind study, or something similar.

So considering links 2 and 3, should we really downgrade the evidentiary value of published studies and upgrade the evidentiary value of anecdotes?  (That wouldn’t mean treating them both as equal, of course.) 
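
To make the "not binary" idea concrete, here is a minimal Bayesian sketch (my own illustration, not anything from Roberts’s post; the likelihood ratios are invented numbers): both a controlled study and a lone anecdote move a posterior, just by very different amounts.

```python
# Minimal sketch: evidence shifts belief by degrees rather than being
# either "convincing" or "worthless".  Likelihood ratios are invented.

def update(prior_prob, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.10  # initial credence in some hypothesis

# A well-run study is strong evidence (large LR); a lone anecdote is
# weak evidence (LR barely above 1) -- weak, but not zero.
print(f"after study:    {update(prior, likelihood_ratio=10.0):.2f}")  # ~0.53
print(f"after anecdote: {update(prior, likelihood_ratio=1.5):.2f}")   # ~0.14
```

On that reading, the answer is yes in both directions: studies get discounted by their imperfect replication record, and anecdotes get some weight greater than zero, without the two ever being treated as equal.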


Relative vs. Absolute Rationality

I’m reading Tim Harford’s "The Logic of Life" – it’s the first book I bought on my Kindle.  He uses a definition of rationality I hadn’t seen before: simply that people respond to incentives.  I think this model of people as relatively rational has much more support than the idea that they are absolutely rational – that they choose optimal strategies to reach their goals and behave in an unbiased fashion.

And I think this is a good way of squaring the ideas that there is lots of evidence for human rationality and lots of evidence for human irrationality.  Before, I’d been thinking of the resolution as just that sometimes people are rational and sometimes they are irrational, depending on how complex the decision is and which heuristic modules are invoked.  But it feels much more correct to say that people rarely get the answer exactly right, but that they generally respond in the right direction when things change.

This definition rescues the implications of rationality-assuming economic analysis from the "But people aren’t rational!" attack.  Sure, people aren’t (absolutely) rational, but since they are (relatively) rational, policy makers[1] can influence behavior by assuming that people will respond in the correct direction to changes in incentives.  And they had better be wary of creating incentives without considering the consequences for behavior.

[1] Or anyone else engaged in mechanism design for humans.
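
To make the relative/absolute distinction concrete, here is a toy sketch of my own (the utility function and all numbers are invented for illustration): an absolutely rational agent solves for the optimal quantity at each price, while a relatively rational agent never optimizes but still moves in the correct direction when the price changes, which is all that incentive-based analysis needs.

```python
# Toy sketch of "absolute" vs. "relative" rationality.
# The utility function sqrt(q) - p*q and all numbers are invented.

def optimal_quantity(price):
    """Absolutely rational: maximize sqrt(q) - price*q exactly.
    The first-order condition 1/(2*sqrt(q)) = price gives q = 1/(4*price**2)."""
    return 1.0 / (4.0 * price ** 2)

def directional_response(quantity, old_price, new_price, sensitivity=0.5):
    """Relatively rational: no optimization, just cut back when the
    price rises and buy more when it falls."""
    return quantity * (old_price / new_price) ** sensitivity

old_p, new_p = 1.0, 2.0
print(f"absolute: {optimal_quantity(old_p):.3f} -> {optimal_quantity(new_p):.3f}")
q = 0.4  # the relative agent's starting point; note it is not the optimum
print(f"relative: {q:.3f} -> {directional_response(q, old_p, new_p):.3f}")
# absolute: 0.250 -> 0.062; relative: 0.400 -> 0.283.  The relative
# agent's level is wrong, but the direction of the response is right.
```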


Two Meanings of ‘Overcoming Bias’ – For the First: Focus Is Fundamental. For the Second: ?

‘Overcoming Bias’ has two meanings.

First: Right Now, as in ‘You have a mistaken belief, caused by a cognitive bias you don’t know you have, and I will cause you to correct that belief by pointing out the cognitive bias which caused it.’

Almost always, these claims are disguised injunctions to change your Focus.

Usually to Expand your Focus:

  • Availability Bias – Expand your Focus to include information besides the striking and vivid information that is carrying you away;
  • Confirmation Bias – Expand your Focus to include information that lessens the force of the information (you cherish) that confirms your existing belief;
  • Disconfirmation Bias – Expand your Focus to include information that heightens the force of information (you despise) that is inconsistent with your existing belief;
  • Fundamental Attribution Error – Expand your Focus to see Situations as possible causes of others’ behavior, besides the Personality characteristics you are using now;
  • Status Quo Bias – Expand your Focus to see alternatives besides the status quo;
  • Déformation Professionnelle – Expand your Focus beyond the conventions of your own profession;
  • Illusion of Control – Expand your Focus to see that you may not be able to influence the outcomes of interest;

And maybe 15 others (of 67), but who’s counting?

Rarely an injunction to Narrow your Focus:
Information Bias – Narrow your Focus to seek only information that can affect action.

Second meaning of ‘Overcoming Bias’: In the Future, as in ‘How can I avoid being influenced by my own (not yet known to me) biases in the future?’

The only effective way I have found is to invite criticism of my ideas by others – presenting my ideas in seminars, sending them to journals for blind review, bringing them up with colleagues at lunch, on blogs, etc. – because I am blind to the biases that I have, by definition: if I were not blind to them I wouldn’t ‘have’ them. Of course, this only works if I am free of the Bias Blind Spot Bias. (Some biases I can prevent by avoiding the occasion of bias, as by not gambling to forestall the probability biases.)


The Judo Principle

The principle of judo is to use your opponent’s strength against him, by guiding it rather than resisting it.  A recent Australian campaign against reckless driving, aimed specifically at young men, has adopted the same approach with respect to cognitive biases (http://www.timesonline.co.uk/tol/news/world/article1985802.ece).

The traditional campaign, emphasizing the risks involved with speeding by showing graphic road crashes, was ineffective.  This is as evolutionary psychology would predict.  Young males of many species engage in risky behaviour in order to signal their extraordinary prowess to women.  A man who succeeds mates; one who fails might as well be dead anyway, in evolutionary terms.  The traditional campaign assumes that young male speeders don’t realize their behaviour is risky, when in fact they speed because it’s risky.  I wouldn’t be surprised if the campaign actually increased the incidence of dangerous speeding by young men.

The new campaign encourages women to wiggle their pinky at speeders, a gesture which apparently signals a small penis.  This hits the mark, in evolutionary terms.  But will it work?  If women in fact find men who are successful risk takers more attractive, I doubt that an advertising campaign will make men believe otherwise.  If the campaign succeeds it will be a fascinating example of the triumph of culture over nature.  It’s worth a try.

Anyone have any other applications of the judo principle?


Privacy rights and cognitive bias

Protection of privacy is a hot topic.  Hardly a day goes by without concerns over protection of privacy hitting the headlines with real impact (today’s example is “Google yields to privacy campaign” in setting their cookies to auto-delete.)  It seems clear that there is a general presumption in favour of privacy, in the sense that if something is seen to invade privacy this is a prima facie reason for stopping it, and the person wishing to go ahead bears the burden of justification.  But is this privacy presumption a rational response to the threat of invasive technology, or is it the result of a cognitive bias?

While I don’t work directly in the area, as a (somewhat bemused) observer I’ve always felt that there is a mismatch between the strength of feelings regarding privacy and the strength of the substantive arguments.  I think it is fair to say that in much of the debate as reported in the media no argument at all is made in favour of privacy.  It is just accepted as presumptively good.  This in itself suggests to me that there is a cognitive bias at play, even if there are ultimately good arguments for the privacy presumption. 

Continue reading "Privacy rights and cognitive bias" »


Goals and plans in decision making

For years, Dave Krantz has been telling me about his goal-based model of decision analysis.  It’s always made much more sense to me than the usual framework of decision trees and utility theory (which, I agree with Dave, is not salvaged by bandaids such as nonlinear utilities and prospect theory).  But, much as I love Dave’s theory, or proto-theory, I always get confused when I try to explain it to others (or to myself):  "it’s, uh, something about defining decisions based on goals, rather than starting with the decision options, uh, …."  So I was thrilled to find that Dave and Howard Kunreuther just published an article describing the theory.  Here’s the abstract:

We propose a constructed-choice model for general decision making. The model departs from utility theory and prospect theory in its treatment of multiple goals and it suggests several different ways in which context can affect choice.

It is particularly instructive to apply this model to protective decisions, which are often puzzling. Among other anomalies, people insure against non-catastrophic events, underinsure against catastrophic risks, and allow extraneous factors to influence insurance purchases and other protective decisions. Neither expected-utility theory nor prospect theory can explain these anomalies satisfactorily. To apply this model to the above anomalies, we consider many different insurance-related goals, organized in a taxonomy, and we consider the effects of context on goals, resources, plans and decision rules.

The paper concludes by suggesting some prescriptions for improving individual decision making with respect to protective measures.
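
As a concrete illustration of the anomaly the abstract mentions, here is a minimal expected-utility sketch of my own (log utility and a 50% premium loading are invented assumptions): a textbook expected-utility maximizer declines insurance on a small loss but buys it against a catastrophe, roughly the reverse of what people commonly do.

```python
import math

# Minimal sketch of the insurance anomaly.  Log utility and all
# numbers below are invented for illustration.

def eu_uninsured(wealth, loss, prob):
    """Expected log-utility with no insurance."""
    return (1 - prob) * math.log(wealth) + prob * math.log(wealth - loss)

def eu_insured(wealth, premium):
    """Log-utility with full insurance at a loaded premium."""
    return math.log(wealth - premium)

W = 100_000  # initial wealth

# Small, non-catastrophic loss: fair premium 50, sold at 75 (50% load).
print(eu_uninsured(W, loss=500, prob=0.10) > eu_insured(W, premium=75))
# True: the expected-utility maximizer declines the small-loss policy...

# Catastrophic loss: fair premium 90, sold at 135 (same 50% load).
print(eu_insured(W, premium=135) > eu_uninsured(W, loss=90_000, prob=0.001))
# True: ...and buys the catastrophe policy.  People often do the reverse,
# which is the anomaly the abstract refers to.
```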

Continue reading "Goals and plans in decision making" »


Reply to “Libertarian Optimism Bias …”

In this entry on George Bernard Shaw and G. K. Chesterton, I noted that 50 or 100 or 200 years ago, leftists associated progress with material happiness while rightists were more skeptical and tended to say that progress wasn’t always such a good thing.  Nowadays, the debates usually go in the other direction, with people on the left being less positive about material progress and people on the right saying that things are great now and are getting better.

Will Wilkinson replied here, analyzing the current differences between left and right as different views on government intervention:

Nothing beats a "crisis" to rally support for a big government effort. Right statists constantly drum up moral panics about sex and drugs. Also, Mexicans are "invading" and terrorists will surely blow us all up while singing the Star Spangled Banner at baseball games if we don’t allow the executive Jack Bauer to torture military detainees whenever he wants. Similarly, left statists warn that the shores of Manhattan will be inundated by rising oceans and very cute baby polar bears will die in droves. Also, inequality is soaring, threatening the foundations of democracy. And the middle class lives in terrifying "economic insecurity." And so on.

This is an interesting point.  Once again, it might be helpful to compare with attitudes 50, 100, etc. years ago.  Shaw, like many other socialists, supported government intervention in the economy and also thought that material progress would give us higher living standards and better lives.  (To put it in Wilkinson’s framework, Shaw saw problems with society that he thought could be alleviated by government intervention, but he framed this in an optimistic view of material progress, rather than in a "there’s more to life than just money" attitude.)


One reason why plans are good

One of the small puzzles of decision analysis is that:

(a) Plans have lots of problems–things commonly don’t go according to plan, plans notoriously exclude key possibilities that the planner didn’t think of, plans can encourage tunnel vision, etc.  But . . .

(b) Plans are helpful.  In fact, it’s hard to do much of anything useful without a plan.  (I’m sure people will come up with counterexamples here, but certainly in my own work and life, not much happens if I don’t plan it.  Serendipitous encounters are fine but don’t add up to much.)

Beyond this, one could add that economic activity seems to work well with minimal planning (just enough structure and rules to set up "the marketplace") but individual actors plan, and need to plan, all the time.

This puzzle is particularly interesting to me as I do work in applied decision analysis.

So what’s the solution to the puzzle?

Continue reading "One reason why plans are good" »
