Author Archives: Anders Sandberg

Media Risk Bias Feedback

Recently a friend mentioned that he was concerned about health effects from wifi. I pointed out that this was likely an overblown concern, fed by the media echoes of a scaremongering BBC Panorama program, and pointed him to the coverage on Ben Goldacre’s blog Bad Science for a thorough takedown of the whole issue.

To my surprise he came back more worried than ever. He had watched the program on the Bad Science page, but had not looked much at the damning criticism surrounding it. After all, a warning is much more salient than a critique. My friend is highly intelligent and careful about his biases, yet he fell for this one.

There is a feedback loop in cases like this. The public becomes concerned about a possible health threat (electromagnetic emissions, aspartame, GMOs) and demands that the potential threat be evaluated. Funding appears and researchers evaluate the threat. Their findings are reported back through the media to the public, who update their risk estimates.

In an ideal world the end result is that everybody gets better estimates. But this process very easily introduces bias: the initial concern determines where the money goes, so issues the public is concerned about will get more funding regardless of where the real risks are. The media reporting introduces further bias, since the media favour newsworthy findings, and reports of risk tend to generate far more interest than reports of no risk (or reviews of the state of the knowledge). Hence studies warning of a risk will be overreported compared to studies downplaying it, and this will lead to a biased impression of the total risk. Finally, the public has an availability bias that makes it take more note of reported risks than of reported non-risks. This leads to further concern and demands for investigation.

Note that I leave out publication bias and funding bias here. There may also be a feedback from the public to the media, making the media report things they estimate the public wants to hear about. These factors of course muddy things further in real life, but they mostly seem to support the feedback rather than counter it.
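To make the loop concrete, here is a minimal toy simulation of the feedback, with entirely made-up parameters and a true risk of zero. It is only an illustrative sketch, not a model of any real case: it assumes that risk findings are far more likely to be reported than null findings, and that public concern, which drives how many studies get funded, is updated only from what gets reported.

# Toy sketch of the media risk bias feedback loop. All parameters are invented
# for illustration: the true effect is zero, "risk found" studies are reported
# far more often than null studies, and public concern (which drives funding)
# is updated only from the reported studies.
import random

random.seed(1)

concern = 0.5          # public concern in [0, 1]
REPORT_IF_RISK = 0.9   # probability a study warning of a risk gets reported
REPORT_IF_NULL = 0.2   # probability a null result gets reported

for year in range(10):
    n_studies = int(5 + 20 * concern)   # more concern -> more funding -> more studies
    findings = [random.gauss(0.0, 1.0) for _ in range(n_studies)]   # true effect is zero
    warns = [f > 1.0 for f in findings]                             # "risk found" if > 1 SD above zero
    reported = [w for w in warns
                if random.random() < (REPORT_IF_RISK if w else REPORT_IF_NULL)]
    if reported:
        impression = sum(reported) / len(reported)   # share of *reported* studies warning of risk
        concern = 0.7 * concern + 0.3 * impression   # availability-weighted update of concern
    print(f"year {year}: studies={n_studies:2d}, reported={len(reported):2d}, concern={concern:.2f}")

Even though the underlying effect is zero, the share of reported studies that warn of a risk sits far above the true share of such findings, so public concern (and with it funding) stabilises around the inflated reported share rather than tracking the much lower true rate of positive findings.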

Continue reading "Media Risk Bias Feedback" »


How Biases Save Us From Giving in to Terrorism

Terrorists are hampered by biases as much as the rest of us. In a Wired commentary, "The Evolutionary Brain Glitch That Makes Terrorism Fail", Bruce Schneier discusses the interesting findings of Max Abrahms in his paper Why Terrorism Does Not Work (International Security, Vol. 31, No. 2 (Fall 2006), pp. 42–78).

Basically, terrorists run into trouble because people use correspondent inference theory to infer the intentions of others: the results of an action are assumed to be concordant with the actor's intentions. If a person sweeps the floor we assume he wants it clean (but he could just be working off excess energy). If somebody hits somebody else, we assume the intention was to harm (but it could just be a game). Similarly, people infer that the horrific deaths of innocents are the primary motivation of a terrorist – which likely leads to a misunderstanding of the terrorist's real goals.

This is bad news for terrorism as an effective means of coercion towards political or social ends. Although the terrorist can state his demands and goals, people will tend to assume that he is just a rationalising sadist. Possibly a dangerous sadist one occasionally has to acquiesce to, but his stated goals are not seen as essential to him. His "real" goal is assumed to be the destruction of society, and this makes accepting his demands less palatable. Abrahms finds empirical support for this: terrorists are much more likely to succeed with their demands if they focus their attacks on military rather than civilian targets, and if they have minimalist goals (evicting a foreign power, winning control of a piece of territory). Attacking civilians or wanting to change the world makes people assume the intention is something else.

This analysis assumes bias among the non-terrorists, making them unwilling to play along, but clearly there are plenty of biases among the terrorists too. The same correspondence bias makes them impute evil intentions to governments that behave clumsily or violently. The emotional salience of terror probably introduces a lot of availability bias, impact bias makes terrorists overestimate the emotional effect of their actions, groupthink is likely strong within terrorist grooming communities, and so on.

It seems that terrorism could probably be analysed quite fruitfully in terms of cognitive biases. Whether that will lead to ways of reducing terrorism is another matter. Maybe unbiased terrorists would simply see that the Bayesian thing to do is to go home, since terror does not work efficiently – or they would start making non-hyperbolic long-term plans for surgical strikes that cannot be misunderstood. Conversely, maybe terrorists could be incited to bias themselves into inefficiency, but highly biased people can occasionally be dangerous. Maybe the real aim should be an unbiased anti-terror strategy – but as long as politicians and the public are biased, they will likely see the unbiased strategy as wrong.


Biases are Fattening

In addition to all their other effects, biases can also contribute to obesity. Architectures of Control cites the story of how David Wallerstein discovered that unit bias could help sell more fast food. He observed that people were unwilling to buy two packages, but quite willing to buy a double-sized package. Hence the supersizing of everything.

Geier, Ronzin & Doros demonstrated that people tends to regard a unit of some entity is the appropriate and optimal amount by measuring how much people consumed free Tootsie Rolls or pretzels when provided in different sizes, or M&M’s provided with differently sized spoons. This likely explains why people tend to eat more when served larger portions. The authors suggest that the unit bias in food might be social: people don’t want to seem to be gluttons. Another possibility they suggest is that there is a culture-norm interaction: we package things in appropriate sizes, we learn the appropriate amount by being exposed to standard packages.

A third possibility is of course an aversion to waste, whether instilled by one's mother or by evolution. I would add a fourth, neurocognitive possibility: we run on hierarchical motor programs and tend to switch behavior when one of them has concluded. Consuming a unit would presumably be a single iteration of one such program. We can certainly learn more elaborate programs like "take unit; consume until full; leave the rest", but that requires ongoing monitoring that may be cumbersome or easily disrupted. I would expect unit bias to generalise outside food too. The researchers point out that double features are rare but long movies are not, and that people take one ride on an amusement park ride regardless of whether it lasts one minute or five. I would also expect unit bias to round our thinking towards the nearest integer number of convenient units.

Some months ago, when I moved to the UK, I made the deliberate decision to only buy Coca-Cola in six-packs rather than 1.5 l bottles. The result is that I consume much less, since I now take a single can instead of more or less continually refilling my glass. So clearly unit bias can be used to downregulate food intake too; it is just that the food sellers have no incentive to do it. Maybe one solution to obesity would be easier ways of dividing bought food into convenient smaller units?


Tell Me Your Politics and I Can Tell You What You Think About Nanotechnology

Ronald Bailey has a column in Reason where he describes the results of the paper Affect, Values, and Nanotechnology Risk Perceptions by Dan M. Kahan, Paul Slovic, Donald Braman, John Gastil, and Geoffrey L. Cohen. The conclusion is that views on the risks of nanotechnology are readily elicited even when people know that they do not know much about the subject, and that these views are strengthened along ideological lines by more facts. Facts do not matter as much as values: people appear to make a quick gut-feeling decision (probably by reacting to the word "technology"), which is then shaped by their ideological outlook. Individualists tend to see the risks as smaller than communitarians do. There are similar studies showing the same thing about biotechnology, and in my experience the same thing happens when the public gets exposed to discussions about human enhancement.

The authors claim that this result fits neither "rational weigher" models, where people try to maximize their utility, nor "irrational weigher" models, where cognitive biases and bounded rationality dominate. Rational individualists and communitarians ought not to differ in their risk evaluations, and the authors consider it unlikely that different cultural backgrounds would cause differing biases. They suggest a "cultural weigher" model in which individuals do not simply weigh risks, but rather evaluate what one position or another on those risks will signify about how society should be organized. When people learn about nanotechnology or something similar, they do not update instrumental risk probabilities but develop the position with respect to the technology that best expresses their cultural identities.

This does not bode well for public deliberations on new technologies (or political decisions about them), since it suggests that the only thing deliberation will achieve is a fuller understanding of how to express already decided cultural/ideological identities with regard to the technology. It does suggest that storytelling around technologies, in particular stories about how they will fit various social projects, will have much more impact than commonly believed. Not very good for rational discussion or decision-making, unless we can find ways of removing the cultural/ideological assumptions of participants, which is probably pretty hard work in deliberations and impossible in public decision-making.


One Reason Why Power Corrupts

Here is an interesting cognitive bias: people who feel powerful tend not to consider the perspectives of other people – quite literally.

In Adam D. Galinsky, Joe C. Magee, M. Ena Inesi, and Deborah H. Gruenfeld, Power and Perspectives Not Taken, Psychological Science 17(12), 1068–1074, 2006, the researchers primed one group of test subjects by asking them to write down a memory of holding power over other people, while another group was asked to write about a time when others had power over them. Then the subjects were asked to quickly write the letter ‘E’ on their foreheads.

High-power subjects were about three times as likely as low-power subjects to draw the letter oriented so it would be readable by themselves rather than readable by others.

In follow-up experiments it was found that high-power subjects also tended to assume that other people had the same information they had (the "telepathic boss" problem – the boss assumes that everybody knows what he knows and wants). They were also less accurate than low-power subjects at judging emotional expressions. And there were anticorrelations between reported general feelings of being in power in one's life and the tendency to take others' perspectives. Overall, high-power people seem to anchor too heavily on their own vantage point, and this impairs their ability to consider what others see, think and feel.

People with less power likely have to consider other people's intentions and views more carefully, so perhaps the power bias is actually the real baseline and powerless people simply concentrate more on mind reading. But given that the power-primed people made more errors in reading emotions than people primed neither with being powerful nor powerless, this seems unlikely.

What are the implications of a power bias? In general it would make empowered people think they have more support from others for their views than they actually have. Altruists in power would become even less concerned with individual variations in goals and values – i.e. they would tend to become more egalitarian and paternalist. Egoists in power would become more concerned about the ambitions of others – i.e. paranoid.

Is this bias rational? When leading other people, the cognitive load of taking their perspective might be cumbersome, and the increase in stereotyping that seems to occur in people in the "power mode" might also be a form of attention management. Imposing one's own goals onto others might also make them obey more effectively. For leading people towards particular goals this mode might work better. The downside is that if the task relies heavily on individual achievements meshing together, or is more based on voluntary action, a lack of perspective risks missing early signs of trouble and will produce rebellion. The researchers suggest that power and perspective taking need not exclude each other and that responsible leadership might be possible by learning to take both into account. But they do not cite any actual experiments showing that it works.

Maybe we should just promote people with Asperger syndrome to management rather than people with intact theory of mind. That way we will not reduce the total human ability to see things from other perspectives.


The Butler Did It, of Course!

Here is a paper showing the potential practical utility of detecting and reducing biases: Confirmation bias in criminal investigations by O’Brien and Ellsworth. In an experiment, subjects read a police file and were asked halfway through for their hypotheses about who the murderer was; practically everybody named the obvious suspect. On completing the entire file, in which a second and stronger suspect emerges in the latter half, they still tended to suspect the first guy. In a second experiment the subjects were asked to generate counter-hypotheses about why their suspect might be innocent, and this reduced the confirmation bias.

Another troubling source of bias is false confessions, which are triggered by this confirmation bias and then strongly reinforce the erroneous conclusion. The Psychology of Confessions by Kassin and Gudjonsson reviews this. During the preinterrogation interview, police, believing themselves to be better at detecting deception than they are, tend to confidently make false positive judgements of deception in innocent people. Once they have convinced themselves they have caught the culprit, the interrogation becomes guilt-presumptive and rather effective at generating false confessions, particularly from cognitively challenged people. And finally, juries and judges are easily convinced by the confessions.

None of this may be news to anybody on this blog, but it is still rather worrying how readily strong biases are accepted in police investigations and the legal system. Maybe at least the counter-hypothesis trick could be made part of police procedure: at certain points during an investigation the investigators would have to state possible disconfirming hypotheses for the record.


Supping with the Devil

Funding bias occurs when the conclusions of a study are biased towards the outcome the funding agency wants. A typical example from my own field is Turner & Spilich, Research into smoking or nicotine and human cognitive performance: does the source of funding make a difference? Researchers who declared tobacco industry funding more often found neutral or positive cognitive enhancement effects of nicotine than non-funded researchers, who were more evenly split between negative, neutral and positive effects.

There have been some surveys of funding bias. Bekelman, Li & Gross found that 25% of the investigators in their material had industry funding sources. In a meta-analysis of 8 articles, themselves evaluating 1,140 original studies, they found an odds ratio of 3.6 for industry-favourable outcomes when there was industry sponsorship compared to no sponsorship. There are also problems with data sharing and publication bias. A 2004 AMA Council Report also points out that sponsored findings are less likely to be published and more likely to be delayed.
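As a rough illustration of what an odds ratio of that size means, here is a small sketch with invented counts – these are not the numbers from Bekelman, Li & Gross, just figures chosen to land near 3.6:

# Invented counts for illustration only; not the data from Bekelman, Li & Gross.
sponsored_favourable, sponsored_unfavourable = 60, 40
unsponsored_favourable, unsponsored_unfavourable = 29, 71

odds_sponsored = sponsored_favourable / sponsored_unfavourable        # 60/40 = 1.50
odds_unsponsored = unsponsored_favourable / unsponsored_unfavourable  # 29/71 ~= 0.41
print(odds_sponsored / odds_unsponsored)  # ~= 3.7: sponsored studies have roughly 3.6x the odds of a favourable outcome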

A case study of co-authoring a study with the tobacco industry by E. Yano describes both how the industry tried to fudge the results (probably more overtly than in most cases of funding bias) and how the equally fierce anti-tobacco campaigners then misrepresented the results; the poor researcher was in a no-win scenario.

Continue reading "Supping with the Devil" »
