Media Risk Bias Feedback

Recently a friend mentioned that he was concerned about health effects from wifi. I pointed out that this was likely an overblown concern, fed by media echoes of a scaremongering BBC Panorama program, and pointed him to the coverage at Ben Goldacre’s blog Bad Science for a thorough takedown of the whole issue.

To my surprise he came back more worried than ever. He had watched the program on the Bad Science page, but had not looked much at the damning criticism surrounding it. After all, a warning is much more salient than a critique. My friend is highly intelligent and careful about his biases, yet he still fell for this one.

There exists a feedback loop in cases like this. The public is concerned about a possible health threat (electromagnetic emissions, aspartame, GMOs) and demands that the potential threat be evaluated. Funding appears and researchers evaluate the threat. Their findings are reported back through the media to the public, who update their risk estimates.

In an ideal world the end result is that everybody gets better estimates. But this process very easily introduces bias: the initial concern determines where the money goes, so issues the public is concerned about will get more funding regardless of where the real risks are. Media reporting also introduces bias, since the media favour newsworthy findings, and reports of risk attract greater interest than reports of no risk (or reviews of the state of knowledge). Hence studies warning of a risk will be overreported compared to studies downplaying it, and this will lead to a biased impression of the total risk. Finally, the public has an availability bias that makes them take note of reported risks more than reported non-risks. This leads to further concern and demands for investigation.

Note that I leave out publication bias and funding bias here. There may also be feedback from the public to the media, making the media report what they estimate the public wants to hear about. These factors of course muddy things further in real life, but they mostly seem to reinforce the feedback rather than counter it.

A little model to estimate how serious the problem is: imagine that there are N studies published, each with probability p of reaching the right conclusion. On average we should expect Np correct conclusions and N(1-p) erroneous ones. The media will report a study with probability P1 if it finds a risk, and with probability P0 if it finds no risk (P0 < P1). Finally, the public will notice risk reports with probability Q1 and no-risk reports with probability Q0 (Q0 < Q1). If there is actually a risk, the public will notice about NpP1Q1 studies that warn against it, and N(1-p)P0Q0 studies that say there is no danger. As long as p/(1-p) > P0Q0/P1Q1, people will get the correct impression that the evidence tells them to increase their risk estimates; given the above assumptions this always holds when p is close to 1. In the case where there is no risk, the public notices NpP0Q0 studies that say there is no problem, and N(1-p)P1Q1 studies that (erroneously) warn. If p/(1-p) < P1Q1/P0Q0, the public will now be convinced that there is a risk.
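Since the model is just a couple of products, a minimal Python sketch may help make it concrete. The function name and the example parameter values are my own illustrative assumptions, not estimates taken from any of the studies discussed.

```python
def noticed_studies(N, p, P1, P0, Q1, Q0, risk_is_real):
    """Expected numbers of warning and no-risk studies the public notices."""
    if risk_is_real:
        warnings = N * p * P1 * Q1            # correct studies that find the risk
        reassurances = N * (1 - p) * P0 * Q0  # erroneous studies that find none
    else:
        warnings = N * (1 - p) * P1 * Q1      # erroneous studies that find a risk
        reassurances = N * p * P0 * Q0        # correct studies that find none
    return warnings, reassurances

# Illustrative values only: no real risk, p = 0.95, and a 5:1 preference for
# warnings at both the media and attention stages (P1/P0 = Q1/Q0 = 5).
w, r = noticed_studies(N=100, p=0.95, P1=0.5, P0=0.1, Q1=0.5, Q0=0.1,
                       risk_is_real=False)
print(w, r)  # 1.25 vs 0.95: the public notices more warnings than reassurances
```

With these made-up numbers the combined bias factor P1Q1/P0Q0 is 25, which exceeds p/(1-p) = 19, so the noticed evidence points the wrong way even though 95% of the studies are correct.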

If we assume that p is around 0.95, that only requires P1Q1/P0Q0 to be larger than 19 to produce bias feedback. So if overreporting of risk and availability bias each contribute a factor of about 4-5, we get a situation where the scientists are actually producing correct conclusions that there is no risk, but these get consistently misrepresented to the extent that the public believes the risks are becoming more certain. Over time, the risk may become part of common knowledge (just like factoids such as getting cramps from swimming too soon after eating), promoting other biases like bandwagon effects and leading to irrational policy.

If p is lower, which is likely in many uncertain fields, the tendency to overestimate risk gets even worse: for p=0.9 the bias factor needs to be larger than 9 to cause feedback, for p=0.8 just 4, and for p=0.7 just 2.3. In the last case we only need Q1/Q0 and P1/P0 to be about 1.5 each to get bias feedback. 50% overreporting and over-attending to warnings doesn’t sound unlikely at all.
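The thresholds quoted above follow directly from the flip condition; a short calculation (again just a sketch of the same toy model) reproduces them, with the square root giving the per-layer factor under the assumption that media bias and availability bias contribute equally.

```python
# With no real risk, the public's noticed evidence tips toward "risk" once
# P1*Q1/(P0*Q0) exceeds p/(1-p). Values of p are the ones used in the text.
for p in (0.95, 0.9, 0.8, 0.7):
    threshold = p / (1 - p)
    print(f"p={p}: combined bias factor > {threshold:.1f} "
          f"(about {threshold ** 0.5:.1f} per layer)")
```

This prints thresholds of 19, 9, 4 and 2.3, i.e. per-layer factors of roughly 4.4, 3, 2 and 1.5.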

Even when this does not happen it seems likely that subgroups of the public will still be convinced that there is a risk and demand funding, and through another layer of availability bias (and decision makers’ tendency to act on demands rather than non-demands) this produces more funding for research that will keep the worried worried. This is apparently what is happening in cellphone-radiation research, where funding priorities are set externally by public concern rather than based on the best risk estimates.

How can this be overcome? Even if the media in this model were perfectly fair (P1 = P0), the Q1/Q0 factor would still introduce bias. The real threat here is the multiplication of these bias factors (we could introduce a publication bias factor increasing the likelihood of researchers publishing pro-risk papers, a factor for filtering through experts, and so on). It does not seem unlikely to get a factor of at least 2 per layer, and then we need just two or three layers of bias before we risk serious feedback. Since reducing Q1/Q0 is likely hard, it is better to reduce the number of layers: read scientific papers directly, base funding decisions (somehow) not on public concern but on statistical estimates of risk, and ignore or punish media that rely on experts (or worse, other media) rather than the researchers themselves.

The problem here isn’t the media per se, but that biases compound and possibly feed back into a distortion of the fact-finding process. Media priorities make things worse, but they are just one extra layer of compounding.

Some other things that came up while reading up on this:

Derek J. Paulsen, Wrong side of the tracks: exploring the role of newspaper coverage of homicide in socially constructing dangerous places, Journal of Criminal Justice and Popular Culture, 9(3) (2002), 113-127

Even when the risk concern is right, there might be biases due to a focus on salient kinds of risk. The paper above shows that crime reporting accurately covers some crime hotspots (the central ones) but misses others. In the case of electromagnetic fields, people are concerned with wifi and cellphones (availability again!) but not general fields, so funding becomes targeted at small subsets while the rational approach would be to try to figure out the whole picture.

Combs, B. & Slovic, P. (1979). Newspaper coverage of causes of death. Journalism Quarterly 56, 837-843; and Kristiansen, C. M. (1983). Newspaper coverage of diseases and actual mortality statistics. European Journal of Social Psychology 13(2), 193-194

The frequency of newspaper reporting about causes of death is essentially uncorrelated with the frequencies of the real causes of death (0.13 in the Combs study). But the correlation between people’s estimates of the frequencies and the newspaper coverage is high (0.7).

Kumanan Wilson, Catherine Code, Christopher Dornan, Nadya Ahmad, Paul Hébert and Ian Graham, The reporting of theoretical health risks by the media: Canadian newspaper reporting of potential blood transmission of Creutzfeldt-Jakob disease, BMC Public Health. 2004; 4: 1.

Newspapers primarily relied upon expert opinion as opposed to published medical evidence, and some of the activity led to policy effects (such as the Red Cross withdrawing blood out of fear of contamination, creating shortages of blood products and around $11 million in costs – but possibly a saved reputation).

Anders af Wåhlberg and Lennart Sjöberg, Risk perception and the media, Journal of Risk Research 3(1), 31-50 (2000)

A dissenting paper, arguing that media might be biased but tends to be biased in a diversity of ways, and in particular might affect people’s risk perception less than commonly believed. However, their argument seems to be that media does not affect personal risk perceptions as much; my argument in this essay is that impersonal risk perceptions might be significant in aggregate – even if someone doesn’t believe they are likely to be hurt by a risk, they might support further research or policy against it.

  • savagehenry

    I had a similar experience with some friends regarding GMOs. I was the only participant in a thread on a local forum who was not greatly opposed to continuing research on GMOs or their use in agriculture. I pointed them to some pretty good papers on the risks associated with GMOs, specifically horizontal gene transfer, which was what they were arguing was a huge deal that would kill us all. I found, much to my dismay, that after they read the papers their insistence that GMOs would be the death of us all increased! Except for one person, who seems to have, if only slightly, lowered the probability in their mind that GMOs were in general dangerous. It seems that for the rest of them the perception of the risk posed by GMOs had been set so high that no amount of contrary evidence would lower it (and their regurgitation of anti-GMO talking points became more desperate when contrary evidence was presented).

    I thought it was a bit weird at the time but I think Anders is on to something here.

  • http://profile.typekey.com/jhertzli/ Joseph Hertzlinger

    One possible reason for the increased certainty in the face of contrary evidence is that they might have thought that contrary evidence could only have come from a Sinister Conspiracy. Increased contrary evidence clearly indicated the Conspiracy was more powerful than they had thought. If somebody regarded as reliable told them otherwise that could only mean the Conspiracy had gotten to him.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    The Sinister Conspiracy? What do they do? I’m part of the Bayesian Conspiracy myself, but this one sounds interesting.

  • Anon

    I think it’s the left-handed version of the Red-Headed League:

    http://en.wikipedia.org/wiki/The_Red-Headed_League

  • http://www.aleph.se/andart/ Anders

    If there is a possible Sinister Conspiracy trying to downplay risks (cellphone companies, aspartame producers, the sugar lobby, etc.), people indeed seem to attribute no-risk messages to bias from them, while risk messages get attributed to neutral sources. It seems to be asymmetric: it is seldom assumed risk messages come from the Sinister Conspiracy and no-risk messages are neutral, even where we might suspect bias on both sides.

    Maybe we have two kinds of bias here. One is the above attribution bias, where the presence of some interest is enough to devalue all arguments and evidence. Then there is the “underdog bias”: we tend to cheer for the underdog in western culture, and that means that the apparently weaker party gets a bit of the benefit of the doubt.

    I guess this is why so many people trust Greenpeace on GMOs. They look like an underdog and they have a Sinister Conspiracy on the opposing side. Of course, to people who don’t agree with them and like GMOs, the situation is the reverse. But if warnings of risk have a high salience and people tend to assume that the Sinister Conspiracy is on the no-risk side, it could explain the advantage of the fearmongers.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    This is an excellent discussion, but you missed the classic paper Availability Cascades and Risk Regulation.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    it is seldom assumed risk messages come from the Sinister Conspiracy

    fnord

  • savagehenry

    Joseph, that is actually sort of what happened in my case. As soon as I got involved in the discussion and presented evidence contrary to what others were saying, they not only began to say the risk was even greater than before, but some of my friends even claimed that any contrary evidence must come from people who are in bed with corporations that grow or want to grow GMOs.

    I also think Anders is spot on again. It is very rare that I see anyone not jump up and shout “That study which concludes there is little or no risk from X is funded and therefore biased by the producers of X!” Usually it’s quite the opposite. Personally I think it has to do with a general lack of trust of corporations (nearly every time I see people talk about the Sinister Conspiracy it is a corporation behind it, seeking only profit and the deaths of the innocent). Not that all producers of things like cellphones or aspartame should be trusted, but I believe that in America at least there is a strong bias against them.

  • Stuart Armstrong

    I’ve been thinking about your post, Anders, and I can’t say I’ve found any particularly smart way of solving the issue. It’s a chain of Chinese whispers with systematic biases. There are three things you can try to do:
    1) Shorten the chain.
    This is what your post suggests.

    2) If the biases are known, introduce biases early on in the chain to get the desired result at the end.
    Very hard to estimate the final biases reliably ahead of time. And this would involve either journalists or scientists deliberately distorting the evidence in the hope of achieving an unbiased result somewhere down the line. This goes against all their training, and is probably unwise.

    3) Consolidate the source.
    This is the equivalent of the starting player in Chinese whispers announcing “ha! you got it wrong! the real message is…”. This might be possible; we need to get the basic science to speak with one voice (without enforcing an establishment view). It could proceed by getting anyone involved in a particular study to sign promises that they will only quote their own results if they mention all the other results in the field as well. That they will never accept an interview without informing all those doing similar research, and giving them the right to take part in some capacity (video link, for instance). Every paper published in a particular domain should start with the message: Unless you also read ‘this paper, this paper, and this paper too’, your understanding of this subject will be distorted. Modern technology may allow the results of other papers in the same domain to appear as brief summaries in the introduction of every paper.

    May be tricky to do, but it sounds like a worthwhile idea even if it’s only partially implemented…

  • Rob Potter

    I think you’ve hit the “heads I win – tails you lose” situation: If you publish a paper showing risk, you agree with me, but if you publish showing no risk, you are funded by the Sinister Conspiracy!

    Anders is scrupulously fair in not attributing ulterior motives to any side in this; however, I would like to point out that there are sources of ulterior motives. Firstly, in the media, fear sells, simple as that. “If it bleeds, it leads” is trite but, sadly, true. Secondly, this fact is known, such that any group with an agenda need only start the ball rolling with a publication detailing a risk to create the impression that it is real. It then becomes much harder to counter with balanced data.

    I have heard it described as there being two groups: the well-intentioned, but ill-informed who make up the majority of the population; and the well-informed, but ill-intentioned who prey on the former.

    Forgive me if I sound cynical and jaded, but personal experience has left me somewhat scarred by the “well-informed, but ill-intentioned” set!

  • http://darrenreynolds.me.uk Darren Reynolds

    How someone feels having received a communication can have as much to do with the individual words used as the meaning of a whole report. For example, if I told you that Jim wasn’t ugly, then the next time you met Jim, you might look for signs that he is. Certainly, I would create an association in your mind between Jim and ugliness. (Sorry Jim, if you’re reading.)

    Conversely, if I told you my girlfriend didn’t find Mark attractive, you might wonder whether this denial was evidence of some secret liaison going on. In this case I would be better to say my girlfriend thinks Mark is average in his looks, or even that she doesn’t have an opinion.

    When putting together any communication, it is important to use the words we want our audience to remember. As a starting point, we should avoid using negatives like, “I do not find him attractive.” Sadly, a piece about how there is no association between autism and MMR vaccines, in the wider public context, tends to communicate the opposite.

    Then, some words are so over-used in inappropriate ways that they start to convey the opposite meanings. If a politician says that beef is safe, then it probably isn’t, because things rarely are safe when a politician uses that word.

    It’s a tricky subject.

  • Sandy

    Rob writes: “personal experience has left me somewhat scarred by the ‘well-informed, but ill-intentioned’ set!”

    Rob, can we form a support group?