Recently a friend mentioned that he was concerned about health effects from wifi. I pointed out that this was likely an overblown concern, fed by the media echoes of a scaremongering BBC Panorama program, and pointed him at the coverage on Ben Goldacre’s blog Bad Science for a thorough takedown of the whole issue.
To my surprise he came back more worried than ever. He had watched the program on the Bad Science page but had not looked much at the damning criticism surrounding it. After all, a warning is much more salient than a critique. My friend is highly intelligent and careful about his biases, yet he fell for this one.
There is a feedback loop in cases like this. The public becomes concerned about a possible health threat (electromagnetic emissions, aspartame, GMOs) and demands that the potential threat be evaluated. Funding appears and researchers evaluate the threat. Their findings are reported back through the media to the public, who update their risk estimates.
In an ideal world the end result is that everybody gets better estimates. But this process very easily introduces bias: the initial concern determines where the money goes, so issues the public is concerned about will get more funding regardless of where the real risks are. The media reporting also introduces bias, since the media favour newsworthy stories, and findings of risk tend to generate greater interest than reports of no risk (or reviews of the state of knowledge). Hence studies warning of a risk will be overreported compared to studies downplaying it, leading to a biased impression of the total risk. Finally, the public has an availability bias that makes them take note of reported risks more than reported non-risks. This leads to further concern and demands for investigation.
Note that I leave out publication bias and funding bias here. There may also be feedback from the public to the media, making the media report things they estimate the public wants to hear about. These factors of course muddy things further in real life, but they mostly seem to reinforce the feedback rather than counter it.
A little model to estimate how serious the problem is: imagine that there are N studies published, and that each has probability p of being right. On average we should expect Np correct conclusions and N(1-p) erroneous ones. Media will report a study with probability P1 if it finds a risk, and with probability P0 if it finds no risk (P0 < P1). Finally, the public will notice risk reports with probability Q1 and non-risk reports with probability Q0 (Q0 < Q1). If there actually is a risk, the public will notice about NpP1Q1 studies that warn against it and N(1-p)P0Q0 studies that say there is no danger. As long as p/(1-p) > P0Q0/P1Q1, people will get the correct impression that the evidence tells them to increase their risk estimates; this is always the case under the above assumptions if p is close to 1. If there is no risk, the public notices NpP0Q0 studies that say there is no problem and N(1-p)P1Q1 studies that (erroneously) warn. If p/(1-p) < P1Q1/P0Q0, the public will now become convinced that there is a risk.
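As a sanity check, here is a minimal Python sketch of the model. The function names and the specific numbers in the example are mine, chosen only for illustration of the inequality above, not taken from the original text.

```python
# Minimal sketch of the model above (variable names are my own).
# p  : probability a study reaches the correct conclusion
# P1 : probability media report a study that finds a risk
# P0 : probability media report a study that finds no risk (P0 < P1)
# Q1 : probability the public notices a reported risk finding
# Q0 : probability the public notices a reported no-risk finding (Q0 < Q1)

def perceived_balance(N, p, P1, P0, Q1, Q0, risk_is_real):
    """Return (noticed warnings, noticed all-clears) the public ends up seeing."""
    if risk_is_real:
        warnings   = N * p       * P1 * Q1   # correct studies that warn
        all_clears = N * (1 - p) * P0 * Q0   # erroneous studies that say no danger
    else:
        warnings   = N * (1 - p) * P1 * Q1   # erroneous studies that warn
        all_clears = N * p       * P0 * Q0   # correct studies that say no danger
    return warnings, all_clears

def public_believes_risk(p, P1, P0, Q1, Q0, risk_is_real):
    """True if the evidence the public notices pushes them toward 'there is a risk'."""
    w, a = perceived_balance(1.0, p, P1, P0, Q1, Q0, risk_is_real)
    return w > a

# Illustration: accurate science (p = 0.95) but a factor-5 reporting bias and a
# factor-5 attention bias. Even with no real risk, the public sees more
# warnings than all-clears, so their risk estimate drifts upward.
print(public_believes_risk(p=0.95, P1=0.5, P0=0.1, Q1=0.5, Q0=0.1,
                           risk_is_real=False))   # -> True (bias feedback)
```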
If we assume that p is around 0.95, this only requires P1Q1/P0Q0 to be larger than 19 to produce bias feedback. So if there is about a factor 4-5 overrepresentation of risk in reporting and a factor 4-5 availability bias, we get a situation where the scientists are actually producing correct conclusions that there is no risk, but they get so consistently misrepresented that the public believes the risk is becoming more certain. Over time, the risk may become part of common knowledge (just like the factoid that swimming too soon after eating causes cramps), promoting other biases like bandwagon effects and leading to irrational policy.
If p is lower, which is likely in many uncertain fields, the tendency to overestimate risk gets even worse: for p=0.9 the bias factor only needs to be larger than 9 to cause feedback, for p=0.8 just 4, and for p=0.7 just 2.3. In the last case we only need Q1/Q0 and P1/P0 to be about 1.5 each to get bias feedback. 50% overreporting and 50% overattending to warnings doesn’t sound unlikely at all.
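For anyone who wants to check the arithmetic, a short script (mine, not from the original text) that computes these thresholds from the condition P1Q1/P0Q0 > p/(1-p):

```python
# Numeric check of the thresholds in the text: feedback occurs when the
# combined bias factor (P1*Q1)/(P0*Q0) exceeds the odds p/(1-p).
for p in (0.95, 0.9, 0.8, 0.7):
    odds = p / (1 - p)         # combined bias factor needed for feedback
    per_layer = odds ** 0.5    # per-layer factor if reporting and attention bias are equal
    print(f"p = {p:.2f}: combined factor > {odds:.1f}, "
          f"or about {per_layer:.1f} per layer")
# p = 0.95: combined factor > 19.0, or about 4.4 per layer
# p = 0.90: combined factor > 9.0, or about 3.0 per layer
# p = 0.80: combined factor > 4.0, or about 2.0 per layer
# p = 0.70: combined factor > 2.3, or about 1.5 per layer
```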
Even when this does not happen, it seems likely that subgroups of the public will still be convinced that there is a risk and demand funding, and through another layer of availability bias (and decision makers’ tendency to act on demands rather than non-demands) this produces more funding for research that will keep the worried worried. This is apparently what is happening in cellphone-radiation research, where funding priorities are set externally by public concern rather than by the best risk estimates.
How can this be overcome? Even if the media in this model were perfectly fair (P1=P0), the Q1/Q0 factor alone still introduces bias. The real threat here is the multiplication of these bias factors (we could introduce a publication bias factor increasing the likelihood of researchers publishing pro-risk papers, a factor for filtering through experts, and so on). It doesn’t seem that unlikely to get a factor of at least 2 per layer, and then we just need two or three layers of bias before we risk serious feedback. Since reducing Q1/Q0 is likely hard, it is better to reduce the number of layers: read scientific papers directly, base funding decisions (somehow) on statistical estimates of risk rather than on public concern, and ignore or punish media that rely on experts (or worse, other media) rather than on the researchers themselves. A small illustrative calculation of how the layers compound follows.
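A rough illustration of the compounding (the factor of 2 per layer is just an assumption for the example):

```python
# With k independent bias layers, each overweighting warnings by a factor f,
# the combined factor is f**k; feedback kicks in once f**k > p/(1-p),
# i.e. for any study accuracy p below f**k / (1 + f**k).
f = 2.0
for k in (1, 2, 3):
    combined = f ** k
    p_limit = combined / (1 + combined)   # highest accuracy still overwhelmed
    print(f"{k} layer(s): combined factor {combined:.0f}, "
          f"feedback for any p below {p_limit:.2f}")
# 1 layer(s): combined factor 2, feedback for any p below 0.67
# 2 layer(s): combined factor 4, feedback for any p below 0.80
# 3 layer(s): combined factor 8, feedback for any p below 0.89
```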
The problem here isn’t media per se, but that biases are compounding and possibly leading back to a distortion of the fact-finding process. Media priorities make things worse, but it is just an extra layer of compounding.
Some other things that came up while reading up on this:
Derek J. Paulsen, Wrong side of the tracks: exploring the role of newspaper coverage of homicide in socially constructing dangerous places, Journal of Criminal Justice and Popular Culture, 9(3) (2002) 113-127
Even when the risk concern is right there might be biases due to focus on salient kinds of risk. The above paper shows that crime reporting accurately reports some crime hotspots (the central ones) but misses others. In the case of electromagnetic fields people are concerned with wifi and cellphones (availability again!) but not general fields, so funding becomes targeted at small subsets while the rational approach would be to try to figure out the whole picture.
Combs, B. & Slovic, P. (1979). Newspaper coverage of causes of death. Journalism Quarterly 56, 837-843; and Connie M. Kristiansen (1983). Newspaper coverage of diseases and actual mortality statistics. European Journal of Social Psychology 13(2), 193-194.
The frequency with which newspapers report causes of death is essentially uncorrelated with the actual frequencies of those causes of death (0.13 in the Combs study). But the correlation between people’s estimates of the frequencies and the newspaper coverage is high (0.7).
Kumanan Wilson, Catherine Code, Christopher Dornan, Nadya Ahmad, Paul Hébert and Ian Graham, The reporting of theoretical health risks by the media: Canadian newspaper reporting of potential blood transmission of Creutzfeldt-Jakob disease, BMC Public Health. 2004; 4: 1.
Newspapers primarily relied upon expert opinion as opposed to published medical evidence, and some of the activity led to policy effects (such as the Red Cross withdrawing blood out of fear of contamination, creating shortages of blood products and around $11 million in costs – but possibly a saved reputation).
Anders af Wåhlberg and Lennart Sjöberg, Risk perception and the media, Journal of Risk Research 3(1), 31-50 (2000)
A dissenting paper, arguing that media might be biased but tends to be biased in a diversity of ways, and in particular might affect people’s risk perception less than commonly believed. However, their argument seems to be that media does not affect personal risk perceptions as much; my argument in this essay is that impersonal risk perceptions might be significant in aggregate – even if someone doesn’t believe they are likely to be hurt by a risk, they might support further research or policy against it.
Rob writes: "personal experience has left me somewhat scarred by the 'well-informed, but ill-intentioned' set!"
Rob, can we form a support group?
How someone feels after receiving a communication can have as much to do with the individual words used as with the meaning of the report as a whole. For example, if I told you that Jim wasn't ugly, then the next time you met Jim you might look for signs that he is. Certainly, I would create an association in your mind between Jim and ugliness. (Sorry Jim, if you're reading.)
Conversely, if I told you my girlfriend didn't find Mark attractive, you might wonder whether this denial was evidence of some secret liaison going on. In this case it would be better for me to say my girlfriend thinks Mark is average in his looks, or even that she doesn't have an opinion.
When putting together any communication, it is important to use the words we want our audience to remember. As a starting point, we should avoid using negatives like, "I do not find him attractive." Sadly, a piece about how there is no association between autism and MMR vaccines, in the wider public context, tends to communicate the opposite.
Then, some words are so over-used in inappropriate ways that they start to convey the opposite meaning. If a politician says that beef is safe, then it probably isn't, because things rarely are safe when a politician uses that word.
It's a tricky subject.