Recently a friend mentioned that he was concerned about health effects from wifi. I pointed out that this was likely an overblown concern, fed by media echoes of a scaremongering BBC Panorama program, and pointed him at the coverage on Ben Goldacre's blog.
Rob writes: "personal experience has left me somewhat scarred by the 'well-informed, but ill-intentioned' set!"
Rob, can we form a support group?
How someone feels having received a communication can have as much to do with the individual words used as the meaning of a whole report. For example, if I told you that Jim wasn't ugly, then the next time you met Jim, you might look for signs that he is. Certainly, I would create an association in your mind between Jim and ugliness. (Sorry Jim, if you're reading.)
Conversely, if I told you my girlfriend didn't find Mark attractive, you might wonder whether this denial was evidence of some secret liaison going on. In this case it would be better to say my girlfriend thinks Mark is average in his looks, or even that she doesn't have an opinion.
When putting together any communication, it is important to use the words we want our audience to remember. As a starting point, we should avoid using negatives like, "I do not find him attractive." Sadly, a piece about how there is no association between autism and MMR vaccines, in the wider public context, tends to communicate the opposite.
Then, some words are so over-used in inappropriate ways that they start to convey the opposite meanings. If a politician says that beef is safe, then it probably isn't, because things rarely are safe when a politician uses that word.
It's a tricky subject.
I think you've hit the "heads I win - tails you lose" situation: If you publish a paper showing risk, you agree with me, but if you publish showing no risk, you are funded by the Sinister Conspiracy!
Anders is scrupulously fair in not attributing ulterior motives to any side in this; however, I would like to point out that there are sources of ulterior motives. Firstly, in the media, fear sells, simple as that. "If it bleeds, it leads" is trite but, sadly, true. Secondly, this fact is well known, such that any group with an agenda need only start the ball rolling with a publication detailing a risk to create the impression that the risk is real. It then becomes much harder to counter with balanced data.
I have heard it described as there being two groups: the well-intentioned, but ill-informed who make up the majority of the population; and the well-informed, but ill-intentioned who prey on the former.
Forgive me if I sound cynical and jaded, but personal experience has left me somewhat scarred by the "well-informed, but ill-intentioned" set!
I've been thinking about your post, Anders, and I can't say I've found any particularly smart way of solving the issue. It's a chain of Chinese whispers with systematic biases. Three things you can try to do:

1) Shorten the chain. This is what your post suggests.

2) If the biases are known, introduce biases early on in the chain to get the desired result at the end. It is very hard to estimate the final biases reliably ahead of time. And this would involve either journalists or scientists deliberately distorting the evidence in the hope of achieving an unbiased result somewhere down the line. This goes against all their training, and is probably unwise.

3) Consolidate the source. This is the equivalent of the starting player in Chinese whispers announcing, "Ha! You got it wrong! The real message is..." This might be possible; we need to get the basic science to speak with one voice (without enforcing an establishment view). It could proceed by getting anyone involved in a particular study to sign promises that they will only quote their own results if they mention all the other results in the field as well; that they will never accept an interview without informing all those doing similar research and giving them the right to take part in some capacity (by video link, for instance). Every paper published in a particular domain should start with the message: unless you also read this paper, this paper, and this paper too, your understanding of the subject will be distorted. Modern technology may allow the results of other papers in the same domain to appear as brief summaries in the introduction of every paper.
It may be tricky to do, but it sounds like a worthwhile idea even if it's only partially implemented...
Joseph, that is actually sort of what happened in my case. As soon as I got involved in the discussion and presented evidence contrary to what others were saying they not only began to say the risk was even greater than it was before but some of my friends even claimed that any contrary evidence must come from people who are in bed with corporations that grow or want to grow GMOs.
I also think Anders is spot on again. It is very rare that I see anyone not jump up and shout, "That study which concludes there is little or no risk from X is funded and therefore biased by the producers of X!" Usually it's quite the opposite. Personally I think it has to do with a general lack of trust in corporations (nearly every time I see people talk about the Sinister Conspiracy, it is a corporation behind it, seeking only profit and the deaths of the innocent). Not that all producers of things like cellphones or aspartame should be trusted, but I believe that in America, at least, there is a strong bias against them.
This is an excellent discussion, but you missed the classic paper Availability Cascades and Risk Regulation.
If there is a possible Sinister Conspiracy trying to downplay risks (cellphone companies, aspartame producers, the sugar lobby, etc.), people indeed seem to attribute no-risk messages to bias from them, while risk messages get attributed to neutral sources. It seems to be asymmetric: it is seldom assumed that risk messages come from the Sinister Conspiracy and no-risk messages are neutral, even where we might suspect bias on both sides.
Maybe we have two kinds of bias here. One is the above attribution bias, where the presence of some interest is enough to devalue all arguments and evidence. Then there is the "underdog bias": we tend to cheer for the underdog in Western culture, and that means the apparently weaker party gets a bit of the benefit of the doubt.
I guess this is why so many people trust Greenpeace on GMOs. They look like an underdog and they have a Sinister Conspiracy on the opposing side. Of course, to people who don't agree with them and like GMOs, the situation is the reverse. But if warnings of risk have a high salience, and people tend to assume the Sinister Conspiracy is on the no-risk side, it could explain the advantage of the fearmongers.
I think it's the left-handed version of the Red-Headed League:
The Sinister Conspiracy? What do they do? I'm part of the Bayesian Conspiracy myself, but this one sounds interesting.
One possible reason for the increased certainty in the face of contrary evidence is that they might have thought that contrary evidence could only have come from a Sinister Conspiracy. Increased contrary evidence clearly indicated the Conspiracy was more powerful than they had thought. If somebody regarded as reliable told them otherwise that could only mean the Conspiracy had gotten to him.
I had a similar experience with some friends regarding GMOs. I was the only participant in a thread on a local forum who was not greatly opposed to continuing research on GMOs or their use in agriculture. I pointed them to some pretty good papers on the risks associated with GMOs, specifically horizontal gene transfer, which was what they were arguing was a huge deal that would kill us all. I found, much to my dismay, that after they read the papers, their insistence that GMOs would be the death of us all increased! Except for one person, who seems to have, if only slightly, lowered the probability in their mind that GMOs were in general dangerous. It seems that for the rest of them the perception of the risk posed by GMOs had been set so high that no amount of contrary evidence would lower it in their minds (and their regurgitation of anti-GMO talking points became more desperate when contrary evidence was presented).
I thought it was a bit weird at the time but I think Anders is on to something here.