Via Hamish Barney and Mark Liberman, a forthcoming Journal of Cognitive Neuroscience paper showing people are more willing to accept bad psychology explanations if they include irrelevant brain facts:
Actually, I'd come down on the side of the normal people who get tricked by fake neuroscience. If they're presented with irrelevant neuroscience facts, it may make them feel that they're not fully understanding the logic. Hence, in ignorance, they rate the explanation more highly than they otherwise would: "I can't say it's wrong, I can't say it's right." So they honestly, and rationally, give it a middling score instead of a low one.
We want the man on the street to defer to those experts and explanations that we know are accurate - in science, economics and such. To do so, he has to trust scientific authority without understanding it. That means he'll also trust it when it's not justified (running this experiment on fundamentalist religious types would be interesting). To us, the difference between the logical flow of a text and the scientific statements in it is evident - but that's because we have the training.
Fewer excuses for the students, though.
Perhaps this bias could come to popular attention and be reduced if we did some studies using functional brain scans of people reading about studies using functional brain scans.
Eliezer, give your co-authors my deepest sympathies for their unfortunate names.
TGGP: The oddness of referring to oneself in the third person in blog comments was investigated by Thnortwhistle, Bladderbock, and me (2008). The me et al. paper found that it didn't look anywhere near as odd as the alternative.
In scholarly papers it is normal to refer to yourself (or, more accurately, your authorship of other papers) in the third person, but it seems odd in a blog comment.
Bruce, congrats on helping to start this research area.
The seductive details effect was originally demonstrated in a paper by Britton, van Dusen, Gulgoz and Glynn in the Journal of Educational Psychology, which Garner and Gillingham acknowledge as their starting point. The Britton et al. paper found that seductive details decreased understanding of a text written by Time-Life writers. The details had been inserted at the insistent demand of the next level of Time-Life editors for colorful 'nuggets' of information, mostly man-bites-dog anecdotes on the topic of the text. Because these nuggets were only tangentially related to the point of the text, they distracted the reader from that point, thus decreasing understanding of it.
Such insistent demands usually need not be made of journalists, because they are trained and shaped to include such 'nuggets' as a matter of course; one can find evidence of this on virtually any page of any newspaper, and television is even more susceptible ('if it bleeds, it leads', etc.). Commentators point out that economics writing has its own seductive-details dangers, but so does every other field, especially when journalists are let loose upon it.
Journalists do it because it sells soap.
Neuroscience: Don't Be Intimidated
These days, psychiatrists' favorite fig leaf for counter-intuitive claims is to hide behind neuroscience. "You think that serial killers are...
The colors! I feel I must believe it! :-)
Another neuroimaging paper that caught my eye today thanks to colors was "Increased structural connectivity in grapheme-color synesthesia": http://www.nature.com/neuro... Here I think the paper actually contributes something (yes, there is a slight anatomical difference between synesthetes and normals, and between "inner" and "outer" synesthetes). But otherwise diffusion tensor imaging produces the most information-dense and wondrous pictures you could wish for, yet they seldom seem to actually tell you anything new:
http://bmia.bmt.tue.nl/Rese...
http://noodle.med.yale.edu/...
http://www.math-inf.uni-gre...
http://wwwcg.in.tum.de/Rese...
OK, sometimes they help surgery: http://www.brainlab.com/dow...
Enough of this yakking, here are some pretty neuroscience pictures!
There may be something to what Charlton says about neuroeconomics, but really it is just another flavor of the same old stuff that makes ordinary economics "effective as a rhetorical distractor": a seductive details effect, reductionist and materialist explanations, an almost unlimited source of jargon, and it sits at the intersection of several high-status occupations.
Just about everything but the pretty pictures.
But is there any neuroscience information to back up these claims? I didn't see any above. Without any neuroscience information to back it up, it just doesn't seem very convincing to me.
Functional brain scanning has contributed very little indeed to medicine, or indeed to understanding the brain - beyond confirming what was already known from other methodologies. (Functional brain scanning has, however, contributed handsomely to many people's academic careers.)
Behind the scenes and off the record there is very substantial skepticism about the scientific value of functional brain imaging to date.
My impression is that the emergence of neuroeconomics is - from the neuroscientists' side - a response to scientific failure, rather than success. From the economists' side neuroeconomics provides access to very high-impact-factor and highly-cited neuroscience journals (*much* more highly cited than economics journals) as well as to very large sources of bioscientific funding.
So - neuroeconomics is likely to thrive for its career-enhancing properties - I would be pretty astonished if it yields any significant advances to scientific understanding, at least in the next decade.
When I first heard about the study I got a nasty feeling: how many bad papers have passed peer review due to this? It is a good thing the experts at least did not fall for the neuroscience; they recognized a bad explanation when they saw one, and became more negative when a good explanation was "supported" by irrelevant neuroscience. So at least papers in neuroscience journals might not have this bias. But considering how often neuroscience explanations appear in other behavioral science papers, there may still be reason to be concerned about their quality.
Based on the diagrams at the end we can make a rough model. Assume the probability of a paper being published is proportional to its rating plus a constant M ≥ 1 (at M = 1 no bad non-neurosci paper will ever be published; a high M means editors insensitive to quality). Then, for editors with the same preferences as the students and equal numbers of papers in the four categories, we get the fraction of good non-neurosci papers as 1/4, good neurosci papers as 1/4 + 3/(4M), bad non-neurosci papers as 1/4 - 1/(4M), and bad neurosci papers as 1/4 + 1/(4M). The ratio of good to bad papers will be (8M+3)/(8M-3). So we should expect that for relatively stringent editors (low M) there will be more good papers published than bad, simply because good papers making gratuitous use of neuroscience will slip past so much more easily! Of course, in real life people soon figure this out and always publish with neuroscience, in particular when the paper is bad.
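This back-of-the-envelope model is easy to play with numerically. A minimal sketch in Python, assuming hypothetical ratings for the four categories (the specific numbers are illustrative guesses in the spirit of the study's figures, not values from the paper) and a publication probability proportional to rating + M:

```python
# Sketch of the rough model above: publication probability proportional
# to (rating + M), where M measures editors' insensitivity to quality.

def publication_fractions(ratings, M):
    """Fraction of published papers in each category, assuming
    P(publish) is proportional to rating + M."""
    weights = {cat: r + M for cat, r in ratings.items()}
    total = sum(weights.values())
    return {cat: w / total for cat, w in weights.items()}

# Hypothetical ratings (illustrative only): neuroscience adds a bonus,
# and the bonus rescues bad explanations in particular.
ratings = {
    "good, no neuro": 1.0,
    "good, neuro":    1.5,
    "bad, no neuro": -1.0,
    "bad, neuro":     0.5,
}

for M in (2, 5, 20):
    frac = publication_fractions(ratings, M)
    good = frac["good, no neuro"] + frac["good, neuro"]
    bad = frac["bad, no neuro"] + frac["bad, neuro"]
    print(f"M = {M:2d}: good/bad ratio among published = {good / bad:.3f}")
```

Under these assumed ratings the good-to-bad ratio among published papers is largest for stringent editors (small M) and decays toward 1 as M grows, matching the qualitative claim above.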
A lucid example of the mindless use of neuroscience findings in the social sciences is today's article "Neural Responses to Taxation and Voluntary Giving Reveal Motives for Charitable Donations" (Harbaugh, Mayr, Burghart) in Science.
The scientists gave (!) participants $100, told them that some of this money would have to go toward taxes, and then scanned their brains as they watched their money go to a food bank through mandatory taxation. Whenever the participants read the taxation scenarios, the scientists saw a spike of activity in the brain's reward centres. Naturally, the activation was even larger when people gave the money freely instead of just paying it as taxes. The conclusion of the study is clear: people like paying taxes more than they admit.
Nevertheless, Harbaugh et al. showed only that people liked paying "a tax" (i.e. money from the scientists) that went to a food bank. But how much of our actual taxes is spent on such obviously beneficent purposes, without any of the costs of a redistribution policy?
As mentioned above, Skolnick et al. said: "Given the results reported [in our study, neuroscientific] evidence presented in a courtroom, a classroom, or a political debate, regardless of the scientific status or relevance of this evidence, could strongly sway opinion, beyond what the evidence can support."