A Few Quick Links

1.  Via The Situationist, here is a page exploring seven biases of human memory, including the ways in which eyewitness testimony can be biased, how false memories can be implanted in people, the way that consistency bias causes us to misremember our own past beliefs or actions, and more. 

2.  Tyler Cowen has an article in The New Republic that is rather cynical about the value of most published research:

The sad truth is that "non-fiction" has been unreliable from the beginning, no matter how finely grained a section of human knowledge we wish to consider. For instance, in my own field, critics have tried to replicate the findings in academic journal articles by economists using the initial data sets. Usually, it is impossible to replicate the results of the article even half of the time. Note that the journals publishing these articles often use two or three referees–experts in the area–and typically they might accept only 10 percent of submitted papers. By the way, economics is often considered the most rigorous and the most demanding of the social sciences.

3.  Seth Roberts points out that the value of data is not binary, i.e., either convincing or worthless:

A vast number of scientists have managed to convince themselves that skepticism means, or at least includes, the opposite of valuing data. They tell themselves that they are being “skeptical” — properly, of course — when they ignore data. They ignore it in all sorts of familiar ways. They claim “correlation does not equal causation” — and act as if the correlation is meaningless. They claim that “the plural of anecdote is not data” — apparently believing that observations not collected as part of a study are worthless. Those are the low-rent expressions of this attitude. The high-rent version is when a high-level commission delegated to decide some question ignores data that does not come from a placebo-controlled double-blind study, or something similar.

So considering links 2 and 3, should we really downgrade the evidentiary value of published studies and upgrade the evidentiary value of anecdotes?  (That wouldn’t mean treating them both as equal, of course.) 

  • Yvain

    Anecdotal evidence could support something’s existence. If my extremely trustworthy friend says he saw Bigfoot, that’s Bayesian evidence for the existence of Bigfoot. Not very strong evidence if there are other possible explanations, but it does count.

    Likewise, anecdotal evidence might be good for trends with correlations close to 1. I’ve never seen a formal scientific study linking beheading to death, but the anecdotal accounts from 18th century France have left me pretty convinced.

    But if it’s got a correlation of ~1, people probably know it already. For things with lower correlations – the interesting things – anecdotal evidence should pretty quickly be swamped by the background noise. It would be technically possible to make anecdotal evidence useful in these cases, with reasoning along the lines of: “My grandpa smokes and is perfectly healthy at 100. What are the chances of my knowing one healthy centenarian smoker in a world where smoking is correlated with cancer at such-and-such a level, versus in a world where smoking has no correlation with cancer?” – and then shifting the probability accordingly. That would work well enough if you had absolutely no other data on the subject, though it would probably yield a very low level of confidence.

    But I’ve never heard of anyone actually doing that, probably because if they cared enough to do that much math they’d just run a study instead. Usually when I hear “anecdotal evidence” it’s more along the lines of “My Aunt Sally got robbed by a guy with black hair once, so people with black hair are usually criminals” which is clearly bunk.

    And it’s actually even worse than that, since there’d probably be a confirmation bias and people would only remember those anecdotes that fit their theories. And a selection bias, since only people who notice something unusual will consider it.

    Anecdotal evidence about correlations might be useful in deciding whether or not to fund a study, but beyond that I’m doubtful. But I’m not sure I’m properly understanding the kind of evidence Roberts means. Can you give an example of a really good use of anecdotal evidence?
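Yvain’s grandpa-smoker reasoning can be sketched as a quick Bayesian update. Every number below – the per-acquaintance probabilities, the acquaintance count, and the prior – is invented purely for illustration; none of it comes from an actual study:

```python
# A back-of-the-envelope version of the update described above.
# All numbers are made up for illustration.

def prob_at_least_one(p_single, n):
    """P(at least one healthy centenarian smoker among n acquaintances)."""
    return 1 - (1 - p_single) ** n

# Hypothetical chance that any given acquaintance is a healthy
# 100-year-old smoker, under each hypothesis:
p_if_correlated = 0.0001    # smoking is correlated with cancer
p_if_uncorrelated = 0.0005  # smoking has no correlation with cancer
n = 500                     # people whose health you roughly know

likelihood_harmful = prob_at_least_one(p_if_correlated, n)
likelihood_harmless = prob_at_least_one(p_if_uncorrelated, n)

# Bayes' rule, starting from a 0.95 prior that smoking is harmful:
prior = 0.95
posterior = (likelihood_harmful * prior) / (
    likelihood_harmful * prior + likelihood_harmless * (1 - prior))

print(round(posterior, 3))  # the anecdote nudges 0.95 down to ~0.807
```

Under these invented numbers, one healthy centenarian smoker in your circle shifts the probability only modestly, which fits Yvain’s point that this kind of evidence would be swamped once any real study data were available.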

  • And the availability heuristic too!

    I think what Seth Roberts has in mind, though, are the kinds of self-experiments that he has made his specialty. For example, he’ll take different doses of flaxseed oil over a period of months, and make charts of his experiences (including, if I recall correctly, his ability to balance on one foot). So if he observes that certain abilities consistently rise when he takes more than a particular level of flaxseed oil — sure, it’s just an anecdote that’s far from a double-blind study, and should be discounted appropriately, but that doesn’t mean the evidence is worth literally nothing. Right?

    Another example is gardening. There are plenty of places to find anecdotal evidence (both from friends and from various websites) where people explain that in their experience, adding a bit of limestone prevents tomato rot, or that marsh hay makes good mulch, or a gazillion other pieces of similar advice. Is any of this confirmed by a double-blind study with a large sample size? No idea. Still, it would be foolish on my part to ignore the experiential wisdom offered by long-time gardeners by saying that it’s all “anecdotal.”

    Maybe a theme here is that an anecdote’s usefulness might be associated with domains where you can try out different strategies and get relatively quick feedback on whether something “works.”

  • “should we really downgrade the evidentiary value of published studies and upgrade the evidentiary value of anecdotes?”

    Who is the “we” here? The scientific community? The general public? The readership of Overcoming Bias? If we’re talking about the general public, I’d say that they already value anecdotes much more highly than published studies.

  • The royal “we,” of course.

    Seriously, the “we” would be educated readers who might have been accustomed to dismissing any anecdote or correlation as entirely worthless, while giving too much credibility to published results that might not, in fact, be reliable.

  • I think the problem with anecdotal evidence, particularly when it comes from a third party, is that it’s so easy to exploit. With 6 billion people, you can find anecdotal evidence for anything. If I believe that full moons cause heart attacks, I can probably find countless instances of heart attacks during full moons to support that claim. Unless it happens to you personally, the only thing an anecdote indicates is that someone is trying to convince you of something.