Believing Too Little

Seth Roberts:

There are two mistakes you can make when you read a scientific paper: You can believe it (a) too much or (b) too little. The possibility of believing something too little does not occur to most professional scientists, at least if you judge them by their public statements, which are full of cautions against too much belief and literally never against too little belief. Never. If I’m wrong — if you have ever seen a scientist warn against too little belief — please let me know. Yet too little belief is just as costly as too much. … Addendum. By “too little belief” I meant too little belief in facts … there is plenty of criticism of too little belief in this or that favored theory.

Academics live to show they are not “simple” like ordinary people, but instead apply complex theories and data analysis techniques. So academics tend to reject the simple data and simple theories that persuade most people.

  • http://apperceptual.wordpress.com/ Peter Turney

    By “too little belief” I meant too little belief in facts … there is plenty of criticism of too little belief in this or that favored theory.

    The distinction between fact and theory is controversial. In philosophy of science, a popular phrase is “observation is theory-laden”, and there are a number of examples that support this claim. Recent psychology experiments show that human colour perception is affected by the colour terms of the perceiver’s language.

    http://plato.stanford.edu/entries/evidence/
    http://apperceptual.wordpress.com/2008/01/29/language-affects-perception/

  • Caledonian

    There have been such experiments for more than twenty years.

    The Whorf hypothesis, if not totally invalid, has long been recognized as being grossly over-simplistic. Human color perception is not restricted by language. Human color categorization, however, is so affected.

  • Some Mathematician

    I say this to my colleagues all the time. I’m an applied mathematician, but much closer to the physics community than most of the people I work with.

    Physicists have a lot of heuristic calculation devices. Many are completely non-rigorous, but they also work fantastically well (Gamow vectors, for instance). My (more pure) mathematical colleagues describe them as nonsense (since they are unproven, and mostly unprovable), but then I point out the agreement with experiment/numerics.

    They also scoff when I say “one cooked up counterexample doesn’t make the theorem false.”

  • Enginerd

    The general scientific mindset is to start from believing absolutely nothing, and require sufficient proof to be convinced of anything.

    Let’s say somebody publishes a paper on something fairly uncontroversial, say they measured some properties of a few different materials. We now have some numbers to anchor to for those values, but it could turn out that the experimentalist screwed something up. That happens all the time.

    To sufficiently counter that well-known anchoring bias, one almost has to believe the result too little. Until the result has been independently confirmed (several times), believing it too strongly is somewhat high risk.

    Socially speaking, it’s a higher risk to believe something which later turns out to be false than the other way around. Practically, belief in something false can cost you time and resources, while disbelief in something true usually only results in lost opportunity.

    In short, cautiousness as a strategy is lower risk.

    -Enginerd

  • http://julesandjames.blogspot.com/ James Annan

    “So academics tend to reject the simple data and simple theories that persuade most people.”

    What do you mean by this? Can you give examples?

  • http://hanson.gmu.edu Robin Hanson

    James, I thought I was paraphrasing Seth, and so his examples are my examples.

  • http://julesandjames.blogspot.com/ James Annan

    Well, the only thing I see that comes close to an example (in the comments to his post) is a brief mention of smoking and lung cancer, and even there Seth never claims that simple evidence which persuaded most people was rejected by academics; he merely uses that example to show how more rapid acceptance of a new theory can be beneficial.

    It was all before my time, but ISTM that “the doctors” worked this out fairly efficiently in the early 1950s, although some earlier work in the 30s had not been efficiently promulgated. At which point did “the simple data and simple theory” persuade most people?

  • http://www.alfin2100.blogspot.com Al Fin

    No, scientists do not really reject simple ideas and theories. The simple ideas and theories go underground, into the scientist’s subconscious. They are still there, just not in the highly qualified language of scientific publications.

    Understanding that subterranean reality of the all-too-human scientist, one can more easily observe how the simple ideas are translated into high-sounding scientific jargon. It is a common phenomenon.

  • http://www.sethroberts.net Seth Roberts

    Here are two examples of what I was talking about — a tendency to focus on the limitations of new data (what can’t be learned from it) and to ignore its strengths (what can be learned from it).

    1. Everyone’s heard “correlation does not imply causation”. I’ve never heard a parallel saying about what correlation does imply. Such a saying is possible; it would be along the lines of “something is better than nothing.”

    2. Recently I attended a research group meeting in which a postdoc talked about new data she had gathered. The entire discussion was about the problems with it — what she couldn’t infer from it. There could have been a long discussion about how it added to what we already know, but there wasn’t a word about this.

    I’m not saying criticism or skepticism is bad; I’m saying that, when scientists encounter new facts, highly unbalanced critiques (much more negative than positive) are the norm. The lack of exploration of what you can learn from new data is the problem — or the opportunity.

  • http://apperceptual.wordpress.com/ Peter Turney

    1. Everyone’s heard “correlation does not imply causation”. I’ve never heard a parallel saying about what correlation does imply. Such a saying is possible; it would be along the lines of “something is better than nothing.”

    There is a good answer to this question. See the books of Glymour, Pearl, Spirtes, Scheines, etc.:

    http://www.bayesnets.com/CausalityReferences.htm

    Unfortunately, I don’t know how to reduce it to a simple saying.

  • http://zbooks.blogspot.com Zubon

    I’m not saying criticism or skepticism is bad; I’m saying that, when scientists encounter new facts, highly unbalanced critiques (much more negative than positive) are the norm. The lack of exploration of what you can learn from new data is the problem — or the opportunity.

    I would not restrict that to scientists. I recall an undergraduate philosophy class where the professor opened discussion with instructions to start with what was good about the author, what we could use, rather than critique. Silence rang out, and it took several minutes for anyone to start getting comfortable with the concept.

    Compare online discussions, where “I agree” is usually considered annoying spam rather than useful feedback. You must have something to add, which usually means a point of disagreement.

  • http://www.scottaaronson.com Scott Aaronson

    That seems completely wrong to me — scientists have (for example) been remonstrating for decades that people believe too little in the fact of evolution, and the fact of human-caused climate change.

    Closer to home, I’ve had to tell journalists who were inclined to be cautious about the existence of these so-called “superpositions” and “wave functions” that no, they should not be so cautious.

  • http://julesandjames.blogspot.com/ James Annan

    Seth,

    Based on your comments it does not seem that Robin’s attempted paraphrase is reasonable. “Negative critiques are the norm” is a far cry from “academics tend to reject the simple data and simple theories that persuade most people”. Is there anyone here who actually wants to defend the latter statement?

  • http://hanson.gmu.edu Robin Hanson

    Scott, I think evolution and superpositions are more like theories than facts as Seth was using the terms.

    James, I still defend my statement.

  • Gray Area

    “Unfortunately, I don’t know how to reduce it to a simple saying.”

    Some examples:

    (a) Lack of correlation does not imply lack of causation.

    (b) Correlation implies either causation or common cause.

    (c) No causes in — no causes out.

  • http://apperceptual.wordpress.com/ Peter Turney

    (b) Correlation implies either causation or common cause.

    OK, let me take a stab at this: “If A and B are correlated, then either (1) A causes B, (2) B causes A, (3) C causes both A and B, (4) the correlation between A and B is due to random noise and will go away when more data are collected, or (5) A and B are part of a system with feedback loops, and it is not meaningful to ask whether A causes B or B causes A — they cause each other.”
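
    A tiny simulation (my illustration, not from the thread) can make case (3) concrete: when a common cause C drives both A and B, the two correlate substantially even though neither causes the other.

    ```python
    # Illustrative sketch of the common-cause case: C influences A and B,
    # but there is no A -> B or B -> A link.
    import random

    random.seed(0)
    n = 10_000
    c = [random.gauss(0, 1) for _ in range(n)]
    # A and B each equal C plus independent noise.
    a = [ci + random.gauss(0, 1) for ci in c]
    b = [ci + random.gauss(0, 1) for ci in c]

    def corr(x, y):
        """Pearson correlation coefficient, computed from scratch."""
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        vx = sum((xi - mx) ** 2 for xi in x)
        vy = sum((yi - my) ** 2 for yi in y)
        return cov / (vx * vy) ** 0.5

    # Substantial correlation (theoretically 0.5) with no direct causal link.
    print(round(corr(a, b), 2))
    ```

    Naively reading the correlation as “A causes B” would be exactly the mistake the saying warns against; distinguishing the five cases above generally requires interventions or the kind of causal-structure analysis in the references Peter cites.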

    Academics live to show they are not “simple” like ordinary people, but instead apply complex theories and data analysis techniques. So academics tend to reject the simple data and simple theories that persuade most people.

    Perhaps the academics are right: There are no simple answers. Consider causality and correlation, for example.

  • Mason

    “There are two mistakes you can make when you read a scientific paper: You can believe it (a) too much or (b) too little. ….Yet too little belief is just as costly as too much.”

    Accounting has this same problem; accountants are falling over themselves to give things the lowest possible value. This is best summed up in their favorite phrase, “lower of cost or market.” Understating assets is just as misleading as overstating them, so why do they do it?

    Because the pressure in accounting is to overstate. Being conservative has two advantages; primarily it shows you’re not overstating (adding credibility to the numbers presented), and second, it helps compensate for whatever overstating there is.

    In science the pressure is to make new, groundbreaking discoveries. Doubting distinguishes one from those who eagerly jump on every new topic, and it preserves one’s credibility for when a real discovery is made.

    If many more discoveries were not claimed than actually made, doubting would be a bad strategy; but because most claims are false, doubting holds value.

  • http://julesandjames.blogspot.com/ James Annan

    Robin,

    Sorry if it was ambiguous, but my use of the term “defend” was meant as a request for the presentation of some supporting evidence (even anecdotal).