Seen vs. Unseen Biases

I suspect the following issue will be a thorn in our sides for some time to come: when can we justify seen biases as correcting for unseen biases? "Seen" biases are relatively easy to observe and document, whereas "unseen" biases are claimed to exist but are hard to observe directly.

The issue showed up in "Hide Sociobiology Like Sex?," where some wanted the seen bias of focusing children on altruism, instead of on more realistic selfishness, to correct for the unseen bias of children confusing "is" and "ought." And it shows up again in this recent Washington Post article on drug effectiveness:

Treating schizophrenia with an older, cheaper drug, rather than with heavily promoted newer medications, reduces the cost by as much as 30 percent with no apparent difference in safety and effectiveness, according to the first study to examine the economic implications of antipsychotic drug prescribing practices in the United States. … The findings have roiled the field of psychiatry in a fierce debate over the study’s implications and have triggered concerns it could lead public and private insurers to limit drastically which drugs they will pay for. …the new finding faced stiff headwinds before it was published, and was subjected to an extraordinary level of review.  … several experts said they were very worried, however, that the choice of medications would be taken from physicians and would be decreed by insurers. That would ignore the complexities of treating schizophrenia and the need for flexibility, the experts said.  Patients who have tried perphenazine unsuccessfully, for example, may not be good candidates to go back on it.

The seen effect here is that cheaper drugs seem just as effective, so insurers may limit coverage to only them, to counter the seen doctor bias of low sensitivity to drug prices. Doctors, on the other hand, resist these new findings, because they fear losing their freedom to choose drugs based on their judgment of detailed patient circumstances. Since there are no clinical trials yet to document the claim that such doctor judgment improves patient outcomes on average, this is an unseen bias (if it exists).

I once told an investment adviser I didn’t want his services because people like him lose money for their clients on average.   He replied,  "But none of my clients are average; are you?"   I guess he thought his seen bias was justified by all those unseen biases he was fixing.

The key issue here is that if it is too easy to believe in unseen biases, we could justify all of our seen biases as countering made-up unseen biases. 

  • I don’t understand what an “unseen bias” is; perhaps an imaginary or hypothetical phenomenon? I’m not sure these examples qualify as “biases,” though. In one case we hypothesize that giving doctors full flexibility in determining a patient’s medication will improve his outcome. This is an unverified hypothesis and might or might not turn out to be true if subjected to scientific study. But is it a bias, i.e., a cognitive error that gets in the way of perceiving the truth?

  • Hal, doctors say that limiting coverage to the cheap drugs would create a bias in treatment, by preventing them from using the expensive drugs in the particular cases where those drugs are more cost-effective. The question is whether doctors have a bias to make up such effects to justify keeping limits off of their treatment practice.

  • Paul Gowder

    I think this is a bit of mission creep in the bias-identification business. The doctors aren’t saying there would be a “bias in treatment.” What’s a bias in treatment? I thought biases were about beliefs…

  • Paul, most medical treatment decisions are framed in language that presumes there is a correct medical decision for each case at hand. So wrong treatment actions would be seen as a treatment bias.

  • Paul Gowder

    Fair enough, but all the insurance company is doing is constraining the doctors’ behavior, not their beliefs. It’s a bias in the sense that I may believe that it’s safer (the correct automotive decision) to drive 80 in fast traffic than 65, but the cop who pulls me over is biasing my driving.

  • Paul, yes, we might usefully think of traffic laws as biasing driving, relative to some standard for correct driving.

  • There are probably many biases specific to the medical profession which apply to this particular case. We have an emotional need and desire to think of our doctors as having near-miraculous healing powers. Even 100 years ago people thought like that. Anything that ties a doctor’s hands sounds like a major threat to our well-being because of this bias. And in this case I do mean bias in the sense we use it, an error in cognition and perception of reality.

  • Hal, to the extent that it is hard to document this emotional need to think of doctors as miracle workers, insurance companies limiting coverage to deal with it would be another example of a seen bias (the coverage limit) justified as countering an unseen bias (excess wishful delegation to doctors).

  • In the absence of a gold standard, countering an unseen bias pretty much *requires* a deliberately employed heuristic that tweaks your perceptual judgment in some particular direction, or which discards the perceptual judgment and replaces it with some standard value or function. How else would you correct an unseen bias?

    But I suggested bias be defined (in “…What’s a bias, again?”) as mistakes created by the shape of the wetware; thus this corrective factor would not be so much a “seen bias” as a “rule” or “heuristic”.

  • Eliezer, yes, if you are sure an unseen bias exists, of course you must correct it with a seen bias. The question is how sure are you the unseen bias is really there? If you keep kids away from seeing sex because you think it will corrupt them, how sure are you that seeing sex will corrupt them? And so on.

  • Carl Shulman


    A decentralized system will produce natural experiments in response to this data, as different states, employers, and other health-care providers apply or do not apply this new heuristic. Public education systems testing sex education should conduct actual controlled experiments, rolling it out in randomly selected schools. The key is to leave a control group when implementing the new heuristic against which to compare performance.

    An additional benefit of retaining a reservoir population, if the heuristic ‘overshoots’ perfect rationality and overcompensates for the original bias, is the generation of new hypotheses. Take “analytical egalitarianism,” for instance, adopted to guard against ingroup-outgroup bias and the fundamental attribution error. As the strength of the norm for the anti-bias heuristic increased, it created niches for researchers to incorporate inegalitarian assumptions into their models and significantly improve predictive power. That allows us to consider whether to modify the heuristic, but at the very least provides information that could have been suppressed by its universal application.

    To the extent that bodies of such heuristics form a part of professional training in particular disciplines, this also provides a reason to encourage interdisciplinary work.
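Carl’s randomized-rollout proposal is, in effect, a standard experimental design: randomly assign units to the new heuristic, keep the rest as a control, and compare average outcomes. A minimal sketch in Python (the function names and the 20-school example are illustrative, not from any source):

```python
import random
import statistics

def randomized_rollout(units, treat_fraction=0.5, seed=0):
    # Randomly split units (e.g. schools) into a treatment group, which
    # gets the new heuristic, and a control group kept as a baseline.
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * treat_fraction)
    return shuffled[:cut], shuffled[cut:]

def estimated_effect(treated_outcomes, control_outcomes):
    # Difference in mean outcomes between groups; a positive value
    # suggests the heuristic helped on average.
    return statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)

# Example: assign 20 hypothetical schools, half to the new curriculum.
treated, control = randomized_rollout(range(20), treat_fraction=0.5, seed=42)
```

Because assignment is random, any seen bias the heuristic introduces shows up in the treated-vs-control comparison rather than being confounded with which units chose to adopt it.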

  • Greetings:

    IMO, the basic processes for doing science (our best method for understanding nature in a predictable fashion) evolve to minimize the effects of biases of all forms as their impacts are recognized: the distinction between observational and experimental studies, the use of randomization, blinding observers and participants, assessing the potential effects of random noise through the application of statistical procedures, the anonymous peer-review publication process, and so on. Statistical procedures for assessing the potential impact of random variation continually increase in sophistication in parallel with increasing computer capability, both in power and in access. The whole discipline of analytical epidemiology developed precisely to better untangle complex cause-and-effect relationships involving rare risk factors and long lags between exposure and detection, and otherwise to understand disease in the face of great biological variability. A great deal of effort is currently being expended at the intersection between epidemiology and clinical practice in the developing form of evidence-based medicine. With the rapidly evolving systematic review process, this represents the current state of evolution of the best method for understanding “truth” in medicine.

    The Cochrane Collaboration -
    “The reliable source of evidence in health care”
    Cochrane Reviewers’ Handbook (250+ pages on-line)

    Because of biological variability, good decision making in medicine is hard. Physicians are inherently as susceptible to cognitive bias as any human, yet they are essentially limited to continually doing “n of one” experiments. The fact that these may be life-or-death decisions makes medicine a tough business, pedestal or no. The plural of anecdote isn’t data, but what alternative does the individual physician have?

    J Arya et al wrote (Arch Surg 137:1301-1303. (2002)):
    • When a physicist drops a brick out the window, it goes down—every time
    • When a cell biologist plates out endothelial cells, they grow to confluence—most of the time
    • When an aggressive surgical oncologist resects a hepatic metastasis, he or she cures the patient—some of the time
    • When a surgical intensivist infuses tumor necrosis factor–binding protein into a critically ill patient, he or she reverses multiple organ failure—almost never

  • JMG3Y, yes, if doctors in fact followed a good form of evidence-based medicine, there would of course be less concern about their possible biases. The question I highlighted in my post is the tension between the evidence we do have and individual doctor judgments which may conflict in some cases.

  • Robin, IMO that tension will always be there, more in some cases than others. No matter how good the evidence, the question of external validity will always be present. Is this patient sufficiently unique that they are in the tail of the distribution? The relative costs of the two errors are very different: I prescribe the more expensive drug when it is of no additional benefit to the patient, vs. I don’t prescribe it and it would have been of benefit. The Vioxx case illustrates the rare-subclass problem; I may believe, rightly or wrongly, that there are subtle differences between the drugs that are very difficult to detect from a statistical power perspective. Thus, I am justified in using the more expensive one.

    IMO, the most interesting question is what positive motivations there would be for physicians to adopt EBM. I suspect that more information within the professional curriculum on the weaknesses of human reasoning, the effects of cognitive biases, and how to overcome these might help. AFAIK, little or none of this information is presented now. In other words, would understanding the presence of these biases in a metacognitive way improve one’s reasoning and improve the adoption of EBM? Although many physicians would claim that the practice of medicine is based on science, and many have undergraduate degrees in science areas, I suspect that most such degrees do not provide a solid understanding of the fundamentals of the scientific method or of the history of science, both of which illustrate why science is currently done the way it is. IMO, good philosophy-of-science and history-of-science courses should be core requirements for such undergraduate degrees and for medicine.