“Knowing” Too Much

Page 1,617 of the 2,400-page law signed by President Barack Obama … creates an institute, funded with $500 million or more annually, to spur studies of which drugs, devices and medical procedures work best. … The health bill’s funding builds upon $1.1 billion approved by Congress last year for effectiveness research. The new legislation creates a nonprofit Patient-Centered Outcomes Research Institute … run by a 19-member board of governors with three representatives of drug, device and diagnostic-testing companies as well as patient advocates, doctors and the National Institutes of Health. … Its funding will start at $10 million this year and reach about $500 million in 2013.

More here.  10% of this annual budget would pay the cost of an updated RAND health insurance experiment; will they do it?!  Alas, probably not; they’ll probably say that would cost too much to confirm what they already “know”: medicine’s fantastic value.  Megan McArdle has been similarly failing to get Obamacare proponents to make clear predictions about what they previously suggested were its huge health benefits.  Austin Frakt illustrates a further mental block:

The gold standard of the randomized experiment is not without deficiencies. Such experiments are “time consuming, expensive, and may not always be practical.” … They are also not always decisive. Even the RAND health insurance experiment (HIE) has been critiqued (and defended). That is not to suggest that it is certainly flawed (or certainly perfect), it is merely to say that variations in interpretation exist for results of randomized experiments just as they do for non-experimental studies.

Indeed, Angrist and Pischke (and I) agree with Leamer that “randomized experiments differ only in degree from nonexperimental evaluations of causal effects.” The authors add that “a well-done observational study can be more credible and persuasive than a poorly executed randomized trial.” It is for this and the other foregoing features of randomized experiments that I believe the half-billion dollars or so that some advocate spending on another RAND HIE would arguably be better spent funding well-conceived observational or natural experiment-based studies. (A half-billion dollars could fund on the order of 1,000 observational studies.)

The main complaints today about the RAND experiment are that it was done too long ago and contained too few people over too short a time.  The outcome of the experiment was very clear on its initially defined health measure: those with more medicine were not significantly healthier.  The main disputes arise from folks not liking that result, and offering other outcome measures after the fact.  The more time passes, the more folks feel free to dismiss the RAND experiment as hopelessly out of date.

No doubt a thousand “well-conceived” observational studies, neutrally executed and interpreted, could in principle give more total info than one big experiment.  But since a great many funders, researchers, publishers, and meta-analysts seem much more willing to accept pro- than anti-medicine results, a thousand varied studies would give many thousands of opportunities for such biases to skew the results.

Given the scope for biases, funding a thousand observational studies simply cannot give a clear decisive answer.  The main hope for such clarity is from just a few big experiments focused on clear health outcomes agreed on ahead of time.  This is feasible, and would cost only a tiny fraction of the two trillion a year we spend on medicine, but alas probably won’t happen, because people “know” too much.
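The worry about many small biases aggregating can be made concrete with a toy simulation. All the numbers below are illustrative assumptions, not estimates of real publication rates: suppose medicine’s true marginal health effect is zero, each study measures it with noise, and anti-medicine results are only half as likely to be reported as pro-medicine ones.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.0    # assumed: true marginal health effect is zero
N_STUDIES = 1000     # roughly the number of observational studies Frakt mentions
NOISE_SD = 1.0       # noise in each study's estimate
PUBLISH_ANTI = 0.5   # assumed: anti-medicine results reported half as often

# Each observational study draws a noisy estimate; selective reporting
# keeps all pro-medicine (positive) results but only some negative ones.
published = []
for _ in range(N_STUDIES):
    estimate = random.gauss(TRUE_EFFECT, NOISE_SD)
    if estimate > 0 or random.random() < PUBLISH_ANTI:
        published.append(estimate)

pooled = statistics.mean(published)
print(f"pooled estimate across published studies: {pooled:+.3f}")

# One big pre-registered experiment with the same total sample:
# no selective reporting, so its estimate stays near the truth.
big = statistics.mean(random.gauss(TRUE_EFFECT, NOISE_SD)
                      for _ in range(N_STUDIES))
print(f"single large experiment estimate:        {big:+.3f}")
```

Even a mild reporting bias pushes the pooled estimate of the thousand studies noticeably above zero, while the single large experiment stays close to the truth; that is the sense in which many small studies offer many more openings for bias than one big one.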

  • Roland

    Alas, probably not; they’ll probably say that would cost too much to confirm what they already “know,” medicine’s fantastic value.

    If I didn’t already know what the RAND study was, from your sentence alone I would infer that it confirmed the value of medicine.

  • Microbiologist

    Why ratiocinate about the limits of randomized controlled trials? It would be far better to do empirical work on exactly how replicable RCT results are. People should simply try to replicate them, imitating a previously-published study with minute exactitude. That would be in the empirical spirit of RCTs! (If it hasn’t already been done.)

    This would also help us determine whether Obama’s numerous trials can, in aggregate, obviate a RAND repeat. (Theoretically, they pretty much can.) Not that the RAND repeat wouldn’t be interesting anyway.

  • Unnamed

    Following the Megan McArdle link, she’s replying to an Ezra Klein post where he claims that research will show that the Affordable Care Act prevented at least a hundred thousand insurance-amenable deaths between 2019 and 2039. I’m not sure how that counts as failing to make clear predictions.

  • mikem

    Maybe it’s a problem of perception, of marketing. “Observational studies” sounds too clean and neutral; it doesn’t give the average person a sense of the bias that can seep in.

    Let’s change the language, and start calling studies that aren’t randomized, controlled experiments “out-of-control” experiments. Give people the image of mad scientists in league with dictatorial politicians willy-nilly pursuing their predetermined, mad results to the detriment of reason and human decency.

    Use the resentment of ivory tower condescension to our advantage, get people to understand that these rebellious, grass-roots randomized-controlled experimenters are the ones who are going to take back medicine for the people.

  • burning.flamer

    And here I thought that all of science depends on experiment, most importantly those experiments that show that our theories are not wrong! Those that show that theories formulated centuries ago still work. That is why we repeat famous experiments: to check that the explanations given are still true.

    I mean, those people who argue that it is “too expensive to confirm what we already know” are obviously making the mistake of assuming that experiments done before cannot be fallible. Just how many times do we need to repeat Millikan’s oil drop experiment? Do these “advisors” even understand that? What a waste of air if they do not.

    (For those who are unfamiliar: Millikan, who first showed that electrons are the fundamental charge carriers, faked his numbers, although the fundamental result of quantisation is correct. What happened then was that successive generations faked their own values to be as close to Millikan’s as possible. A scandal of epic proportions when it was revealed.)

    Not only are repeats important — massive and expensive repeats are a requirement, because sheer size forces extreme care in dealing with the results. How else are you going to motivate yourself to be careful in analysing what you have received? This is where the fallacy of “keeping your eggs in many baskets” comes from. (I mean, it is all right to keep your eggs in many baskets, and in many cases that will be the right course of action. However, it is not always the right method, because, like squirrels that hide acorns in many places, people will forget to scrutinise the results and so on. Hence, it may be better to have one big and well-planned experiment.)

    Small experiments are good at giving you a rough idea that you are not completely wrong, while the real standard bearers are those extremely huge ones. If your theory fails to predict what happens in a big experiment, it deserves to go down the drain. Also, no matter the size of the experiment, statistical blips tend to be where the most important discoveries hide. Statistical blips are easier to recognise in huge experiments than in small ones, so huge experiments are essential, no matter how trivial the blips seem.

    Come on, are we not in the 21st century? Do the leaders of the world not understand these historical lessons? What is history good for if people don’t learn from them? Completely shameful conduct.

  • http://www.permut.wordpress.com Michael Bishop

    Agreed. Thomas Cook has written about why experiments are rare/resisted in education research.

  • John Maxwell IV

    The main complaints today about the RAND experiment are that it was done too long ago and contained too few people. The outcome of the experiment was very clear on its initially defined health measure: those with more medicine were not significantly healthier. The main disputes arise from folks not liking that result, and offering other outcome measures after the fact. The more time passes, the more folks feel free to dismiss the RAND experiment as hopelessly out of date.

    The following criticism doesn’t fall into any of those categories.

    Nyman (2007) points out that … people in the subgroup that had to pay for their health care could voluntarily leave the study if they were sick, returning to their previous insurance regime where they may not have had to pay as much for treatment. And, indeed, he finds that 16 times as many people voluntarily left the pay subgroup as the free subgroup. This would seem to severely throw these findings into question.

    From http://sciencethatmatters.com/archives/30. There are a number of good comments on that blog post.

    • Proper Dave

      The problem is not only cost and “bias,” as Robin maintains. These studies have ethical implications as well.
      Every randomized double-blind study has “escape clauses” with no penalty whatsoever; no ethics committee or regulatory agency would approve it otherwise.
      This obviously skews the results of this “gold standard” of studies.

      The study that Robin wants is impossible, because the study managers are liable for the health of the participants and cannot in effect insure one group less than the other. There is always comprehensive insurance that the participants can fall back on.

      Observational studies are better, because the study Robin wants is inherently biased, not to mention undoable.

  • Eric Falkenstein

    Now, your hypothesis, if true, is worth hundreds of billions. You would think that would be enough fruit to incent someone to exploit it. Yet, most regulation prohibits such competition via mandates, presumably to protect people from not buying the appropriate amount of insurance we are ‘known’ to need.

  • http://hanson.gmu.edu Robin Hanson

    John and Proper, even if one allows subjects to escape an experiment, one can still track their outcomes; the two arms of the RAND experiment had very different total spending, so clearly the experiment had an effect to observe. The effect would obviously be larger if subjects could not escape, but I wouldn’t have any ethical objection to preventing escape as long as subjects were paid enough on entering the experiment to compensate for such expected costs.

    • Jess Riedel

      The 16 to 1 figure isn’t helpful without knowing the total number of people who escaped as a fraction of the study size (which you would then compare to the observed effect size). If 17 people out of 7700 escaped, then this wouldn’t affect anything.

      Is it difficult to estimate how much this compromised the study?