Page 1,617 of the 2,400-page law signed by President Barack Obama … creates an institute, funded with $500 million or more annually, to spur studies of which drugs, devices and medical procedures work best. … The health bill’s funding builds upon $1.1 billion approved by Congress last year for effectiveness research. The new legislation creates a nonprofit Patient-Centered Outcomes Research Institute … run by a 19-member board of governors with three representatives of drug, device and diagnostic-testing companies as well as patient advocates, doctors and the National Institutes of Health. … Its funding will start at $10 million this year and reach about $500 million in 2013.
More here. 10% of this annual budget would pay the cost of an updated RAND health insurance experiment; will they do it?! Alas, probably not; they’ll likely say that would cost too much to confirm what they already “know”: medicine’s fantastic value. Megan McArdle has been similarly failing to get Obamacare proponents to make clear predictions about what they previously suggested were its huge health benefits. Austin Frakt illustrates a further mental block:
The gold standard of the randomized experiment is not without deficiencies. Such experiments are “time consuming, expensive, and may not always be practical.” … They are also not always decisive. Even the RAND health insurance experiment (HIE) has been critiqued (and defended). That is not to suggest that it is certainly flawed (or certainly perfect), it is merely to say that variations in interpretation exist for results of randomized experiments just as they do for non-experimental studies.
Indeed, Angrist and Pischke (and I) agree with Leamer that “randomized experiments differ only in degree from nonexperimental evaluations of causal effects.” The authors add that “a well-done observational study can be more credible and persuasive than a poorly executed randomized trial.” It is for this and the other foregoing features of randomized experiments that I believe the half-billion dollars or so that some advocate spending on another RAND HIE would arguably be better spent funding well-conceived observational or natural experiment-based studies. (A half-billion dollars could fund on the order of 1,000 observational studies.)
The main complaints today about the RAND experiment are that it was done too long ago and contained too few people over too short a time. The outcome of the experiment was very clear on its initially defined health measure: those with more medicine were not significantly healthier. The main disputes arise from folks not liking that result, and offering other outcome measures after the fact. The more time passes, the more folks feel free to dismiss the RAND experiment as hopelessly out of date.
No doubt a thousand “well-conceived” observational studies, neutrally executed and interpreted, could in principle give more total info than one big experiment. But a great many funders, researchers, publishers, and meta-analysts seem much more willing to accept pro- than anti-medicine results, so having a thousand varied studies would give many thousands of opportunities for such biases to skew the results.
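Here is a minimal simulation sketch of that point; every number in it (the per-study bias, the sample sizes, a true effect of zero) is an illustrative assumption, not RAND data. Averaging a thousand biased studies shrinks the noise but not the bias, yielding a precise, confident, and wrong pooled answer, while one big unbiased experiment of the same total size recovers the truth:

```python
# Illustrative simulation (all numbers assumed): 1,000 small observational
# studies of a true zero effect, each nudged by a small pro-medicine bias,
# versus one large unbiased randomized experiment of the same total size.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.0            # assume no marginal health effect
bias = 0.05                  # small per-study pro-medicine skew (assumed)
n_studies, n_per_study = 1000, 500
n_experiment = n_studies * n_per_study

# Each observational study reports truth + bias + sampling noise.
obs_se = 1.0 / np.sqrt(n_per_study)
estimates = true_effect + bias + rng.normal(0.0, obs_se, n_studies)
pooled, pooled_se = estimates.mean(), obs_se / np.sqrt(n_studies)

# The experiment is just as noisy per subject, but unbiased.
experiment = true_effect + rng.normal(0.0, 1.0 / np.sqrt(n_experiment))

print(f"pooled observational: {pooled:.3f} +/- {2 * pooled_se:.3f}")
print(f"single experiment:    {experiment:.3f} +/- {2 / np.sqrt(n_experiment):.3f}")
# Pooling gives ~0.05 with a tiny confidence interval around the wrong value.
```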
Given the scope for biases, funding a thousand observational studies simply cannot give a clear, decisive answer. The main hope for such clarity is from just a few big experiments focused on clear health outcomes agreed on ahead of time. This is feasible, and would cost only a tiny fraction of the two trillion dollars a year we spend on medicine, but alas it probably won’t happen, because people “know” too much.
The 16 to 1 figure isn't helpful without knowing the total number of people who escaped as a fraction of the study size (which you would then compare to the observed effect size). If 17 people out of 7700 escaped, this wouldn't affect anything.
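A rough worst-case check of that arithmetic (using the comment's numbers, which are purely illustrative): even if every escapee had an extreme outcome, attrition that small can barely move an arm's mean.

```python
# Worst-case bound on how much 17 escapees out of 7,700 subjects could
# shift an arm's mean outcome, assuming outcomes normalized to [0, 1].
n_total, n_escaped = 7700, 17
outcome_range = 1.0  # assumed normalization

max_shift = (n_escaped / n_total) * outcome_range
print(f"maximum possible shift in the arm mean: {max_shift:.4f}")  # ~0.0022
# Any effect larger than ~0.2% of the outcome range survives this attrition.
```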
Is it difficult to estimate how much this compromised the study?
John and Proper, even if one allows subjects to escape an experiment, one can still track their outcomes; the two arms of the RAND experiment had very different total spending, so clearly the experiment had an effect to observe. The effect would obviously be larger if subjects could not escape, but I wouldn't have any ethical objection to preventing escape as long as subjects were paid enough on entering the experiment to compensate for such expected costs.
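For what it's worth, a minimal sketch of that tracking idea, on synthetic data with assumed numbers: analyze subjects by the arm they were assigned to, whether or not they escaped (the usual intent-to-treat approach), so escapes dilute the measured effect without biasing the comparison.

```python
# Intent-to-treat sketch on synthetic data (all parameters assumed).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
assigned_free = rng.integers(0, 2, n).astype(bool)  # randomized assignment
escaped = rng.random(n) < 0.02                       # 2% escape (assumed)
receives_free = assigned_free & ~escaped             # care actually received

# Spending responds to care received: +$400 under free care (assumed).
spending = 1000 + 400 * receives_free + rng.normal(0, 200, n)

# Compare by assignment, keeping escapees in their assigned arm.
itt = spending[assigned_free].mean() - spending[~assigned_free].mean()
print(f"intent-to-treat spending gap: ${itt:.0f}")   # ~$392: diluted, unbiased
```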