The 16 to 1 figure isn't helpful without knowing the total number of people who escaped as a fraction of the study size (which you would then compare to the observed effect size). If 17 people out of 7700 escaped, then this wouldn't affect anything.
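The base-rate point can be made concrete with arithmetic. A minimal sketch, using the hypothetical 17-of-7700 split from the comment (these are illustrative numbers, not figures from Nyman or RAND):

```python
# Hypothetical illustration: a 16:1 attrition ratio matters only relative
# to the base rate of attrition. All numbers are made up for the example.
study_size = 7700      # total participants (hypothetical figure)
left_pay_arm = 16      # voluntary dropouts from the cost-sharing arm
left_free_arm = 1      # voluntary dropouts from the free-care arm

attrition_rate = (left_pay_arm + left_free_arm) / study_size
print(f"overall attrition: {attrition_rate:.2%}")  # ~0.22% of the sample

# Even a lopsided 16:1 ratio involving only ~0.2% of subjects can shift
# the arms' average outcomes by at most a tiny amount -- which is why the
# ratio must be weighed against the observed effect size.
```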

Is it difficult to estimate how much this compromised the study?

John and Proper, even if one allows subjects to escape an experiment, one can still track their outcomes; the two arms of the RAND experiment had very different total spending, so clearly the experiment had an effect to observe. The effect would obviously be larger if subjects could not escape, but I wouldn't have any ethical objection to preventing escape as long as subjects were paid enough on entering the experiment to compensate for such expected costs.

Now, your hypothesis, if true, is worth hundreds of billions. You would think that would be enough fruit to incent someone to exploit it. Yet, most regulation prohibits such competition via mandates, presumably to protect people from not buying the appropriate amount of insurance we are 'known' to need.

The problem is not only cost and "bias," as Robin maintains. These studies have ethical implications as well. Every randomized double-blind study has "escape clauses" with no penalty whatsoever; no ethics committee or regulatory agency would approve it otherwise. This obviously skews the results of this "gold standard" of studies.

The study that Robin wants is impossible, because the study managers are liable for the health of the participants and cannot, in effect, insure one group less than the other. There is always comprehensive insurance for the participants to fall back on.

Observational studies are better, because the study Robin wants is inherently biased, not to mention undoable.

The main complaints today about the RAND experiment are that it was done too long ago and contained too few people. The outcome of the experiment was very clear on its initially defined health measure: those with more medicine were not significantly healthier. The main disputes arise from folks not liking that result, and offering other outcome measures after the fact. The more time passes, the more folks feel free to dismiss the RAND experiment as hopelessly out of date.

The following criticism doesn't fall into any of those categories.

Nyman (2007) points out that ... people in the subgroup that had to pay for their health care could voluntarily leave the study if they were sick, returning to their previous insurance regime where they may not have had to pay as much for treatment. And, indeed, he finds that 16 times as many people voluntarily left the pay subgroup as the free subgroup. This would seem to severely throw these findings into question.

From http://sciencethatmatters.com/archives/30. There are a number of good comments on that blog post.

Agreed. Thomas Cook has written about why experiments are rare/resisted in education research.

And here I thought that all of science depends on experiment -- most importantly those experiments that show that our theories are not wrong! Those that show that theories formulated centuries ago are still working. That is why we repeat famous experiments -- to prove that the explanations given are still true.

I mean, those people who argue that it is "too expensive to confirm what we already know" are obviously making the mistake of assuming that experiments done before are infallible. Just how many times do we need to repeat Millikan's oil drop experiment? Do these "advisors" even understand that? What a waste of air if they do not.

(For those who are unfamiliar, Millikan, who first showed that electrons are the fundamental charge carriers, faked his numbers, although the fundamental result of quantisation is correct. What happened then is that successive generations faked their own values to be as close to Millikan's as possible. A scandal of epic proportions when revealed.)

Not only are repeats important -- massive and expensive repeats are a requirement, because the sheer size merits extreme care in dealing with the results. How else are you going to motivate yourself to be careful in analysing what you have received? This is where the fallacy of "keeping your eggs in many baskets" comes from. (I mean, it is alright to keep your eggs in many baskets, and in many cases that will be the right course of action. However, it is not always the right method, because, like squirrels that keep acorns in many places, people will forget to scrutinise the results and so on. Hence, it may be good to have one big and well-planned event.)

Small experiments are good at giving you a rough idea that you are not completely wrong, while the real standard-bearers are the extremely huge ones. If your theory fails to predict what happens in a big experiment, it deserves to go down the drain. Also, no matter the size of the experiment, statistical blips tend to be where the most important discoveries hide. Statistical blips are easier to recognise in huge experiments than in small ones, so by logical deduction, huge experiments are essential, no matter how trivial the blips are.
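The claim that blips stand out more in huge experiments follows from how the standard error of a mean shrinks as 1/sqrt(n). A minimal sketch with invented numbers (sigma and the blip size are assumptions for illustration):

```python
import math

# The standard error of a sample mean scales as sigma / sqrt(n), so a
# deviation that drowns in noise at n=100 can be unmistakable at n=10000.
# All numbers here are invented for illustration.
sigma = 1.0   # assumed per-observation standard deviation
blip = 0.05   # a small "statistical blip" in the measured mean

for n in (100, 10_000, 1_000_000):
    se = sigma / math.sqrt(n)
    z = blip / se  # how many standard errors the blip stands out by
    print(f"n={n:>9}: standard error={se:.4f}, blip is {z:.1f} sigma")
```

At n=100 the same 0.05 blip is only half a standard error (invisible), while at a million observations it towers at 50 sigma, which is the commenter's point about scale.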

Come on, are we not in the 21st century? Do the leaders of the world not understand these historical lessons? What is history good for if people don't learn from it? Completely shameful conduct.

Maybe it's a problem of perception, of marketing. "Observational studies" sounds too clean and neutral; it doesn't give the average person a sense of the bias that can seep in.

Let's change the language, and start calling studies that aren't randomized, controlled experiments "out-of-control" experiments. Give people the image of mad scientists in league with dictatorial politicians willy-nilly pursuing their predetermined, mad results to the detriment of reason and human decency.

Use the resentment of ivory tower condescension to our advantage, get people to understand that these rebellious, grass-roots randomized-controlled experimenters are the ones who are going to take back medicine for the people.

Following the Megan McArdle link, she's replying to an Ezra Klein post where he claims that research will show that the Affordable Care Act prevented at least a hundred thousand insurance-amenable deaths between 2019 and 2039. I'm not sure how that counts as failing to make clear predictions.

Why ratiocinate about the limits of randomized controlled trials? It would be far better to do empirical work on exactly how replicable RCT results are. People should simply try to replicate them, imitating a previously-published study with minute exactitude. That would be in the empirical spirit of RCTs! (If it hasn't already been done.)

This would also help us determine whether Obama's numerous trials can, in aggregate, obviate a RAND repeat. (Theoretically, they pretty much can.) Not that the RAND repeat wouldn't be interesting anyway.

Alas, probably not; they’ll probably say that would cost too much to confirm what they already “know,” medicine’s fantastic value.

If I didn't know what the RAND study was, I would infer from your sentence that it confirmed the value of medicine.
