Sunday’s New York Times Magazine: In January 2001, the British epidemiologists … Davey Smith and … Ebrahim … noted that those few times that a randomized trial had been financed to test a hypothesis supported by results from these large observational studies, the hypothesis either failed the test or, at the very least, the test failed to confirm the hypothesis: antioxidants like …
A poor basis for your beliefs can be worse than admitting that you do not have the proper evidence to make an informed decision.
I guess the real question is what question do you truly want to answer? If you want to know whether giving free healthcare to people (aside from the very poor) results in overall health increases, the answer according to the Rand study is that it didn't make much of a difference. I do not know whether the findings could be replicated today given the advances in medicine, but it would be interesting to find out.
But if instead you want to know what it means that their health didn't improve, and the implications for the benefits of marginal increases in medicine, then GOOD LUCK dealing with all of the important confounders, since study participants were allowed to choose their marginal increases in care. This is NOT about study flaws, this is about what questions you can and cannot answer. The Rand study addressed a specific question and to that end, I do NOT believe the study was flawed.
Honestly, the Rand results don't particularly surprise me, but perhaps for very different reasons than yours.
Before you read the main idea of this paragraph, let me preface it by saying that the study population was fine for addressing the proposed main objective of the study, which was the effect of varying levels of insurance. From a public policy perspective, you want a study population that is typical of average Americans. But if you want to measure the net benefits of medicine, let's not forget that study participants were average, healthy people, and the mean age of study participants was in the early 30's. Even if there is a net benefit of medicine, medicine is not designed to improve the overall health of a healthy person. You see the results and say, aha, this proves that medicine doesn't work and must hurt as many average people as it helps. I see the results and say, yeah, what did you expect to happen?
Robin, I figure that if none of the existing studies answer my question, then I should accept that I still don't know the answer.
If we accept that we don't know, then we can decide what to do about not-knowing.
Konrad, studies that look at non-mortality outcomes give similar results.
Jor and joe, neither of you answered my question. You can find flaws with any study, but finding a flaw with every study you see does not justify your believing anything you like. You must choose some basis for your beliefs.
Giving anesthesia before surgery may not improve mortality rates, but it certainly improves quality of life. However, mortality is far easier to measure, so that's what gets studied.
If you start with the assumption that medicine is only (or mostly) about saving lives, you may come to agree with Robin Hanson. But would you try to measure the quality of policing by tracking Bad Guys Shot vs. Dollars Expended on Cops? TV dramas focus on police shoot-outs and ER docs because they're dramatic, not because they're representative.
Jor, you have pointed out a serious problem. If medicine has almost completely changed in the last 25 years, so that the 1980's studies are obsolete, what would happen if we did a new study that took 7 years? Would the results from the beginning of the study be approaching obsolescence before the study ended?
It could be argued that if medicine is progressing too fast to do statistics on the results, that it's progressing too fast.
I'm old enough to remember the 1980's. Back then we were saying that the medicine of the 1950's was not very good, it probably did almost as much harm as it did good, but since then we'd improved tremendously. If we're saying the same thing now about then, it leaves me with a certain nameless doubt....
And with no way to dispel that doubt. If new, inadequately tested methods replace old ones faster than we can test the old ones, how can we ever tell how well we're doing?
Robin, as I've mentioned repeatedly, and I think some commenters at CATO also stated -- the RAND study is so old as to be useless. Medicine, especially the kind being assessed in the RAND study, has almost completely changed since then. There are just too many new therapeutics -- drugs and interventions -- that have each individually been shown to improve mortality and reduce morbidity (many in multiple RCTs).
At the turn of the century, Osler (considered by many to be the father of American medicine) thought that there were only 5 or 6 interventions in all of medicine that physicians did that were useful. In terms of the medicine measured in the RAND study, that was probably still the case in the 70's.
If you look at the top 100 mortality- and morbidity-reducing interventions today (in the non-acute setting) and see what was available in the 70's, I'd be surprised if more than 10 of those 100 were available or known in the 70's. Hell, I'd be curious how many of the widespread interventions of the 70's went on to have rigorous support behind them later on -- probably not many.
J Thomas, thanks for the clarification. I wasn't quite sure what conclusions you were going to draw about the effect of medicine, since you said, "But the people who had no copayments received approximately 50% more healthcare," which I took as arguing with my previous statement about the ability to make causal conclusions regarding the effect of increased medical care. I'm glad we cleared that up.
In regards to hospitalizations, "Averaged across all levels of coinsurance, participants (including both adults and children) with cost sharing made one to two fewer physician visits annually and had 20 percent fewer hospitalizations than those with free care."
We would have to make some assumptions regarding the mechanism resulting in a hospitalization. Doctors' offices are not generally open on the weekend, which causes some people to go to the ER, and inability to schedule an appointment with your primary care physician could also lead someone to choose to go to the hospital. In regards to admissions, I would be curious to find out how many patients were kept for observation but not really treated for anything serious. Without the barrier of cost, many probably go to the hospital just to be safe, and when a hospital knows that your insurance is going to cover the whole visit, why not admit the patient... you would be stupid not to.
"…does it imply that medicine is mainly reactive and if you go to the doctor when you don't really need to, medicine shouldn't have an extra benefit.... if there's nothing really wrong with you, then why should medicine be able to improve your health?"
Yes, but the summary claimed that not only doctor's visits but also hospitalization increased by 50%. When there's nothing wrong with you, doctors ought to tell you there's nothing wrong with you and not send you to the hospital.
But while I looked at the details for various other sections of the report, I didn't look at that. If you have a complaint and the doctor needs tests done at a hospital, maybe that gets counted as a hospitalization. It doesn't have to imply anything is wrong with the medical system, although at first sight it would tend to imply that.
Joe, I thought I was making precisely the point you elaborated.
The study appeared to show that 50% extra doctor's visits and 50% extra hospitalizations, at the patients' initiative, did not improve their health.
Robin wants to interpret this as saying that the first 100% of doctor's visits and hospitalizations also failed on average to improve patients' health.
What I saw the RAND study showing was that the cost of the co-pays in their samples was not large enough to keep patients from getting medical assistance when they needed it. Patients might put off getting eye exams and new glasses when the expense was high, while they didn't put it off when it was free. So their vision was slightly worse. But for most things, when their health was in serious danger they were willing to pay their co-pay sums and get their treatment, whether it actually helped them or not.
It implies that the extra third of medical care that people got when it was free was probably unneeded. It says nothing about how useful the first two-thirds were.
To tell whether the first two-thirds of the medical treatments were useful, it would work better to withhold all medical care from one randomly chosen group and let the other group have medical care. Then you'd see whether medical care has, on average, a beneficial effect.
Or perhaps limit the members of one group to a number of doctor's visits and hospitalizations that's half the average for the area the study is performed over, and let the second group have as many of both as they're willing to co-pay for. Then see whether the second group is healthier on average.
To do the study correctly it would be necessary to keep the patients from paying for private medical care themselves, and keep them from getting medical care from foreign nations. They must not be given illicit medical care; if they get it sneakily they compromise the experiment. I doubt this project is politically feasible. But it could be done with volunteers, who might be subtly different from the rest of the population.
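A back-of-the-envelope way to see what such a fully randomized design could detect (a toy sketch with invented effect sizes and sample sizes, not data from any real study):

```python
import random

random.seed(0)

N = 5000            # participants per arm (hypothetical)
TRUE_EFFECT = 0.10  # assumed average health benefit of care, in SD units

# Simulate a health score for each participant: baseline noise plus,
# for the arm that receives medical care, the assumed average benefit.
withheld = [random.gauss(0.0, 1.0) for _ in range(N)]
treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]

mean_w = sum(withheld) / N
mean_t = sum(treated) / N
print(f"estimated effect: {mean_t - mean_w:.3f} (true effect {TRUE_EFFECT})")
```

Because assignment is randomized, the difference in group means is an unbiased estimate of the average effect of care; the sketch just shows that with a few thousand participants per arm, even a modest 0.1 SD benefit would be distinguishable from zero.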
If the Rand study design is inadequate to address your question of interest, then be careful about the conclusions you draw from it and don't rely on it anyway.
I am sure there are some people on here who are intelligent enough to design an ethical study for your question of interest.
J Thomas, you were so close to realizing one of the points I have been trying to make. I agree that patients with no co-pay will see the doctor more often than those who have to pay, but you have to be careful about conclusions drawn about the actual amount of medicine received. If patients with no co-pay see doctors more often than they really need to, does this mean more medicine has no net effect... or does it imply that medicine is mainly reactive and if you go to the doctor when you don't really need to, medicine shouldn't have an extra benefit.... if there's nothing really wrong with you, then why should medicine be able to improve your health? Also, we have to remember that you don't actually get treated with medicine every time you go to the doctor, especially if there's nothing really wrong with you. Thus, doctor's visits don't even directly equate to an increase in medicine received.
In the Rand study, insurance level was randomized, healthcare received was not. Thus you have to be very cautious about any CAUSAL conclusions drawn from the amount of healthcare actually received.
But the people who had no copayments received approximately 50% more healthcare. They asked for 50% more and got it. And their health was not particularly improved according to these various measures, most of which look worthless to me. (11 scales, but 3 of them were for mental health, where nobody particularly expects a few years of psychotherapy to do much, and some of them were about people's view of how healthy they were, or how good they thought their healthcare was, etc. And how well the extra 50% of doctor's visits or hospital stays helped them quit smoking or lose weight.)
The obvious implication to me is that patients who have a moderate co-pay will see doctors when they really need them, and patients with no co-pay will see doctors more often than they need them.
This is probably a valid result whether the study actually shows it or not.
g, if the other editors, Nick and Eliezer, told me that they thought I was drifting off topic, I would listen carefully. We get complaints about being too specific as well as about being too general. I take "bias" to be "avoidable error" and it seems to me beliefs about medicine are especially prone to avoidable error.
Laura, believing "established observational data" surely isn't a "cognitive bias" in any useful sense. It's usually the right thing to do. Gotchas are not biases. As has been pointed out elsewhere on this blog, even very noisy data are better than no data.
And if it turns out that in fact epidemiological studies are so noisy and biased that there's no useful information to be extracted from them? Why, then believing epidemiological studies is a mistake, just as believing horoscopes is a mistake. But there's no "astrology bias", although there may be biases (e.g., confirmation bias) that make it easier for astrology to get believers; and while telling us that epidemiological studies are useless is valuable (provided it's true) it's not clear that it offers much in the way of useful general cognitive lessons.
Of course, Robin is in overall charge of this place, and even if he wants to use it to post pictures of kittens or descriptions of his favourite movies then I've got no grounds for complaint. It just seems to me that there's some divergence between the stated mission of "Overcoming Bias" and some of what it's used for.
Jor and Joe, I rely most heavily on the RAND aggregate experiment; what would you have me rely on?
I agree with Jor. A study that produces a net result of 'no effect' doesn't mean that everyone in the study had 'no effect'. It may mean that some participants were helped a great deal while others were unaffected or even harmed, with everything in between averaging out.
It indicates that more study is required to identify the individual cases where little or no benefit was observed, and to improve on them.
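That a null average can conceal offsetting individual effects is easy to illustrate numerically (a toy sketch with made-up numbers, not RAND data): if as many participants are helped as are harmed, and by the same amount, the mean effect is exactly zero.

```python
# Toy illustration: individual treatment effects that average to zero.
# These counts and magnitudes are invented for illustration only.
effects = [+1.0] * 50 + [-1.0] * 50 + [0.0] * 100  # helped / harmed / unaffected

mean_effect = sum(effects) / len(effects)
helped = sum(1 for e in effects if e > 0)
harmed = sum(1 for e in effects if e < 0)

print(f"mean effect: {mean_effect:+.2f}")   # prints 'mean effect: +0.00'
print(f"helped: {helped}, harmed: {harmed}")
```

The aggregate number alone cannot distinguish this population from one in which medicine did nothing to anyone, which is why subgroup analysis is needed before concluding 'no effect'.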