[I finally begin to post on the "Hansonian" view of medicine, mentioned here, here, here, here.] How useful is medicine, to the average person, wondering if he should go to the doctor or skip it? We have perhaps a million medical studies, but how do we combine them into a total estimate of the value of medicine? It is hard to see how to correct for many potential biases such as fraud, funding bias, treatment selection bias, publication selection bias, and so on.
I don't follow you. Up to 10% lower risk of dying can be explained by a 0.8 mmHg difference in blood pressure? How's that? Where does it say that the improved vision is explained by improved glasses? And just what is wrong with using the most recent available research for the "risk of dying index"? I would have expected you to use much more careful reasoning than this!
The Rand HIE study did NOT compare patients who got care free with patients who had to pay for the actual cost of their care; it investigated ONLY the effects of co-pays at the time of service. ALL the groups had excellent insurance. The MAXIMUM out-of-pocket annual cost was only $1000.
The study concluded that co-pays reduced "inappropriate or unnecessary" medical care, but also reduced "appropriate or needed" medical care.
As a doctor this seems obvious to me. Regardless of what they pay (from full cash to completely free), patients come in if 1) they think they need help and 2) they think they can afford to be seen. Generally they have no way of knowing whether care is really needed. I have been in practice 30 years and have known a number of patients who thought they needed help, couldn't afford it, and died.
I'm not sure what the hold-up is... maybe they have re-thought their stance on how this is going to actually make the company any money. Or perhaps their lawyers pointed out the liability of providing agents a platform to stick their feet in their mouths. Whatever it is, it's hardly something I'd claim as being "Well done".

www.jebshouse.com
I agree with anon. Anecdotal evidence serves almost entirely as appeal to (confirmation) bias. There is an alternative to running a new, improved RAND study. It's to say "I intuitively think" or "in my unsubstantiated opinion". That way one is appropriately labeling a model or hypothesis that hasn't been shown to be supported by quality empirical methods.
Because anyone can find an anecdote which supports his view. If you go searching for evidence to support your view and find it, it will only serve to further increase your belief in the view you set out to prove. Statistics is about rising above anecdotal evidence.
I have no doubt that anecdotal evidence contributes to disagreements about the effectiveness of medicine... it's because people actually think that anecdotal evidence is evidence.
"Do you know of a way to think about the causes of such disagreements that doesn't involve anecdotes?"
Yes, people not understanding statistics and misinterpreting study results.
Anon, what evidence is there that using anecdotal evidence will "surely" cause confirmation bias? How do alternative approaches (short of running a new, improved Rand study) reduce the problem of confirmation bias?

Part of the reason for my comment was to provide a hint about why people disagree about the effectiveness of medicine. Do you know of a way to think about the causes of such disagreements that doesn't involve anecdotes?
Anon, in all seriousness, good thread policing. *thumbs up*.
Anon, I would not claim exactly zero marginal effects of medicine. With limited statistical power, one can just infer a small effect, but not a zero effect.
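The point about limited statistical power can be sketched numerically. A minimal sketch with made-up numbers (not from the RAND study): even when a treatment has a small real benefit, a trial of modest size produces a confidence interval that includes zero, so the trial can only bound the effect, not establish that it is zero.

```python
import math

# Hypothetical numbers for illustration only (not RAND data):
# a two-arm trial comparing mortality rates.
n = 2000           # subjects per arm
p_control = 0.05   # control-arm mortality
p_treat = 0.045    # treated-arm mortality (a 10% relative reduction)

# Standard error of the difference in proportions
se = math.sqrt(p_control * (1 - p_control) / n + p_treat * (1 - p_treat) / n)
diff = p_control - p_treat
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"difference: {diff:.4f}")
print(f"95% CI: ({ci[0]:.4f}, {ci[1]:.4f})")
# The interval spans zero: this trial cannot distinguish the small
# true benefit from no benefit at all.
```

With these numbers the interval runs from slightly negative to roughly +0.018, which is exactly the situation described: consistent with zero, but also consistent with a meaningful small effect.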
I see that you only responded to part of Ash's concern:
"It's not clear what health effects would be expected in a young population with three to five years of care."
... probably biased by your own views of medicine... you are completely ignoring other interpretations and limitations of the Rand study in favor of using the study to confirm your own views.
I also see that you dismissed Jor's concerns regarding statistical power on account of your belief in the existence of biases. Why don't you think a little more about the statistical concern raised instead of side-stepping his argument by bringing up the existence of biases? The existence of the biases you bring up is ONE possible explanation, but there are other possible explanations. You really shouldn't dismiss them so quickly.
How can you be so sure you are right? What biases must you suffer from?
Two people can have rational explanations for the same phenomenon... the unbiased person will realize that and admit that the evidence is not definitive.
Thanks, Peter. Does anyone else have any anecdotes that they would like to share with us and waste more space on this board? :)
THERE IS NO SUCH THING AS ANECDOTAL EVIDENCE... anyone who searches for anecdotal evidence to back up his/her own views will surely fall victim to confirmation bias.
Here's one more piece of anecdotal evidence against the effectiveness of medicine: http://fallenpegasus.livejournal.com/612622.html.
Michael, if you think we'd see more effects in a longer experiment, I presume that you will then sign our petition for a longer version?
Just FYI, the RAND HIE only enrolled non-elderly people, a point that RH omits in the otherwise excellent precis of the experiment. (IIRC, Medicare legislation made it impossible to deny Medicare to the eligible, 65+ population, which precluded their enrollment in the HIE.)
The experiment only provided three-to-five years of insurance, as RH does note.
It's not clear what health effects would be expected in a young population with three to five years of care. Successful blood-pressure screening and remediation seems like a pretty plausible result. BP improvement was declared in advance, not mined after the fact, as an outcome variable. BTW, what would be the appropriate multivariate significance test?
A reasonable model of health care is that a lifetime of good care yields better health in the expensive, older years, and this would simply have no chance at all of showing up in the HIE.
Yes, no evidence of disease is not the same as evidence of no disease. Without conclusive evidence it can be difficult to judge.
no evidence of disease is not the same as evidence of no disease.
No evidence of disease, in a situation where there would be evidence of that disease if the disease were present, is evidence of no disease.
If I claim there is a (normal) lion in your room, and you can't see him, smell him or touch him, that is evidence that he isn't there.
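The lion example is a standard Bayesian update. A minimal sketch with illustrative numbers (the prior and the detection probability are assumptions, not from the comment): when a lion would almost certainly be noticed if present, not noticing one drives the probability of a lion way down.

```python
# Illustrative numbers only.
prior = 0.5             # prior probability a lion is in the room
p_see_if_lion = 0.99    # chance you'd notice a (normal) lion if present
p_see_if_none = 0.0     # you won't "see" a lion that isn't there

# You looked and saw nothing; update on "no sighting" via Bayes' rule.
p_nosee_if_lion = 1 - p_see_if_lion
p_nosee_if_none = 1 - p_see_if_none
posterior = (p_nosee_if_lion * prior) / (
    p_nosee_if_lion * prior + p_nosee_if_none * (1 - prior))

print(posterior)  # ~0.0099: not seeing the lion is strong evidence of no lion
```

The strength of the update depends entirely on `p_see_if_lion`: for a disease a short trial could easily miss, that probability is low, and absence of evidence is correspondingly weak evidence of absence.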
I definitely lost sight of the biases in clinical trials; however, the RAND study is still under-powered (the commonly cited HTN trials have 6,000 patients; the largest has 40,000) and really can't be used as evidence for a lack of efficacy of medicine, even on the whole.
Vaguely reminds me of something I read this week (Black Swan, or here): no evidence of disease is not the same as evidence of no disease. No evidence of efficacy cannot, in this case, be used as evidence of no efficacy --- especially when we have trials showing evidence of efficacy.
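The sample sizes mentioned above can be turned into a rough power comparison. A sketch using a normal-approximation two-proportion z-test with a hypothetical effect (5% vs. 4% event rates; the effect size and the per-arm use of these trial sizes are assumptions for illustration, not figures from the actual trials):

```python
import math

def power_two_prop(n, p1, p2):
    """Approximate power of a two-sided 5% two-proportion z-test
    with n subjects per arm (normal approximation)."""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z = abs(p1 - p2) / se        # standardized effect
    z_alpha = 1.96               # two-sided 5% critical value
    # P(Z > z_alpha - z), ignoring the negligible lower tail
    return 0.5 * math.erfc((z_alpha - z) / math.sqrt(2))

# Hypothetical effect: 5% vs. 4% event rates (illustrative only).
for n in (2000, 6000, 40000):
    print(n, round(power_two_prop(n, 0.05, 0.04), 2))
```

Under these assumptions a few thousand subjects give well under 50% power to detect the effect, while 40,000 give near-certain detection, which is the sense in which a null result from a small study mostly bounds the effect rather than excluding it.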