
There is some criticism of the RAND study here (http://www.marginalrevoluti...), basically arguing that participants with significant health costs were more likely to leave the study to regain full coverage, thus skewing the results. If that criticism is correct, the entire premise of the argument collapses.


Floccina, they asked 60% of their subjects how much they exercised at the start of their participation in the study. Then they asked all of them how much they exercised at the end of the study. They folded this information into a bigger combined variable. They found that having 50% more visits to doctors and 50% more hospitalisations didn't have much effect on the bigger health variable.

It makes sense that that wouldn't have much effect on how much exercise people said they did, doesn't it?


As to exercise, Dr. Dean Edell once said that a study found that healthy people like exercise more and thus exercise more. The 10-year increase in life due to exercise seems too high, and I say this as an advocate of exercise. How did the studies separate such things?


Henry, the RAND study is available -- for free! -- as a .pdf download.


Your earlier question about edge effects doesn't apply: what they did was to randomly offer people different insurance plans. Practically everybody who was offered the plan with no copayments at all took it. Three quarters of the ones who were offered the plan with the most expensive copayments took it. They compared the people who took the plans with copayments against those without and couldn't find any important difference. They also tried to compare the people who took the copayment plans against the ones who refused and couldn't tell a difference there either.

So it was random by insuree and not by area. You get insurance from them and they offer you a plan at random, you take it or leave it.

One bias I haven't examined closely is that for 60% of their patients they did physical exams at the start and at the end to base their statistics on, but for a randomly chosen 40% they did the exams only at the end and guessed the numbers for the beginning. I haven't seen why they chose to do that. I'd expect the result would be to make any changes less statistically significant. Imagine that they guessed at baselines for all the patients, and then looked at the difference at the end compared to the beginning. Imagine that they assigned everybody the same initial state (which they didn't; they guessed from questionnaires and very general data). Imagine that everybody improves by 2%, but it gets measured as 48% of patients decreasing from the universal baseline while 52% increase. The extra noise would make the result look weaker than it is.
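To see how much a guessed baseline can matter, here is a minimal sketch with entirely made-up numbers (not the RAND data or its actual imputation method): everyone truly improves by about 2%, and we compare change scores computed from a measured baseline against change scores computed from one population-average guess.

```python
# Minimal sketch, with invented numbers, of how imputing a common baseline
# inflates the variance of change scores and weakens apparent significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100                                               # hypothetical patients
true_start = rng.normal(100, 15, n)                   # each patient's true initial score
followup = true_start * 1.02 + rng.normal(0, 2, n)    # everyone improves ~2%

# Case 1: baseline measured for each patient (with some measurement error)
change_measured = followup - (true_start + rng.normal(0, 2, n))
# Case 2: baseline "guessed" as one universal value for everybody
change_imputed = followup - true_start.mean()

for label, change in [("measured baseline", change_measured),
                      ("guessed baseline", change_imputed)]:
    t, p = stats.ttest_1samp(change, 0.0)
    print(f"{label}: mean change {change.mean():+.2f}, "
          f"sd {change.std():.1f}, p = {p:.3f}")
# Both cases show roughly the same average improvement, but the guessed
# baseline makes the spread much larger, so the same real effect can fail
# to reach statistical significance; a fair share of patients even appear
# to get worse, much like the 48%/52% split described above.
```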

The study was too small to do much about actual deaths. So they predicted deaths based on obesity, smoking, etc., and used the predictions as a measure. Since medical care as of 1982 did not do much to reduce obesity, smoking, and so on, the predicted mortality was not changed much, apart from the observed small blood pressure decrease.


"Unit and Henry, many studies looking at the effect of medicine certainly do control for obesity, etc."

In this case, I don't mean *controlling* for obesity, but trying to empirically (or theoretically for that matter) estimate to what extent healthcare and healthy living are substitutes. To what extent do increases in medical technology enable people to be more obese? Is obesity endogenous?

Maybe that's what you meant, but I wasn't sure.


OK, sorry, I was interested in the other study because it's so obviously wrong-headed. You're talking about the RAND study, which can be downloaded free. I started looking at that.

They allow medical care at various costs: free, $150/year, or $1000/year (or some percentage of income, whichever is less).

They note that without free care there were 2/3 as many doctor visits and 2/3 as many hospitalisations.

And with 2/3 the care, they got no significant difference on most of their metrics. However, when they tried to do that for subgroups -- just poor people or just sick people, etc. -- the confidence intervals got too big. There could have been important benefits from extra care for some subgroups and it wouldn't show up as statistically significant.

For a rich person the difference between free care and $150/year or even $1000/year might not matter; such people would reduce the significance for the whole study, and there weren't enough poor people to get a good baseline.
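A rough illustration of why subgroups are hard to read (the spread and group sizes here are invented, not taken from the report): the width of a confidence interval grows as the sample shrinks, roughly as one over the square root of the group size.

```python
# Sketch of why subgroup confidence intervals blow up: the half-width of a
# 95% CI for a mean scales as 1/sqrt(n). The sd and group sizes are invented.
import math

sd = 10.0                      # assumed spread of some health score
for n in (2000, 500, 100):     # full sample vs. smaller and smaller subgroups
    half_width = 1.96 * sd / math.sqrt(n)
    print(f"n = {n:4d}: estimate +/- {half_width:.2f}")
# Going from 2000 people to a 100-person subgroup widens the interval by
# sqrt(20), about 4.5x, so a benefit detectable overall can easily be lost
# inside the confidence interval for "just the poor" or "just the sick".
```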

The eleven measures were "physical health", "role functioning", "mental health", "social contacts", and "general health measures": smoking behavior, weight, cholesterol level, diastolic blood pressure level, visual acuity, and a death index.

Right offhand I wouldn't expect free psychiatry to quickly affect mental health, social contacts, or role functioning. And from 1975 through 1981, would we expect more medical care to affect weight, cholesterol, or blood pressure, or to get people to quit smoking? What did MDs do to get people to quit smoking in 1981 besides tell them to quit smoking? What did they do to get them to lose weight or gain weight? Would you expect three doctor visits to do more about that than two visits?

More later.


Robin, in that case please tell us what you're talking about and what it said, because you have left that unclear.


J, you are completely confused about which study you are talking about, and about how it worked.


"Tom, the RAND experiment was randomized, and they also independently checked on and controlled for initial health status."

I haven't paid to read that study yet. But I notice that they threw out the data for people who survived more than 6 months.

Suppose that the result of an extra day in intensive care was that the healthiest patients *didn't die*. Then those patients would be out of the study, and you're left comparing all the ones who died under the cheaper care against only the sickest of the ones who got the more expensive care.

It makes good sense to use this study when you're making policy about patients who will die within 6 months. But you shouldn't use it for patients who might survive longer than six months. The study doesn't say anything about them.
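Here is a toy simulation of that selection story, with purely hypothetical numbers (frailty scores, spending figures, and the "saves the healthiest third" assumption are all invented, not taken from the study): if the expensive care rescues the healthiest patients past the 6-month cutoff, they vanish from a sample restricted to people who died within 6 months, and the remaining comparison is biased against the expensive care.

```python
# Toy simulation of conditioning on "died within 6 months": hypothetical
# numbers only, meant to show the selection effect, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
frailty = rng.uniform(0, 1, n)                    # 0 = healthiest, 1 = sickest
expensive = rng.integers(0, 2, n).astype(bool)    # randomized to costlier care

# Assumption: the costlier care saves the healthiest third, who then survive
# past 6 months; cheaper-care patients all die within 6 months.
died_within_6mo = ~expensive | (frailty > 1 / 3)

for arm, name in [(~expensive, "cheaper care"), (expensive, "expensive care")]:
    kept = arm & died_within_6mo                  # the only patients analyzed
    print(f"{name}: {kept.sum()} deaths in sample, "
          f"mean frailty of those analyzed = {frailty[kept].mean():.2f}")
# The expensive-care arm looks no better (everyone analyzed still died, and
# they were sicker on average), but only because every patient it actually
# saved was excluded by the 6-month rule.
```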


OK, yes, the RAND experiment was randomized, but then it got better outcomes in 4 categories out of 30. I'll grant that dentistry and optometry are more predictable in nature than general medicine. That leaves hypertension and serious symptoms. I've read your argument that hypertension at .03 significance may be a fluke. But it's not 1 significant result in 30; there were 4 significant results in 30.
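As a crude sanity check (treating the 30 tests as independent, which they surely are not), one can ask how often 4 or more of 30 tests would come up "significant" at the 5% level by pure chance:

```python
# Back-of-the-envelope: chance of 4+ "significant" results in 30 tests at the
# 5% level if there were no real effects, assuming (unrealistically) that the
# tests are independent.
from math import comb

n, p = 30, 0.05
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(4, n + 1))
print(f"P(at least 4 of 30 significant by luck) ~ {prob:.3f}")   # about 0.06
# So 4 hits out of 30 is rarer than a single fluke but not wildly so, and the
# measures are correlated, so this is only a rough check, not a verdict.
```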

It could be that you are right, but the RAND study alone doesn't seem to prove it.


Tom, the RAND experiment was randomized, and they also independently checked on and controlled for initial health status.


This seems to be a result that everybody has trouble believing.

Me, I can't help wondering if more chronically ill patients were placed in the better-funded group of these studies, either by their own choices (e.g., moving to a region that paid for longer hospital stays) or by well-meaning, locally optimizing health care providers. In my experience, health care providers commonly do so. I'd call it gaming the system, except that's a little harsh when they're just trying to help sick people.

Perhaps I'm missing something, but I don't see where any of the studies prevented that.

One suggested test on the existing data: Did hospitals near the edges of the pay-for-one-extra-day regions, which are presumably more likely to experience this effect, show this effect more than hospitals far from the edges? And similarly for other measures of easy/difficult entrance into the better-funded groups.


"I'd be happy to publish elsewhere if invited."

I hope you do get invited. The US government seems to be trending Democratic at the moment, and they're the ones proposing changes to the health-care system. If you can get your ideas injected into that debate...


Speaking of investment advice... If index funds tend to do well, then doesn't that just shift the expertise away from mutual fund managers to those who define the indexes? How do the people who decide the composition of indexes decide which stocks make the cut and which don't? Could one apply their algorithm to a hypothetical unindexed stock exchange and make returns that are as good as known indexes?


To help people imagine that we could be this wrong, I'd suggest looking at investment advice. A generation ago I suspect most people believed there was value in paying a decent person with apparent expertise to manage your investments or advise you about them. Better measurement of results has changed informed opinion, but there's still a large industry selling expertise that's worth approximately zero.


Ken is saying that the prices for equivalent services in the US are greater than in other countries. He's implying that even though the US spends more on medicine than other countries, we're still getting basically the same care, but at a higher price.

One way to interpret this is that he's claiming that the supply of medicine is extremely inelastic, and that we're just allocating our supply of medicine to higher bidders in the same way that star athletes are allocated to the highest bidders. If we allocated some medicine on a basis other than ability to pay, we'd get more efficient care.

Another way to interpret this is that medicine acts like a price-discriminating monopoly. If Microsoft charges students $50 for a copy of Windows and a corporation $500, then two different prices exist for the same good. If the same bottle of pills costs $100 in the US and $10 in Canada, then the US is getting whatever benefit the pills provide less efficiently. (Intellectual property laws are not Pareto efficient; isn't there a better way to subsidize useful idea creation?)

I'm not saying any of these descriptions is accurate, but I think that's what Ken means.
