An editor asked me to write this op-ed, but then never responded when I sent it to him. I submitted it to several other editors, and have now run out of contacts to try. So I'm giving up and posting it here:
Europeans in 1600 likely prided themselves on the ways in which their "modern" medicine was superior to what "primitives" had to accept. But we today aren't so sure: seventeenth-century medical theory was based on the four humors, and bloodletting was a common treatment. When we look back at those doctors, we think they may well have done more harm than good.
When we look at our own medical practices, however, we tend to be confident that we are in good hands, and that the money that goes to buying medical care (19.7% of our G.D.P. in 2020) is well spent. Most of us know of a family member who credits their life to modern medicine. My own dad said this about his pacemaker, and I, too, am a regular customer: I'm vaccinated, boosted, and recently had surgery to fix a broken arm.
We believe in medicine, and this faith has comforted us during the pandemic. But seventeenth-century patients likely believed just as strongly; they could probably also have named a relative cured by bloodletting. Yet health outcomes are typically too random for the experience of one family to justify medical confidence. How do we know our belief is justified?
This might seem like a silly question: in seventeenth-century Europe, the average lifespan was in the low 30s. Now it's in the low 80s. Isn't that difference due to medicine? In fact, the consensus is now that historical lifespan gains are better explained by nutrition, sanitation, and wealth.
So let's turn to medical research. Every year, there are a million new medical journal articles reporting benefits of specific medical treatments. That's something they didn't have in the seventeenth century. Unfortunately, we now know the medical literature to be plagued by serious biases, such as data-dredging, p-hacking, selection, attrition, and publication biases. For example, in a recent attempt to replicate 53 findings from top cancer labs, 30 papers could not be replicated at all, due to issues like vague protocols and uncooperative authors, and less than half of the rest yielded results like the original findings.
But surely modern science must have some reliable way to study the aggregate value of medicine? Yes, we do. The key is to keep a study so simple, pre-announced, and well-examined that there isn't much room for authors to "cheat" via data-dredging, p-hacking, and the like. Large trials that randomly induce some people to consume more medicine overall, and then track how their health differs from a control population, are the key to reliable estimates. If trials are big and expensive enough, with many patients over many years, no one can possibly hide their results in a file drawer.
Thankfully, we do have a few such studies. Yes, they have limits. They may not include all patient ages or all kinds of medical care, and they can only see the marginal health effects of the medicine that some get and others do not. But for now, they are the best we have.
Which brings us to the biggest medical news of 2021, at least for those less inclined to give medicine the benefit of the doubt. We now have one new such study: the Karnataka hospital insurance experiment. From May 2015 to August 2018, 52,293 non-poor but otherwise typical residents of the Indian state of Karnataka were randomly assigned to get free hospital insurance, an option to buy such insurance, or a control condition.
While the study saw large effects on hospital insurance purchases and on hospital visits, when looking at 82 health outcome changes over a five-year period, the study authors "cannot reject the hypothesis that the distribution of p-values from these estimates is consistent with no differences (P=0.31)." That is, they saw no net effects; people who got more medicine were not on average healthier.
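To get a feel for what "the distribution of p-values is consistent with no differences" means, here is a minimal sketch. The study's exact aggregation method isn't described above, so this assumes one common approach: comparing the 82 p-values against the Uniform(0, 1) distribution they should follow when no treatment has any effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Under the null of no treatment effect, each outcome's p-value is
# (approximately) uniform on [0, 1]. Simulate 82 such null p-values,
# matching the number of health outcomes in the Karnataka study.
null_pvals = rng.uniform(0.0, 1.0, size=82)

# One common aggregate check (an assumption here, not necessarily the
# authors' method): a Kolmogorov-Smirnov test of the p-values against
# Uniform(0, 1). A large combined p-value means we cannot reject
# "no differences on any outcome".
_, combined_p_null = stats.kstest(null_pvals, "uniform")
print("null case combined p:", round(combined_p_null, 2))

# Contrast: if treatment helped on many outcomes, p-values would pile
# up near zero, and the same test would reject uniformity.
effect_pvals = rng.beta(0.3, 1.0, size=82)  # skewed toward 0
_, combined_p_effect = stats.kstest(effect_pvals, "uniform")
print("effect case combined p:", combined_p_effect)
```

The Karnataka result (P=0.31) looks like the first case: the 82 p-values, taken together, are indistinguishable from what pure chance produces.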
This result is, alas, consistent with most other high-quality randomized aggregate medical experiments. For example, few health effects were seen in the 1974-1982 RAND health insurance experiment, which followed 7,700 U.S. residents for 3-5 years each, or in the 2008 Oregon Health Insurance Experiment, in which 30,000 of 75,000 poor Oregon residents were randomly allowed to apply for Medicaid. In both studies, more health care did not translate into more health.
A 2019 U.S. tax notification experiment did, maybe, see an effect. When 0.6 million of 4.5 million eligible households were randomly not sent a letter warning of tax penalties, the households that were warned were 1.1 percentage points more likely to buy insurance, and 0.06 percentage points less likely to die, over the next two years. But that last death result was significant only at the 10% level, which is marginal. So there's a decent chance this study is just noise.
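To see the implied size of that effect, here is my own back-of-envelope arithmetic (a standard Wald-style ratio applied to the two numbers above, not a calculation from the study itself):

```python
# Back-of-envelope arithmetic (mine, not the study's): if warning
# letters raised insurance coverage by 1.1 percentage points and cut
# two-year mortality by 0.06 percentage points, a Wald-style ratio
# attributes the mortality drop to the marginal people induced to buy
# insurance.
coverage_lift_pp = 1.1    # percentage-point rise in coverage
mortality_drop_pp = 0.06  # percentage-point fall in two-year mortality

# Implied two-year mortality reduction per induced enrollee:
implied_effect = mortality_drop_pp / coverage_lift_pp
print(f"implied reduction per induced enrollee: {implied_effect:.1%}")
```

That ratio comes out near 5.5%, a surprisingly large per-person effect, which is another reason to suspect the result is noise rather than signal.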
Bottom line: we spend 20% of G.D.P. on medicine, most people credit it for their long lives, and millions of medical journal articles seem to confirm its enormous value. Yet our lives are long for other reasons, those articles often show huge biases, and when we look to our few best aggregate studies to assuage our doubts, they do no such thing. And the biggest news of 2021 is: we now have one more such study.
It seems we have three options: we can stick our heads in the sand and ignore this unwelcome news, we can accept the difficult truth that medicine just isn't that useful, or we can hope there's some mistake here and check again. A mere 0.1% of U.S. annual medical spending, or $4.2 billion, could fund a far larger experiment, and hopefully settle the matter. What do you choose?
The "low-hanging fruit" was picked some time ago with improved sanitation, vaccination, nutrition; and now we're faced with spending larger and larger amounts of money on interventions that add years, or months, to our life expectancy instead of decades. Even worse, our medical successes may be worsening our genetic fitness.