Supplements Kill

Curious to see what we know about the health benefits of vitamin supplements, I went looking with my assistant David Ortiz. We found this fantastic 2007 JAMA meta-analysis of 68 randomized trials (385 publications) covering about 233,000 subjects. Bottom line: on average supplements increase an ordinary adult’s chance of dying by 7%. While selenium reduces death rates by 0.2%, beta carotene increases them by 9%, vitamin C and E by 6%, and vitamin A by 20%. Maybe you should take selenium, but mostly just stay away from supplements. From now on, I’ll avoid multivitamins. Quotes:

We searched electronic databases and bibliographies published by October 2005. All randomized trials involving adults comparing beta carotene, vitamin A, vitamin C (ascorbic acid), vitamin E, and selenium … were included. … Randomization, blinding, and follow-up were considered markers of bias in the included trials. … We included 68 randomized trials with 232 606 participants (385 publications). When all … were pooled together there was no significant effect on mortality.

Multivariate meta-regression analyses showed that low-bias risk trials (RR, 1.16; 95% CI, 1.05-1.29) and selenium (RR, 0.998; 95% CI, 0.997-0.9995) were significantly associated with mortality. In 47 low-bias trials with 180 938 participants, the antioxidant supplements significantly increased mortality (RR, 1.05; 95% CI, 1.02-1.08). In low-bias risk trials, after exclusion of selenium trials, beta carotene (RR, 1.07; 95% CI, 1.02-1.11), vitamin A (RR, 1.16; 95% CI, 1.10-1.24), and vitamin E (RR, 1.04; 95% CI, 1.01-1.07), singly or combined, significantly increased mortality. Vitamin C and selenium had no significant effect on mortality. …

The pooled effect of all supplements vs placebo or no intervention in all randomized trials was not significant (RR, 1.02; 95% CI, 0.98-1.06). … Univariate meta-regression analyses revealed significant influences of dose of beta carotene (RR, 1.004; 95% CI, 1.001-1.007; P = .012), dose of vitamin A (RR, 1.000006; 95% CI, 1.000002-1.000009; P = .003), dose of selenium (RR, 0.998; 95% CI, 0.997-0.999; P = .002), … on mortality. None of the other covariates (dose of vitamin C; dose of vitamin E; single or combined antioxidant regimen; duration of supplementation; and primary or secondary prevention) were significantly associated with mortality. …

In trials with low-bias risk, mortality was significantly increased in the supplemented group (RR, 1.05; 95% CI, 1.02-1.08). … In high-bias risk trials (low methodological quality in >=1 of the 4 components), mortality was significantly decreased in the supplemented group (RR, 0.91; 95% CI, 0.83-1.00). …

At event rates below 1%, the Peto odds-ratio method appears to be the least biased and most powerful method when there is no substantial imbalance in treatment and control group sizes within trials. … When we applied Peto odds ratio, we found even stronger support for detrimental effects of the supplements (for all 68 trials: 1.05; 95% CI, 1.02-1.08; for the 47 low-bias risk trials: 1.07; 95% CI, 1.04-1.10; after exclusion of high-bias risk trials and selenium trials, for beta carotene: 1.09; 95% CI, 1.06-1.13; for vitamin A: 1.20; 95% CI, 1.12-1.29; for vitamin C: 1.06; 95% CI, 0.99-1.14; and for vitamin E: 1.06; 95% CI, 1.02-1.10).
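For readers unfamiliar with the method named in that quote, the Peto one-step odds ratio for a single trial’s 2×2 table can be sketched as follows; the function name and the example death counts are hypothetical illustrations, not numbers from the paper:

```python
import math

def peto_odds_ratio(deaths_tx, n_tx, deaths_ctl, n_ctl):
    """Peto one-step odds ratio for one 2x2 table: exp((O - E) / V)."""
    n = n_tx + n_ctl
    events = deaths_tx + deaths_ctl
    o = deaths_tx                      # observed deaths in the treatment arm
    e = n_tx * events / n              # expected deaths under the null hypothesis
    # hypergeometric variance of the treatment-arm death count
    v = (n_tx * n_ctl * events * (n - events)) / (n ** 2 * (n - 1))
    return math.exp((o - e) / v)

# Hypothetical trial: 60/1000 deaths on supplements vs 50/1000 on placebo
print(round(peto_odds_ratio(60, 1000, 50, 1000), 3))  # 1.212
```

With rare events and balanced arms this tracks the ordinary odds ratio closely, which is why the authors use it as a robustness check.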

Added 9:30p: I’m reminded that Phil Goetz complained the study should have tried to fit higher order moments in dose-response estimates, because “mean values used in the study of both A and E are in ranges known to be toxic.” Well, not only is it hard to believe that the authors of 385 studies chose to test dosage levels known to be deadly, it has been three years since this JAMA paper and no critic has yet bothered to redo the analysis with higher order moments, or restricted to low-dose studies. Really, for any statistical analysis you can always complain that they didn’t try some particular variation you like. If you think some further analysis would be insightful, it should usually be up to you to try it, not up to them to try all possible variations you can imagine.

  • Carl Shulman

    There was a Less Wrong discussion of this study in March.

  • tylerh

    I agree this shows there is little support for taking supplements. But why does this support avoiding supplements?

    Perhaps this got compressed out of the summary and is discussed in the article, but ill people are more likely to seek interventions. So selection bias dictates that most of these studies will show increased mortality, unless the study population is VERY carefully drawn from the general population.

    For illustration, look at the mortality rates for people taking chemotherapy drugs, even in carefully blinded, randomized trials.

  • Jim Babcock

    Hey, I remember that study. It was previously discussed and refuted by PhilGoetz on Less Wrong, in Even if you have a nail, not all hammers are the same. The problem is that it uses a linear regression, inappropriately. Vitamins have a J-shaped dose-response curve: there’s a large initial benefit from avoiding deficiency, then a flat region where there’s neither deficiency nor overdose, and then increasing mortality as the dose gets excessive. And some of the doses used in the study were quite excessive. Quoting from that analysis:

    The mean values used in the study of both A and E are in ranges known to be toxic. The maximum values used were ten times the known toxic levels, and about 20 times the beneficial levels.
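    The J-shape objection can be illustrated with a toy example (the dose and mortality numbers below are invented for illustration, not taken from the study): a single least-squares line fit over the whole dose range reports harm, even though the low-dose region shows benefit.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

doses = [0, 1, 2, 3, 4, 5, 6, 7, 8]
# J-shaped relative mortality: deficiency harms at dose 0, flat middle,
# rising toxicity at the high end (made-up numbers)
mortality = [1.10, 1.00, 0.98, 0.98, 0.98, 0.99, 1.03, 1.10, 1.25]

overall = slope(doses, mortality)       # positive: "supplements kill"
low = slope(doses[:5], mortality[:5])   # negative: low doses help

print(overall > 0, low < 0)  # True True
```

    A piecewise (or higher-order) fit, as Goetz suggests, would let the two regimes show up separately instead of averaging them into one slope.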

  • Jim Babcock

    This is a great example of one of the major problems with medical research today. When a study is refuted, it sticks around, and keeps getting cited, so there’s no easy way to find the refutations.

  • jsalvatier

    I find it rather baffling that Robin doesn’t actively read LessWrong.

  • matt

    Robin read and commented on that post.

    One can complain about empirical studies in dozens of ways. Yes, for any linear regression one can complain that they should have included higher order moments for all of the variables. But if readers can feel justified in ignoring any analysis for which one can make such a complaint, then readers can feel justified in ignoring pretty much any such data analysis. That is way too low a standard for ignoring data.

    If you suspect that this lack has seriously skewed the results of some particular study, then you should get the data and do your own analysis the way you think it should be done, and then publish that. Then readers can at least compare the prestige of the two publications in deciding who is right.

  • the quiet one

    I find it rather baffling that someone who can’t spell ‘selenium’ judges themselves competent to write off supplements as killers.

  • Jim Babcock

    Robin, you really can’t expect bloggers to redo the statistical analysis; it’d be a lot of work for not much status payout, and just getting the full article text costs $30. But you could do it yourself. So please, if you really think that there will still be an increase in mortality after the error is corrected, then redo the analysis using a piecewise regression as described in PhilGoetz’s article, and publish the result. Since this article supports one of your favorite conclusions (that we spend too much on medicine), it will look like bias if you don’t.

    • Constant

      Robin, you really can’t expect bloggers to redo the statistical analysis; it’d be a lot of work for not much status payout

      This also applies to academics. Quoting from David Freedman:

      … the vast majority of published research is never replicated or validated, or if it is, there is no record of it in research journals. All but the most prominent research tends to enter the records and forever persist as apparently legitimate by default. Martinson estimates that more than 95 percent of medical research findings aren’t replicated. No wonder: replication is more or less unfundable, and if someone does it on his own nickel, the results probably won’t come to light. Even a study that fails to replicate a published result, stated Nature in an editorial, “is unlikely ever to be published, or even submitted for publication.”…

    • http://silasx.blogspot.com Silas Barta

      Good point. I’d add that even if I, the random blogger, actually did this re-analysis, and did it correctly, what journal would actually publish me? You need the right credentials first.

    • http://entitledtoanopinion.wordpress.com TGGP

      Robin should provide Phil with the data so he can analyze it.

  • sshervais

    I wonder what proportion of these studies used ‘lab quality’ vs ‘commercial’ supplements. They were done at a time when most of our commercial supplement pills were coming from China, from sources now held to be suspect.

  • Doug Colkitt

    Rerun with log-transformed values. Virtually all data that take only non-negative values should be log-transformed before any statistical analysis.
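    For illustration (the dose figures below are made up): on the raw scale a few extreme doses dominate the arithmetic mean, while on the log scale they do not.

```python
import math

# Hypothetical right-skewed dose data (mg): two extreme doses dominate
doses = [10, 12, 15, 20, 25, 400, 5000]

arith_mean = sum(doses) / len(doses)
# geometric mean = exp(mean of the logs), the natural center on a log scale
geo_mean = math.exp(sum(math.log(d) for d in doses) / len(doses))

print(round(arith_mean), round(geo_mean))  # 783 56
```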

  • http://disputedissues.blogspot.com Stephen R. Diamond

    Why would hundreds of medical researchers (on whose studies the meta-analysis was based) choose variations likely (according to the critics) to prove harm, when the incentives in medical research go in the opposite direction by creating pressures to prove efficacy, which here means decreased mortality? Such a question needs to be asked because the inability to answer it bears on the probability you assign to the claim that the underlying studies’ doses were toxic.

    The medical consensus is that the doses “officially” set as toxic are far too low, and doses two or three times the maximum dose used in the study of vitamin A are, for specific conditions, commonly prescribed by doctors without the patient experiencing toxic reactions.

    Toxicity, moreover, has well-defined symptoms. Are we to assume that the subjects developed toxic reactions that the doctors conducting the underlying studies disregarded? This, repeated in hundreds of underlying studies?

    The study’s severe critics must say: high doses produce a kind of asymptomatic life-shortening toxicity. But this is an assumption that its asserter must prove, when toxicity producing immediate symptoms is the only kind of toxicity that anyone has been able to identify.

  • Jess Riedel

    Although I think Robin is generally right here (that if you can justify dismissing this study, then you can justify ignoring just about any study you don’t like), it seems like a waste of time to argue about it. Why not just find out?

    How much would it cost for LW/OB readers to commission someone to redo the analysis with PhilGoetz’s objection in mind? Employing one grad student for a month only costs about $2000 (and we could take donations through a WePay-like service which only charges credit cards if enough people pledge to donate).

    • Constant

      Although I think Robin is generally right here (that if you can justify dismissing this study, then you can justify ignoring just about any study you don’t like)

      I don’t actually see this. I think Robin’s reply can be turned back on itself: if you can brush off Phil’s criticism by saying “you can always complain that they didn’t try some particular variation you like”, then you can brush off just about any criticism you want. 

      I think that, on the contrary, not all criticisms of an analysis are equal. Some are better than others. And choosing an appropriate analysis is not a question of personal taste (“some particular variation you like”). There really are good reasons for choosing one over another, apart from arbitrary whim. If it were a matter of whim, it would not be science. Given that some are better than others, it is possible to err, and possible for someone like Phil to catch the error, and that catch does not deserve to be brushed off as if it were a mere idiosyncratic fancy.

      Robin suggests producing a competing analysis. Well, suppose one is produced. Then what? How are we to decide between the two analyses? How else but by an argument much like the one Phil already made? But if such an argument should carry weight then (and it must, for without it, we cannot choose between analyses), then logically it must carry weight now.

  • Ray

    If Vitamin A, for example, is not bad for a person in and of itself, then taking a Vitamin A supplement would not be bad for a person.

    So in all of this condensed data, what makes the difference between getting Vitamin A naturally, and via a supplement?

    Are supplements somehow themselves harmful? I haven’t seen that stated outright. Is there an inordinate amount of unhealthy people taking them in a last ditch effort to regain their vitality? Is it simply a matter of dosage? Maybe a large percentage of people are taking supplements, and behaving unhealthily in other ways under the misconception that the supplements are offsetting the behavior.

    But since vitamins are themselves not harmful, it seems rather dishonest to dismiss vitamin supplements as unhealthy simply because they are supplements.

    • http://williambswift.blogspot.com/ billswift

      Enough vitamin A is toxic no matter the source. The problem with supplements is that you are getting far more of the vitamin than in most natural foods (also in a more concentrated dose, if that makes a difference). I remember reading about a health food faddist about 15 or 20 years ago who had managed to eat enough carrots in a short enough time to encounter vitamin A toxicity. And survival manuals and books about Eskimos point out that polar bear liver is dangerous because even fairly small amounts can cause vitamin A toxicity.

      I did megadoses of vitamins for a couple of years in the mid to late 1980s; I quit largely because there was pretty good data coming out that large doses had no additional benefit over normal dietary levels, and some definitely caused problems.

      • Ray

        Enough vitamin A is toxic no matter the source.
        I agree, and that’s my point. The focus is on one source when in fact the real issue lies with each individual item being consumed.

        A much larger problem in today’s world is that so much information is available, and yet we still oversimplify matters to such a degree as this.

  • Chad

    “Toxic” has several meanings. Robin is reading “toxic” as “deadly” when Phil clearly intends the “harmful” meaning. Robin’s dismissal of Phil’s detailed criticism is flippant and seems to fall under the “But somebody would have noticed” category. Agreed with Jim Babcock that this smacks of confirmation bias on Robin’s part as well.

  • Robert Koslover

    If supplements really are, on average, harmful, then it is important to learn why this is so. After all, a substantial fraction of the foods in your local supermarket have been “enriched” with added vitamins, and some minerals too. Just read the ingredients labels and you’ll see them there. This practice is normally defended as yielding large reductions in the incidence of vitamin deficiency diseases (such as rickets, scurvy, etc). If consuming all those “supplements” in your food is actually good for you, then why should it be harmful in pill form? Alternatively, does this study argue for ending or limiting the widespread practice of adding vitamins to foods?

  • http://timtyler.org/ Tim Tyler

    It only looked at 5 supplements, at least two of which were previously well known to be bad for you.

    So: *some* supplements are bad for you – and scientists have known that for a long time.

  • Nick Walker

    Is there a selection bias here? Admittedly I haven’t read the study.

    You’ve remarked “After getting high-efficiency washers, consumers increased clothes washing by nearly 6 percent.”

  • http://n/a Michael C Price

    Look at the motivation of the JAMA study: to assess *antioxidant* supplements. All this shows is that antioxidants are not the answer (hint: free-radicals are not the cause of aging). Some of us already knew that. It also tells us that beta-carotene, vitamin A & E are toxic in large doses. And guess what, we already knew that as well.

    So it’s not really news. I shall continue taking supplements of essential nutrients.

  • Chris M

    Well not only is it hard to believe that authors of 385 studies chose to test dosage levels known to be deadly, it has been three years since this JAMA paper and no critic has yet bothered to redo the analysis with higher order moments, or restricted to low-dose studies.

    Except that is exactly the problem with medical research, or so this guy says:
