
Smoking Trials Again

Recently I talked about checking on smoking skeptics.  I described three studies:

  1. A randomized trial of 1400 high risk smokers.  After 10 years one half had half the smoking rate of the other, and after 20 years it had an insignificant 7% lower mortality (13% less heart disease, 11% less lung cancer).
  2. MRFIT randomized multifactor trial of 8000 smokers.  After 6 years one half quit 49% (vs. 29%), and after 16 years had an insignificant 6% lower mortality (11% less heart disease, and -15% less lung cancer).
  3. A randomized multifactor trial of 1200 high risk men.  After five years one half reduced smoking by 3/8 (vs 2/9), but had twice the mortality (10 vs. 5 count).

I’ve now had time to look over seven more studies:

  4. A randomized trial of 6000 smokers with “asymptomatic airway obstruction”, i.e., weak lungs. (HT Karl.)  After 5 years, 22% (vs 5%) of the two-thirds given the intervention had stopped smoking, and after 14.5 years they died 15% less (significant at the 3% level) (20% less of heart disease, 15% less of lung cancer, and 50% less of “respiratory disease other than cancer”). (More details here, which I don’t have.)
  5. WHO collaborative multifactor randomized trial of 61,000 men.  After six years one half had 2% fewer smokers (7% fewer among the highest risk men), giving an insignificant 5% lower mortality (7% in heart disease).
  6. Goteborg multifactor randomized trial of 30,000 men.  After ten years one third had 9% fewer smokers (32.5% vs. 35.4%) than the other two thirds, and an insignificant 2% lower mortality (0% heart disease, 15% cancer).
  7. Norwegian multifactor randomized trial of 1200 men.  After five years one side had 1/8 less smoking, and after 28 years it had 46% more mortality (95 vs. 65 count).
  8. Oslo multifactor randomized trial of 1200 men.  After 8.5 years one side had 45%(?) less smoking, and 40% less mortality (19 vs. 31 count).  (This is just from the abstract; anyone have the paper?)
  9. A non-randomized study of 1600 men over 26 years.  Initial lung quality was unrelated to mortality for non-smokers, but heavy smokers with initially bad lungs died 62% more than those with initially good lungs.
  10. A non-randomized AER ’06 study of WWII veterans.  Its key “identifying assumption is that cohort and age effects in the smoking equation are the same for men and women” and that the entire increased mortality of WWII veterans is due to their smoking more. (HT Alex T.)  It finds “a nonveteran average annual mortality rate of 13.1 per 1,000 men and a veteran … rate of 16.6” (1.2 vs. 2.2 for lung cancer), suggesting “36 to 79 percent of the excess veteran deaths due to heart disease and lung cancer are attributable to military-induced smoking”.  Since heart disease and lung cancer were 38% of deaths, this suggests ~4-12% higher smoking mortality.

OK, so how best to summarize this evidence?  Based on study #4, I tentatively estimate that smoking raises mortality for folks with bad lungs, about 10 to 25% of folks, by 50-100%.  (This effect appears to not work mainly via lung cancer.)  This is supported by study #9 and could explain a 5-25% overall smoking mortality increase.
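The extrapolation from subgroup harm to overall harm is simple arithmetic; here is a back-of-envelope sketch (my own check, not from any of the studies), assuming the extra mortality falls entirely on the bad-lungs group:

```python
# Back-of-envelope check: if smoking's extra mortality falls entirely on
# the bad-lungs fraction of smokers, the overall increase is roughly
# that fraction times the within-group increase.
def overall_increase(bad_lung_frac, subgroup_increase):
    return bad_lung_frac * subgroup_increase

low = overall_increase(0.10, 0.50)   # 10% of folks, 50% more mortality
high = overall_increase(0.25, 1.00)  # 25% of folks, double the mortality
print(f"{low:.0%} to {high:.0%}")    # prints "5% to 25%"
```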

In the rest of the studies, if we assume the entire effect seen was from smoking, we can collect smoking mortality effect estimates.  Setting aside #8, as I haven’t read the paper, #1 had the biggest change in smoking rates, and suggests a ~20% mortality effect.  The next biggest change was #2, which suggests a ~30% effect.  Study #6 had the next largest change, and suggests a ~22% effect.  The rest were all across the map, as expected from their small counts and changes.

So, we seem to see a 50-100% smoking mortality increase on bad lungs, which predicts a 5-25% overall smoking mortality increase.  If we attribute to smoking the full benefit seen in our three most relevant multifactor randomized trials, we get crude smoking harm estimates of 20%, 22%, and 30%.  And if, from study #10, we attribute the entire higher mortality of WWII veterans to their smoking more, we get a ~4-12% mortality effect.

Bottom line:  a randomized trial suggests a large smoking harm on bad lungs, which can explain the entire apparently average smoking harm seen elsewhere.  My best guess: smokers die ~10-30% more on average, living about 2-6 months less, but there’s much less net harm to strong lung folks.

Added 10a: Wikipedia says

Male and female smokers lose an average of 13.2 and 14.5 years of life, respectively. .. The risk of dying from lung cancer before age 85 is 22.1% for a male smoker and 11.9% for a female current smoker, in the absence of competing causes of death. The corresponding estimates for lifelong nonsmokers are a 1.1% probability [20 times less] of dying from lung cancer before age 85 for a man of European descent, and a 0.8% probability [15 times less] for a woman.

Other sources mention risk factors of 15, 23 or 100. Such figures are common and, it seems, rather misleading. The above studies clearly suggest that the causal effect of smoking on mortality, even for lung cancer, is much less than the factors of 15+ often thrown around.
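The bracketed multiples in the quote follow directly from the quoted probabilities; a quick check (a sketch of that arithmetic only):

```python
# Relative risk of lung-cancer death implied by the quoted lifetime
# probabilities (smoker vs. lifelong nonsmoker, before age 85).
male_ratio = 22.1 / 1.1    # 22.1% vs. 1.1% for men
female_ratio = 11.9 / 0.8  # 11.9% vs. 0.8% for women
print(round(male_ratio), round(female_ratio))  # prints "20 15"
```

Note these are relative risks for one cause of death; a 20-fold lung-cancer risk is compatible with a much smaller effect on total mortality.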


Random Smoking Trials

Hal Finney recently commented:

[Johnstone & Finch’s] Scientific Scandal of Antismoking … makes the case that smoking is not bad for your health. … [It has] the superficial appearance of referencing scientific studies and claiming that the mainstream misrepresents the results.

Yes, they are superficially credible.  Their New Scientist letter:

WHO … claims … “an epidemic of chronic illnesses … could be prevented through simple changes in diet, by being more active and by not smoking.” … There have been a number of such studies, with various combinations of these three lifestyle factors, including the WHO collaborative trial (60,881 subjects, 6 years), the Goteborg trial (30,022 subjects, 11.8 years) and the Multiple Risk Factor Intervention trial (12,866 subjects, 7 years).  These and another eight trials were conducted over three decades, one of the most expensive and sustained series of biological experiments in the history of medical science. … None showed any improvement in life expectancy and two showed a significant reduction in life expectancy in the test group.

So I dug further; bottom line:  Johnstone & Finch are right.  We usually see strong correlations between death and smoking, and we see those same correlations within each random arm (i.e., group) of a randomized trial.  Nevertheless, we see no significant net death differences between control arms and arms induced to smoke less.

So we don’t have clear evidence that smoking kills on net; it could be that most or all of the death-smoking correlation is due to selection effects, and not smoking causing death.  Experts say there is a substantial causal component, and for now I’m accepting that claim, but this lack of clear evidence is suspicious, and disturbing.  Now for some details.


Is most research a waste?

Over at 80,000 Hours we have been looking into which research questions are most important or most neglected. As part of that, I was recently lucky enough to have dinner with Iain Chalmers, one of the founders of the Cochrane Collaboration. He let me know about this helpful summary of reasons to think most clinical research is predictably wasteful:

“Worldwide, over US$100 billion is invested every year in supporting biomedical research, which results in an estimated 1 million research publications per year

a recently updated systematic review of 79 follow-up studies of research reported in abstracts estimated the rate of publication of full reports after 9 years to be only 53%.

An efficient system of research should address health problems of importance to populations and the interventions and outcomes considered important by patients and clinicians. However, public funding of research is correlated only modestly with disease burden, if at all.6–8 Within specific health problems there is little research on the extent to which questions addressed by researchers match questions of relevance to patients and clinicians. In an analysis of 334 studies, only nine compared researchers’ priorities with those of patients or clinicians.9 The findings of these studies have revealed some dramatic mismatches. For example, the research priorities of patients with osteoarthritis of the knee and the clinicians looking after them favoured more rigorous evaluation of physiotherapy and surgery, and assessment of educational and coping strategies. Only 9% of patients wanted more research on drugs, yet over 80% of randomised controlled trials in patients with osteoarthritis of the knee were drug evaluations.10 This interest in non-drug interventions in users of research results is reflected in the fact that the vast majority of the most frequently consulted Cochrane reviews are about non-drug forms of treatment.

New research should not be done unless, at the time it is initiated, the questions it proposes to address cannot be answered satisfactorily with existing evidence. Many researchers do not do this—for example, Cooper and colleagues 13 found that only 11 of 24 responding authors of trial reports that had been added to existing systematic reviews were even aware of the relevant reviews when they designed their new studies.

New research is also too often wasteful because of inadequate attention to other important elements of study design or conduct. For example, in a sample of 234 clinical trials reported in the major general medical journals, concealment of treatment allocation was often inadequate (18%) or unclear (26%).16 In an assessment of 487 primary studies of diagnostic accuracy, 20% used different reference standards for positive and negative tests, thus overestimating accuracy, and only 17% used double-blind reading of tests.17

More generally, studies with results that are disappointing are less likely to be published promptly,19 more likely to be published in grey literature, and less likely to proceed from abstracts to full reports.2 The problem of biased under-reporting of research results mainly from decisions taken by research sponsors and researchers, not from journal editors rejecting submitted reports.20 Over the past decade, biased under-reporting and over-reporting of research have been increasingly acknowledged as unacceptable, both on scientific and on ethical grounds.

Although their quality has improved, reports of research remain much less useful than they should be. Sometimes this is because of frankly biased reporting—eg, adverse effects of treatments are suppressed, the choice of primary outcomes is changed between trial protocol and trial reports,21 and the way data are presented does not allow comparisons with other, related studies. But even when trial reports are free of such biases, there are many respects in which reports could be made more useful to clinicians, patients, and researchers. We select here just two of these. First, if clinicians are to be expected to implement treatments that have been shown in research to be useful, they need adequate descriptions of the interventions assessed, especially when these are non-drug interventions, such as setting up a stroke unit, offering a low fat diet, or giving smoking cessation advice. Adequate information on interventions is available in around 60% of reports of clinical trials;22 yet, by checking references, contacting authors, and doing additional searches, it is possible to increase to 90% the proportion of trials for which adequate information could be made available.22

Although some waste in the production and reporting of research evidence is inevitable and bearable, we were surprised by the levels of waste suggested in the evidence we have pieced together. Since research must pass through all four stages shown in the figure, the waste is cumulative. If the losses estimated in the figure apply more generally, then the roughly 50% loss at stages 2, 3, and 4 would lead to a greater than 85% loss, which implies that the dividends from tens of billions of dollars of investment in research are lost every year because of correctable problems.”
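The “greater than 85%” figure in the last quoted paragraph is just compounding; a sketch of that arithmetic, assuming an independent ~50% loss at each of the three stages:

```python
# If roughly half of research value survives each of stages 2, 3, and 4,
# only 0.5 ** 3 = 12.5% survives overall -- a cumulative loss of 87.5%.
surviving = 1.0
for stage_loss in (0.5, 0.5, 0.5):
    surviving *= 1 - stage_loss
print(f"cumulative loss: {1 - surviving:.1%}")  # prints "cumulative loss: 87.5%"
```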

His assessment was that the research profession could not be expected to fix up these problems internally, as it had not done so already despite widespread knowledge of these problems, and had no additional incentive to do so now. It needs external intervention and some options are proposed in the paper.

There is a precedent for this. The US recently joined a growing list of countries who have helped their researchers coordinate to weaken the academic publishing racket, by insisting that publicly-funded research be free and openly available within a year. So long as academics are permitted to publish publicly-funded research in pay-for-access journals, established and prestigious journals can earn big rents by selling their prestige to researchers – to help them advance their careers – in exchange for copyright on their publicly-funded research. Now that researchers aren’t permitted to sell that copyright, an individual who would refuse to do so out of principle won’t be outcompeted by less scrupulous colleagues.

Likewise, rules that require everyone receiving public money to do the public-spirited thing, for instance by checking for systematic reviews, publishing null results,  pre-registering their approach to data analysis, opening their data to scrutiny by colleagues, and so on, would make it harder for unscrupulous researchers to get ahead with corner-cutting or worse chicanery.


Beware Cancer Med

Chapter 2 of Ken Lee’s thesis compares med spending and age-adjusted deaths across the 50 US states from 1980 to 2007. Lee’s baseline model finds that deaths increase with smoking, alcohol use, population density, and med spending: a 10% increase in med spending increases deaths by 0.85%. Breaking down this med spending death effect by drug vs. non-drug spending, and by four causes of death (cancer, heart attack, injury, and other), Lee finds (in Tables 5,6) that med spending hurts mainly because increasing non-drug med spending by 10% increases cancer deaths by 2.1%:

[Figure: Cause of Death, Drug vs Non-Drug Med Spending]

The apparent lesson: avoid cancer docs, and especially their non-drug cancer treatments. It seems some places tend to spend more on med overall, and when they spend more on cancer patients, those patients die no less, and maybe more. That fits with cancer patients living longer when they go to hospice and get no cancer treatment and with randomized trials of cancer screening consistently showing no effect on total mortality. Other explanations, however, are that high med spending places tend to classify more deaths as due to cancer, or that med treatment of all sorts tends to cause cancer.
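Read as elasticities (percent change in deaths per percent change in spending), Lee’s numbers work as follows; a sketch using the figures quoted above, nothing here is from the thesis itself:

```python
# Lee's estimates read as elasticities: percent change in deaths per
# percent change in med spending.
def extra_deaths_pct(elasticity, spending_change_pct):
    return elasticity * spending_change_pct

# All med spending vs. all-cause deaths: 0.85% per 10%, elasticity 0.085.
print(round(extra_deaths_pct(0.085, 10), 2))  # prints "0.85"
# Non-drug spending vs. cancer deaths: 2.1% per 10%, elasticity 0.21.
print(round(extra_deaths_pct(0.21, 10), 2))   # prints "2.1"
```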

For you stat whizzes, Lee uses state and year fixed effects, and uses per capita physicians, beds, and dental spending as med spending instruments to disentangle the direction of causation.  He picked that instrument set because it had the smallest bootstrap variance, and passed many tests. Here is Lee’s baseline model (from Table 3):



Strange Salt

A new JAMA study finds a strong correlation: the third of folks who eat the least salt die over three times as often as the third of folks who eat the most salt. Yet other studies almost as big find contrary effects. I find it quite disturbing that such big studies can show such different results; something is very wrong in big diet correlation study land. Details:

Among 3681 participants followed up for a median 7.9 years, [heart attack] deaths decreased across increasing tertiles of 24-hour sodium excretion, from 50 deaths in the low, 24 in the medium, and 10 in the high excretion group (P < .001). … In multivariable-adjusted analyses, this inverse association retained significance (P=.02): the [hazard ratio] in the low tertile was 1.56 (95% CI, 1.02-2.36; P=.04). Baseline sodium excretion predicted neither total mortality (P = .10) [though 118, 64, 37 total deaths for low, medium, high tertiles sure looks significant to RH]. … All hazard ratios were adjusted for study population, sex, and baseline variables: age, body mass index, systolic blood pressure, 24-hour urinary potassium excretion, antihypertensive drug treatment, smoking and drinking alcohol, diabetes, total cholesterol, and educational attainment. …

Our current observations on cardiovascular mortality are consistent with several other reports. The National Health and Nutrition Examination Surveys (NHANES) I and II demonstrated an inverse association of cardiovascular and total mortality with salt intake as assessed from dietary recall with a similar trend in NHANES III. Alderman and colleagues followed up for 3.5 years 2937 patients with mild to moderate hypertension. There was an inverse association between the incidence of myocardial infarction and 24-hour urinary sodium excretion at baseline for the total population and for men, but not women. …

At variance with our current findings, other prospective studies suggested that a high-salt intake may lead to a worse outcome. … Cook and colleagues analyzed the long-term results of dietary sodium restriction on cardiovascular outcomes by combining 10 to 15 years of follow-up of 744 and 2382 participants randomized in the Trials of Hypertension Prevention, phases 1 and 2. Net sodium reductions during the intervention period (from 18 to 48 months) were 44 mmol and 33 mmol per day, respectively. … With adjustments applied for trial, clinical site, race, sex, and age, the [hazard ratios] for intervention vs control were 0.80 (95% CI, 0.51-1.26; P = .34) for total mortality. … In a 19-year follow-up study of 3126 Finns, the multivariable-adjusted [hazard ratios] associated with a 100-mmol increase in 24-hour urinary sodium were 1.26 (95% CI, 1.06-1.50) for total mortality, 1.45 (95% CI, 1.14-1.84) for CVD, and 1.51 (95% CI, 1.14-2.00) for coronary heart disease. (more)
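RH’s bracketed aside above — that 118, 64, and 37 total deaths across tertiles “sure looks significant” — can be checked with a quick chi-square sketch. This assumes the three sodium tertiles of the 3681 participants are roughly equal in size, which tertiles imply by construction but which I have not verified against the paper:

```python
# Chi-square goodness-of-fit: with three equal-sized tertiles, equal
# death rates imply equal expected deaths per group under the null.
observed = [118, 64, 37]                  # total deaths: low, medium, high sodium
expected = sum(observed) / len(observed)  # 73 deaths per tertile
chi2 = sum((o - expected) ** 2 / expected for o in observed)
print(round(chi2, 1))  # prints "46.6" -- far above 5.99, the df=2, p=0.05 cutoff
```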


Only Trust Us

First they came for the communists, and I did not speak out—because I was not a communist;
Then they came for the trade unionists, … for the Jews, …
Then they came for me—and there was no one left to speak out for me.

PLoS Medicine:

While we continue to be interested in analyses of ways of reducing tobacco use, we will no longer be considering papers where support, in whole or in part, for the study or the researchers comes from a tobacco company.

Eric Crampton:

As good a [bias] case can be made … against tobacco industry funding. How many anti-tobacco public health researchers would be able to continue getting grants from Ministries of Health if their research found that smoking isn’t as bad as the Ministry might have thought?

John Tierney:

Many scientists, journal editors and journalists see themselves as a sort of priestly class untainted by commerce. … This snobbery was codified by the Journal of the American Medical Association in 2005, when it … refused to publish such work unless there was at least one author with no ties to the industry who would formally vouch for the data.  That policy … looked especially dubious after a team of academic researchers (not financed by industry) analyzed dozens of large-scale clinical trials in previous decades and reported that industry-sponsored ones met significantly higher standards than the nonindustry ones.

More:

As Gary Taubes nicely illustrates in his book, “Good Calories, Bad Calories,” scientists who disagreed with the accepted wisdom on the evils of fat in the diet were accused of being corrupted by industry grants even if they had received most of their money from government agencies that were looking — unsuccessfully — for evidence to back the fat-is-bad theory. Meanwhile, scientists who went along with the conventional wisdom on fat weren’t criticized for the corporate money they’d received from food companies.

Mr. Taubes has also found some wonderful examples of selective journalism in the dispute over sugar’s health effect: An article stressing the harms of sugar would make dissenting scientists look bad by stressing their connections to the sugar industry, whereas an article exonerating sugar would make the other side’s scientists look bad by stressing the money they received from companies making sugar substitutes. …

“Scientists were believed to be free of conflicts if their only source of funding was a federal agency, but all nutritionists knew that if their research failed to support the government position on a particular subject, the funding would go instead to someone whose research did.” … Not-for-profit advocacy groups … “are rarely if ever accused of conflicts of interest, even though their entire reason for existence is to argue one side of a controversy as though it were indisputable.”

If the new principle is that we mustn’t publish research not funded by groups committed to proving our official beliefs, how long before “our” beliefs exclude yours?  How long before interdisciplinary journals like Science or Nature refuse to publish papers by economists, known for their suspiciously right-wing leanings, unless non-economist co-authors vouch for them?  Do you really think that can’t happen?


Ignoring Advice

When do people listen to advice?  I teach my health econ students about studies showing no effect from randomized trials giving (or not giving) advice to teens about smoking, to heart attack victims about healthy living, and to new mothers about caring for their low birth weight babies.   Here is a new related result:

Affari Tuoi is the Italian prototype of the television show Deal or No Deal …114 television episodes … with large monetary stakes. When faced with a decision problem in Affari Tuoi, a contestant may seek advice from the audience, which comes in a form of the vote results. While there is a positive trend between contestants’ decisions and advice, this relation is not statistically significant. … When contestants do not have an opportunity to use advice or when the option of advice is available but not used, they make ex post "wrong" decisions in 52.9% and 54.6% of cases respectively. However, when they choose to consult the audience, the fraction of ex post "wrong" decisions decreases to 36.1%. Moreover, … by following advice contestants increase their earnings (Table 1). Subjects make ex post "wrong" decisions in 46.2% of cases when they neglect the advice and only in 30.4% of cases when they follow the advice.

However, the literature does show that in some situations people seem to listen too much to advice: 

Schotter (2003) surveys several laboratory studies on advice when nonoverlapping “generations” of subjects play ultimatum and coordination games. In these studies (e.g. Schotter and Sopher, 2004, 2007) subjects often rely on the advice of naïve advisers … who hardly possess more expertise or knowledge than we do.

So why do we not listen sometimes and listen other times?


Supping with the Devil

Funding bias occurs when the conclusions of a study get biased towards the outcome the funding agency wants. A typical example from my own field is Turner & Spilich, Research into smoking or nicotine and human cognitive performance: does the source of funding make a difference? Researchers who declared tobacco industry funding more often found neutral or positive cognitive enhancement effects from nicotine than non-funded researchers, who were more evenly split between negative, neutral and positive effects.

There have been some surveys of funding bias. Bekelman, Li & Gross found that 25% of investigators in their material had industry funding sources. In a meta-analysis of 8 articles that together evaluated 1140 original studies, they found a 3.6 odds ratio of industry-favourable outcomes when there was industry sponsorship compared to no sponsorship. There are also problems with data sharing and publication bias. An AMA 2004 Council Report also points out that sponsored findings are less likely to be published and more likely to be delayed.
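For readers unfamiliar with odds ratios, a minimal sketch of what a 3.6 means; the counts below are made up for illustration and are not from Bekelman, Li & Gross:

```python
# Odds ratio: odds of an industry-favourable outcome with sponsorship,
# divided by the odds without sponsorship.
def odds_ratio(fav_spon, unfav_spon, fav_indep, unfav_indep):
    return (fav_spon / unfav_spon) / (fav_indep / unfav_indep)

# Hypothetical counts: 72 of 100 sponsored studies favourable,
# vs. 42 of 100 independently funded studies.
print(round(odds_ratio(72, 28, 42, 58), 1))  # prints "3.6"
```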

A case study of co-authoring a study with the tobacco industry by E. Yano describes both how the industry tried to fudge the results (probably more overtly than in most cases of funding bias) and how the equally fierce anti-tobacco campaigners then misrepresented the results; the poor researcher was in a no-win scenario.

