Wednesday, October 28, 2009

Failing to Report Adverse Effects of Treatments

We have frequently advocated the evidence-based medicine (EBM) approach to improve the care of individual patients and to improve health care quality at a reasonable cost for populations. Evidence-based medicine is not just medicine based on some sort of evidence. As Dr David Sackett and colleagues wrote [Sackett DL, Rosenberg WM, Muir Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996; 312: 71-72. Link here.]:


Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.

One can find other definitions of EBM, but nearly all emphasize that the approach is designed to appropriately apply results from the best clinical research, critically reviewed, to the individual patient, taking into account that patient's clinical characteristics and personal values.

When making decisions about treatments for individual patients, the EBM approach suggests using the best available evidence about possible benefits and harms of treatment, so that the treatment chosen is most likely to maximize benefits and minimize harms for the individual patient. The better the evidence about specific benefits and harms applicable to a particular patient, the greater will be the likelihood that a particular decision based on this evidence will result in the best possible outcomes for the patient.

A new study in the Archives of Internal Medicine focused on how articles report adverse effects found by clinical trials. [Pitrou I, Boutron I, Ahmad N et al. Reporting of safety results in published reports of randomized controlled trials. Arch Intern Med 2009; 169: 1756-1761. Link here.] The results were not encouraging.

The investigators assessed 133 articles reporting the results of randomized controlled trials published in 2006 in six English-language journals with high impact factors, that is, the most prestigious journals, including the New England Journal of Medicine, Lancet, JAMA, British Medical Journal, and Annals of Internal Medicine. They excluded trials with less common designs, such as randomized cross-over trials. The majority of trials (54.9%) had private funding, or private funding mixed with public funding.

The major results were:
15/133 (11.3%) did not report anything about adverse events
36/133 (27.1%) did not report information about the severity of adverse events
63/133 (47.4%) did not report how many patients had to withdraw from the trial due to adverse events
43/133 (32.3%) had major limitations in how they reported adverse events, e.g., reporting only the most common events (even though most trials do not enroll enough patients to detect important but uncommon events).

The authors concluded, "the reporting of harm remains inadequate."

An accompanying editorial [Ioannidis JP. Adverse events in randomized controlled trials: neglected, distorted, and silenced. Arch Intern Med 2009; 169: 1737-1739. Link here.] raised concerns about why the reporting of adverse events is so shoddy:
Perhaps conflicts of interest and marketing rather than science have shaped even the often accepted standard that randomized trials study primarily effectiveness, whereas information on harms from medical interventions can wait for case reports and nonrandomized studies. Nonrandomized data are very helpful, but they have limitations, and many harms will remain long undetected if we just wait for spontaneous reporting and other nonrandomized research to reveal them. In an environment where effectiveness benefits are small and shrinking, the randomized trials agenda may need to reprogram its whole mission, including its reporting, toward better understanding of harms.

Pitrou and colleagues have added to our knowledge about the drawbacks of the evidence about treatments that is publicly available to physicians and patients when making decisions about treatment. Even reports of studies with the best designs (randomized controlled trials) in the best journals seem to omit important information about the harms of the treatments they test.

It appears that the majority of the reports that Pitrou and colleagues studied received "private" funding, presumably meaning most were funded by drug, biotechnology, or device companies and were likely meant to evaluate the sponsoring companies' products. However, note that this article did not analyze the relationship of funding source to the completeness of information about adverse effects.

Nonetheless, on Health Care Renewal we have discussed many cases in which research has been manipulated in favor of the vested interests of research sponsors (funders), or in which research unfavorable to their interests has been suppressed. It therefore seems plausible that sponsors' influence over how clinical trials are designed, implemented, analyzed, and reported may reduce the information about the adverse effects of their products that appears in journal articles. Trials may be designed not to gather information about adverse events. Analyses of some adverse events, or of some aspects of these events, may not be performed, or, if performed, not reported. The evidence from clinical research available to make treatment decisions may consequently exaggerate the ratios of certain drugs' and devices' benefits to their harms.

Patients may thus receive treatments which are more likely to hurt than to help them, and populations of patients may be overtreated. Impressions that treatments are safer than they actually are may allow their manufacturers to overprice them, so health care costs may rise.

The article by Pitrou and colleagues adds to concerns that we physicians may too often really be practicing pseudo-evidence based medicine when we think we are practicing evidence-based medicine. We cannot judiciously balance benefits and harms of treatments to make the best decisions for patients when evidence about harms is hidden. Clearly, as Ioannidis wrote, we need to "reprogram." However, what we need to reprogram is our current dependence on drug and device manufacturers to pay for (and hence de facto run) evaluations of their own products. If health care reformers really want to improve quality while controlling costs, this is the sort of reform they need to start considering.

NB - See also the comments by Merrill Goozner in the GoozNews blog.

4 comments:

InformaticsMD said...

These lessons apply to studies of health IT as well.

-- SS

Anonymous said...

A timely post. Looking at the Oct. 28, 2009 WSJ, we find a discussion of antipsychotics and weight gain in the article Antipsychotics Cause Weight Gain in Kids. The whole article is troubling, from the inclusion of children as young as 4 in the study, to the 19-pound weight gain in 11 weeks reported for one drug, to the recommendation that children be checked after three months. To my slow business mind, this would be after the metabolic damage has already occurred.

It was also noted in the article that Lilly agreed to pay $1,420,000,000.00 to settle questions concerning improper marketing of Zyprexa. Collectively this class of drugs generated $14,600,000,000.00 in US sales last year.

Looking again at those who can least protect themselves, we see the AP story dated Oct. 27, Hospitals are set to tighten delivery rules. It does seem that two weeks make a difference in a baby's development, as the study found that babies delivered in the 37th week:

"had more than double the risk of ending up in neonatal ICU, suffering respiratory distress, even needing a ventilator."

One does not have to be a doctor to question the medical necessity of these treatments and procedures, given the very blatant negative side effects. From there it is not a far leap to question the cost, not only in dollars, but in the pain and suffering of those subjected to these protocols.

We now have young children with blood chemistry rivaling that of an elderly person. Do doctors then prescribe the same multi-drug cocktail to control these issues? Infants are starting life in the ICU because it becomes convenient to set a delivery date.

I am simply appalled. I am appalled at the risk. I am appalled at the waste. I am appalled that this is happening to those who least can protect themselves in our society.

Steve Lucas

Bernard Carroll said...

Reporting of adverse events in clinical trials is problematic enough, as you describe. And the problem is worse in meta-analyses, in which multiple clinical trials are bundled for an omnibus estimation of efficacy. Only efficacy. Here is a stark recent example: a meta-analysis of atypical antipsychotic drug use for augmentation of antidepressant drugs in treatment-resistant depression.
Nelson JC, Papakostas GI. Atypical antipsychotic augmentation in major depressive disorder: a meta-analysis of placebo-controlled randomized trials. Am J Psychiatry 2009; 166: 980-991.

The most striking weakness is the innumerate approach to considering risk. Evidence of the potential efficacy of atypical antipsychotic drugs in refractory depression is reviewed in extensive meta-analytic detail and display, whereas evidence of risk is addressed only in cursory and inaccurate generalizations. So dangers to patients such as tardive dyskinesia go under the radar, while the ensemble of unconvincing studies is presented as more efficacious than the data really justify.

Anonymous said...

Looking at my local PBS preview for the month of November, I see that they are going to broadcast a repeat of The Medicated Child. I recommend this program; pay special attention to the actions of the doctors prescribing these medications.

Financial conflicts abound.

This is truly one of the most frightening documentaries I have ever seen.

Steve Lucas