WHERE would you expect to find the results of a landmark drug trial more accurately reported — in the general news media or in the scholarly literature?
There’s no doubt media outlets often get it wrong when reporting medical stories, but an analysis published by JAMA Internal Medicine last week highlights one instance where the journalists actually did slightly better than the medical experts.
The study looked at reporting of the landmark ACCORD-lipid trial, a publicly funded trial assessing whether the addition of fenofibrate to statin therapy improved cardiovascular outcomes for patients with type 2 diabetes.
Here’s what the researchers wrote when they published their results in 2010: “The combination of fenofibrate and simvastatin did not reduce the rate of fatal cardiovascular events, nonfatal myocardial infarction, or nonfatal stroke, as compared with simvastatin alone. These results do not support the routine use of combination therapy with fenofibrate and simvastatin to reduce cardiovascular risk in the majority of high-risk patients with type 2 diabetes.”
Seems pretty clear, but apparently not clear enough for all the journalists and journal authors who wrote about the study.
Researchers from Yale University and the Mayo Clinic have found that fewer than a third of articles published in the 15 months following the trial’s publication said fenofibrate had been found to be ineffective.
The journalists and the scientists performed equally dismally on this measure (29.9% of news articles, compared with 29.8% of journal articles).
A substantial minority of journal authors went the other way and claimed the drug had actually been shown to be effective (19.9%). Here, the journalists did slightly better, with only 16.4% claiming effectiveness.
It gets worse.
Of the news articles that made a therapeutic recommendation based on the trial findings, 52.2% recommended fibrate use.
Journalists, right? Nobody with proper scientific training would recommend prescribing a drug that had been found to be ineffective.
Well, actually …
A phenomenal 67.5% of journal articles that made a recommendation supported fibrate use.
What is going on here?
Commercial relationships between journal authors and industry may, unfortunately, be part of the answer.
The JAMA Internal Medicine authors found that, in 60% of the journal articles, at least one author had a conflict of interest — that is, a relationship with one of the pharmaceutical companies that had a commercial interest in the product.
One-third of the conflicts they identified were not disclosed in the paper, but were instead found by searching for other contemporary publications by the same author.
Researchers and clinicians often claim their relationships with industry do not affect research outcomes or clinical decision making, though there is evidence to suggest otherwise.
In this particular case, the evidence is disturbing. Papers where a conflict of interest was identified were three times as likely to describe fenofibrate as effective (27.1% compared with 8.9%).
They were also more likely to recommend use of the drug (77.4% compared with 45.8%).
Journalists, of course, may also have conflicts of interest, but these were not addressed in this study because of a lack of disclosure.
What I find hardest to understand is not that people are influenced, consciously or otherwise, by their relationships with commercial bodies, but that journals are not more rigorous in assessing the articles submitted to them.
When two-thirds of published papers recommend a drug that has been shown to be ineffective in a major trial, that would seem to indicate the peer review system is not doing its job.
Jane McCredie is a Sydney-based science and medicine writer.