Ten years ago, a new regime began in medical publishing that was designed to rein in manipulation of published trial results.
From July 2005, the International Committee of Medical Journal Editors (ICMJE) announced member journals would only consider a paper for publication if the trial was registered before recruitment began, and included information on the outcomes to be measured, sample sizes and funding sources.
This would make it harder for vested interests to hide unfavourable trials or report on outcomes other than those the trial had originally been set up to study.
It would also help to address the problem of “publication bias”: trials with positive outcomes are far more likely to be published, a distortion that can lead to benefits of a treatment being overestimated.
“If all trials are registered in a public repository at their inception, every trial’s existence is part of the public record and the many stakeholders in clinical research can explore the full range of clinical evidence”, the editors of the participating journals, including the MJA, said in a public statement at the time.
A decade on, how successful has the initiative been?
The number of registered trials has certainly increased dramatically: the largest registry, run by the US National Library of Medicine, listed 12 000 trials at the start of 2005. Five years later, the number had reached 83 000, and it is now approaching 200 000.
But problems remain, according to an editorial in The BMJ. Many journals have still not signed up to the policy, and even those that have are sometimes “very generous in allowing exceptions”, the authors write.
They looked at 69 non-compliant papers submitted to their own journal over the past two years and found authors offered a range of excuses for not having prospectively registered their trials.
Senior authors blamed junior staff for the omission; academic researchers claimed the requirement should not apply to trials without industry funding, or simply said they were too busy.
Those excuses won’t wash at The BMJ, which devotes considerable resources to checking not only that trials were registered before they began, but that published reports match details given at the registration stage.
But not all journals are that rigorous or have the resources to carry out those kinds of checks.
A 2013 study of 200 randomly selected journals from the Cochrane Central Register of Controlled Trials database found only 28% explicitly required prospective registration.
Qualitative interviews with a small number of editors indicated one reason for not requiring registration was they feared missing out on interesting articles that would then be picked up by competitors.
Another recent study found that even journals that had endorsed the requirement for prospective registration did not consistently apply the rule.
Only 51% of 747 journals studied included the requirement in their instructions to authors. In a follow-up survey, just 18% said they checked to see if submitted papers matched their registration details.
Two-thirds said they would consider retrospectively registered studies for publication, a concession that largely undermines the whole system.
So perhaps it’s not surprising, though more than a little depressing, that another study found that discrepancies between original registration details and those in a published paper had no significant effect on its chance of being published.
The move towards requiring all trials to be registered before they started was an immensely positive one — “the single most valuable tool we have to ensure unbiased reporting”, as the BMJ editorialists put it — but it seems we’re not there yet.
Jane McCredie is a Sydney-based science and medicine writer.