How reliable are randomised trials?
When regulatory authorities are deciding whether to authorise the use of a drug or its reimbursement, they almost always demand evidence from at least one major randomised controlled trial (RCT). But should we rely so heavily on them to provide the rigorous evidence we need for policy and therapeutic decisions? Probably not, says a researcher from the London School of Economics, in a new paper published in the Annals of Medicine.
In his article, Dr Alexander Krauss looks at the ten most cited RCTs worldwide, covering areas such as stroke, insulin therapy, breast cancer and chemotherapy, bowel cancer, cholesterol and coronary heart disease. He identifies a range of biases across these highly influential and often paradigm-changing RCTs.
The emphasis on RCTs, the author says, shifts the focus of medical research towards a small set of questions where RCTs may be able to provide an answer, such as whether single simple treatments with few confounders are effective at an individual level. But RCTs are poorly suited to more complex areas, such as genetics, immunology, mental states, rare diseases, one-off interventions such as health reforms, or interventions with lagged effects such as long-term diseases, he notes.
Below are some of the key biases that Dr Krauss has found even in the most influential of RCTs:
Initial sample selection bias
Most of the studies examined don’t say how the initial sample was selected before randomisation. Others merely say that “patient records” were used or that patients were “recruited from 29 centres”, without saying anything about the quality, diversity or location of these centres. The trial on cholesterol was conducted in just one district of the UK, and the one on insulin therapy was based on patients from a single ICU in Belgium. And yet all these studies assume their treatment outcomes can be scaled up to the general population.
Poor distribution in the randomisation
One-off randomisation in a small sample can lead to a poor distribution of background traits. For example, in the stroke trial, those in the treatment arm were 3% more likely to have heart congestion, 8% less likely to be smokers, 14% more likely to have been on aspirin therapy, and 3% more likely to have survived a previous stroke. Factors such as these could be driving the trial’s main outcomes.
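A small simulation (not from the paper; the arm size and trait prevalence are illustrative assumptions) shows how a single randomisation of two arms drawn from the same population still produces chance imbalances in baseline traits, and how those imbalances shrink as arms grow:

```python
import random

random.seed(1)

def mean_chance_imbalance(n_per_arm, prevalence=0.3, trials=10000):
    """Randomise two arms from the SAME population and measure the
    average arm-to-arm gap (in percentage points) in a baseline trait
    such as smoking. Any gap here arises purely by chance."""
    total = 0.0
    for _ in range(trials):
        a = sum(random.random() < prevalence for _ in range(n_per_arm))
        b = sum(random.random() < prevalence for _ in range(n_per_arm))
        total += abs(a - b) / n_per_arm * 100
    return total / trials

print(f"100 per arm:  ~{mean_chance_imbalance(100):.1f} pct-point gap on average")
print(f"1000 per arm: ~{mean_chance_imbalance(1000):.1f} pct-point gap on average")
```

With 100 patients per arm, gaps of several percentage points in a single background trait are routine, which is why differences like those reported in the stroke trial can appear without any selection going wrong.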
Lack of blinding bias
Some of the 10 trials did not double-blind, while others were initially double-blinded but later partly unblinded, or were only partially blinded for one arm of the trial. In some cases this was due to poor trial design, but in others it may be inevitable. For example, in the insulin trial, modifying insulin levels required monitoring glucose levels, which in turn required some unblinding.
Small sample bias
Trials of just a few hundred people may be too small to produce robust results. But among the top 10 trials, several had small samples, with one having only 188 participants. Small samples are more likely to have imbalances in background traits, and their outcomes are more likely to be influenced by mere chance.
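To illustrate the chance point, the following sketch (an illustration, not data from any of the ten trials; the 20% event rate and the 5-point threshold are assumptions) simulates a trial with no true treatment effect and counts how often chance alone produces an apparent effect of at least five percentage points:

```python
import random

random.seed(2)

def chance_effect_rate(n_total, trials=5000, base_rate=0.2):
    """With NO true treatment effect, return the fraction of simulated
    trials in which the two arms nevertheless differ by >= 5 percentage
    points in their event rates, purely by chance."""
    half = n_total // 2
    big = 0
    for _ in range(trials):
        t = sum(random.random() < base_rate for _ in range(half)) / half
        c = sum(random.random() < base_rate for _ in range(half)) / half
        if abs(t - c) >= 0.05:
            big += 1
    return big / trials

print(f"188 participants:  {chance_effect_rate(188):.0%} of null trials show a 5-point 'effect'")
print(f"2000 participants: {chance_effect_rate(2000):.0%} of null trials show a 5-point 'effect'")
```

At 188 participants, a sizeable apparent effect from chance alone is common; at 2,000 it is rare, which is the sense in which small trials are more vulnerable to noise.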
Unique time period bias
Outcomes can vary considerably depending on when exactly the investigators collect baseline and endpoint data. The data collection points can also vary considerably within trials. In half of the trials examined, the total length of follow-up varied between patients, and could be up to three times longer for some participants compared to others.
Average effects bias
Although RCTs emphasise the average effect of an intervention, that average can be positive even when most participants see no benefit or are harmed, because the overall mean can be driven by a minority experiencing large positive effects.
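A toy arithmetic example (hypothetical numbers, not drawn from any of the trials) makes this concrete: seven of ten participants get slightly worse, three improve substantially, yet the average effect is positive.

```python
# Hypothetical outcome changes for 10 participants (illustration only):
# seven slightly worse (-1 each), three much better (+5 each).
outcomes = [-1, -1, -1, -1, -1, -1, -1, 5, 5, 5]

average = sum(outcomes) / len(outcomes)        # (-7 + 15) / 10 = +0.8
helped = sum(1 for x in outcomes if x > 0)     # 3 of 10
harmed = sum(1 for x in outcomes if x < 0)     # 7 of 10

print(f"average effect: {average:+.1f}")       # positive on average...
print(f"helped: {helped}/10, harmed: {harmed}/10")  # ...yet most were harmed
```

A trial reporting only the +0.8 average would look like a success even though the majority of participants fared worse.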
Best results and funder biases
Funders and journals tend to be less interested in negligible or negative results. A funder bias towards positive results has been shown in systematic reviews, and yet seven of the top ten cited trials were funded by drug companies.
Major trials usually randomise a therapy against placebo or conventional treatment only, and therefore do not identify whether the therapy is actually any better than other available treatments.
Given these and a number of other biases he identifies, Dr Krauss says, no single study should ever be used on its own to inform policy. RCTs need to be supported by other tools, such as subsequent observational studies and single case studies. He notes that many advances in medicine were made without any evidence from an RCT, including most surgical procedures, antibiotics and aspirin, smallpox immunisation, immobilising broken bones, and the confirmation that smoking is associated with lung cancer.
The full study is published in the Annals of Medicine.