EVERYONE these days seems to know the term “evidence-based medicine”.
Originally conceived from within the medical community in an attempt to rationalise practice according to what was supported by research, we now find the term just as often used as a rod for our backs.
Do the critics actually understand what “evidence-based” means?
Evidence-based medicine does not mean that everything we do has to be supported by a specific randomised controlled trial (RCT).
Much of what we do in medicine is underpinned by the clinical sciences — particularly physiology, pathology and pharmacology. A therapy is science-based if it is based on our understanding of how the body works, and can be demonstrated to act in a specific way.
An example here might be the drainage of a tension pneumothorax. There is no need for an RCT to show that this is a life-saving manoeuvre. Other evidence, such as that for antibiotic effectiveness, is obtained in the laboratory.
The RCT ranks highest in the hierarchy of evidence only for research questions that lend themselves to that methodology.
Then there is the issue of “peer-reviewed evidence”. Publication of a paper in a peer-reviewed journal is just the first step in the process of peer review.
Reviewers agree that a paper is suitable for publication in a particular journal, but publication has never meant that a study is automatically considered valid or “right”. It simply means the work is on display for the informed reader to critique.
True peer review occurs when the paper is critically appraised by real-life clinical or research peers with knowledge of the other research in the area.
In the wider community, there has been debate about drug company sponsorship and vested interests in research. But this is just one aspect of a published paper that requires critical review, along with all the other potential sources of bias.
Most of these are not ethical biases — they merely relate to the way the study was constructed. How were the subjects sourced? What were the inclusion and exclusion criteria? How were measurements made?
With research papers so easily available on the internet, many of the people reading them lack critical appraisal skills in the relevant field. They rely on a paper’s title or conclusions to convey a message. That message may then be used in ideological debate, cited as “evidence”.
If it were that easy to appraise studies, most of us could be spared the time and effort of going to journal clubs and seminars, or reading critiques of published papers.
The world might be simpler without knowing about positive and negative predictive values, non-normal distributions, area under the curve, likelihood ratios and number needed to treat, but at what cost?
Most of us in the medical community have the benefit of courses, conferences, professional associations and newsletters to help us keep up with the literature in our own practice areas.
We must keep in mind the need for quality and relevance in research, not just content. We need to educate the general community, and particularly the press, about the complexity of research methodology and the non-intentional sources of potential bias.
People looking for clarification should be encouraged to go to those who are experts in both the practice in that area and in research methods.
Just as research findings in climatology should be critically interpreted by climatologists and climate scientists, research findings in the medical and clinical sciences should be interpreted by doctors and medical scientists.
We are the true peers, and we are responsible for true peer review.
Dr Sue Ieraci is a specialist emergency physician with 30 years’ experience in the public hospital system. Her particular interests include policy development and health system design, and she has held roles in medical regulation and management.
Posted 12 June 2012