HOW a piece of research changes clinical practice, rather than how many times it is cited or which journal it appears in, is the best way to judge its merit, say Australian experts.
Professor Chris Del Mar, professor of public health at Bond University, told MJA InSight that the issue of how research is judged was of pressing concern to all academics.
“Everyone’s trying to find a way of deciding what’s good research and what isn’t”, he said. “Perhaps it’s time to judge a piece of research on its societal impact rather than its scientific impact.”
Dr Virginia Barbour, former chief editor of PLOS Medicine and now its medical editorial director, agreed, saying there needed to be multiple ways of assessing papers both pre- and post-publication.
“None are perfect but all may have some value if it is possible to know exactly what is being assessed”, she told MJA InSight.
Professor Del Mar and Dr Barbour were responding to research published in PLOS Biology that found three methods of assessing the merit of a scientific paper — subjective post-publication peer review, the number of citations gained, and the impact factor (IF) of the publishing journal — were “poor, error-prone, biased, and expensive”. (1)
The authors compiled data from two sources — 716 papers from the Wellcome Trust dataset, each scored by two assessors, and 5811 papers from the F1000 database, 1328 of which had been assessed by more than one assessor. All papers were published in 2005.
They compared assessor scores between the two datasets, as well as the correlation between assessor scores and IF scores, and the correlation between the assessor scores and the number of citations. They found that “scientists are poor at judging scientific merit and the likely impact of a paper, and that their judgement is strongly influenced by the journal in which the paper is published”.
“The number of citations a paper accumulates is a poor measure of merit and we argue that although it is likely to be poor, the impact factor of the journal in which a paper is published may be the best measure of scientific merit currently available”, the authors concluded.
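The comparisons the authors describe amount to rank correlations between pairs of measures: one assessor’s score against another’s, and assessor scores against citation counts. A minimal sketch of that kind of analysis is below; the scores and citation counts are invented for illustration only, and the study itself used the Wellcome Trust and F1000 datasets described above, not this toy data.

```python
def rank(values):
    """Return 1-based average ranks for a list of values (ties share a rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # group tied values and assign them their average rank
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: two assessors scoring the same ten papers (1-10 scale),
# plus each paper's citation count.
assessor_a = [6, 8, 5, 9, 7, 4, 8, 6, 7, 5]
assessor_b = [7, 6, 5, 8, 6, 5, 9, 5, 8, 4]
citations  = [12, 45, 8, 30, 22, 5, 60, 10, 25, 7]

print("assessor vs assessor:", round(spearman(assessor_a, assessor_b), 2))
print("assessor vs citations:", round(spearman(assessor_a, citations), 2))
```

A weak assessor-to-assessor correlation on data like this is what the authors mean by scientists being “poor at judging scientific merit”: two experts scoring the same papers often disagree about as much as they agree.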
Professor Del Mar said there were two kinds of journal — researcher-to-researcher, and researcher-to-clinician.
“Researcher-to-researcher papers are cited a lot”, he said. “But researcher-to-clinician papers that actually change clinical practice don’t get cited much because clinicians just go and do it.”
Dr Barbour felt the PLOS Biology paper’s conclusion about the IF was “concerning”.
“I am concerned about their conclusion that the IF is the least bad, especially as they could not really control for the effects of IF on other assessments such as the F1000 reviews”, she said.
“More importantly, I think this paper misses the point about what the scientific literature is about and what is needed in its assessment.
“Any journal-level metric will always be problematic for understanding the ‘value’ of an individual paper. That is why article-level metrics, which are transparent and which assess a range of measures, are so important and represent a substantial step forward in assessment.”
She cited a recent editorial she wrote for PLOS Medicine, in which she documented research papers the journal had published that had considerable societal impact. (2)
“Among these are papers on: why lethal injection is not a humane method of killing, as it probably asphyxiates prisoners, which led to lethal injection being ruled unconstitutional in the state of Tennessee; a method of measuring how bad a war is, which led to a change in NATO procedures in Southern Afghanistan; and a forensic dissection of articles we helped bring to light, through a legal intervention, on how companies manipulate doctors through ghostwritten journal articles”, she wrote.
“The paper on lethal injection has had just eight academic citations to date; this single measure of impact — of either the article or the journal — misses the paper’s demonstrable effect on public policy.”