
The case for measuring the outcomes of what we do


Archie Cochrane, the Scottish medical epidemiologist after whom the Cochrane Collaboration, which develops the evidence base for clinical medicine, is named, emerged from the Spanish Civil War and World War Two sceptical about the outcomes of his medical care.

Cochrane said, “I knew that there was no real evidence that anything we had to offer had any effect on tuberculosis, and I was afraid that I shortened the lives of some of my friends by unnecessary intervention.”

He changed career, moving into public health and conducting epidemiological research into TB and occupational lung diseases. He became especially sceptical about screening and, as Wikipedia puts it, “his ground-breaking paper on validation of medical screening procedures, published jointly with fellow epidemiologist Walter Holland in 1971, became a classic in the field”.

Cochrane recalled in his 1972 book Effectiveness and Efficiency: Random Reflections on Health Services being puzzled by a crematorium attendant he met who was permanently, serenely happy. When Cochrane asked why, the attendant said that each day he marvelled at seeing “so much go in and so little come out”. Cochrane suggested that he consider working in the National Health Service.

In Australia we assess how much work we do in hospitals through activity-based funding. Money flows in direct proportion to activity – so many coronary grafts, so many strokes treated. But little attention, at least in routine care, is paid to what we achieve. There are exceptions, but the general assertion holds.

Recently, the Bureau of Health Information in the NSW Ministry of Health made available statewide mortality data for five conditions treated in NSW public hospitals, taking account of variations in severity. Such data begin to fill the blanks in our knowledge about outcomes, and prompt discussion about why these variations occur.

The 1 February edition of The Economist, in an article entitled “Need to Know” (about health outcomes), took up the theme. The article observed that in 2011 Germany’s biggest insurer made outcome data available for all to see.

Among the outcomes, the data showed five-year survival after treatment for prostate cancer was uniform across the nation – 94 per cent. But the data collected by the insurer went further: while the national average for subsequent erectile dysfunction was 76 per cent, at the best-performing clinic it was just 17 per cent. “For incontinence, the average was 43 per cent: the best 9 per cent,” The Economist wrote.

Armed with data such as these, prospective patients can choose where to be treated. The same data form the basis for discussion between those who provide and those who pay for health care.

Once, clinical trials of new cancer drugs were concerned principally with the survival of patients treated versus those not treated with new medications. But they now measure more than life expectancy.

For over 25 years mortality data have been supplemented by quality of life assessments.

But the excellence in clinical trial outcome measurement has not spread to routine care.

So much goes in, but what comes out?
In the US, health care expenditure is a huge worry for individual citizens, for government (which spends as great a proportion of GDP/GNP on health as ours does), and for industry, which pays for much of its employees’ health insurance. In response, comparative effectiveness research – CER – has recently evolved.

Wikipedia advises that “The Institute of Medicine committee has defined CER as ‘the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition, or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels’.”

There are now many agencies and individuals in the US committed to CER, including Dr John Wennberg at the Dartmouth Institute for Health Policy and Clinical Practice.

He and his colleagues have studied variations in medical practice across the US with a view to ironing out the wrinkles caused by inferior care.

They claim that 30 per cent of health care costs could be saved by correcting care that falls below expected outcomes.

Australia has not been entirely idle, and we have led the world in aspects of outcome measurement in relation to drugs.

Since 1953, Australia’s Pharmaceutical Benefits Advisory Committee (PBAC) has constructed the formulary of publicly funded medicines. Since 1990, the PBAC has made cost and effectiveness (outcome) assessment a mandatory prelude to listing. Pricing and other political decisions follow, but solid outcome data are the necessary foundation. Others are now following our example.
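The cost-and-effectiveness assessment the PBAC requires is, at its core, an incremental comparison: the extra cost of a new medicine divided by the extra health it delivers. A minimal sketch of that arithmetic – the incremental cost-effectiveness ratio (ICER) – using entirely hypothetical figures:

```python
# Incremental cost-effectiveness ratio (ICER): the extra cost of a new
# treatment per extra unit of health gained, here measured in
# quality-adjusted life years (QALYs). All figures are hypothetical.

def icer(cost_new, cost_old, qalys_new, qalys_old):
    """Extra dollars spent per extra QALY gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical example: a new drug costs $45,000 and yields 5.5 QALYs;
# the existing therapy costs $15,000 and yields 4.5 QALYs.
ratio = icer(45_000, 15_000, 5.5, 4.5)
print(f"${ratio:,.0f} per QALY gained")  # $30,000 per QALY gained
```

A funder can then compare that ratio against a willingness-to-pay threshold when deciding whether to list the medicine; the point is that the decision rests on measured outcomes, not activity alone.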

When we have a health care system that is fully connected electronically, the task of measuring outcomes and using them to good effect in managing the system will be far easier. Outcome data are critical to achieving real financial efficiency. They can be used to help us stop doing things that achieve nothing, or cause harm, and instead use the resources saved for clinical care with good outcomes. 

But assessing outcomes, as the prostate surgery example demonstrates, extends well beyond financial efficiency and, indeed, beyond life expectancy. When we can confidently explain what we achieve with what we do – the quantity and quality of life gained – patients are empowered to make choices.