Issue 44 / 12 November 2012

AS Australian authorities consider whether to include quality and safety measures in local hospital funding formulae, a UK analysis shows a pay-for-performance program has delivered a small but clinically significant reduction in mortality.

The study, published in the New England Journal of Medicine, found a 1.3% drop in 30-day, in-hospital, risk-adjusted mortality in 134 435 patients admitted to 24 hospitals for pneumonia, heart failure or acute myocardial infarction. Their outcomes were compared with those of 722 139 patients admitted for the same three conditions to 132 other hospitals in England. (1)

The 24 hospitals, all based in the north-west of England, had implemented a quality improvement program developed in the US — the Medicare Premier Hospital Quality Incentive Demonstration (HQID) program. An earlier assessment of that program found improved process-quality measures initially, but a 6-year follow-up found no effect on 30-day mortality. (2)

Australia’s Independent Hospital Pricing Authority (IHPA), the body established in 2011 to oversee the implementation of activity-based funding for hospitals from July this year, confirmed that it was looking into pay-for-performance-type measures.

“The IHPA is working with the Australian Commission on Safety and Quality in Health Care to examine the UK quality pricing policy, as well as other hospital pricing systems from around the world”, IHPA CEO Dr Tony Sherbon told MJA InSight.

“The joint working party will develop a discussion paper summarising the options for Australia, which will be released for public consultation in 2013. This will inform the IHPA and commission board in making a decision on whether to incorporate safety and quality into the pricing framework.”

Associate Professor Ian Scott, director of internal medicine research and associate professor of medicine at the University of Queensland, said there was a push in Australia to pursue pay-for-performance programs.

He said he suspected the formulae for national activity-based funding to hospitals for undertaking clinical work would in time be tweaked to tie at least some portion of the funding to compliance with various quality targets.

While Professor Scott said he was not convinced that this was the best way to achieve quality care, he said of even greater concern were suggestions that deducting funding for non-compliance with quality measures could improve hospital performance.

“Such non-pay for non-performance is the mirror image of pay-for-performance, and I suspect there are some folk who think that is going to be an even more powerful incentive for clinicians to do the right thing”, Professor Scott said.

“Clinicians would prefer carrots rather than sticks, but you probably need a mix”, he said. “In the current fiscal climate, particularly in Queensland, there is a strong desire to achieve better quality at lower cost pretty promptly, and not paying people for inappropriate care is felt to have more of an effect than paying them for appropriate care.”

Professor Scott said it was also important to consider sustainability in any improvements achieved with pay-for-performance programs. He said the initial benefits seen with the US program gradually faded with longer term follow-up. It remained to be seen if the reductions in risk-adjusted mortality shown in this UK study would be sustained over the longer term.

The UK researchers said that participating hospitals had adopted a range of quality-improvement strategies in response to the pay-for-performance program, including the development of new or improved data-collection systems linked to regular feedback about performance to clinical teams.

Compared to the HQID, the UK program had larger bonuses and greater probability of earning bonuses, which “may explain why hospitals made substantial investments in quality improvement”.

“The largest bonuses were 4%, as compared with 2% in the HQID, and the proportion of hospitals that earned the highest bonuses was 25%, as compared with 10% in the HQID”, they wrote.

An accompanying editorial said the contrasting findings for the UK and US programs came down to “striking differences” in how the two programs were implemented. (3)

“In addition, British hospital leadership agreed to invest awarded money internally towards efforts to improve clinical care”, the editorial said. These measures included investment in specialist nurses, new data-collection systems and regular shared-learning events.

– Nicole MacKee

1. NEJM 2012; 367: 1821-1828
2. NEJM 2012; 366: 1606-1615
3. NEJM 2012; 367: 1852-1853



8 thoughts on “Pay-for-performance closer”

  1. Rob.the.physician says:

    NO… first get rid of the “top-heavy” administration, then minimise political interference, and finally take notice of what the ‘clinicians’ say…!!!

  2. Ken Carlile says:

    Not sure about clinical outcomes, but it sounds a great idea to examine the performance of hospital administrators.

  3. Bruce says:

    Great idea and simple with existing comparable precedents.
    It works well in private hospitals and would provoke industry and efficiency in service delivery. A strong effort-reward relationship is one of the leading motivators for job satisfaction.

  4. Anonymous says:

    My worry with this is that it might lead to “gaming” by medical administrators, causing further distraction and confusion for clinical staff. This problem has been raised before in the MJA.

    I’m all for monitoring and maybe providing extra assistance for hospitals that don’t meet performance goals.

  5. Dr Horst Herb says:

    What I have seen repeatedly was that those hospitals receiving DRG funding or similar “performance” pays learn very quickly to play the game and aggressively cook up diagnoses that lead to bonuses. This behaviour also implies that overdiagnosing more serious (and hence more lucrative) conditions will possibly have a “better” nominal outcome (since the outcome would have been good to begin with).

    Anybody who claims that 1000 patients diagnosed with X will have the same prognosis, and the same outcome with the same treatment, as any other random group of 1000 patients diagnosed with X by somebody else is a liar or incredibly lucky to find such a rare coincidence. Most diagnoses are not necessarily correct, and most assessments of severity are highly subjective. Since there is a severe directional bias among participants rather than a normal distribution of such variance, statistical analysis becomes anything from utterly meaningless to outright fraud.

    Given the lack of control (and understanding) of too many variables, such “studies” as the one mentioned above are best ignored and definitely not to be used to get wrong ideas about further political intrusion into a domain politicians and bureaucrats don’t understand at all: medicine.

  6. Rose says:

    We have seen evidence of fraud, such as falsifying waiting times, in the media over the years. Re “… improved data collection …”, I would like to see a data-collection system where patients give feedback on a questionnaire provided at points of service delivery, completed online or even by text, confidentially to an independent body (such as the IHPA?), so that waiting times may include those patients who presented but were never seen, with questions rating the hospital’s performance from the time of initial presentation. The study quoted could have hidden any rate of mortality in patients who were not admitted, or were discharged without a correct diagnosis, after presentation with pneumonia, heart failure or AMI, so it is meaningless in my opinion.

  7. Ned Iceton says:

    In any profession (where the professional person acts on the basis that patients’ or clients’ rights are paramount) any decline into monetary bribery is seriously undermining of all respect and quality. In the days when I worked in the NT Medical Service, this aspect was not understood by bureaucrats at various levels right up to Canberra, basically because they were not accepting that our primary responsibility was to the life and death of our patients, and NOT to them. Life was always more important than money.

  8. Jon Phipps says:

    No, I think such studies are highly suspect and too easily manipulated.
    Our private hospital system works this way and is good for those who can afford it. The public system struggles to achieve the same results and is top-heavy with admin.
