Doctors spend a lot of time learning facts but don’t get much formal education on how to think.
Ideally, this skill is learnt, or "picked up", during the years of clinical teaching at the bedside, in the clinic and in the operating theatre.
Errors in clinical interpretation and reasoning can occur at any point during patient care. These are often due not to a lack of knowledge or competence, but to the decision-making processes of humans in situations that are clinically complex, uncertain, and pressured by time and emotion.
In the latest issue of the MJA, an interesting study examines the clinical reasoning skills of junior and senior emergency medicine staff by testing how accurately they interpret electrocardiographs (ECGs). Participants were given either no clinical history, a history with a positive bias towards the correct diagnosis, or a history with a negative bias towards an alternative diagnosis.
Overall, doctors made the correct diagnosis about half of the time. Worryingly, this 52% correct rate may be the "best-case scenario", as fewer than half of the doctors approached agreed to participate in the study. One might surmise that these results therefore reflect the doctors who perceive themselves as most competent at interpreting ECGs.
Accuracy was affected by knowing the clinical history and by the seniority of the clinician — results that were to be expected. In reality, however, the pressures associated with the modern emergency department, including overcrowding and the 4-hour rule, make it likely that the ECG is taken before the patient is seen by a doctor and then interpreted by a junior doctor without the benefit of the history to inform the decision.
The authors did not determine the impact of incorrect diagnoses. If ventricular tachycardia is mistaken for supraventricular tachycardia, the consequences could be severe. Likewise, if a diagnosis of myocardial infarction is made in a patient with pericarditis, it is possible that the patient would undergo invasive, potentially dangerous and unnecessary investigations.
In reality, if a doctor is unsure of the diagnosis, one hopes that he or she would be able to consult and collaborate with colleagues about the diagnosis, with knowledge of the patient history and clinical examination.
This study also invites consideration of the risks and costs associated with the inappropriate use of investigations to screen for illness and to guide management, particularly when a test has low sensitivity, like the ECG, or generates false positives when applied to patients with a low pretest probability of disease. For example, many a clinical dilemma has arisen when a D-dimer test, done as part of a routine workup, returns a positive result in a patient for whom pulmonary embolism or deep vein thrombosis was not even part of the differential diagnosis.
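The interplay of sensitivity, specificity and pretest probability described above can be sketched with Bayes' theorem. The figures below are assumed purely for illustration (they are not drawn from the article or from any validated study of D-dimer performance), but they show why a positive result in a low-pretest-probability patient carries little diagnostic weight.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' theorem: probability of disease given a positive test result."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Assumed illustrative figures: a sensitive but non-specific test,
# ordered in a population where only 5% actually have the disease.
ppv = positive_predictive_value(sensitivity=0.95, specificity=0.50, prevalence=0.05)
print(f"PPV = {ppv:.0%}")  # roughly 9%: most positive results are false positives
```

With these assumed numbers, about nine in ten positive results are false positives, which is the arithmetic behind the "clinical dilemma" of an unexpected positive D-dimer.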
The statistics about errors in medical reasoning are sobering. The correct diagnosis is missed or delayed in up to 14% of acute admissions. If the diagnosis is correct, up to 43% of patients do not receive recommended care, and about $800 billion — nearly one-third of all health care spending — is wasted on unnecessary diagnostic tests, procedures and extra days in hospital.
Wilson and colleagues’ landmark analysis of the cause of adverse events in the Australian health care system reported that almost half of reported adverse events involved errors of reasoning.
Clearly, we overestimate our ability to correctly deploy tests, interpret test results, and act appropriately on the results of clinical interactions and subsequent investigations.
The study involving ECG interpretations confirms the need to actively critique our methods of clinical reasoning, and to teach these skills.
Formal courses can provide a theoretical background, but clinical reasoning still needs to be taught in the curriculum at the "bedside". We all make mistakes, but if we don't understand why they occurred, we are likely to repeat them.
Dr Annette Katelaris is the editor of the MJA.
This article is reproduced from the MJA with permission.
Posted 6 August 2012