Three years ago, the Washington Post reported on IBM’s ambitions for its much-hyped artificial intelligence system: Watson.
“The idea is to use Watson’s increasingly sophisticated artificial intelligence to find personalized treatments for every cancer patient by comparing disease and treatment histories, genetic data, scans and symptoms against the vast universe of medical knowledge,” the newspaper wrote, describing how the plan was being implemented at the University of Texas’ MD Anderson Cancer Center.
Individual treatment plans that would take a team of human researchers weeks to formulate could be prepared by Watson in a matter of seconds, the article said.
Three years is a long time in the fast-moving world of artificial intelligence.
MD Anderson withdrew from the Watson collaboration in 2017, and recent coverage of the system’s performance in cancer care has been less than positive. It culminated in a report on the health website Stat News in July this year, which claimed internal IBM documents showed Watson’s treatment suggestions were often inaccurate and sometimes dangerous.
For example, Watson reportedly suggested giving cancer drug bevacizumab to a 65-year-old man with lung cancer, apparently overlooking the risk of haemorrhage in a patient who already had severe bleeding.
On the face of it, that sounds like a fixable system error, and one you would hope clinicians would catch before the computer’s recommendations were implemented.
But, more than anything, the Watson saga shows how far we still have to go in working out how medicine can best harness the undoubted power of artificial intelligence, and the big data it rests on, while minimising the equally undoubted risks.
Technological advances may bring great benefits, but they also, always, bring loss.
The popularisation of the motor car gave us freedom and mobility. It also marked, for most Western humans, the end of any connection to another large mammal species.
The abrupt termination of our millennia-old relationship with the horse must have had profound consequences including, perhaps, a reduction in our capacity for empathy with other species.
As we increasingly live our lives in concert with artificial intelligence, we might wonder how this new intimacy with an alien mind could change us yet again.
In medicine and elsewhere, will it make us lazier, less curious, more willing to rely on the algorithm? Or will it be a magnificent tool to expand the capacity and reach of our ever-searching minds?
Chances are it will be a bit of both.
It’s often said that the one thing computers cannot replace is a doctor’s intuition: the hunch that something is just not right.
In August this year, computer scientists from the Massachusetts Institute of Technology reported research showing doctors’ gut feelings played a big role in determining how many tests they ordered for patients in intensive care (though the researchers did not apparently look at how this translated into outcomes for patients).
“There’s something about a doctor’s experience, and their years of training and practice, that allows them to know in a more comprehensive sense, beyond just the list of symptoms, whether you’re doing well or you’re not,” one of the researchers said. “They’re tapping into something that the machine may not be seeing.”
Not yet, anyway. It’s perhaps a mistake to think intuition is a uniquely human quality, one that could not be taught to a machine.
In examining the decision-making processes of our human brains, Argentinian neuroscientist Mariano Sigman has concluded intuition is not some mysterious, magical quality but an effective cognitive strategy for choosing between various options.
“For those questions that matter most to us, should we trust our hunches or our rational deliberations?” he asks in his book The Secret Life of the Mind. “The answer is conclusive: it depends.”
For simple decisions with few parameters – which brand of toothpaste to buy, for example – we’re better off using our conscious mind to assess the pros and cons, he writes.
But for more complex choices, the situation is reversed.
“The conscious mind is fairly limited in size and can hold little information,” Sigman writes. “Our unconscious, however, is vast.
“In situations where we can mentally evaluate all the elements at the same time, the rational decision is more effective, and therefore better … when there are many more variables in play than our conscious mind can juggle at once, our unconscious, rapid, intuitive decisions are more effective …”
In this view, the unconscious mind is our internal big data processor, producing hunches and gut feelings we are unable to rationally explain but are nonetheless based on experience and learning.
If it’s true that clinical intuition results essentially from the unconscious mind’s capacity to process larger datasets than the conscious mind can manage, there’s no obvious reason why a computer shouldn’t be able to emulate, and eventually outperform, humans on even that front.
What is less clear is whether an artificial intelligence system could ever display the creativity of humans at their best.
As we increasingly rely on clever machines, what could we lose of ourselves?
Radiologists might no longer need to pore over thousands of scans, relying instead on computer programs to alert them to anomalies. This would undoubtedly make diagnostic services more efficient and accessible to a greater number of people.
But will a computer program ever have that human capacity to perceive something unexpected in the data and utter the words that have led to so many scientific innovations: “That’s funny …”?
Jane McCredie is a Sydney-based health and science writer and editor.
The statements or opinions expressed in this article reflect the views of the authors and do not represent the official policy of the AMA, the MJA or MJA InSight unless that is so stated.