Three years ago, the Washington Post reported on IBM’s ambitions for its much-hyped artificial intelligence system: Watson.

“The idea is to use Watson’s increasingly sophisticated artificial intelligence to find personalized treatments for every cancer patient by comparing disease and treatment histories, genetic data, scans and symptoms against the vast universe of medical knowledge,” the newspaper wrote, describing how the plan was being implemented at the University of Texas’ MD Anderson Cancer Center.

Individual treatment plans that would take a team of human researchers weeks to formulate could be prepared by Watson in a matter of seconds, the article said.

Three years is a long time in the fast-moving world of artificial intelligence.

MD Anderson withdrew from the Watson collaboration in 2017, and recent coverage of the system’s performance in cancer care has been less than positive, culminating in a July report on the health website Stat News claiming that internal IBM documents showed Watson’s treatment suggestions were often inaccurate and sometimes dangerous.

For example, Watson reportedly suggested giving the cancer drug bevacizumab to a 65-year-old man with lung cancer, apparently overlooking the risk of haemorrhage in a patient who already had severe bleeding.

On the face of it, that sounds like a fixable system error, and also one you would hope would be picked up by clinicians before the computer’s recommendations were implemented.

But, more than anything, the Watson saga shows how far we still have to go in working out how medicine can best harness the undoubted power of artificial intelligence, and the big data it rests on, while minimising the equally undoubted risks.

Technological advances may bring great benefits, but they also, always, bring loss.

The popularisation of the motor car gave us freedom and mobility. It also marked, for most Western humans, the end of any connection to another large mammal species.

The abrupt termination of our millennia-old relationship with the horse must have had profound consequences including, perhaps, a reduction in our capacity for empathy with other species.

As we increasingly live our lives in concert with artificial intelligence, we might wonder how this new intimacy with an alien mind could change us yet again.

In medicine and elsewhere, will it make us lazier, less curious, more willing to rely on the algorithm? Or will it be a magnificent tool to expand the capacity and reach of our ever-searching minds?

Chances are it will be a bit of both.

It’s often said the one thing computers cannot replace is a doctor’s intuition, the hunch that something is just not right.

In August this year, computer scientists from the Massachusetts Institute of Technology reported research showing doctors’ gut feelings played a big role in determining how many tests they ordered for patients in intensive care (though the researchers did not apparently look at how this translated into outcomes for patients).

“There’s something about a doctor’s experience, and their years of training and practice, that allows them to know in a more comprehensive sense, beyond just the list of symptoms, whether you’re doing well or you’re not,” one of the researchers said. “They’re tapping into something that the machine may not be seeing.”

Not yet, anyway. It’s perhaps a mistake to think intuition is a uniquely human quality, one that could not be taught to a machine.

In examining the decision-making processes of our human brains, Argentinian neuroscientist Mariano Sigman has concluded intuition is not some mysterious, magical quality but an effective cognitive strategy for choosing between various options.

“For those questions that matter most to us, should we trust our hunches or our rational deliberations?” he asks in his book The Secret Life of the Mind. “The answer is conclusive: it depends.”

For simple decisions with few parameters – which brand of toothpaste to buy, for example – we’re better off using our conscious mind to assess the pros and cons, he writes.

But for more complex choices, the situation is reversed.

“The conscious mind is fairly limited in size and can hold little information,” Sigman writes. “Our unconscious, however, is vast.

“In situations where we can mentally evaluate all the elements at the same time, the rational decision is more effective, and therefore better … when there are many more variables in play than our conscious mind can juggle at once, our unconscious, rapid, intuitive decisions are more effective …”

In this view, the unconscious mind is our internal big data processor, producing hunches and gut feelings we are unable to rationally explain but are nonetheless based on experience and learning.

If it’s true that clinical intuition results essentially from the unconscious mind’s capacity to process larger datasets than the conscious mind can manage, there’s no obvious reason why a computer shouldn’t be able to emulate, and eventually outperform, humans on even that front.

What is less clear is whether an artificial intelligence system could ever display the creativity of humans at their best.

As we increasingly rely on clever machines, what could we lose of ourselves?

Radiologists might no longer need to pore over thousands of scans, relying instead on computer programs to alert them to anomalies. This would undoubtedly make diagnostic services more efficient and accessible to a greater number of people.

But will a computer program ever have that human capacity to perceive something unexpected in the data and utter the words that have led to so many scientific innovations: “That’s funny …”?

Jane McCredie is a Sydney-based health and science writer and editor.


The statements or opinions expressed in this article reflect the views of the authors and do not represent the official policy of the AMA, the MJA or MJA InSight unless that is so stated.



4 thoughts on “‘That’s funny’: the power of human intuition”

  1. Stephen Page says:

    What has been overlooked in the seduction by evidence-based medicine is what Greek philosophers termed phronesis [practical wisdom – what separates the new graduate from the 20-year veteran] – just as there is a pyramid of quality of evidence (that which is seen), so should there also be a pyramid describing the quality of experience (that which is felt – intuition – and that which is gained subconsciously over a career). There could be an MJA review of the benefits (and limitations) of phronesis – I don’t think it has been tackled yet.

  2. Terence Barrington PAUL (MB.,BS. Syd. 1962) says:

    When I was 17 years of age (circa 1954), I read an article on logic in a popular science fiction magazine of the day, “Astounding Science Fiction”. Three types of logic were mentioned:
    1. Aristotelian, or 2-valued logic, which is too simplistic.
    2. n-valued logic, which is what we normally use, for relatively straightforward tasks, nearly all the time.
    3. Gestalt logic. This is otherwise known as ‘thinking outside the box’, ‘not flying by the book’ and, of course, ‘intuition’.
    Normal or n-valued logic operates in progressive steps toward the answer sought, by means of a line of logic using high-probability steps or data.
    Gestalt logic uses many lines of reasoning (logic), based on low-probability data, but each datum or step in each line seems to lead in the same direction. I realized at the time that I’d been doing this unconsciously all my life, but determined to use the method at all times from then on!
    I’ve found that the answer just pops into one’s head and, as often as not, one can only remember a couple of the lines of reasoning used, the rest being lost in the subconscious mind.
    Gestalt logic is an unbelievably valuable tool: and it goes without saying that it will work best in the minds of those people in jobs which require keen observation, or in people who are just naturally keen observers.
    As Professor Julius Sumner Miller said, one must always ask, “Why is it so?”

  3. Dr David De Leacy says:

    Terence,
    Thank you for the SciFi lead-in. Jane, perhaps Star Wars and Darth Vader come into play with regard to AI at the moment. “Jedi mind games only work on weak minds, Luke.” Resist the Empire and work for the good side of the Force! Thank you for another of your “light sabre” reflections, Jane.
    PS Julius was a fun and really educational scientist, wasn’t he, Terence. I also pay homage to Niels Bohr’s insightful aphorism, “predicting is very difficult, especially the future.”

  4. Marcus Aylward says:

    Intuition = good
    Unconscious bias = bad

    One and the same.
