THE development of full artificial intelligence could spell the end of the human race, physicist Stephen Hawking warned last week.
“It would take off on its own, and redesign itself at an ever increasing rate”, he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Professor Hawking, who has motor neurone disease, was commenting on new machine learning technology in his computer-generated speech system, which analyses his words to predict what he is likely to say next.
Fears that machines might take over the world have long been the stuff of science fiction.
Stanley Kubrick’s 2001: A Space Odyssey was a classic of the genre and, although we have yet to produce a computer capable of independent thought in the way HAL was in that movie, we are perhaps moving in that direction.
In a world awash with data, machine learning is becoming a powerful tool to help us aggregate and analyse the vast amounts of information we produce.
Where once computers could only deal with neatly defined data sets, they are now moving beyond number crunching to analysing “big data” — petabytes of heterogeneous information from sources as varied as genetic testing, censuses, electronic health records, scientific papers, Google search terms and Twitter.
For the health sector, this brings potential benefits ranging from early warning of disease outbreaks to more comprehensive data on rare conditions and a better understanding of genetic disease risk.
One study, for example, found analysis of Twitter traffic and other informal online news would have provided accurate warning of the cholera outbreak that followed the 2010 Haiti earthquake well ahead of any official announcement.
Informal reports were available up to 2 weeks before government reports, potentially allowing for an earlier response to the outbreak, the researchers from Harvard Medical School said.
Another study examined Wikipedia searches, finding these would not have helped identify the Haiti outbreak, but might accurately predict incidence of influenza, dengue fever and possibly tuberculosis.
The masters of online data analysis at Google have similarly aggregated search terms to create Google Flu Trends and Google Dengue Trends, which estimate current disease activity around the world “in near real-time”.
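The idea behind these surveillance tools can be sketched very simply. The following toy example (not Google’s actual model — the data and numbers here are entirely hypothetical) fits a straight line relating the historical frequency of a flu-related search query to reported case numbers, then uses it to “nowcast” activity for a new week:

```python
# Toy illustration of search-based disease surveillance: fit a simple
# least-squares line relating query frequency to reported flu cases,
# then estimate current activity from the latest query frequency.
# All data below are hypothetical.

def fit_line(xs, ys):
    """Least-squares slope a and intercept b for y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical historical data: query frequency (per 10,000 searches)
# and reported flu cases (per 100,000 people) for past weeks.
query_freq = [2.0, 3.5, 5.0, 6.5, 8.0]
flu_cases = [10.0, 17.0, 24.0, 31.0, 38.0]

a, b = fit_line(query_freq, flu_cases)

# "Near real-time" estimate for a week whose query frequency is 7.0:
estimate = a * 7.0 + b
```

Real systems aggregate many queries and retrain continually, but the core step — mapping informal online signals onto official case counts — looks much like this.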
It’s not just infectious disease surveillance that could be changed by big data — data analysis could also replace clinical trials in some areas of decision making, particularly for rare conditions, where quality information can be hard to come by.
A medical team from Stanford University faced precisely that problem when trying to decide whether to initiate anticoagulation therapy in a child with lupus-related kidney failure.
Although they assessed the patient as being at risk of thrombosis, “we were unable to find studies pertaining to anticoagulation in our patient’s situation and were therefore reluctant to pursue that course, given the risk of bleeding”, they later wrote.
Instead, they turned to a database of paediatric lupus patients and, in just 4 hours, were able to conduct an “automated cohort review” that led them to decide the risk of thrombosis outweighed the risk of bleeding.
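In outline, such an automated cohort review filters a patient database down to cases resembling the index patient, then compares event rates with and without the proposed treatment. The sketch below uses invented records and field names (this is not the Stanford team’s actual system) to show the shape of the calculation:

```python
# Hedged sketch of an automated cohort review. The records and field
# names are hypothetical, purely for illustration.

patients = [
    # nephritis: lupus-related kidney involvement; anticoag: treated?
    {"nephritis": True,  "anticoag": False, "thrombosis": True,  "bleed": False},
    {"nephritis": True,  "anticoag": False, "thrombosis": True,  "bleed": False},
    {"nephritis": True,  "anticoag": False, "thrombosis": False, "bleed": False},
    {"nephritis": True,  "anticoag": True,  "thrombosis": False, "bleed": True},
    {"nephritis": True,  "anticoag": True,  "thrombosis": False, "bleed": False},
    {"nephritis": False, "anticoag": False, "thrombosis": False, "bleed": False},
]

# Step 1: restrict the database to patients similar to the index case.
cohort = [p for p in patients if p["nephritis"]]

def rate(group, event):
    """Fraction of patients in the group who had the event."""
    return sum(p[event] for p in group) / len(group) if group else 0.0

# Step 2: compare the risk being treated against the risk of treating.
treated = [p for p in cohort if p["anticoag"]]
untreated = [p for p in cohort if not p["anticoag"]]

thrombosis_rate = rate(untreated, "thrombosis")  # risk without therapy
bleed_rate = rate(treated, "bleed")              # risk of the therapy
```

A real review would involve far more patients, covariates and statistical care, but the comparison of event rates across a matched cohort is the essential move.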
Could this kind of on-the-spot data analysis become common, as more data become available and computers become ever better at interrogating them?
Canadian doctors argued in JAMA last year that increased use of big data in health care was inevitable, given how quickly and cheaply it could provide large quantities of real-world observational evidence.
There are difficult issues, of course — privacy, quality, and the fact that even vast quantities of observational data cannot replace a properly designed clinical trial when it comes to showing a causal effect.
But what does seem clear is that the machines are increasingly going to be doing some of our thinking for us, despite Professor Hawking’s reservations.
Jane McCredie is a Sydney-based science and medicine writer.