27 Apr, 2020 15:24

AI healthcare shouldn’t get more trust than a self-driving car; it’s useful amid the pandemic emergency, but speed is not always good

AI is now being used to triage Covid-19 patients. But we must bear in mind that the pandemic is no reason to self-isolate our critical faculties and accept AI in healthcare as the future without question.

The Covid-19 pandemic has turned into a gateway for the adoption of AI in healthcare. Staff shortages and overwhelming patient loads have fast-tracked promising new technologies, particularly AI tools that can speed up triage. But this accelerated process carries dangers: regulatory oversight, however much it has slowed innovation in healthcare over the years, remains critical. These are not harmless box-ticking standards; this is about life and death, and oversight and rigorous testing are vital.

The Royal Bolton Hospital in the UK provides one example. A pre-Covid-19 trial, initiated by Rizwan Malik, the hospital’s lead radiologist, was designed to test whether a promising AI-based chest X-ray system could speed up diagnosis. Patients were having to wait several hours for a specialist to examine their X-rays; an initial reading from the AI tool, it was hoped, would dramatically shorten that wait. After four months of reviews by multiple hospital and NHS committees and forums, the proposal was finally approved. But the trial never took place, because the Covid-19 pandemic struck.

In the face of the pandemic, regulatory procedures were jettisoned. Within weeks, the AI-based X-ray tool was retooled to detect Covid-19-induced pneumonia. Instead of a trial to double-check human diagnosis, the technology is now performing initial readings.
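
To make concrete what an “initial reading” involves, here is a minimal sketch, in Python, of how such a step might sit in a triage workflow. Everything in it is a hypothetical assumption: the model, the 0.8 threshold, and the queue logic are illustrative, not details of the Bolton system.

```python
# Hypothetical sketch of an AI "initial read" feeding a triage queue.
# `model` is assumed to be a callable returning the probability that a
# chest X-ray shows Covid-19 pneumonia. Nothing here diagnoses on its own;
# it only reorders the queue that a radiologist will work through.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Scan:
    priority: float                      # negated probability, so highest risk pops first
    patient_id: str = field(compare=False)

def triage(scans, model, urgent_threshold=0.8):
    queue = []
    for patient_id, image in scans:
        p = model(image)                 # the AI's initial reading: a probability in [0, 1]
        if p >= urgent_threshold:
            print(f"Flag {patient_id} for immediate human review (p={p:.2f})")
        heapq.heappush(queue, Scan(-p, patient_id))
    # Radiologists still read every scan, just in risk order.
    return [heapq.heappop(queue).patient_id for _ in range(len(queue))]
```

The point of the sketch is that the clinician remains the decision-maker; the algorithm only changes the order in which scans are seen, which is where the speed gain comes from.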

If this speeds up diagnosis, that is to be welcomed. Many more healthcare facilities around the world are turning to AI to help manage the coronavirus pandemic. This has ignited an AI healthcare ‘arms race’ to develop new software or upgrade existing tools in the hope that the pandemic will fast-track deployment by side-stepping pre-Covid-19 regulatory barriers.

Covid-19 has certainly accelerated the adoption of AI in healthcare, but the field was booming even before the pandemic. According to the British Journal of General Practice, in 2016 healthcare AI projects attracted more investment than AI projects in any other sector of the global economy.

And it is not hard to see why. In specific areas, AI tools like machine learning systems have the capacity to simultaneously observe and rapidly process an almost limitless number of inputs, far beyond human capability. Furthermore, these systems learn from each incremental case and can be exposed, within minutes, to more cases than a clinician could see in many lifetimes. AI-driven applications can outperform dermatologists at correctly classifying suspicious skin lesions. AI is also being trusted with tasks where experts often disagree, such as identifying pulmonary tuberculosis on chest radiographs.
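
That “learning from each incremental case” property can be illustrated with a minimal sketch using scikit-learn’s partial_fit on synthetic data. The features and labels below are made up; a real system would train a deep network on labelled medical images.

```python
# Minimal sketch of case-by-case (incremental) learning on synthetic data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier(loss="log_loss")        # a simple logistic-regression-style learner
classes = np.array([0, 1])

for _ in range(10_000):                       # each iteration stands in for one new case
    x = rng.normal(size=(1, 20))              # stand-in for one case's features
    y = np.array([int(x[0, 0] > 0)])          # stand-in for the confirmed diagnosis
    model.partial_fit(x, y, classes=classes)  # the model updates on this single case

X_test = rng.normal(size=(200, 20))
y_test = (X_test[:, 0] > 0).astype(int)
print(f"accuracy after 10,000 cases: {model.score(X_test, y_test):.2f}")
```

Ten thousand synthetic “cases” run in a fraction of a second here, which is the sense in which such a system can be exposed to more cases in minutes than a clinician sees in many lifetimes.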

AI can’t replace doctors 

What this shows is that AI in healthcare excels at well-defined tasks with clearly defined inputs and binary outputs that can be easily validated. Such tools can, in short, support doctors, but not replace them.
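
“Binary outputs that can be easily validated” means the tool’s yes/no calls can be checked directly against a reference standard. A minimal sketch, with made-up labels, of the two error rates clinicians ask about:

```python
# Minimal sketch of validating a binary diagnostic output against expert
# ground truth. The labels are illustrative, not real patient data.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)     # sensitivity, specificity

truth      = [1, 0, 1, 1, 0, 0, 1, 0]         # expert reference standard
ai_reading = [1, 0, 0, 1, 0, 1, 1, 0]         # the tool's binary calls
sens, spec = sensitivity_specificity(truth, ai_reading)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.75, 0.75
```

Fuzzier clinical questions offer no such clean reference standard, which is precisely why they resist this kind of validation.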

And this is where things get complicated, and where a great deal of caution needs to be exercised. The real problem facing AI in healthcare, and AI-led innovation in general, is identifying exactly which problem is being solved.

The deployment of AI tools in the Covid-19 pandemic has mainly been aimed at supporting poorly resourced or over-stretched services. AI systems are ideally suited to situations where human expertise is a scarce resource, as in many developing, TB-prevalent countries, where a lack of radiological expertise in remote areas is a real problem.

But AI tools are not the same as human intelligence. They are algorithms designed by humans. An AI system, for example, could never replace a surgeon precisely because when the body is cut open, things might not meet pre-programmed expectations. Surgeons need to think on their feet. Algorithms rely on people sitting on their rear ends programming them.

But in many cases, the people creating algorithms for use in real life aren’t the doctors who treat patients. Programmers might need to learn more about medicine; clinicians might need to learn which tasks a specific algorithm is, or isn’t, well suited to.

Many algorithms rest on mathematics that is difficult to deconvolute. They are not transparent, and many of the companies developing them have every interest in keeping them that way to protect their intellectual property. How can a regulator who cannot unpack the inner workings of an algorithm approve a trial that relies on it? And what happens when an AI healthcare tool produces a misdiagnosis that not only puts a patient at risk but affects their ability to get health insurance after treatment?
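
To see why an algorithm can be opaque even to a willing regulator, consider a toy neural network. Its “reasoning” is nothing but arrays of weights, and the only external check is to probe how the output moves as the inputs change. The weights below are random stand-ins, not a trained medical model.

```python
# Toy illustration of algorithmic opacity: the decision lives in raw weight
# matrices that carry no human-readable clinical rules.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(10, 5))    # stand-in "trained" weights, layer 1
W2 = rng.normal(size=(5, 1))     # stand-in "trained" weights, layer 2

def black_box(x):
    h = np.maximum(x @ W1, 0.0)                # hidden layer (ReLU)
    return 1.0 / (1.0 + np.exp(-(h @ W2)[0]))  # sigmoid: a probability, no rationale

x = rng.normal(size=10)                        # one patient's feature vector
print(f"model output: {black_box(x):.3f}")     # a number, with no explanation attached

# The regulator's only handle: perturb each input and watch the output move.
for i in range(10):
    x_probe = x.copy()
    x_probe[i] += 1.0
    print(f"feature {i}: output shifts by {black_box(x_probe) - black_box(x):+.3f}")
```

Real diagnostic networks have millions of such weights, and commercial ones cannot even be probed this freely; the regulator is left judging behaviour, not mechanism.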

Privacy and misdiagnosis remain areas fraught with risk and difficulty. The biggest impediment to AI’s widespread adoption remains the public’s hesitation to embrace an increasingly controversial technology. And for good reason: being diagnosed by a machine or a computer interface is not going to build trust.

Dehumanising healthcare through AI will be resisted for a very good reason: doctors are human, computers are not. Healthcare is not an exact science and much of it cannot be reduced to algorithmic certainty. Instincts and experience are more important. As Dr. Lisa Sanders, an associate professor at the Yale University School of Medicine, the inspiration behind the Netflix docuseries Diagnosis, puts it: diagnosing a patient is a “conference between two experts… I am the expert on bodies, how bodies work, how bodies don’t work and what we can do about it. What the patient is the expert on is that body and how that body feels… There is no one who can tell you how the patient feels except the patient.”

There is no doubt that AI can add enormously to the future of healthcare, and its deployment in the Covid-19 crisis reveals some of that potential. But it needs careful consideration, because its implications go way beyond healthcare. Emergencies require speed, but speed can produce bad outcomes. Like anything new, AI healthcare requires constructive scepticism, not blind faith.

The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.
