Healthcare systems have adopted artificial intelligence in fits and starts. For years, emergency rooms have haltingly tested AI systems that collect information on patients’ symptoms and medical histories, weigh it against data about similar cases, and make recommendations about who should be rushed in for treatment first. Doctors see the potential but are wary of algorithms that don’t have years of medical training.
But the risk of Covid-19 transmission in ERs, along with shortages of staff and resources, has left some hospitals with no choice. The pandemic has dramatically accelerated the use of AI triage. And crucial as these tools have been in recent months, their rapid adoption comes with risks.
“The healthcare space is relatively conservative,” said Yonatan Amir, CEO of Israeli health tech firm Diagnostic Robotics. In normal times, Amir said, it might take six months for Diagnostic Robotics to close a deal with a major hospital interested in its AI triage tools. But in a three-week span between March and April, the company closed over 40 new contracts. In the first five weeks of the pandemic, its algorithms triaged 2.5 million patients.
“Those are numbers that as a young startup we’re not used to seeing,” Amir said. “In terms of the adoption rate, we’re projecting it’s going to be much higher.”
The change in pace makes sense in the middle of a pandemic. “From a safety perspective, you almost had to have some sort of system where you could triage patients before they came in,” said Bill Fera, a medical consultant with Deloitte, which sells AI triage tools and advises health systems on how to use them. “Eliminating human contact, which was seen before as a barrier or a hindrance, all of a sudden became a benefit.”
While business has boomed for AI vendors, some health systems have already been developing in-house triage tools slowly and carefully, wary of introducing a new source of medical mistakes. The Mayo Clinic, for example, has spent the last three years researching an emergency room triage algorithm that can assess a patient’s symptoms, recommend tests the doctors should run, and suggest possible diagnoses. It’s still working out the kinks.
Daniel Cabrera, an associate professor of emergency medicine at the Mayo Clinic, says the institution is being cautious because there are clear risks in letting algorithms make suggestions. “There’s a danger that providers will follow the recommendations from the AI blindly, without applying any critical assessment to those recommendations,” he said.
AI sellers acknowledge that their machines can make mistakes—but argue that they’ll make fewer mistakes than humans might. “It’s a way of augmenting the capability of very stressed and tired physicians,” said Amir, the Diagnostic Robotics CEO.
Deloitte’s Bill Fera put it more bluntly, pointing to research from Johns Hopkins University that suggests medical errors are the third leading cause of death in the US. “So to the idea that machines are going to come in and do this worse,” he said, “there’s some room for improvement, I’ll put it that way.”
But Cabrera says that AI systems aren’t simply less fallible versions of human doctors. To be sure, they’ll never miss a key detail because they’re distracted, or write down the wrong treatment plan because they’re tired. But they’ll make other kinds of mistakes that healthcare workers never would. He calls these mistakes of context.
Cabrera gave an exaggerated example to illustrate his point: A patient walks into the ER with a knife sticking out of his chest. His main complaint is a stabbing chest pain. But medical records show he’s also a smoker with a history of high cholesterol. An AI system might infer, based on his symptoms and medical history, that he’s having a heart attack and recommend chest X-rays. A human doctor, on the other hand, would immediately begin treating his stab wound.
Medical schools and training programs, he said, don’t teach providers how to interact with AI. “You need to have some understanding of how the algorithms work and how the decisions are made, and you need to be prepared to be critical,” Cabrera said. “For some percentage of patients, we’re going to get the wrong recommendations.”
Even so, Cabrera said that, used correctly, algorithmic triage can save healthcare workers time and help them treat patients faster. He compared a well-run emergency room to a fast-food kitchen or a factory assembly line—the goal is to coordinate a lot of people’s efforts as quickly and smoothly as possible. “We’re not offering the holy grail,” he said. “What we’re trying to do is give tools to humans to make decisions and speed up the entire process.”