Who’s to blame when a machine botches your surgery?

Illustration: Zack Rosebrugh for Quartz

Medicine is an imprecise art, and medical error, whether through negligence or honest mistake, is shockingly common. Some experts believe it to be the third-biggest killer in the US. In the UK, as many as one in six patients receives an incorrect diagnosis from the National Health Service.

One of the great promises of artificial intelligence is to drastically reduce the number of mistakes made in the world of health care. For some conditions, the technology is already approaching—and in some cases matching and even exceeding—the success rates of the best specialists. Researchers at the John Radcliffe Hospital in Oxford, for instance, claim to have developed an AI system capable of outperforming cardiologists in identifying heart-attack risk by examining chest scans. The results of the study have yet to be published, but if the AI is indeed successful, the technology will be offered, for free, to NHS hospitals all over the UK. And this is just one of the latest in a string of successful medical image-reading AIs, including one that can diagnose skin cancer, another that can identify an eye condition responsible for around 10% of global childhood vision loss, and a third that can recognize certain kinds of lung cancer.

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

“We cannot have the ‘move-fast-break-things’ mantra of Silicon Valley in healthcare,” adds Matthew Fenech, a former NHS doctor and current AI policy researcher at think tank Future Advocacy. “We need to put patients and safety first.” Currently, AIs used in medicine are billed as “decision aids,” meaning they are intended (at least in legal terms) to complement, not replace, a specialist’s opinion. “To an extent this is an immunizing tactic, but really it’s saying we are expecting doctors to challenge decisions,” says Charlotte Tschider, a fellow in health and intellectual property law at DePaul University. “These devices aren’t positioned right now as supplanting knowledge.”

In turn, this means the health care provider using the AI (typically a physician) remains liable for anything that might go wrong in all but a select number of cases. “Just because terrible things happen doesn’t mean the doctor has erred,” says Froomkin. “They might have, but that alone doesn’t prove it.”


The first of those exceptions is when the physician (or, in this case, the AI) acted as any other reasonable physician would have in the same position. This is what is known as the “standard of care,” and is “what any reasonable doctor would be expected to do,” Froomkin explains. “If a doctor doesn’t follow this then they should be expected to justify and explain their decision.” For example, an algorithm might assess a patient, diagnose a bacterial infection, check the patient’s records, and prescribe a routine antibiotic. If that patient turns out to be allergic to that antibiotic, but the allergy was unknown to the patient and wasn’t in their records, the AI should be blameless—it couldn’t have known about the allergy any more than a competent human doctor would have.

Another category of scenarios is when the fault demonstrably lies with the developer of the product, much as car manufacturers are liable for faulty seatbelts or airbags, or the makers of a pacemaker are liable for a malfunction (assuming it was properly implanted). Consider, for example, an AI that is found to misread blood pressure levels any time the algorithm is run on a weekend.

But as long as there’s a doctor or another health professional in the loop to exercise their judgment, we’re unlikely to find ourselves in this latter situation, says Froomkin. The legal expectation is that a doctor will weigh the AI’s results and recommendations alongside other evidence, such as traditional diagnostic tests, medical history, and perhaps the advisory opinion of other doctors. “We do expect a degree of due diligence,” says Tschider. “It’s a tool, after all. Say you’re a surgeon and you look at your scalpel to find it’s all bent and messed up. You’re not just going to cut a patient open without thinking. You have responsibility there too.”

And if found negligent, it is the doctor who will be punished, and potentially banned or suspended from practice. “If [an AI] is used as decision support, then the doctor is 100% liable,” Froomkin says.

It will get tricky, Froomkin admits, when, or if, an AI becomes the standard of care. “The safe thing for the doctor to do then is to go with the standard,” he says. “Obviously if it says to cut off the patient’s head you wouldn’t do that, but it does put doctors in a tough position. In the future, we might want to consider making changes to liability law to protect doctors who overrule machines.”

Determining the levels of legal responsibility for AIs as a whole is a fairly new area, and one that has yet to be seriously tested in court. What’s more, in a health care context, AIs’ current status as “decision aids” makes it difficult for anyone to test their medical liability in the court system. “At the moment, it all looks quite uncertain and up in the air,” says Nicholson Price, an assistant professor of law at the University of Michigan. The first serious legal challenges over the liability of artificial intelligence are most likely to involve autonomous vehicles, especially as they become more common on the roads.


A side effect of the way machine-learning algorithms work is that many function as black boxes. “There’s an inherent opacity about what exactly can and cannot be known with these systems,” Price says. In other words, it’s impossible to know precisely why an AI has made the decision it has—all we can ascertain is its conclusion, and that its conclusion is based on the information put into it. Add the fact that many algorithms (and the data used to train them) are proprietary, and it becomes impossible for a health care professional to assess the reliability of the “diagnostic aid” they’re using.

This opacity raises a number of questions. For example, how do we determine at what point an AI’s error crosses over from an unfortunate (yet inevitable) medical mistake to an unacceptable, possibly negligent one? And, can doctors or health care institutions using the AI be truly held liable for what might go wrong when they don’t even know the inner workings of the tool?

One solution would be to hold the AI system itself responsible in all cases, by granting it a kind of legal personhood. This would be tricky and “a little weird,” says Price. For starters, how does one punish or reprimand a machine? Machines don’t exactly have bank accounts or earn paychecks, so monetary penalties mean nothing. Nor do they have bodies to incarcerate. “Maybe someone has come up with an explanation as to why [legal personhood of medical AIs] is useful, but if they have then I certainly haven’t heard of it,” Froomkin says. “What’s in it for any of us? I just don’t see the point. It’s a liability shield. Why would we as a society want that? Personhood is just a legal fiction.”

That said, their manufacturers certainly do have both bank accounts and paychecks. “It’s not like [IBM’s] Watson has a bunch of money sitting around,” Price says. But, “IBM does—a lot.”

Holding the company behind an AI accountable would be basically what happens today when manufacturers of medical equipment produce faulty goods or fail to adequately warn users of risks. For example, in October 2017, hundreds of patients in the UK sued the manufacturer of allegedly “defective” hip replacements in one of the largest product-liability group actions in the country’s history. Though the court ruled against the patients in May 2018, determining that they “did not suffer an adverse reaction to metal debris,” such cases against manufacturers are relatively common. Johnson & Johnson, for instance, is currently at the center of numerous legal battles worldwide over its vaginal mesh implants. The company’s purported failure to conduct proper clinical trials, coupled with aggressive marketing that allegedly downplayed complications resulting from the procedure, has left many women debilitated and in severe pain, often requiring multiple surgeries to try to remove the mesh.

Going after an AI manufacturer in the case of medical error might make sense given the current state of machine learning. In the future, though, it may be feasible to hold the AI itself accountable.


Price speculates that many years from now, it may be possible to license a given AI for medical practice—meaning it could also be “disbarred” after making too many mistakes, much as real physicians can be for negligence. “It’s a little like driverless cars,” he says. “You can imagine a scenario where an autonomous vehicle has to register with the [Department of Motor Vehicles], and if it makes too many mistakes, then its license is taken away. Similarly, you can imagine a similar set up where we treat an algorithm as we would, say, an oncologist.”

Whatever regulations are implemented, it’s vital that they both preserve trust in the medical establishment and facilitate the swift development and rollout of safe, effective AI systems. More widespread use of AI in medicine could mean you are less likely to end up with scissors, sponges, or scalpels left inside you after surgery. But in the rare case that does happen, we still need to know who’s at fault, because you will inevitably want them taken back out.


This story is one in a series of articles on the impact of artificial intelligence on health care and medicine.