Culture

AI is helping doctors make decisions but patients seldom know about it

Should patients know that an algorithm is making recommendations about their treatment?


A new report in STAT News reveals that doctors and clinicians around the U.S. are making decisions about patient care based in part on recommendations from algorithms. Many hospitals don't tell their patients about the computer-based assistance because, they say, the software provides only one signal among several that inform the final decision about care.

Bugs are fine in apps, not medical decisions – Algorithms are making ever more decisions in our lives, from the news articles we see to the music we listen to. But while the downside of a faulty news or music recommendation is low, we can't afford slip-ups from algorithms making more consequential, potentially life-altering decisions. It turns out, however, that these medical support algorithms aren't subject to FDA approval the way new drugs are, even though both can hugely affect people's health.

Bias comes standard – STAT News asked several hospital systems to provide data on the accuracy of the algorithms they use to predict sepsis, but all of them either declined or said their evaluations weren't complete yet.

What's concerning here is that artificial intelligence is notoriously fraught with bias. Image recognition algorithms misclassify Black people and other people of color disproportionately more often than they do white people. And the decisions algorithms make are built on inputs from humans, who are fallible and whose blind spots work their way into the software.

ProPublica previously reported that algorithms used in courtroom sentencing, for instance, made recidivism risk assessments based in part on crime rates in a defendant's ZIP code – essentially stacking the deck against certain demographics from the start.

Uncharted territory – Patients generally trust that their doctors are doing the right thing. One patient interviewed for the story said, “I don’t monitor how doctors do their jobs. I just trust that they’re doing it well.” But how can patients trust a doctor who may be relying on an unproven algorithm, and perhaps leaning on it more heavily than they admit? An algorithm could make the wrong call about a patient from a minority demographic, for example, because it wasn't trained on sufficiently representative data.

Doctors are required to obtain informed consent, which presumes the patient has complete information about a diagnosis and treatment. But they're not really required to reveal the sources of the information that shaped that treatment – we expect them, with their years of training, to make the right call. If they're consulting algorithms, though, we're in uncharted territory. Critics hope the field will skew toward transparency going forward and that legislation will emerge to give the technology the scrutiny it needs. Algorithms aren't inherently detrimental; their use could, in fact, introduce efficiencies. But they need to be held to very high standards when they're used to shape something as important as medical care.