Future of Health
Covid-19 exposed the one huge reason doctors remain so crucial in the era of A.I.
Taking a step closer to the mirror, you brush the raised patch of skin with a finger, and ice settles in your stomach.
Could it be dangerous?
It’s the type of question that over 100,000 people will ask themselves every year when scrutinizing what could be melanoma — the deadliest form of skin cancer. Every year, more than 7,000 people in the U.S. alone will die from cancerous moles like these.
“Am I just using it because it's futuristic and because it sells?”
Easing the fear is an app that checks for melanoma with artificial intelligence.
Unlike doctors, these apps have no office hours and can make their judgments without ever touching you. But this convenience can also be a detriment.
Bilal Mateen, a clinical data science fellow at the Alan Turing Institute in London, has one foot in the world of technology and the other in public health. From this perspective, Mateen sees an unfulfilled promise:
“The story that everyone loves is the computer vision tool that could diagnose melanoma almost perfectly because all the pictures with melanoma also had a ruler in them,” Mateen says. “Context matters.”
From cancer screenings to Covid-19 diagnoses, and even intelligent robot-assisted surgeries, doctors and patients are already turning to machine learning and artificial intelligence to diagnose human ailments. But while these techniques are impressive, Mateen says they’re far from perfect.
Is A.I. better than a human doctor?
The idea behind the push for more A.I. in medicine is simple: delegating comparatively easy tasks to A.I. — like differentiating a possible melanoma from a benign mole — will free up doctors to spend more time communicating with patients or solving trickier medical problems.
It’s not uncommon these days to see breathless coverage or discussions online of how such A.I. “doctors” are out-pacing humans when it comes to quick and accurate diagnoses. But while this may be true in some cases, Mateen says the A.I. has some pretty severe limitations.
“A.I. made consistent errors when it came to accurately predicting Covid cases.”
A.I. is more successful with computer vision — comparing, say, a photo of a possible melanoma against a library of known melanoma images — than with more abstract datasets filled with everything from numerical vital signs to audio recordings of patients’ coughs.
How much room A.I. still has to grow became especially evident during Covid-19, Mateen says. As a co-editor of an Alan Turing Institute report examining how A.I. failed to truly assist doctors during the height of the Covid-19 pandemic, he saw how A.I. made consistent errors.
For example, when given a mix of numerical and image datasets, A.I. models failed to accurately predict Covid cases, instead latching onto spurious shortcuts:
- Using different hospital fonts to predict cases
- Using x-ray scans of only children's chests to learn what healthy lungs look like
- Deciding a patient's position (lying down or sitting up) was a predictor of Covid because sick patients were more likely to be scanned lying down
And even if computer vision-based A.I. does have more success, a review of the methods behind these studies, published in February 2020 in The BMJ, suggests the way studies compared human and A.I. doctors may be highly biased.
Do we need A.I. medicine?
This competition between human doctors and A.I. in terms of who is more accurate is only one part of the puzzle. Another big question fewer people are asking is whether or not we need this kind of A.I. in the first place.
“I like to think of it as one of the many sexy technologies that exist, which is why everyone seems to want a piece of it — it’s very futuristic,” says Mateen. “It doesn’t have the entire context of the person in front of it.”
Contextual details remain critical — The treatment patients ultimately receive from human health workers is what will largely determine their overall health outcomes, Mateen says. Contextual details, like a patient's medical history, their insurance, or the support they receive from friends and family, are all things A.I. is not prepared to understand the way a human doctor can.
“It shouldn't require a medical degree or a Ph.D. to be able to have that conversation with your doctor.”
Whether A.I. can provide more ultimate good for a patient than a team of doctors armed with their own modern tools (like predictive analysis models, which Mateen says fall short of true A.I.) remains very much an open question, Mateen says.
“You have to think of it as a somewhat blunt tool,” Mateen explains. “It may tell you what the best treatment is for the patient in front of you based on their predicted length of life if you gave them type A of chemo versus chemo B, whereas figuring out whether that is truly the best thing [overall] for the patient in front of you is probably not a task for what is effectively a predictive model.”
“The question that people often fail to ask is ‘do I need the extra step to A.I.?’” Mateen continues. “Or am I just using it because it's futuristic and because it sells?”
Should we trust medical A.I.?
Despite his tepid approach to A.I. in medicine, Mateen says this doesn’t mean a patient's knee-jerk reaction to the technology should be distrust.
However, if the Covid-19 pandemic has shown us anything, it’s that establishing trust in new medical innovations can be extremely challenging.
Establishing trust is something that Romain Cadario, an assistant professor of behavioral science and marketing at Erasmus University, has studied in depth.
In a study published this June in the journal Nature Human Behaviour, Cadario and colleagues found that part of what holds patients back from accepting A.I. healthcare is the technology’s infamous “black box.” The “thought process” a machine learning model or neural network takes to reach a decision is often closed off to its creators and users alike.
“How well do you understand how a doctor looks at your mole and decides whether it's cancerous or not?” Cadario asks, contrasting that scenario with an A.I. doctor. “What we find is that people feel they understand how doctors work better, and this drives their preference to utilize a doctor rather than an A.I.”
There’s no easy fix to improve people’s trust in A.I.
There’s no easy fix to improve people’s trust in A.I. — or their doctors — but Cadario and Mateen both agree that improving transparency and being as trustworthy as possible are key to moving that needle.
As for Mateen, he’d still readily welcome elements of the technology in his care, though he may be one of those patients who asks to sift through the code first.
“I'm one of those people that probably gets a little bit too involved in their own care if given the opportunity,” Mateen says. “If someone were to put in front of me the fact that a model predicted that I need A over B I would be very interested to know where it was developed, who it was developed on, and how representative that population is to me as a person.
“We want to move toward an informed choice model of care,” he adds. “It shouldn't require a medical degree or a Ph.D. to be able to have that conversation with your doctor.”
This path toward increased understanding will not only help patients communicate better with their doctors but also help them understand the real benefits — or lack thereof — of their panic-downloaded A.I. skin cancer app.