How Explainable AI Could Revolutionize Medicine
In studies, explainable AI is showing its potential for informing clinical decisions as well — flagging high-risk patients and letting doctors know why that calculation was made. University of Washington researchers have used the technology to predict whether a patient will have hypoxemia during surgery, revealing which features contributed to the prediction, such as blood pressure or body mass index. Another study used explainable AI to help emergency medical services providers and emergency room clinicians optimize time — for example, by identifying trauma patients at high risk for acute traumatic coagulopathy more quickly.
A crucial benefit of explainable AI is its ability to audit machine learning models for mistakes, said Su-In Lee, PhD, a computer scientist who led the UW research.
For example, a surge of research during the pandemic suggested that AI models could predict COVID-19 infection based on chest x-rays. Dr. Lee's research used explainable AI to show that many of the studies were not as accurate as they claimed. Her lab revealed that many models' decisions were based not on pathologies but rather on confounding features such as laterality markers in the corners of x-rays or medical devices worn by patients (like pacemakers). She applied the same model-auditing technique to AI-powered dermatology devices, digging into the flawed reasoning in their melanoma predictions.
Explainable AI is beginning to affect drug development too. A 2023 study led by Dr. Lee used it to explain how to select complementary drugs for acute myeloid leukemia patients based on the differentiation levels of cancer cells. And in two other studies aimed at identifying Alzheimer’s therapeutic targets, “explainable AI played a key role in terms of identifying the driver pathway,” she said.
Currently, US Food and Drug Administration (FDA) approval doesn't require an understanding of a drug's mechanism of action. But the issue is being raised more often, including at December's Health Regulatory Policy Conference at MIT's Jameel Clinic. And just over a year ago, Dr. Lee predicted that the FDA approval process would come to incorporate explainable AI analysis.
“I didn’t hesitate,” Dr. Lee said, regarding her prediction. “We didn’t see this in 2023, so I won’t assert that I was right, but I can confidently say that we are progressing in that direction.”
What’s Next?
The MIT study is part of the Antibiotics-AI project, a 7-year effort to leverage AI to find new antibiotics. Phare Bio, a nonprofit started by MIT professor James Collins, PhD, and others, will do clinical testing on the antibiotic candidates.
Even with the AI’s assistance, there’s still a long way to go before clinical approval.
But knowing which elements contribute to a candidate’s effectiveness against MRSA could help the researchers formulate scientific hypotheses and design better validation, Dr. Lee noted. In other words, because they used explainable AI, they could be better positioned for clinical trial success.
A version of this article appeared on Medscape.com.