How the New MRSA Antibiotic Cracked AI’s ‘Black Box’

By Sarah Amandolare

“New antibiotics discovered using AI!”

That’s how headlines read in December 2023, when MIT researchers announced a new class of antibiotics that could wipe out the drug-resistant superbug methicillin-resistant Staphylococcus aureus (MRSA) in mice.

Powered by deep learning, the study was a significant breakthrough. Few new classes of antibiotics have been discovered since the 1960s, and this one in particular could be crucial in fighting tough-to-treat MRSA, which kills more than 10,000 people annually in the United States.

But as remarkable as the antibiotic discovery was, it may not be the most impactful part of this study.

The researchers used a method known as explainable artificial intelligence (AI), which reveals the model’s reasoning process, normally hidden from the user inside the so-called black box. Their work in this emerging field could be pivotal in advancing new drug design.

“Of course, we view the antibiotic-discovery angle to be very important,” said Felix Wong, PhD, a colead author of the study and postdoctoral fellow at the Broad Institute of MIT and Harvard, Cambridge, Massachusetts. “But I think equally important, or maybe even more important, is really our method of opening up the black box.”

The black box is generally thought of as impenetrable in complex machine learning models, and that poses a challenge in the drug discovery realm.

“A major bottleneck in AI/ML-driven drug discovery is that nobody knows what the heck is going on,” said Dr. Wong. The models’ architectures are so complex that their decision-making is opaque.

Researchers input data, such as patient features, and the model says what drugs might be effective. But researchers have no idea how the model arrived at its predictions — until now.

What the Researchers Did

Dr. Wong and his colleagues first mined 39,000 compounds for antibiotic activity against MRSA. They fed information about the compounds’ chemical structures and antibiotic activity into their machine learning model. With this, they “trained” the model to predict whether a compound is antibacterial.
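For readers who want a concrete picture of that training step, here is a minimal sketch using RDKit and scikit-learn. The MIT team used deep graph neural networks rather than the simple random forest shown here, and the compounds and activity labels below are made-up placeholders, not the study’s data.

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str) -> list:
    """Encode a compound's structure as a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

# Toy training pairs: (SMILES structure, 1 = inhibited MRSA growth in the assay)
train = [
    ("CCO", 0),                    # ethanol
    ("c1ccccc1O", 0),              # phenol
    ("CC(=O)Oc1ccccc1C(=O)O", 1),  # aspirin, given a fake positive label
    ("CCN", 0),                    # ethylamine
]
X = [featurize(smiles) for smiles, _ in train]
y = [label for _, label in train]

# "Training": the model learns to map chemical structure to antibacterial activity.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```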

Next, they used additional deep learning to narrow the field, ruling out compounds toxic to humans. Then, deploying their various models at once, they screened 12 million commercially available compounds. Five classes emerged as likely MRSA fighters. Further testing of 280 compounds from the five classes produced the final results: two compounds from the same class, both of which reduced MRSA infection in mouse models.
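That ensemble-screening step might look something like the sketch below. It reuses the hypothetical featurize function and antibacterial model from the previous sketch and adds an equally hypothetical set of toxicity models; the actual study deployed several deep models over the 12-million-compound library.

```python
# Hypothetical ensemble screen: keep compounds that the antibacterial model
# scores highly and that every toxicity model scores low. `model` and
# `featurize` come from the previous sketch; `tox_models` would be
# classifiers trained the same way on human-cell toxicity data.
def screen(library, model, tox_models, hit=0.8, safe=0.2):
    selected = []
    for smiles in library:
        fp = [featurize(smiles)]
        if model.predict_proba(fp)[0, 1] < hit:
            continue  # unlikely to inhibit MRSA
        if any(t.predict_proba(fp)[0, 1] > safe for t in tox_models):
            continue  # predicted toxic to human cells
        selected.append(smiles)
    return selected
```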

How did the computer flag these compounds? The researchers sought to answer that question by figuring out which chemical structures the model had been looking for.

A chemical structure can be “pruned” — that is, scientists can remove certain atoms and bonds to reveal an underlying substructure. The MIT researchers used Monte Carlo tree search, an algorithm commonly used in machine learning, to select which atoms and bonds to edit out. Then they fed the pruned substructures into their model to find out which was likely responsible for the antibacterial activity.
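As a rough illustration of the pruning idea, the sketch below greedily strips peripheral atoms for as long as a scoring function (for example, the model from the earlier sketch) still predicts activity. This greedy loop is a simplified stand-in for the authors’ Monte Carlo tree search, not their actual algorithm.

```python
from rdkit import Chem

def prune_once(mol):
    """Yield every valid substructure made by deleting one peripheral atom."""
    for atom in mol.GetAtoms():
        if atom.GetDegree() == 1:  # only terminal atoms are safely removable
            editable = Chem.RWMol(mol)
            editable.RemoveAtom(atom.GetIdx())
            sub = editable.GetMol()
            try:
                Chem.SanitizeMol(sub)
                yield sub
            except Exception:
                continue  # skip chemically invalid prunings

def core_substructure(smiles, score, threshold=0.5):
    """Greedily prune atoms while the model still predicts antibacterial activity."""
    mol = Chem.MolFromSmiles(smiles)
    pruned = True
    while pruned:
        pruned = False
        for sub in prune_once(mol):
            if score(sub) >= threshold:
                mol, pruned = sub, True
                break
    return Chem.MolToSmiles(mol)  # candidate activity-carrying core

# Hypothetical usage with the earlier sketch's model:
# score = lambda m: model.predict_proba(
#     [list(AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048))])[0, 1]
# core = core_substructure("CC(=O)Oc1ccccc1C(=O)O", score)
```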

“The main idea is we can pinpoint which substructure of a chemical structure is causative instead of just correlated with high antibiotic activity,” Dr. Wong said.

This could fuel new “design-driven” or generative AI approaches where these substructures become “starting points to design entirely unseen, unprecedented antibiotics,” Dr. Wong said. “That’s one of the key efforts that we’ve been working on since the publication of this paper.”

More broadly, their method could lead to discoveries in drug classes beyond antibiotics, such as antivirals and anticancer drugs, according to Dr. Wong.

“This is the first major study that I’ve seen seeking to incorporate explainability into deep learning models in the context of antibiotics,” said César de la Fuente, PhD, an assistant professor at the University of Pennsylvania, Philadelphia, Pennsylvania, whose lab has been engaged in AI for antibiotic discovery for the past 5 years.

“It’s kind of like going into the black box with a magnifying lens and figuring out what is actually happening in there,” Dr. de la Fuente said. “And that will open up possibilities for leveraging those different steps to make better drugs.”

How Explainable AI Could Revolutionize Medicine

In studies, explainable AI is showing its potential for informing clinical decisions as well — flagging high-risk patients and letting doctors know why that calculation was made. University of Washington researchers have used the technology to predict whether a patient will have hypoxemia during surgery, revealing which features contributed to the prediction, such as blood pressure or body mass index. Another study used explainable AI to help emergency medical services providers and emergency room clinicians optimize time — for example, by identifying trauma patients at high risk for acute traumatic coagulopathy more quickly.
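Dr. Lee co-authored SHAP, a widely used library for exactly this kind of per-patient feature attribution. The toy example below, with synthetic vitals and a gradient-boosted model, shows the general pattern; it is illustrative only, not the UW hypoxemia system itself.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # synthetic columns: blood pressure, BMI, SpO2
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "hypoxemia" label

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles: each
# number is one feature's contribution to one patient's risk prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # a large blood-pressure term shows why a patient was flagged
```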

A crucial benefit of explainable AI is its ability to audit machine learning models for mistakes, said Su-In Lee, PhD, a computer scientist who led the UW research.

For example, a surge of research during the pandemic suggested that AI models could predict COVID-19 infection based on chest x-rays. Dr. Lee’s research used explainable AI to show that many of the studies were not as accurate as they claimed. Her lab revealed that many models’ decisions were based not on pathologies but rather on other aspects such as laterality markers in the corners of x-rays or medical devices worn by patients (like pacemakers). She applied the same model auditing technique to AI-powered dermatology devices, digging into the flawed reasoning in their melanoma predictions.
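One generic way to surface such shortcut learning is a gradient saliency map, sketched below with an untrained network and a random tensor standing in for a real classifier and a real radiograph. Dr. Lee’s audits used more sophisticated attribution and counterfactual techniques, so this is an illustration of the idea rather than her method.

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # placeholder for a COVID x-ray classifier
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder radiograph

score = model(image)[0].max()  # logit of the top predicted class
score.backward()

# Pixels with large gradients drove the prediction. If they cluster on
# corner laterality markers or pacemaker hardware instead of lung fields,
# the model has learned a spurious shortcut.
saliency = image.grad.abs().max(dim=1).values
print(saliency.shape)  # torch.Size([1, 224, 224])
```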

Explainable AI is beginning to affect drug development too. A 2023 study led by Dr. Lee used it to explain how to select complementary drugs for acute myeloid leukemia patients based on the differentiation levels of cancer cells. And in two other studies aimed at identifying Alzheimer’s therapeutic targets, “explainable AI played a key role in terms of identifying the driver pathway,” she said.

Currently, US Food and Drug Administration (FDA) approval doesn’t require an understanding of a drug’s mechanism of action. But the issue is being raised more often, including at December’s Health Regulatory Policy Conference at MIT’s Jameel Clinic. And just over a year ago, Dr. Lee predicted that the FDA approval process would come to incorporate explainable AI analysis.

“I didn’t hesitate,” Dr. Lee said of her prediction. “We didn’t see this in 2023, so I won’t assert that I was right, but I can confidently say that we are progressing in that direction.”

What’s Next?

The MIT study is part of the Antibiotics-AI project, a 7-year effort to leverage AI to find new antibiotics. Phare Bio, a nonprofit started by MIT professor James Collins, PhD, and others, will do clinical testing on the antibiotic candidates.

Even with the AI’s assistance, there’s still a long way to go before clinical approval.

But knowing which elements contribute to a candidate’s effectiveness against MRSA could help the researchers formulate scientific hypotheses and design better validation, Dr. Lee noted. In other words, because they used explainable AI, they could be better positioned for clinical trial success.

A version of this article appeared on Medscape.com.
