Oncologists Voice Ethical Concerns Over AI in Cancer Care

By Megan Brooks

TOPLINE:

A recent survey highlighted ethical concerns US oncologists have about using artificial intelligence (AI) to help make cancer treatment decisions and revealed some contradictory views about how best to integrate these tools into practice. Most respondents, for instance, said patients should not be expected to understand how AI tools work, but many also felt patients could make treatment decisions based on AI-generated recommendations. Most oncologists also felt responsible for protecting patients from biased AI, but few were confident that they could do so.

METHODOLOGY:

  • The US Food and Drug Administration (FDA) has approved AI tools for use in various medical specialties over the past few decades, and increasingly, AI tools are being integrated into cancer care.
  • However, the uptake of these tools in oncology has raised ethical questions and concerns, including challenges with AI bias, error, or misuse, as well as issues explaining how an AI model reached a result.
  • In the current study, researchers asked 204 oncologists from 37 states for their views on the ethical implications of using AI for cancer care.
  • Among the survey respondents, 64% were men and 63% were non-Hispanic White; 29% were from academic practices, 47% had received some education on AI use in healthcare, and 45% were familiar with clinical decision models.
  • The researchers assessed respondents’ answers to various questions, including whether patients should provide informed consent for AI use and how oncologists would approach a scenario in which the AI model and the oncologist recommended different treatment regimens.

TAKEAWAY:

  • Overall, 81% of oncologists supported obtaining patient consent before using an AI model in treatment decisions, and 85% felt that oncologists needed to be able to explain an AI-based clinical decision model to use it in the clinic; however, only 23% felt that patients also needed to be able to explain an AI model.
  • When an AI decision model recommended a different treatment regimen than the treating oncologist, the most common response (36.8%) was to present both options to the patient and let the patient decide. Oncologists from academic settings were about 2.5 times more likely than those from other settings to let the patient decide. About 34% of respondents said they would present both options but recommend the oncologist’s regimen, whereas about 22% said they would present both but recommend the AI’s regimen. A small percentage would only present the oncologist’s regimen (5%) or the AI’s regimen (about 2.5%).
  • About three of four respondents (76.5%) agreed that oncologists should protect patients from biased AI tools; however, only about one of four (27.9%) felt confident they could identify biased AI models.
  • Most oncologists (91%) felt that AI developers were responsible for the medico-legal problems associated with AI use; fewer than half felt that oncologists (47%) or hospitals (43%) shared this responsibility.

IN PRACTICE:

“Together, these data characterize barriers that may impede the ethical adoption of AI into cancer care. The findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions, as well as decisional responsibility when problems related to AI use arise,” the authors concluded.

SOURCE:

The study, with first author Andrew Hantel, MD, of the Dana-Farber Cancer Institute, Boston, was published last month in JAMA Network Open (https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2816829).

LIMITATIONS:

The study had a moderate sample size and response rate, although demographics of participating oncologists appear to be nationally representative. The cross-sectional study design limited the generalizability of the findings over time as AI is integrated into cancer care.

DISCLOSURES:

The study was funded by the National Cancer Institute, the Dana-Farber McGraw/Patterson Research Fund, and the Mark Foundation Emerging Leader Award. Dr. Hantel reported receiving personal fees from AbbVie, AstraZeneca, the American Journal of Managed Care, Genentech, and GSK.

A version of this article appeared on Medscape.com (https://www.medscape.com/viewarticle/oncologists-voice-ethical-concerns-over-ai-cancer-care-2024a100071i).
