With Proper Training, AI Can Be a Useful Tool in Epilepsy Management

ORLANDO — Experts shed light on the applications, benefits, and pitfalls of artificial intelligence (AI) during the Merritt-Putnam Symposium at the annual meeting of the American Epilepsy Society (AES).

In a session titled “Artificial Intelligence Fundamentals and Breakthrough Applications in Epilepsy,” University of Pittsburgh neurologist and assistant professor Wesley Kerr, MD, PhD, provided an overview of AI as well as its applications in neurology. He began by addressing perhaps one of the most controversial topics regarding AI in the medical community: clinicians’ fear of being replaced by technology.

“Artificial intelligence will not replace clinicians, but clinicians assisted by artificial intelligence will replace clinicians without artificial intelligence,” he told the audience.
 

To Optimize AI, Clinicians Must Lay the Proper Foundation

Dr. Kerr’s presentation focused on providing audience members with tools to help them evaluate new technologies, recognize benefits, and identify key costs and limitations associated with AI implementation and integration into clinical practice.

Before delving deeper, one must first understand basic AI terminology. Without this knowledge, clinicians may inadvertently introduce bias or errors, or fail to understand how best to leverage the technology to enhance the quality of their practice while improving patient outcomes.

Machine learning (ML) describes the process of using data to learn a specific task. Deep learning (DL) stacks multiple layers of ML to improve performance on the task. Lastly, generative AI generates content such as text, images, and media.

Utilizing AI effectively in clinical applications involves selecting the features most relevant to prediction (for example, disease factors) and grouping features into categories based on measured commonalities, such as factor composition in a population. To avoid leakage, this feature selection should be performed on the training data only.
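
As a minimal illustration of that principle (a generic sketch, not code from the presentation), the snippet below fits a feature selector on the training split only, so no information from the held-out test set leaks into model development; the dataset and features are hypothetical.

```python
# Minimal sketch (illustrative only): select predictive features on the
# training split only, so the held-out test set stays untouched.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                                    # hypothetical clinical features
y = (X[:, 0] + X[:, 1] + rng.normal(size=500) > 0).astype(int)    # hypothetical outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(f_classif, k=10).fit(X_train, y_train)     # fit on training data only
model = LogisticRegression().fit(selector.transform(X_train), y_train)

print("held-out accuracy:", model.score(selector.transform(X_test), y_test))
```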

Fully understanding ML/AI allows clinicians to treat it like a diagnostic test, evaluating it with a combination of accuracy, sensitivity, and specificity, along with positive and negative predictive values.
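
For readers who want to see how those measures fit together, here is a brief, purely illustrative sketch that computes them from a confusion matrix; the counts are hypothetical.

```python
# Minimal sketch (illustrative only): diagnostic-test metrics from a confusion matrix.
def diagnostic_metrics(tp, fp, tn, fn):
    """Return accuracy, sensitivity, specificity, PPV, and NPV."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical results from a seizure-prediction model evaluated on 200 patients.
print(diagnostic_metrics(tp=40, fp=10, tn=130, fn=20))
```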
 

Data Fidelity and Integrity Hinge on Optimal Data Inputs

In the case of epilepsy, calibration curves can provide practical guidance for predicting impending seizures.
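
A calibration curve compares a model’s predicted probabilities with the event rates actually observed. As a rough sketch of the idea (not drawn from the session, and using simulated numbers), scikit-learn exposes this comparison directly:

```python
# Minimal sketch (illustrative only): comparing predicted seizure risk with observed rates.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(1)
predicted_risk = rng.uniform(0, 1, size=1000)                  # hypothetical model output
had_seizure = rng.uniform(0, 1, size=1000) < predicted_risk    # hypothetical outcomes

observed_rate, mean_predicted = calibration_curve(had_seizure, predicted_risk, n_bins=10)
for pred, obs in zip(mean_predicted, observed_rate):
    print(f"predicted risk ~{pred:.2f} -> observed rate {obs:.2f}")
```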

“ML/AI needs gold-standard labels for evaluation,” Dr. Kerr said. He went on to stress the importance of quality data inputs to optimize the fidelity of AI’s predictive analytics.

“If you input garbage, you’ll get garbage out,” he said. “So a lot of garbage going in means a lot of garbage out.”

Such “garbage” can result in missed or erroneous diagnoses, or even faulty predictions. Even when the data are complete, AI can draw incorrect conclusions based on trends for which it lacks proper context.

Dr. Kerr used epilepsy trends in the Black population to illustrate this problem.

“One potential bias is that AI can figure out a patient is Black without being told, and based on data that Black patients are less likely to get epilepsy surgery,” he said, “AI would say they don’t need it because they’re Black, which isn’t true.”

In other words, ML/AI can use systematic determinants of health, such as race, to learn what Dr. Kerr referred to as an “inappropriate association.”

For that reason, ML/AI users must test for bias.
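
One simple way to probe for such bias (a generic illustration, not Dr. Kerr’s specific method) is to compare a model’s behavior across demographic subgroups against a gold-standard label; the data below are hypothetical.

```python
# Minimal sketch (illustrative only): checking whether a model's recommendations
# differ across demographic subgroups among patients with the same gold-standard label.
import pandas as pd

# Hypothetical predictions: 1 = model recommends surgical evaluation.
df = pd.DataFrame({
    "race":       ["Black", "White", "Black", "White", "Black", "White"],
    "prediction": [0, 1, 0, 1, 1, 1],
    "label":      [1, 1, 0, 1, 1, 1],   # gold standard: patient was a surgical candidate
})

# Sensitivity (recall) per subgroup; large gaps warrant a closer look at the model.
candidates = df[df["label"] == 1]
print(candidates.groupby("race")["prediction"].mean())
```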

Such data are often retrieved from electronic health records (EHRs), which serve as an important source of ML/AI input data. Using EHRs makes sense, as they represent a major source of untapped potential for improving prompt treatment. According to Dr. Kerr, 20% of academic neurologists’ notes miss seizure frequency, and 30% miss the age of onset.

In addition, International Classification of Diseases (ICD) codes create another hurdle depending on the type of code used. For example, identifying epilepsy from a G40 code or two R56 codes is reliable, while distinguishing focal to bilateral epilepsy from generalized epilepsy proves more challenging.
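
As a concrete, hypothetical illustration of that kind of rule, a cohort query might flag patients with any G40 code or at least two R56 codes:

```python
# Minimal sketch (illustrative only): flagging probable epilepsy from ICD-10 codes,
# using the rule "any G40 code, or two or more R56 codes."
from collections import defaultdict

# Hypothetical billing records: (patient_id, icd10_code)
records = [("p1", "G40.209"), ("p2", "R56.9"), ("p2", "R56.9"), ("p3", "R56.9")]

codes_by_patient = defaultdict(list)
for patient, code in records:
    codes_by_patient[patient].append(code)

def probable_epilepsy(codes):
    has_g40 = any(c.startswith("G40") for c in codes)
    r56_count = sum(c.startswith("R56") for c in codes)
    return has_g40 or r56_count >= 2

for patient, codes in codes_by_patient.items():
    print(patient, probable_epilepsy(codes))   # p1 True, p2 True, p3 False
```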

AI Improves Efficiency in Natural Language Generation

Large language models (LLMs) can produce first drafts, saving time on formatting, image selection, and structure. ChatGPT, developed by OpenAI, is perhaps the most famous LLM; other tools in this category include Google’s Bard. LLMs are trained on “the whole internet,” using publicly accessible text.

In these cases, prompts serve as input data. Output data are predictions of the first and subsequent words.
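
A toy example can make “predicting the next word” concrete. Real LLMs use large neural networks rather than simple word counts, so the sketch below is only an analogy for the input-to-output flow:

```python
# Toy sketch (illustrative only; real LLMs are neural networks, not word counts):
# predict the next word from counts of what followed the same word before.
from collections import Counter, defaultdict

corpus = "the patient had a seizure . the patient had no seizure today .".split()

next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

prompt = "patient"
print(next_word[prompt].most_common(1)[0][0])   # prints "had"
```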

Many users appreciate the foundation LLMs provide in terms of facilitating and collating research and summarizing ideas. The LLM-generated text actually serves as a first draft, saving users time on more clerical tasks such as formatting, image selection, and structure. Notwithstanding, these tools still require human supervision to screen for hallucinations or to add specialized content.

“LLMs are a great starting place to save time but are loaded with errors,” Dr. Kerr said.

Even if the tools could produce error-free content, ethics still come into play when using AI-generated content without any alterations: AI-generated text that has not been modified or supervised is considered plagiarism.

Yet, interestingly enough, Dr. Kerr found that patients respond more positively to AI than to physicians in their interactions.

“Patients felt that AI was more sensitive and compassionate because it was longer-winded and humans are short,” he said. He went on to argue that AI might actually prove useful in helping physicians to improve the quality of their patient interactions.

Dr. Kerr left the audience with these key takeaways:

  • ML/AI is just one type of clinical tool with benefits and limitations. The technology conveys the advantages of freeing up the clinician’s time to focus on more human-centered tasks, improving clinical decisions in challenging situations, and improving efficiency.
  • However, healthcare systems should understand that ML/AI is not 100% foolproof, as the software’s knowledge is limited to its training exposure, and proper use requires supervision.