ChatGPT in medicine: The good, the bad, and the unknown

CHICAGO – ChatGPT and other artificial intelligence (AI)–driven natural language processing platforms are here to stay, so like them or not, physicians might as well figure out how to optimize the role of these tools in medicine and health care. That’s the takeaway from a three-expert panel session about the technology held at the annual Digestive Disease Week® (DDW).

The chatbot can help doctors to a certain extent by suggesting differential diagnoses, assisting with clinical note-taking, and producing rapid and easy-to-understand patient communication and educational materials, they noted. However, it can also make mistakes. And, unlike a medical trainee who might give a clinical answer and express some doubt, ChatGPT (OpenAI/Microsoft) clearly states its findings as fact, even when it’s wrong.

Known as “hallucination,” this problem of AI inaccuracy was on display at the packed DDW session.

When asked when Leonardo da Vinci painted the Mona Lisa, for example, ChatGPT replied 1815. That’s off by about 300 years; the masterpiece was created sometime between 1503 and 1519. Asked for a fact about George Washington, ChatGPT said he invented the cotton gin. Also not true. (Eli Whitney patented the cotton gin.)

In an example more suited to the gastroenterologists at DDW, ChatGPT correctly stated that Barrett’s esophagus can lead to adenocarcinoma of the esophagus in some cases. However, the technology also said that the condition could lead to prostate cancer.

So, if someone asked ChatGPT to list the possible risks of Barrett’s esophagus, the list would include prostate cancer. A person without medical knowledge “could take it at face value that it causes prostate cancer,” said panelist Sravanthi Parasa, MD, a gastroenterologist at Swedish Medical Center, Seattle.

“That is a lot of misinformation that is going to come our way,” she added at the session, which was sponsored by the American Society for Gastrointestinal Endoscopy (ASGE).

The potential for inaccuracy is a downside to ChatGPT, agreed panelist Prateek Sharma, MD, a gastroenterologist at the University of Kansas Medical Center in Kansas City, Kansas.

“There is no quality control. You have to double check its answers,” said Dr. Sharma, who is president-elect of ASGE.

ChatGPT is not going to replace physicians in general or gastroenterologists doing endoscopies, said Ian Gralnek, MD, chief of the Institute of Gastroenterology and Hepatology at Emek Medical Center in Afula, Israel.

Even though the tool could play a role in medicine, “we need to be very careful as a society going forward ... and see where things are going,” Dr. Gralnek said.

How you ask makes a difference

Future iterations of ChatGPT are likely to produce fewer hallucinations, the experts said. In the meantime, users can lower the risk by paying attention to how they’re wording their queries, a practice known as “prompt engineering.”

It’s best to ask a question that has a firm answer. If you ask a vague question, you’ll likely get a vague answer, Dr. Sharma said.
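To see how wording shapes output, consider the sketch below. It is illustrative only, not something shown at the session: it assumes the pre-1.0 `openai` Python package (the client current when ChatGPT 3.5 launched), a placeholder API key, and invented example prompts.

```python
# Minimal sketch of prompt specificity using the pre-1.0 openai client
# (pip install "openai<1.0"). The prompts below are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

def ask(prompt: str) -> str:
    """Send a single-turn prompt to GPT-3.5 and return the reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # lower temperature makes answers more repeatable
    )
    return response.choices[0].message.content

# A vague question tends to invite a vague, unfocused answer.
print(ask("Tell me about Barrett's esophagus."))

# A firm, answerable question constrains the model and is easier to verify.
print(ask("List the established cancer risks of Barrett's esophagus "
          "according to major gastroenterology society guidelines."))
```

Either way, as the panelists stressed, the output still needs to be checked against the literature.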

ChatGPT is a large language model (LLM). GPT stands for “generative pretrained transformer,” a class of algorithms that finds long-range patterns in sequences of data. At its core, an LLM works by predicting the next word in a sequence.

“That’s why this is also called generative AI,” Dr. Sharma said. “For example, if you put in ‘Where are we?’, it will predict for you that perhaps the next word is ‘going?’ ”
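To make next-word prediction concrete, here is a toy sketch. It is emphatically not how ChatGPT works internally (GPT models are transformer neural networks operating on subword tokens), but it shows the same core task: given the words so far, predict the most likely next word.

```python
# Toy bigram model: predict the next word from counts of observed word
# pairs. Real LLMs use transformer networks over subword tokens, but the
# underlying task, next-token prediction, is the same.
from collections import Counter, defaultdict

corpus = (
    "where are we going "
    "where are we headed "
    "where are we going today"
).split()

# Count how often each word follows each other word.
following: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word`."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("we"))  # prints 'going' (seen twice, vs 'headed' once)
```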

The current public version is ChatGPT 3.5, whose training data consist of publicly available online text through 2021, including open-access scientific journals and medical society guidelines, as well as Twitter, Reddit, and other social media. It does not have access to private information, such as electronic health records.

The use of ChatGPT has exploded in the past 6 months, Dr. Sharma said.

“ChatGPT has been the most-searched website or platform ever in history since it was launched in December of 2022,” he said.

What’s in it for doctors?

Although ChatGPT was not specifically trained for health care–related tasks, the panelists noted that it has potential as a virtual medical assistant, chatbot, clinical decision-support tool, source of medical education, natural language processor for documentation, or medical note-taker.

ChatGPT can help physicians write a letter of support to a patient who, for example, was just diagnosed with stage IV colon cancer. It can do that in only 15 seconds, whereas it would take a physician much longer, Dr. Sharma said.

ChatGPT is the “next frontier” for generating patient education materials, Dr. Parasa said. It can help time-constrained health care providers, as long as the information is accurate.

GPT-4, the newer version now available by subscription, can do “almost real-time note-taking during patient encounters,” she added.

Another reason to be familiar with the technology: “Many of your patients are using it, even if you don’t know about it,” Dr. Sharma said.

Questions abound

A conference attendee asked the panel what to do when a patient comes in with ChatGPT medical advice that does not align with official guidelines.

Dr. Gralnek said that he would explain to patients that guideline-based medical information is not “black and white.” The panel likened the situation to the present one, in which patients arrive at appointments armed with internet information that is not always correct and must be countered by their doctors. The same would likely happen with ChatGPT.

Another attendee asked whether ChatGPT will eventually be integrated with electronic health record systems.

“OpenAI and Microsoft are already working with Epic,” Dr. Parasa said.

A question arose about the reading level of the information ChatGPT provides. Dr. Parasa noted that it is not standardized. However, a person can prompt ChatGPT to answer either at an eighth-grade reading level or at a level suited to a well-trained physician.
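As a simple illustration (these prompts are invented, not from the session), the requested audience can be stated directly in the prompt:

```python
# Illustrative only: requesting the same answer at two reading levels
# by naming the audience in the prompt. Works in any chat interface.
question = "What is Barrett's esophagus?"

prompts = {
    "patient": f"{question} Explain it at an eighth-grade reading level.",
    "clinician": f"{question} Answer for a practicing gastroenterologist.",
}

for audience, prompt in prompts.items():
    print(f"[{audience}] {prompt}")
```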

Dr. Sharma offered a final warning: The technology learns over time.

“It knows what your habits are. It will learn what you’re doing,” Dr. Sharma said. “Everything else on your browsers that are open, it’s learning from that also. So be careful what websites you visit before you go to ChatGPT.”

Dr. Sharma is a stock shareholder in Microsoft. Dr. Parasa and Dr. Gralnek reported no relevant financial relationships.

DDW is sponsored by the American Association for the Study of Liver Diseases, the American Gastroenterological Association, the American Society for Gastrointestinal Endoscopy, and The Society for Surgery of the Alimentary Tract.

A version of this article originally appeared on Medscape.com.
