ChatGPT Enhances Readability of Cancer Information for Patients

Megan Brooks

TOPLINE:

The artificial intelligence (AI) chatbot ChatGPT can significantly improve the readability of online cancer-related patient information while maintaining the content’s quality, a recent study found.

METHODOLOGY:

  • Patients with cancer often search for cancer information online after their diagnosis, with most seeking information from their oncologists’ websites. However, the online materials often exceed the average reading level of the US population, limiting accessibility and comprehension.
  • Researchers asked ChatGPT 4.0 to rewrite content about breast, colon, lung, prostate, and pancreatic cancer, aiming for a sixth-grade readability level. The content came from a random sample of documents from 34 patient-facing websites associated with National Comprehensive Cancer Network (NCCN) member institutions.
  • Readability, accuracy, similarity, and quality of the rewritten content were assessed using several established metrics and tools, including the F1 score, which combines a model’s precision and recall into a single accuracy measure; a cosine similarity score, which quantifies how similar two texts are and is often used to detect plagiarism; and the DISCERN instrument, a validated tool for assessing the quality of written health information.
  • The primary outcome was the mean readability score for the original and AI-generated content.
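For readers unfamiliar with the metrics named above, the two accuracy measures are simple formulas. The sketch below is an illustration only, not the study’s actual analysis code: cosine similarity compares word-count vectors of two texts, and F1 is the harmonic mean of precision and recall.

```python
from collections import Counter
import math


def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts (0 = no overlap, 1 = identical)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def f1_score(precision: float, recall: float) -> float:
    """F1 score: the harmonic mean of precision and recall."""
    total = precision + recall
    return 2 * precision * recall / total if total else 0.0
```

A rewritten passage that reuses most of the original wording scores near 1 on cosine similarity, while an F1 near 1 indicates the rewrite preserved the original information with few omissions or additions.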

TAKEAWAY:

  • The original content had an average readability level equivalent to a university freshman (grade 13). Following the AI revision, the readability level improved to a high school freshman level (grade 9).
  • The rewritten content had high accuracy, with an overall F1 score of 0.87 (a good score is 0.8-0.9).
  • The rewritten content had a high cosine similarity score of 0.915 (scores range from 0 to 1, with 0 indicating no similarity and 1 indicating complete similarity). Researchers attributed the improved readability to the use of simpler words and shorter sentences.
  • Quality assessment using the DISCERN instrument showed that the AI-rewritten content maintained a “good” quality rating, similar to that of the original content.
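The grade levels reported above come from standard readability formulas that score text by sentence length and word complexity. As a rough illustration (the study used several such metrics; this is not necessarily the one applied), the widely used Flesch-Kincaid grade level works as follows:

```python
def flesch_kincaid_grade(total_words: int, total_sentences: int, total_syllables: int) -> float:
    """Flesch-Kincaid grade level: the US school grade needed to read the text.

    Longer sentences and more syllables per word push the score higher.
    """
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
```

By this formula, a sample of 100 words spread over 5 sentences with 130 syllables scores about grade 7.6; splitting sentences and choosing shorter words, as ChatGPT did, lowers both terms and hence the grade.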

IN PRACTICE:

“Society has become increasingly dependent on online educational materials, and considering that more than half of Americans may not be literate beyond an eighth-grade level, our AI intervention offers a potential low-cost solution to narrow the gap between patient health literacy and content received from the nation’s leading cancer centers,” the authors wrote.

SOURCE:

The study, led by first author Andres A. Abreu, MD, of UT Southwestern Medical Center, Dallas, Texas, was published online in the Journal of the National Comprehensive Cancer Network.

LIMITATIONS:

The study was limited to English-language content from NCCN member websites, so the findings may not be generalizable to other sources or languages. Readability alone cannot guarantee comprehension. Factors such as material design and audiovisual aids were not evaluated.

DISCLOSURES:

The study did not report a funding source. The authors reported several disclosures, though none were related to the study. Herbert J. Zeh, MD, disclosed serving as a scientific advisor for Surgical Safety Technologies; Dr. Polanco disclosed serving as a consultant for Iota Biosciences and Palisade Bio and as a proctor for Intuitive Surgical.

A version of this article first appeared on Medscape.com.
