ChatGPT, an artificial intelligence (AI) chatbot, may be helpful for patients with cirrhosis or hepatocellular carcinoma (HCC) and their clinicians by generating easy-to-understand information about the disease, a new study suggests.

ChatGPT can produce correct and reproducible responses to commonly asked patient questions on cirrhosis and HCC; however, clinician specialists rated the majority of those correct responses as “correct but inadequate,” according to the study findings.

The AI tool can also provide empathetic and practical advice to patients and caregivers but falls short in its ability to provide tailored recommendations, the researchers said.

“Patients with cirrhosis and/or liver cancer and their caregivers often have unmet needs and insufficient knowledge about managing and preventing complications of their disease. We found ChatGPT – while it has limitations – can help empower patients and improve health literacy for different populations,” study investigator Brennan Spiegel, MD, director of health services research at Cedars-Sinai, Los Angeles, said in a news release.

The study was published online in Clinical and Molecular Hepatology.

Adjunctive health literacy tool

ChatGPT (Chat Generative Pre-Trained Transformer), developed by OpenAI, is a natural language processing tool that allows users to have personalized conversations with an AI bot capable of providing detailed responses to any question posed.

It has already seen several potential applications in the medical field, but the Cedars-Sinai study is one of the first to examine the chatbot’s ability to answer clinically oriented, disease-specific questions correctly and compare its performance to that of physicians.

The investigators asked ChatGPT 164 questions relevant to patients with cirrhosis and/or HCC across five categories – basic knowledge, diagnosis, treatment, lifestyle, and preventive medicine. The chatbot’s answers were graded independently by two liver transplant specialists.

Overall, ChatGPT answered about 77% of the questions correctly, and its responses to 91 of the questions across the categories were rated as highly accurate, the researchers reported.

ChatGPT demonstrated extensive knowledge of cirrhosis (79% of responses correct) and HCC (74% correct), but only a minority of its responses were deemed comprehensive by the specialists (47% for cirrhosis, 41% for HCC).

The chatbot performed better in basic knowledge, lifestyle, and treatment than in the domains of diagnosis and preventive medicine.

The specialists judged 75% of ChatGPT’s answers to questions on basic knowledge, treatment, and lifestyle to be either comprehensive or correct but inadequate. The corresponding percentages for diagnosis and preventive medicine were lower (67% and 50%, respectively). None of ChatGPT’s responses were graded as completely incorrect.

The proportion of responses deemed by the specialists to be “mixed with correct and incorrect/outdated data” was 22% for basic knowledge, 33% for diagnosis, 25% for treatment, 18% for lifestyle, and 50% for preventive medicine.

No substitute for specialists

The investigators also tested ChatGPT on cirrhosis quality measures recommended by the American Association for the Study of Liver Diseases and contained in two published questionnaires. ChatGPT answered 77% of the relevant questions correctly but failed to specify decision-making cutoffs and treatment durations.

ChatGPT also lacked knowledge of variations in regional guidelines, such as HCC screening criteria, but it did offer “practical and multifaceted” advice to patients and caregivers about next steps and adjusting to a new diagnosis.

“We believe ChatGPT to be a very useful adjunctive tool for physicians – not a replacement – but [an] adjunctive tool that provides access to reliable and accurate health information that is easy for many to understand,” Dr. Spiegel said in the news release. “We hope that this can help physicians to empower patients and improve health literacy for patients facing challenging conditions such as cirrhosis and liver cancer.”

ChatGPT could enhance clinician workflow by helping physicians draft a framework for responses tailored to each question asked by patients and caregivers, the researchers wrote.

“Given the high proportion of either comprehensive or correct but inadequate responses and expected continued improvement over time, we foresee that physicians would only need to revise ChatGPT’s responses to best answer patient queries,” they wrote. “This may not only improve the efficiency of physicians but also decrease the overall cost and burden on the healthcare system.”

In addition, ChatGPT could empower patients to be better informed about their care, the researchers noted.

“This allows for patient-led care and facilitates efficient shared decision-making by providing patients with an additional source of information,” they added.

The study had no specific funding. The authors reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.
