Applications of ChatGPT and Large Language Models in Medicine and Health Care: Benefits and Pitfalls


The development of [artificial intelligence] is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other.

— Bill Gates1

As the world emerges from the pandemic and the health care system faces new challenges, technology has become an increasingly important tool for health care professionals (HCPs). One such technology is the large language model (LLM), which has the potential to revolutionize the health care industry. ChatGPT, a popular LLM developed by OpenAI, has gained particular attention in the medical community for its ability to pass the United States Medical Licensing Examination (USMLE).2 This article explores the benefits and potential pitfalls of using LLMs like ChatGPT in medicine and health care.

Benefits

HCP burnout is a serious issue that can lead to lower productivity, increased medical errors, and decreased patient satisfaction.3 LLMs can alleviate some administrative burdens on HCPs, allowing them to focus on patient care. By assisting with billing, coding, insurance claims, and organizing schedules, LLMs like ChatGPT can free up time for HCPs to focus on what they do best: providing quality patient care.4 ChatGPT can also assist with diagnosis by providing accurate and reliable information based on a vast amount of clinical data. By learning the relationships between different medical conditions, symptoms, and treatment options, ChatGPT can provide an appropriate differential diagnosis (Figure 1).

[Figure 1]

It can also interpret medical tests, such as imaging studies and laboratory results, improving the accuracy of diagnoses.5 LLMs can also identify potential clinical trial opportunities for patients, leading to improved treatment options and outcomes.6

Image-based medical specialists, such as radiologists, pathologists, and dermatologists, can benefit from combining computer vision diagnostics with the report-creation abilities of ChatGPT to streamline the diagnostic workflow and improve diagnostic accuracy (Figure 2).

[Figure 2]

By leveraging the power of LLMs, HCPs can provide faster and more accurate diagnoses, improving patient outcomes. ChatGPT can also help triage patients with urgent issues in the emergency department, reducing the burden on personnel and allowing patients to receive prompt care.7,8

Although using ChatGPT and other LLMs in mental health care has potential benefits, it is essential to note that they are not a substitute for human interaction and personalized care. While ChatGPT can remember information from previous conversations, it cannot provide the same level of personalized, high-quality care that a professional therapist or HCP can. However, by augmenting the work of HCPs, ChatGPT and other LLMs have the potential to make mental health care more accessible and efficient. In addition to providing effective screening in underserved areas, ChatGPT technology may improve the competence of physician assistants and nurse practitioners in delivering mental health care. With the increased incidence of mental health problems among veterans, the relevance of ChatGPT-like tools will only increase with time.9

ChatGPT can also be integrated into health care organizations’ websites and mobile apps, providing patients with instant access to medical information, self-care advice, symptom checkers, appointment scheduling, and transportation arrangements. These features can reduce the burden on health care staff and help patients stay informed and motivated to take an active role in their health. Additionally, health care organizations can use ChatGPT to engage patients by providing reminders for medication renewals and assistance with self-care.4,6,10,11
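As a minimal sketch of the kind of integration described above, the snippet below assembles a chat-completion-style request for a patient-portal assistant that is constrained to low-risk tasks (information, reminders, scheduling). The system prompt, model name, and payload shape are illustrative assumptions, not a description of any deployed product.

```python
import json

# Hypothetical guardrail prompt for a patient-facing assistant; an
# assumption for illustration, not a claim about any real deployment.
SYSTEM_PROMPT = (
    "You are a patient-portal assistant. You may share general medical "
    "information, medication reminders, and scheduling help. You must not "
    "diagnose or prescribe; refer clinical questions to the care team."
)

def build_chat_request(patient_message: str, model: str = "gpt-4") -> str:
    """Assemble a chat-completion-style request payload as JSON text."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": patient_message},
        ],
        "temperature": 0.2,  # favor consistent, conservative answers
    }
    return json.dumps(payload)

request_json = build_chat_request("When is my next medication refill due?")
```

Keeping the system prompt fixed on the server side, rather than letting the client supply it, is one simple way such an integration could keep the assistant inside its intended scope.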

The potential of artificial intelligence (AI) in the field of medical education and research is immense. According to a study by Gilson and colleagues, ChatGPT has shown promising results as a medical education tool.12 ChatGPT can simulate clinical scenarios, provide real-time feedback, and improve diagnostic skills. It also offers new interactive and personalized learning opportunities for medical students and HCPs.13 ChatGPT can help researchers by streamlining the process of data analysis. It can also administer surveys or questionnaires, facilitate data collection on preferences and experiences, and help in writing scientific publications.14 Nevertheless, to fully unlock the potential of these AI models, additional models that perform checks for factual accuracy, plagiarism, and copyright infringement must be developed.15,16


AI Bill of Rights

In order to protect the American public, the White House Office of Science and Technology Policy (OSTP) has released a Blueprint for an AI Bill of Rights that emphasizes 5 principles to protect the public from the harmful effects of AI models: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback (Figure 3).17

[Figure 3]

Other trustworthy AI frameworks, such as White House Executive Order 13960 and the National Institute of Standards and Technology AI Risk Management Framework, are essential to building trust in AI services among HCPs and veteran patients.18,19 To ensure that ChatGPT complies with these principles, especially those related to privacy, security, transparency, and explainability, it is essential to develop trustworthy AI health care products. Methods such as calibration, fine-tuning with specialized data sets from the target population, and guiding the model’s behavior with reinforcement learning from human feedback (RLHF) may be beneficial. Preserving patient confidentiality is of utmost importance. For example, Microsoft Azure Machine Learning services, including GPT-4 (the model behind ChatGPT), are Health Insurance Portability and Accountability Act–compliant and could enable the creation of such products.20

One of the biggest challenges with LLMs like ChatGPT is the prevalence of inaccurate information, or so-called hallucinations.16 These inaccuracies stem from the inability of LLMs to distinguish between real and fabricated information. To prevent hallucinations, researchers have proposed several methods, including training models on more diverse data, adversarial training, and human-in-the-loop approaches.21 In addition, medicine-specific models such as GatorTron, Med-PaLM, and Almanac have been developed, increasing the accuracy of factual results.22-24 Unfortunately, only the GatorTron model is available to the public, through the NVIDIA developers’ program.25
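A human-in-the-loop approach can be as simple as a gate that releases a model answer only when it meets basic safety criteria and otherwise queues it for clinician review. The rules and keywords below are assumptions chosen purely for illustration; a real deployment would use far richer checks.

```python
# Illustrative human-in-the-loop gate: hold an answer for clinician review
# when it cites no sources (a possible hallucination) or touches high-risk
# topics. RISK_TERMS is a hypothetical list for demonstration only.
RISK_TERMS = {"dose", "dosage", "prescribe", "discontinue"}

def needs_clinician_review(answer: str, cited_sources: list[str]) -> bool:
    """Return True when the answer should be held for human review."""
    if not cited_sources:  # unsupported claim -> treat as possible hallucination
        return True
    lowered = answer.lower()
    return any(term in lowered for term in RISK_TERMS)  # high-risk advice
```

The point of such a gate is not to catch every hallucination, but to guarantee that a clinician, not the model, has the final word on unsupported or high-stakes content.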

Despite these shortcomings, the future of LLMs in health care is promising. Although these models will not replace HCPs, they can help reduce the unnecessary burden on them, prevent burnout, and enable HCPs and patients to spend more time together. Establishing an official hospital AI oversight governing body that promotes best practices could ensure the trustworthy implementation of these new technologies.26

Conclusions

The use of ChatGPT and other LLMs in health care has the potential to revolutionize the industry. By assisting HCPs with administrative tasks, improving the accuracy and reliability of diagnoses, and engaging patients, ChatGPT can help health care organizations provide better care to their patients. While LLMs are not a substitute for human interaction and personalized care, they can augment the work of HCPs, making health care more accessible and efficient. As the health care industry continues to evolve, it will be exciting to see how ChatGPT and other LLMs are used to improve patient outcomes and quality of care. In addition, AI technologies like ChatGPT offer enormous potential in medical education and research. To ensure that the benefits outweigh the risks, it is essential to develop trustworthy AI health care products and to establish governing bodies that oversee their implementation. By doing so, we can help HCPs focus on what matters most: providing high-quality care to patients.

Acknowledgments

This material is the result of work supported by resources and the use of facilities at the James A. Haley Veterans’ Hospital.

References

1. Gates B. The age of AI has begun. March 21, 2023. Accessed May 10, 2023. https://www.gatesnotes.com/the-age-of-ai-has-begun

2. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2(2):e0000198. Published 2023 Feb 9. doi:10.1371/journal.pdig.0000198

3. Shanafelt TD, West CP, Sinsky C, et al. Changes in burnout and satisfaction with work-life integration in physicians and the general US working population between 2011 and 2020. Mayo Clin Proc. 2022;97(3):491-506. doi:10.1016/j.mayocp.2021.11.021

4. Goodman RS, Patrinely JR Jr, Osterman T, Wheless L, Johnson DB. On the cusp: considering the impact of artificial intelligence language models in healthcare. Med. 2023;4(3):139-140. doi:10.1016/j.medj.2023.02.008

5. Will ChatGPT transform healthcare? Nat Med. 2023;29(3):505-506. doi:10.1038/s41591-023-02289-5

6. Hopkins AM, Logan JM, Kichenadasse G, Sorich MJ. Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift. JNCI Cancer Spectr. 2023;7(2):pkad010. doi:10.1093/jncics/pkad010

7. Babar Z, van Laarhoven T, Zanzotto FM, Marchiori E. Evaluating diagnostic content of AI-generated radiology reports of chest X-rays. Artif Intell Med. 2021;116:102075. doi:10.1016/j.artmed.2021.102075

8. Lecler A, Duron L, Soyer P. Revolutionizing radiology with GPT-based models: current applications, future possibilities and limitations of ChatGPT. Diagn Interv Imaging. 2023;S2211-5684(23)00027-X. doi:10.1016/j.diii.2023.02.003

9. Germain JM. Is ChatGPT smart enough to practice mental health therapy? March 23, 2023. Accessed May 11, 2023. https://www.technewsworld.com/story/is-chatgpt-smart-enough-to-practice-mental-health-therapy-178064.html

10. Cascella M, Montomoli J, Bellini V, Bignami E. Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J Med Syst. 2023;47(1):33. Published 2023 Mar 4. doi:10.1007/s10916-023-01925-4

11. Jungwirth D, Haluza D. Artificial intelligence and public health: an exploratory study. Int J Environ Res Public Health. 2023;20(5):4541. Published 2023 Mar 3. doi:10.3390/ijerph20054541

12. Gilson A, Safranek CW, Huang T, et al. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9:e45312. Published 2023 Feb 8. doi:10.2196/45312

13. Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med Educ. 2023;9:e46885. Published 2023 Mar 6. doi:10.2196/46885

14. Macdonald C, Adeloye D, Sheikh A, Rudan I. Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. J Glob Health. 2023;13:01003. Published 2023 Feb 17. doi:10.7189/jogh.13.01003

15. Masters K. Ethical use of artificial intelligence in health professions education: AMEE Guide No.158. Med Teach. 2023;1-11. doi:10.1080/0142159X.2023.2186203

16. Smith CS. Hallucinations could blunt ChatGPT’s success. IEEE Spectrum. March 13, 2023. Accessed May 11, 2023. https://spectrum.ieee.org/ai-hallucination

17. Executive Office of the President, Office of Science and Technology Policy. Blueprint for an AI Bill of Rights. Accessed May 11, 2023. https://www.whitehouse.gov/ostp/ai-bill-of-rights

18. Executive Office of the President. Executive Order 13960: promoting the use of trustworthy artificial intelligence in the federal government. Fed Regist. 2020;85(236):78939-78943.

19. US Department of Commerce, National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). Published January 2023. doi:10.6028/NIST.AI.100-1

20. Microsoft. Azure Cognitive Search—Cloud Search Service. Accessed May 11, 2023. https://azure.microsoft.com/en-us/products/search

21. Aiyappa R, An J, Kwak H, Ahn YY. Can we trust the evaluation on ChatGPT? March 22, 2023. Accessed May 11, 2023. https://arxiv.org/abs/2303.12767v1

22. Yang X, Chen A, Pournejatian N, et al. GatorTron: a large clinical language model to unlock patient information from unstructured electronic health records. Updated December 16, 2022. Accessed May 11, 2023. https://arxiv.org/abs/2203.03540v3

23. Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. December 26, 2022. Accessed May 11, 2023. https://arxiv.org/abs/2212.13138v1

24. Zakka C, Chaurasia A, Shad R, Hiesinger W. Almanac: knowledge-grounded language models for clinical medicine. March 1, 2023. Accessed May 11, 2023. https://arxiv.org/abs/2303.01229v1

25. NVIDIA. GatorTron-OG. Accessed May 11, 2023. https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og

26. Borkowski AA, Jakey CE, Thomas LB, Viswanadhan N, Mastorides SM. Establishing a hospital artificial intelligence committee to improve patient care. Fed Pract. 2022;39(8):334-336. doi:10.12788/fp.0299

Author and Disclosure Information

Andrew A. Borkowski, MDa,b,c; Colleen E. Jakey, MDa,b; Stephen M. Mastorides, MDa,b; Ana L. Kraus, MDa,b; Gitanjali Vidyarthi, MDa,b; Narayan Viswanadhan, MDa,b; Jose L. Lezama, MDa,b

Correspondence:  Andrew Borkowski  (andrew.borkowski@va.gov)

aJames A. Haley Veterans’ Hospital, Tampa, Florida

bUniversity of South Florida Morsani College of Medicine, Tampa

cNational Artificial Intelligence Institute, Washington, DC

Author disclosures

The authors report no actual or potential conflicts of interest or outside sources of funding with regard to this article.

Disclaimer

The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the U.S. Government, or any of its agencies.

Issue
Federal Practitioner - 40(6)a
Page Number
170-173

10. Cascella M, Montomoli J, Bellini V, Bignami E. Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J Med Syst. 2023;47(1):33. Published 2023 Mar 4. doi:10.1007/s10916-023-01925-4

11. Jungwirth D, Haluza D. Artificial intelligence and public health: an exploratory study. Int J Environ Res Public Health. 2023;20(5):4541. Published 2023 Mar 3. doi:10.3390/ijerph20054541

12. Gilson A, Safranek CW, Huang T, et al. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9:e45312. Published 2023 Feb 8. doi:10.2196/45312

13. Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med Educ. 2023;9:e46885. Published 2023 Mar 6. doi:10.2196/46885

14. Macdonald C, Adeloye D, Sheikh A, Rudan I. Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. J Glob Health. 2023;13:01003. Published 2023 Feb 17. doi:10.7189/jogh.13.01003

15. Masters K. Ethical use of artificial intelligence in health professions education: AMEE Guide No.158. Med Teach. 2023;1-11. doi:10.1080/0142159X.2023.2186203

16. Smith CS. Hallucinations could blunt ChatGPT’s success. IEEE Spectrum. March 13, 2023. Accessed May 11, 2023. https://spectrum.ieee.org/ai-hallucination

17. Executive Office of the President, Office of Science and Technology Policy. Blueprint for an AI Bill of Rights. Accessed May 11, 2023. https://www.whitehouse.gov/ostp/ai-bill-of-rights

18. Executive Office of the President. Executive Order 13960: promoting the use of trustworthy artificial intelligence in the federal government. Fed Regist. 2020;85(236):78939-78943.

19. US Department of Commerce, National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). Published January 2023. doi:10.6028/NIST.AI.100-1

20. Microsoft. Azure Cognitive Search—Cloud Search Service. Accessed May 11, 2023. https://azure.microsoft.com/en-us/products/search

21. Aiyappa R, An J, Kwak H, Ahn YY. Can we trust the evaluation on ChatGPT? March 22, 2023. Accessed May 11, 2023. https://arxiv.org/abs/2303.12767v1

22. Yang X, Chen A, Pournejatian N, et al. GatorTron: a large clinical language model to unlock patient information from unstructured electronic health records. Updated December 16, 2022. Accessed May 11, 2023. https://arxiv.org/abs/2203.03540v3

23. Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. December 26, 2022. Accessed May 11, 2023. https://arxiv.org/abs/2212.13138v1

24. Zakka C, Chaurasia A, Shad R, Hiesinger W. Almanac: knowledge-grounded language models for clinical medicine. March 1, 2023. Accessed May 11, 2023. https://arxiv.org/abs/2303.01229v1

25. NVIDIA. GatorTron-OG. Accessed May 11, 2023. https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og

26. Borkowski AA, Jakey CE, Thomas LB, Viswanadhan N, Mastorides SM. Establishing a hospital artificial intelligence committee to improve patient care. Fed Pract. 2022;39(8):334-336. doi:10.12788/fp.0299


Issue
Federal Practitioner - 40(6)a
Page Number
170-173

Establishing a Hospital Artificial Intelligence Committee to Improve Patient Care


In the past 10 years, artificial intelligence (AI) applications have exploded in numerous fields, including medicine. Myriad publications report that the use of AI in health care is increasing, and AI has shown utility in many medical specialties, eg, pathology, radiology, and oncology.1,2

In cancer pathology, AI has been able not only to detect various cancers but also to subtype and grade them. It can also predict survival, therapeutic response, and underlying mutations from histopathologic images.1 In other medical fields, AI applications are equally notable. In imaging specialties such as radiology, ophthalmology, dermatology, and gastroenterology, AI is being used for image recognition, enhancement, and segmentation, and across other medical specialties it is beneficial for predicting disease progression, survival, and response to therapy. Finally, AI may help with administrative tasks such as scheduling.

However, many obstacles to successfully implementing AI programs in the clinical setting exist, including clinical data limitations and ethical use of data, trust in the AI models, regulatory barriers, and lack of clinical buy-in due to insufficient basic AI understanding.2 To address these barriers to successful clinical AI implementation, we decided to create a formal governing body at James A. Haley Veterans’ Hospital in Tampa, Florida. Accordingly, the hospital AI committee charter was officially approved on July 22, 2021. Our model could be used by both US Department of Veterans Affairs (VA) and non-VA hospitals throughout the country.


AI Committee

The vision of the AI committee is to improve outcomes and experiences for our veterans by developing trustworthy AI capabilities to support the VA mission. The mission is to build robust capacity in AI to create and apply innovative AI solutions and transform the VA by facilitating a learning environment that supports the delivery of world-class benefits and services to our veterans. Our vision and mission are aligned with the VA National AI Institute.4

The AI Committee comprises 7 subcommittees: ethics, AI clinical product evaluation, education, data sharing and acquisition, research, 3D printing, and improvement and innovation. The role of the ethics subcommittee is to ensure the ethical and equitable implementation of clinical AI. We created the ethics subcommittee guidelines based on the World Health Organization ethics and governance of AI for health documents.5 They include 6 basic principles: protecting human autonomy; promoting human well-being and safety and the public interest; ensuring transparency, explainability, and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable (Table 1).



As the name indicates, the role of the AI clinical product evaluation subcommittee is to evaluate commercially available clinical AI products. More than 400 US Food and Drug Administration–approved AI medical applications exist, and the list is growing rapidly. Most are in image-intensive specialties, such as radiology, dermatology, ophthalmology, and pathology.6,7 Each clinical product is evaluated according to 6 principles: relevance, usability, risks, regulatory compliance, technical requirements, and financial considerations (Table 2).8 We are in the process of evaluating several commercial AI algorithms for pathology and radiology using these 6 principles.
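To make the evaluation repeatable across products, the 6 principles can be captured in a simple scorecard. The Python sketch below is a hypothetical illustration: the 1-to-5 scale, the minimum-score rule, and the product name are assumptions, not part of the ECLAIR guidelines or our actual process.

```python
# Hypothetical scorecard for the subcommittee's 6 evaluation principles.
# The 1-5 scale and the "no domain below the floor" rule are assumptions.
from dataclasses import dataclass, field

DOMAINS = ("relevance", "usability", "risks", "regulatory",
           "technical_requirements", "financial")

@dataclass
class ProductEvaluation:
    product: str
    scores: dict = field(default_factory=dict)  # domain -> 1-5 score

    def recommend(self, floor=3):
        """Recommend only if every domain scored at or above the floor."""
        return all(self.scores.get(d, 0) >= floor for d in DOMAINS)

evaluation = ProductEvaluation(
    product="Hypothetical pathology AI algorithm",
    scores={"relevance": 5, "usability": 4, "risks": 3,
            "regulatory": 4, "technical_requirements": 4, "financial": 2},
)
# "financial" falls below the floor, so the product is not yet recommended
```

A floor on every domain, rather than an overall average, reflects the intent of the evaluation: a clinically relevant product can still be disqualified by a single unresolved regulatory or financial concern.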


Implementations

After a comprehensive evaluation, we implemented 2 ClearRead (Riverain Technologies) AI radiology solutions. ClearRead CT Vessel Suppress produces a secondary series of computed tomography (CT) images that suppresses vessels and other normal structures within the lungs to improve nodule detectability. ClearRead Xray Bone Suppress increases the visibility of soft tissue in standard chest X-rays by suppressing bone in the digital image without the need for 2 exposures.
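The idea behind both products, subtracting an estimate of a bright normal structure so that subtler findings stand out in a secondary image, can be illustrated with a toy example. The Python sketch below is purely conceptual and is not the vendor's algorithm; it uses a 1-D "scan line" with arbitrary values in place of a real image.

```python
# Toy, purely conceptual illustration of structure suppression: subtract
# an estimate of a bright normal structure (eg, vessels or bone) so that
# a subtler finding stands out in the secondary image. A 1-D "scan line"
# with arbitrary values stands in for a real image; this is not the
# vendor's algorithm.

def suppress(image, structure_estimate):
    """Subtract the estimated normal-structure signal, clamping at zero."""
    return [max(pixel - est, 0.0)
            for pixel, est in zip(image, structure_estimate)]

background = [10.0] * 8
vessel = [0.0, 0.0, 40.0, 40.0, 0.0, 0.0, 0.0, 0.0]  # bright normal structure
nodule = [0.0, 0.0, 0.0, 0.0, 0.0, 12.0, 12.0, 0.0]  # subtle finding

scan_line = [b + v + n for b, v, n in zip(background, vessel, nodule)]
suppressed = suppress(scan_line, vessel)
# in the original, the vessel (50.0) outshines the nodule (22.0);
# after suppression the nodule is the brightest feature above background
```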

The role of the education subcommittee is to educate the staff about AI and how it can improve patient care. Every Friday, we email an AI article of the week to our practitioners. In addition, we publish a newsletter, and we organize an annual AI conference. The first conference in 2022 included speakers from the National AI Institute, Moffitt Cancer Center, the University of South Florida, and our facility.

As the name indicates, the data sharing and acquisition subcommittee oversees preparing data for our clinical and research projects. The role of the research subcommittee is to coordinate and promote AI research with the ultimate goal of improving patient care.


Other Technologies

Although 3D printing does not fall under the umbrella of AI, we decided to include it in our future-oriented AI committee. We created an online 3D printing course to promote the technology throughout the VA. We 3D print organ models to help surgeons prepare for complicated operations. In addition, together with our colleagues from the University of Florida, we used 3D printing to address the shortage of swabs for COVID-19 testing. The VA Sunshine Healthcare Network (Veterans Integrated Services Network 8) has an active Innovation and Improvement Committee.9 Our improvement and innovation subcommittee serves as a coordinating body with the network committee.

Conclusions

Through the hospital AI committee, we believe that we may overcome many obstacles to successfully implementing AI applications in the clinical setting, including the ethical use of data, trust in the AI models, regulatory barriers, and lack of clinical buy-in due to insufficient basic AI knowledge.

Acknowledgments

This material is the result of work supported with resources and the use of facilities at the James A. Haley Veterans’ Hospital.

Author and Disclosure Information

Andrew A. Borkowski, MDa,b; Colleen E. Jakey, MDa,b; L. Brannon Thomas, MD, PhDa,b; Narayan Viswanadhan, MDa,b; and Stephen M. Mastorides, MDa,b
Correspondence:
Andrew Borkowski (andrew.borkowski@va.gov)

aJames A. Haley Veterans’ Hospital, Tampa, Florida
bUniversity of South Florida Morsani College of Medicine, Tampa

Author disclosures

The authors report no actual or potential conflicts of interest or outside sources of funding with regard to this article.

Disclaimer

The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

References

1. Thomas LB, Mastorides SM, Viswanadhan N, Jakey CE, Borkowski AA. Artificial intelligence: review of current and future applications in medicine. Fed Pract. 2021;38(11):527-538. doi:10.12788/fp.0174

2. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31-38. doi:10.1038/s41591-021-01614-0

3. Echle A, Rindtorff NT, Brinker TJ, Luedde T, Pearson AT, Kather JN. Deep learning in cancer pathology: a new generation of clinical biomarkers. Br J Cancer. 2021;124(4):686-696. doi:10.1038/s41416-020-01122-x

4. US Department of Veterans Affairs, Office of Research and Development. National Artificial Intelligence Institute. Accessed April 13, 2022. https://www.research.va.gov/naii

5. World Health Organization. Ethics and governance of artificial intelligence for health. Updated June 6, 2022. Accessed June 24, 2022. https://www.who.int/publications/i/item/9789240029200

6. US Food and Drug Administration. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. Updated September 22, 2021. Accessed June 24, 2022. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices

7. Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. Lancet Digit Health. 2021;3(3):e195-e203. doi:10.1016/S2589-7500(20)30292-2

8. Omoumi P, Ducarouge A, Tournier A, et al. To buy or not to buy-evaluating commercial AI solutions in radiology (the ECLAIR guidelines). Eur Radiol. 2021;31(6):3786-3796. doi:10.1007/s00330-020-07684-x

9. US Department of Veterans Affairs. VA Sunshine Healthcare Network. Updated June 21, 2022. Accessed June 24, 2022. https://www.visn8.va.gov

Issue
Federal Practitioner - 39(8)a
Page Number
334-336
Author and Disclosure Information

Andrew A. Borkowski, MDa,b; Colleen E. Jakey, MDa,b; L. Brannon Thomas, MD, PhDa,b; Narayan Viswanadhan, MDa,b, Stephen M. Mastorides, MDa,b
Correspondence:
Andrew Borkowski (andrew.borkowski@va.gov)

aJames A. Haley Veterans’ Hospital, Tampa, Florida
bUniversity of South Florida Morsani College of Medicine, Tampa

Author disclosures

The authors report no actual or potential conflicts of interest or outside sources of funding with regard to this article.

Disclaimer

The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

References

1. Thomas LB, Mastorides SM, Viswanadhan N, Jakey CE, Borkowski AA. Artificial intelligence: review of current and future applications in medicine. Fed Pract. 2021;38(11):527-538. doi:10.12788/fp.0174

2. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31-38. doi:10.1038/s41591-021-01614-0

3. Echle A, Rindtorff NT, Brinker TJ, Luedde T, Pearson AT, Kather JN. Deep learning in cancer pathology: a new generation of clinical biomarkers. Br J Cancer. 2021;124(4):686-696. doi:10.1038/s41416-020-01122-x

4. US Department of Veterans Affairs, Office of Research and Development. National Artificial Intelligence Institute. Accessed April 13, 2022. https://www.research.va.gov/naii

5. World Health Organization. Ethics and governance of artificial intelligence for health. Updated June 6, 2022. Accessed June 24, 2022. https://www.who.int/publications/i/item/9789240029200

6. US Food and Drug Administration. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. Updated September 22, 2021. Accessed June 24, 2022. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices

7. Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. The Lancet Digital Health. 2021;3(3):e195-e203. doi:10.1016/S2589-7500(20)30292-2/ATTACHMENT/C8457399-F5CE-4A30-8D36-2A9C835FB86D/MMC1.PDF

8. Omoumi P, Ducarouge A, Tournier A, et al. To buy or not to buy-evaluating commercial AI solutions in radiology (the ECLAIR guidelines). Eur Radiol. 2021;31(6):3786-3796. doi:10.1007/s00330-020-07684-x

9. US Department of Veterans Affairs. VA Sunshine Healthcare Network. Updated June 21, 2022. Accessed June 24, 2022. https://www.visn8.va.gov

Author and Disclosure Information

Andrew A. Borkowski, MDa,b; Colleen E. Jakey, MDa,b; L. Brannon Thomas, MD, PhDa,b; Narayan Viswanadhan, MDa,b, Stephen M. Mastorides, MDa,b
Correspondence:
Andrew Borkowski (andrew.borkowski@va.gov)

aJames A. Haley Veterans’ Hospital, Tampa, Florida
bUniversity of South Florida Morsani College of Medicine, Tampa

Author disclosures

The authors report no actual or potential conflicts of interest or outside sources of funding with regard to this article.

Disclaimer

The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

References

1. Thomas LB, Mastorides SM, Viswanadhan N, Jakey CE, Borkowski AA. Artificial intelligence: review of current and future applications in medicine. Fed Pract. 2021;38(11):527-538. doi:10.12788/fp.0174

2. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31-38. doi:10.1038/s41591-021-01614-0

3. Echle A, Rindtorff NT, Brinker TJ, Luedde T, Pearson AT, Kather JN. Deep learning in cancer pathology: a new generation of clinical biomarkers. Br J Cancer. 2021;124(4):686-696. doi:10.1038/s41416-020-01122-x

4. US Department of Veterans Affairs, Office of Research and Development. National Artificial Intelligence Institute. Accessed April 13, 2022. https://www.research.va.gov/naii

5. World Health Organization. Ethics and governance of artificial intelligence for health. Updated June 6, 2022. Accessed June 24, 2022. https://www.who.int/publications/i/item/9789240029200

6. US Food and Drug Administration. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. Updated September 22, 2021. Accessed June 24, 2022. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices

7. Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. The Lancet Digital Health. 2021;3(3):e195-e203. doi:10.1016/S2589-7500(20)30292-2/ATTACHMENT/C8457399-F5CE-4A30-8D36-2A9C835FB86D/MMC1.PDF

8. Omoumi P, Ducarouge A, Tournier A, et al. To buy or not to buy-evaluating commercial AI solutions in radiology (the ECLAIR guidelines). Eur Radiol. 2021;31(6):3786-3796. doi:10.1007/s00330-020-07684-x

9. US Department of Veterans Affairs. VA Sunshine Healthcare Network. Updated June 21, 2022. Accessed June 24, 2022. https://www.visn8.va.gov

Article PDF
Article PDF

In the past 10 years, artificial intelligence (AI) applications have exploded in numerous fields, including medicine. Myriad publications report that the use of AI in health care is increasing, and AI has shown utility in many medical specialties, eg, pathology, radiology, and oncology.1,2

In cancer pathology, AI was able not only to detect various cancers, but also to subtype and grade them. In addition, AI could predict survival, the success of therapeutic response, and underlying mutations from histopathologic images.3 In other medical fields, AI applications are as notable. For example, in imaging specialties like radiology, ophthalmology, dermatology, and gastroenterology, AI is being used for image recognition, enhancement, and segmentation. In addition, AI is beneficial for predicting disease progression, survival, and response to therapy in other medical specialties. Finally, AI may help with administrative tasks like scheduling.

However, many obstacles to successfully implementing AI programs in the clinical setting exist, including clinical data limitations and ethical use of data, trust in the AI models, regulatory barriers, and lack of clinical buy-in due to insufficient basic AI understanding.2 To address these barriers to successful clinical AI implementation, we decided to create a formal governing body at James A. Haley Veterans’ Hospital in Tampa, Florida. Accordingly, the hospital AI committee charter was officially approved on July 22, 2021. Our model could be used by both US Department of Veterans Affairs (VA) and non-VA hospitals throughout the country.

 

AI Committee

The vision of the AI committee is to improve outcomes and experiences for our veterans by developing trustworthy AI capabilities to support the VA mission. The mission is to build robust capacity in AI to create and apply innovative AI solutions and transform the VA by facilitating a learning environment that supports the delivery of world-class benefits and services to our veterans. Our vision and mission are aligned with the VA National AI Institute. 4

The AI Committee comprises 7 subcommittees: ethics, AI clinical product evaluation, education, data sharing and acquisition, research, 3D printing, and improvement and innovation. The role of the ethics subcommittee is to ensure the ethical and equitable implementation of clinical AI. We created the ethics subcommittee guidelines based on the World Health Organization ethics and governance of AI for health documents.5 They include 6 basic principles: protecting human autonomy; promoting human well-being and safety and the public interest; ensuring transparency, explainability, and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable (Table 1).

fdp03908334_t2.png

fdp03908334_t1.png


As the name indicates, the role of the AI clinical product evaluation subcommittee is to evaluate commercially available clinical AI products. More than 400 US Food and Drug Administration–approved AI medical applications exist, and the list is growing rapidly. Most AI applications are in medical imaging like radiology, dermatology, ophthalmology, and pathology.6,7 Each clinical product is evaluated according to 6 principles: relevance, usability, risks, regulatory, technical requirements, and financial (Table 2).8 We are in the process of evaluating a few commercial AI algorithms for pathology and radiology, using these 6 principles.

 

 

Implementations

After a comprehensive evaluation, we implemented 2 ClearRead (Riverain Technologies) AI radiology solutions. ClearRead CT Vessel Suppress produces a secondary series of computed tomography (CT) images, suppressing vessels and other normal structures within the lungs to improve nodule detectability, and ClearRead Xray Bone Suppress, which increases the visibility of soft tissue in standard chest X-rays by suppressing the bone on the digital image without the need for 2 exposures.

The role of the education subcommittee is to educate the staff about AI and how it can improve patient care. Every Friday, we email an AI article of the week to our practitioners. In addition, we publish a newsletter, and we organize an annual AI conference. The first conference in 2022 included speakers from the National AI Institute, Moffitt Cancer Center, the University of South Florida, and our facility.

As the name indicates, the data sharing and acquisition subcommittee oversees preparing data for our clinical and research projects. The role of the research subcommittee is to coordinate and promote AI research with the ultimate goal of improving patient care.

 

Other Technologies

Although 3D printing does not fall under the umbrella of AI, we have decided to include it in our future-oriented AI committee. We created an online 3D printing course to promote the technology throughout the VA. We 3D print organ models to help surgeons prepare for complicated operations. In addition, together with our colleagues from the University of Florida, we used 3D printing to address the shortage of swabs for COVID-19 testing. The VA Sunshine Healthcare Network (Veterans Integrated Services Network 8) has an active Innovation and Improvement Committee. 9 Our improvement and innovation subcommittee serves as a coordinating body with the network committee .

Conclusions

Through the hospital AI committee, we believe that we may overcome many obstacles to successfully implementing AI applications in the clinical setting, including the ethical use of data, trust in the AI models, regulatory barriers, and lack of clinical buy-in due to insufficient basic AI knowledge.

Acknowledgments

This material is the result of work supported with resources and the use of facilities at the James A. Haley Veterans’ Hospital.

In the past 10 years, artificial intelligence (AI) applications have exploded in numerous fields, including medicine. Myriad publications report that the use of AI in health care is increasing, and AI has shown utility in many medical specialties, eg, pathology, radiology, and oncology.1,2

In cancer pathology, AI was able not only to detect various cancers, but also to subtype and grade them. In addition, AI could predict survival, the success of therapeutic response, and underlying mutations from histopathologic images.3 In other medical fields, AI applications are as notable. For example, in imaging specialties like radiology, ophthalmology, dermatology, and gastroenterology, AI is being used for image recognition, enhancement, and segmentation. In addition, AI is beneficial for predicting disease progression, survival, and response to therapy in other medical specialties. Finally, AI may help with administrative tasks like scheduling.

However, many obstacles to successfully implementing AI programs in the clinical setting exist, including clinical data limitations and ethical use of data, trust in the AI models, regulatory barriers, and lack of clinical buy-in due to insufficient basic AI understanding.2 To address these barriers to successful clinical AI implementation, we decided to create a formal governing body at James A. Haley Veterans’ Hospital in Tampa, Florida. Accordingly, the hospital AI committee charter was officially approved on July 22, 2021. Our model could be used by both US Department of Veterans Affairs (VA) and non-VA hospitals throughout the country.

 

AI Committee

The vision of the AI committee is to improve outcomes and experiences for our veterans by developing trustworthy AI capabilities to support the VA mission. The mission is to build robust capacity in AI to create and apply innovative AI solutions and transform the VA by facilitating a learning environment that supports the delivery of world-class benefits and services to our veterans. Our vision and mission are aligned with the VA National AI Institute. 4

The AI Committee comprises 7 subcommittees: ethics, AI clinical product evaluation, education, data sharing and acquisition, research, 3D printing, and improvement and innovation. The role of the ethics subcommittee is to ensure the ethical and equitable implementation of clinical AI. We created the ethics subcommittee guidelines based on the World Health Organization ethics and governance of AI for health documents.5 They include 6 basic principles: protecting human autonomy; promoting human well-being and safety and the public interest; ensuring transparency, explainability, and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable (Table 1).

fdp03908334_t2.png

fdp03908334_t1.png


As the name indicates, the role of the AI clinical product evaluation subcommittee is to evaluate commercially available clinical AI products. More than 400 US Food and Drug Administration–approved AI medical applications exist, and the list is growing rapidly. Most AI applications are in medical imaging like radiology, dermatology, ophthalmology, and pathology.6,7 Each clinical product is evaluated according to 6 principles: relevance, usability, risks, regulatory, technical requirements, and financial (Table 2).8 We are in the process of evaluating a few commercial AI algorithms for pathology and radiology, using these 6 principles.

 

 

Implementations

After a comprehensive evaluation, we implemented 2 ClearRead (Riverain Technologies) AI radiology solutions. ClearRead CT Vessel Suppress produces a secondary series of computed tomography (CT) images, suppressing vessels and other normal structures within the lungs to improve nodule detectability, and ClearRead Xray Bone Suppress, which increases the visibility of soft tissue in standard chest X-rays by suppressing the bone on the digital image without the need for 2 exposures.

The role of the education subcommittee is to educate the staff about AI and how it can improve patient care. Every Friday, we email an AI article of the week to our practitioners. In addition, we publish a newsletter, and we organize an annual AI conference. The first conference in 2022 included speakers from the National AI Institute, Moffitt Cancer Center, the University of South Florida, and our facility.

As the name indicates, the data sharing and acquisition subcommittee oversees preparing data for our clinical and research projects. The role of the research subcommittee is to coordinate and promote AI research with the ultimate goal of improving patient care.

 

Other Technologies

Although 3D printing does not fall under the umbrella of AI, we have decided to include it in our future-oriented AI committee. We created an online 3D printing course to promote the technology throughout the VA. We 3D print organ models to help surgeons prepare for complicated operations. In addition, together with our colleagues from the University of Florida, we used 3D printing to address the shortage of swabs for COVID-19 testing. The VA Sunshine Healthcare Network (Veterans Integrated Services Network 8) has an active Innovation and Improvement Committee. 9 Our improvement and innovation subcommittee serves as a coordinating body with the network committee .

Conclusions

Through the hospital AI committee, we believe that we may overcome many obstacles to successfully implementing AI applications in the clinical setting, including the ethical use of data, trust in the AI models, regulatory barriers, and lack of clinical buy-in due to insufficient basic AI knowledge.

Acknowledgments

This material is the result of work supported with resources and the use of facilities at the James A. Haley Veterans’ Hospital.

Issue
Federal Practitioner - 39(8)a
Issue
Federal Practitioner - 39(8)a
Page Number
334-336
Page Number
334-336
Publications
Publications
Topics
Article Type
Sections
Teambase XML
<?xml version="1.0" encoding="UTF-8"?>
<!--$RCSfile: InCopy_agile.xsl,v $ $Revision: 1.35 $-->
<!--$RCSfile: drupal.xsl,v $ $Revision: 1.7 $-->
<root generator="drupal.xsl" gversion="1.7"> <header> <fileName>0822 FED AI</fileName> <TBEID>0C02A084.SIG</TBEID> <TBUniqueIdentifier>NJ_0C02A084</TBUniqueIdentifier> <newsOrJournal>Journal</newsOrJournal> <publisherName>Frontline Medical Communications Inc.</publisherName> <storyname>0822 FED AI</storyname> <articleType>1</articleType> <TBLocation>Copyfitting-FED</TBLocation> <QCDate/> <firstPublished>20220809T081534</firstPublished> <LastPublished>20220809T081534</LastPublished> <pubStatus qcode="stat:"/> <embargoDate/> <killDate/> <CMSDate>20220809T081533</CMSDate> <articleSource/> <facebookInfo/> <meetingNumber/> <byline/> <bylineText>Andrew A. Borkowski, MDa,b; Colleen E. Jakey, MDa,b; L. Brannon Thomas, MD, PhDa,b; Narayan Viswanadhan, MDa,b; and Stephen M. Mastorides, MDa,b</bylineText> <bylineFull/> <bylineTitleText/> <USOrGlobal/> <wireDocType/> <newsDocType/> <journalDocType/> <linkLabel/> <pageRange/> <citation/> <quizID/> <indexIssueDate/> <itemClass qcode="ninat:text"/> <provider qcode="provider:"> <name/> <rightsInfo> <copyrightHolder> <name/> </copyrightHolder> <copyrightNotice/> </rightsInfo> </provider> <abstract/> <metaDescription>Background: The use of artificial intelligence (AI) in health care is increasing and has shown utility in many medical specialties, especially pathology, radiol</metaDescription> <articlePDF/> <teaserImage/> <title>Establishing a Hospital Artificial Intelligence Committee to Improve Patient Care</title> <deck/> <eyebrow>Commentary</eyebrow> <disclaimer/> <AuthorList/> <articleURL/> <doi/> <pubMedID/> <publishXMLStatus/> <publishXMLVersion>1</publishXMLVersion> <useEISSN>0</useEISSN> <urgency/> <pubPubdateYear>2022</pubPubdateYear> <pubPubdateMonth>August</pubPubdateMonth> <pubPubdateDay/> <pubVolume>39</pubVolume> <pubNumber>8</pubNumber> <wireChannels/> <primaryCMSID>2951</primaryCMSID> <CMSIDs> <CMSID>2951</CMSID> <CMSID>2905</CMSID> <CMSID>3639</CMSID> </CMSIDs> <keywords/> <seeAlsos/> <publications_g> 
<publicationData> <publicationCode>FED</publicationCode> <pubIssueName>August 2022</pubIssueName> <pubArticleType>Feature Articles | 3639</pubArticleType> <pubTopics> <pubTopic>Technologies | 2905</pubTopic> </pubTopics> <pubCategories/> <pubSections> <pubSection>Feature | 2951<pubSubsection/></pubSection> </pubSections> <journalTitle>Fed Pract</journalTitle> <journalFullTitle>Federal Practitioner</journalFullTitle> <copyrightStatement>Copyright 2017 Frontline Medical Communications Inc., Parsippany, NJ, USA. All rights reserved.</copyrightStatement> </publicationData> </publications_g> <publications> <term canonical="true">16</term> </publications> <sections> <term canonical="true">52</term> </sections> <topics> <term>263</term> <term>27442</term> <term canonical="true">327</term> </topics> <links/> </header> <itemSet> <newsItem> <itemMeta> <itemRole>Main</itemRole> <itemClass>text</itemClass> <title>Establishing a Hospital Artificial Intelligence Committee to Improve Patient Care</title> <deck/> </itemMeta> <itemContent> <p> <b>Background: </b> The use of artificial intelligence (AI) in health care is increasing and has shown utility in many medical specialties, especially pathology, radiology, and oncology. <br/><br/> <b>Observations: </b> Many barriers exist to successfully implement AI programs in the clinical setting. To address these barriers, a formal governing body, the hospital AI Committee, was created at James A. Haley Veterans’ Hospital in Tampa, Florida. 
The AI committee reviews and assesses AI products based on their success at protecting human autonomy; promoting human well-being and safety and the public interest; ensuring transparency, explainability, and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable.<br/><br/> <b>Conclusions: </b> Through the hospital AI Committee, we may overcome many obstacles to successfully implementing AI applications in the clinical setting. </p> <p><span class="dropcap">I</span>n the past 10 years, artificial intelligence (AI) applications have exploded in numerous fields, including medicine. Myriad publications report that the use of AI in health care is increasing, and AI has shown utility in many medical specialties, eg, pathology, radiology, and oncology.<sup>1,2</sup></p> <p>In cancer pathology, AI was able not only to detect various cancers, but also to subtype and grade them. In addition, AI could predict survival, the success of therapeutic response, and underlying mutations from histopathologic images.<sup>3</sup> In other medical fields, AI applications are as notable. For example, in imaging specialties like radiology, ophthalmology, dermatology, and gastroenterology, AI is being used for image recognition, enhancement, and segmentation. In addition, AI is beneficial for predicting disease progression, survival, and response to therapy in other medical specialties. Finally, AI may help with administrative tasks like scheduling. <br/><br/>However, many obstacles to successfully implementing AI programs in the clinical setting exist, including clinical data limitations and ethical use of data, trust in the AI models, regulatory barriers, and lack of clinical buy-in due to insufficient basic AI understanding.<sup>2</sup> To address these barriers to successful clinical AI implementation, we decided to create a formal governing body at James A. 
Haley Veterans’ Hospital in Tampa, Florida. Accordingly, the hospital AI committee charter was officially approved on July 22, 2021. Our model could be used by both US Department of Veterans Affairs (VA) and non-VA hospitals throughout the country. </p> <h2>AI Committee</h2> <p> The vision of the AI committee is to improve outcomes and experiences for our veterans by developing trustworthy AI capabilities to support the VA mission. The mission is to build robust capacity in AI to create and apply innovative AI solutions and transform the VA by facilitating a learning environment that supports the delivery of world-class benefits and services to our veterans. Our vision and mission are aligned with the VA National AI Institute. <sup>4</sup> </p> <p>The AI Committee comprises 7 subcommittees: ethics, AI clinical product evaluation, education, data sharing and acquisition, research, 3D printing, and improvement and innovation. The role of the ethics subcommittee is to ensure the ethical and equitable implementation of clinical AI. We created the ethics subcommittee guidelines based on the World Health Organization ethics and governance of AI for health documents.<sup>5</sup> They include 6 basic principles: protecting human autonomy; promoting human well-being and safety and the public interest; ensuring transparency, explainability, and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable (Table 1).<br/><br/>As the name indicates, the role of the AI clinical product evaluation subcommittee is to evaluate commercially available clinical AI products. More than 400 US Food and Drug Administration–approved AI medical applications exist, and the list is growing rapidly. 
Most AI applications are in medical imaging like radiology, dermatology, ophthalmology, and pathology.<sup>6,7</sup> Each clinical product is evaluated according to 6 principles: relevance, usability, risks, regulatory, technical requirements, and financial (Table 2).<sup>8</sup> We are in the process of evaluating a few commercial AI algorithms for pathology and radiology, using these 6 principles. </p> <h3>Implementations</h3> <p> After a comprehensive evaluation, we implemented 2 ClearRead (Riverain Technologies) AI radiology solutions. ClearRead CT Vessel Suppress produces a secondary series of computed tomography (CT) images, suppressing vessels and other normal structures within the lungs to improve nodule detectability, and ClearRead Xray Bone Suppress, which increases the visibility of soft tissue in standard chest X-rays by suppressing the bone on the digital image without the need for 2 exposures. <sup> </sup> </p> <p>The role of the education subcommittee is to educate the staff about AI and how it can improve patient care. Every Friday, we email an AI article of the week to our practitioners. In addition, we publish a newsletter, and we organize an annual AI conference. The first conference in 2022 included speakers from the National AI Institute, Moffitt Cancer Center, the University of South Florida, and our facility.<br/><br/>As the name indicates, the data sharing and acquisition subcommittee oversees preparing data for our clinical and research projects. The role of the research subcommittee is to coordinate and promote AI research with the ultimate goal of improving patient care.</p> <h3>Other Technologies</h3> <p> Although 3D printing does not fall under the umbrella of AI, we have decided to include it in our future-oriented AI committee. We created an online 3D printing course to promote the technology throughout the VA. We 3D print organ models to help surgeons prepare for complicated operations. 
In addition, together with our colleagues from the University of Florida, we used 3D printing to address the shortage of swabs for COVID-19 testing. The VA Sunshine Healthcare Network (Veterans Integrated Services Network 8) has an active Innovation and Improvement Committee.<sup>9</sup> Our improvement and innovation subcommittee serves as a coordinating body with the network committee. </p> <h2>Conclusions</h2> <p>Through the hospital AI committee, we believe we can overcome many of the obstacles to successfully implementing AI applications in the clinical setting, including the ethical use of data, trust in AI models, regulatory barriers, and a lack of clinical buy-in due to insufficient basic AI knowledge.</p> <p class="isub">Acknowledgments</p> <p> <em>This material is the result of work supported with resources and the use of facilities at the James A. Haley Veterans’ Hospital.</em> </p> <p class="isub">Author affiliations</p> <p> <em><sup>a</sup>James A. Haley Veterans’ Hospital, Tampa, Florida<br/><br/><sup>b</sup>University of South Florida Morsani College of Medicine, Tampa</em> </p> <p class="isub">Author disclosures</p> <p> <em>The authors report no actual or potential conflicts of interest or outside sources of funding with regard to this article.</em> </p> <p class="isub">Disclaimer</p> <p> <em>The opinions expressed herein are those of the authors and do not necessarily reflect those of <i>Federal Practitioner</i>, Frontline Medical Communications Inc., the US Government, or any of its agencies. </em> </p> <p class="isub">References</p> <p class="reference"> 1. Thomas LB, Mastorides SM, Viswanadhan N, Jakey CE, Borkowski AA. Artificial intelligence: review of current and future applications in medicine. <i>Fed Pract.</i> 2021;38(11):527-538. doi:10.12788/fp.0174<br/><br/> 2. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. <i>Nat Med.</i> 2022;28(1):31-38. doi:10.1038/s41591-021-01614-0<br/><br/> 3. 
Echle A, Rindtorff NT, Brinker TJ, Luedde T, Pearson AT, Kather JN. Deep learning in cancer pathology: a new generation of clinical biomarkers. <i>Br J Cancer.</i> 2021;124(4):686-696. doi:10.1038/s41416-020-01122-x<br/><br/> 4. US Department of Veterans Affairs, Office of Research and Development. National Artificial Intelligence Institute. Accessed April 13, 2022. https://www.research.va.gov/naii<br/><br/> 5. World Health Organization. Ethics and governance of artificial intelligence for health. Updated June 6, 2022. Accessed June 24, 2022. https://www.who.int/publications/i/item/9789240029200<br/><br/> 6. US Food and Drug Administration. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. Updated September 22, 2021. Accessed June 24, 2022. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices<br/><br/> 7. Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. <i>Lancet Digit Health.</i> 2021;3(3):e195-e203. doi:10.1016/S2589-7500(20)30292-2<br/><br/> 8. Omoumi P, Ducarouge A, Tournier A, et al. To buy or not to buy-evaluating commercial AI solutions in radiology (the ECLAIR guidelines). <i>Eur Radiol.</i> 2021;31(6):3786-3796. doi:10.1007/s00330-020-07684-x<br/><br/> 9. US Department of Veterans Affairs. VA Sunshine Healthcare Network. Updated June 21, 2022. Accessed June 24, 2022. https://www.visn8.va.gov</p> </itemContent> </newsItem> </itemSet></root>