Can AI enhance mental health treatment?


Three questions for clinicians

Artificial intelligence (AI) is already impacting the mental health care space, with several new tools available to both clinicians and patients. While this technology could be a game-changer amid a mental health crisis and clinician shortage, there are important ethical and efficacy concerns clinicians should be aware of.

Dr. Anisha Patel-Dunn, Lifestance Health

Current use cases illustrate both the potential and risks of AI. On one hand, AI has the potential to improve patient care with tools that can support diagnoses and inform treatment decisions at scale. The UK’s National Health Service is using an AI-powered diagnostic tool to help clinicians diagnose mental health disorders and determine the severity of a patient’s needs. Other tools leverage AI to analyze a patient’s voice for signs of depression or anxiety.

On the other hand, there are serious potential risks involving privacy, bias, and misinformation. One chatbot tool designed to counsel patients through disordered eating was shut down after giving problematic weight-loss advice.

The number of AI tools in the healthcare space is expected to increase fivefold by 2035. Keeping up with these advances is just as important for clinicians as keeping up with the latest medication and treatment options. That means being aware of both the limitations and the potential of AI. Here are three questions clinicians can ask as they explore ways to integrate these tools into their practice while navigating the risks.
 

• How can AI augment, not replace, the work of my staff?

AI’s biggest potential lies in its ability to augment the work of clinicians, rather than replacing it. Mental health clinicians should evaluate emerging AI tools through this lens.

For example, documentation and the use of electronic health records have consistently been linked to clinician burnout. Using AI to cut down on documentation would leave clinicians with more time and energy to focus on patient care.

One study indexed by the National Library of Medicine found that physicians who did not have enough time to complete documentation were nearly three times more likely to report burnout. In some cases, clinic schedules were deliberately shortened to allow time for documentation.

New tools are emerging that use audio recording, transcription services, and large language models to generate clinical summaries and other documentation support. Amazon and 3M have partnered to solve documentation challenges using AI. This is an area I’ll definitely be keeping an eye on as it develops.
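
To make the idea concrete, here is a minimal sketch, not a clinical product, of what such a record, transcribe, and summarize pipeline might look like in code. It assumes the OpenAI Python SDK as one illustrative backend; the model names, prompt wording, and file name are hypothetical choices for illustration and do not describe the Amazon and 3M work or any specific vendor. Any real deployment would also require a business associate agreement, encryption, and the documented patient consent discussed below.

# A minimal sketch of a record -> transcribe -> summarize documentation pipeline.
# Assumptions for illustration only: the OpenAI Python SDK, the "whisper-1" and
# "gpt-4o-mini" model names, and the file name below. Not a production tool.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def transcribe_session(audio_path: str) -> str:
    """Convert a recorded session to text with a speech-to-text model."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return transcript.text

def draft_clinical_summary(transcript_text: str) -> str:
    """Ask a large language model for a draft note that the clinician must review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize this therapy session transcript as a draft SOAP note. "
                    "Flag anything you are unsure about for clinician review."
                ),
            },
            {"role": "user", "content": transcript_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    text = transcribe_session("session_recording.m4a")  # hypothetical file name
    print(draft_clinical_summary(text))

Even in a sketch like this, the generated note is only a starting point; the clinician still reviews, corrects, and signs the final documentation.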
 

• Do I have patient consent to use this tool?

Since most AI tools remain relatively new, there is a gap in the legal and regulatory framework needed to ensure patient privacy and data protection. Clinicians should draw on existing guardrails and best practices to protect patient privacy and prioritize informed consent. The bottom line: Patients need to know how their data will be used and agree to it.

In the documentation example above, a clinician should obtain patient consent before using technology that records or transcribes sessions. The same applies to AI chat tools and other AI-driven touch points between sessions. One mental health nonprofit has come under fire for using ChatGPT to provide mental health counseling to thousands of patients who weren’t aware the responses were generated by AI.

Beyond disclosing the use of these tools, clinicians should explain how they work in enough detail that patients understand what they’re consenting to. Some technology companies offer guidance on how informed consent applies to their products and even provide template consent forms to support clinicians. Ultimately, accountability for maintaining patient privacy rests with the clinician, not the company behind the AI tool.

• Where is there a risk of bias?

There has been much discussion around the issue of bias within large language models in particular, since these programs will inherit any bias from the data points or text used to train them. However, there is often little to no visibility into how these models are trained, the algorithms they rely on, and how efficacy is measured.

This is especially concerning within the mental health care space, where bias can contribute to lower-quality care based on a patient’s race, gender, or other characteristics. One systematic review published in JAMA Network Open found that most of the AI models studied for psychiatric diagnosis had a high overall risk of bias, which can produce misleading or incorrect outputs, a dangerous prospect in health care.

It’s important to keep the risk of bias top of mind when exploring AI tools and to consider whether a tool could pose any direct harm to patients. Clinicians should maintain active oversight over any use of AI and, ultimately, weigh an AI tool’s outputs against their own insights, expertise, and instincts.
 

Clinicians have the power to shape AI’s impact

While there is plenty to be excited about as these new tools develop, clinicians should explore AI with an eye toward the risks as well as the rewards. Practitioners have a significant opportunity to shape how this technology develops by making informed decisions about which products to invest in and by holding tech companies accountable. By educating patients, prioritizing informed consent, and seeking ways to augment their work that ultimately improve the quality and scale of care, clinicians can help ensure positive outcomes while minimizing unintended consequences.

Dr. Patel-Dunn is a psychiatrist and chief medical officer at Lifestance Health, Scottsdale, Ariz.
