Artificial intelligence—the development of computer systems able to perform tasks that normally require human intelligence—is increasingly being used in psychiatry. Some studies have suggested AI can be used to identify patients’ risk of suicide12-15 or psychosis.16,17 Kalanderian and Nasrallah18 reviewed several of these studies in Current Psychiatry, August 2019. This article is available at mdedge.com/psychiatry/article/205527/schizophrenia-other-psychotic-disorders/artificial-intelligence-psychiatry.
Other researchers have found clinical uses for machine learning, a subset of AI in which algorithms automatically detect patterns in data and make predictions based on those patterns. In one study, a machine learning analysis of functional MRI scans was able to identify 4 distinct subtypes of depression.19 In another study, a machine learning model was able to predict with 60% accuracy which patients with depression would respond to antidepressants.20
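To give a concrete sense of the supervised-learning idea described above—learning patterns from labeled examples and applying them to new cases—here is a minimal, purely illustrative sketch in Python. The feature vectors, labels, and nearest-centroid classifier are invented for demonstration; they bear no relation to the models or data used in the cited studies, which rely on far richer clinical and imaging data.

```python
# Illustrative sketch only: a toy nearest-centroid classifier.
# All data below are synthetic; real predictive models in psychiatry
# use large clinical/imaging datasets and more sophisticated methods.

def centroid(rows):
    """Mean of each feature across a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(examples):
    """examples: list of (features, label). Returns {label: centroid}."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical training data: (symptom-score vector, treatment outcome).
training = [
    ([0.9, 0.2], "responder"),
    ([0.8, 0.3], "responder"),
    ([0.2, 0.9], "non-responder"),
    ([0.3, 0.8], "non-responder"),
]
model = train(training)
print(predict(model, [0.85, 0.25]))  # prediction for a new, unseen profile
```

The essential point is that the model is never given explicit rules; it derives them from the statistical structure of the labeled examples, which is why such methods can surface associations that conventional diagnostic categories miss.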
In the future, AI might be used to change mental health classification systems. Because many mental health disorders share similar symptom clusters, machine learning can help to identify associations between symptoms, behavior, brain function, and real-world function across different diagnoses, potentially affecting how we will classify mental disorders.21
Technology-enhanced psychotherapy
In the future, it might be common for psychotherapy to be provided by a computer, or “virtual therapist.” Several studies have evaluated the use of technology-enhanced psychotherapy.
Lucas et al22 investigated patients’ interactions with a virtual therapist. Participants were interviewed by an avatar named Ellie, whom they saw on a TV screen. Half of the participants were told Ellie was not human, and half were told Ellie was being controlled remotely by a human. Three psychologists who were blinded to group allocation analyzed transcripts of the interviews and video recordings of participants’ facial expressions to quantify the participants’ fear, sadness, and other emotional responses during the interviews, as well as their openness to the questions. Participants who believed Ellie was fully automated reported significantly lower fear of self-disclosure and less impression management (attempts to control how others perceive them) than participants who were told that Ellie was operated by a human. Additionally, participants who believed they were interacting with a computer were more open during the interview.22