Guidance for Individuals Receiving Assistance and Services


According to SAMHSA’s 2023 National Survey on Drug Use and Health, 22.8% of adults had any mental illness in the past year. Of the 58.7 million individuals with any mental illness in 2023, only 53.9% received mental health treatment. That same year, fewer than 60% of adolescents with a major depressive episode received mental health treatment. Among the roughly 40% of adolescents with a major depressive episode who did not receive treatment, 7.7% sought treatment but did not receive it, and 33.8% did not seek treatment but thought they should get it.

Individuals do not receive mental health care for a variety of reasons, including shortages of providers, difficulties obtaining timely access, cost, and stigma. This leads to delays in care and worsening symptoms. Many individuals also face an epidemic of loneliness and isolation, and loneliness has been associated with mental health problems including depression, anxiety, and suicidal ideation. Together, these gaps have led to interest in using artificial intelligence (AI) to address unmet mental health needs.

Artificial intelligence refers to computer systems designed to complete tasks that require human intelligence, such as reasoning, interpreting data, making decisions, and solving problems. AI typically works through machine learning, in which algorithms and statistical models are built from patterns in data. Generative AI refers to computer systems that create new content, such as language, visual images, or music.

An AI companion bot is a generative, conversational AI bot built on a large language model. When a user engages with a conversational AI bot, the bot appears to mimic human speech through a series of direct messages between the user and the bot. AI bots are programmed to engage the user in friendly and human-like ways, supporting and validating the user’s ideas to keep the user engaged. Conversational AI bots use machine learning and algorithms to generate answers and responses based on the large amount of data available to them, and they are susceptible to producing false information.

Individuals may turn to chatbots to relieve loneliness, for companionship, or for mental health-related needs. Chatbot-based mental health apps may serve different purposes depending on the specific app, including digital screening, companionship, coaching, and virtual therapy.

Benefits: 

  • AI companion bots and mental health chatbots provide immediate access and support and can be used 24/7.
  • Individuals may feel more comfortable using an AI companion bot due to the privacy and anonymity it allows. This can help with some of the judgment or shame individuals fear might be present if they access mental health care or discuss mental health symptoms.
  • Users find AI companion bots to be friendly, supportive and non-judgmental.
  • AI companion bots may be helpful in tracking medication adherence. AI companion bots can provide reminders for medications, appointments, physical activity and other lifestyle changes.
  • When used in clinical practice, AI may reduce administrative burden, decrease documentation time, and allow the clinician to focus more on the therapeutic encounter with the patient.

Limitations: 

  • AI companion bots use machine learning and draw upon only the data that is available to them. This introduces the possibility of bias in their responses. Responses may be culturally or regionally irrelevant or inappropriate.
  • AI companion bots can generate AI hallucinations. These responses may appear confident but are factually incorrect.
  • AI companion bots learn from underlying patterns of language. Consequently, AI companion bots lack emotional intelligence or understanding. AI companion bots are unable to identify nuances in communication, which may be seen in an individual’s body language, hesitations in speech, or use of metaphors. This can lead to AI responses that are inappropriate or alarming.
  • Concerns have been raised about how AI companion bots respond when a user expresses thoughts of suicide or self-harm. AI companion bots may censor prompts containing thoughts of suicide, or may respond with irrelevant or even reaffirming messages to the thoughts of suicide or self-harm.
  • There is no standardization of mental health apps offering psychotherapy, including, but not limited to, what type of therapy or service is being provided, what safeguards or guardrails are in place, or how the app responds in a mental health crisis.
  • There are data and privacy risks associated with using AI. An individual’s data that is shared with the AI companion bot is stored and may be sold, traded, or marketed.

Risks: 

  • AI companion bots can look, talk and communicate like a real person. This can make the user forget that the AI bot is not real. As a result, users may place too much trust in the information they receive from, or share with, the chatbot. Individuals may treat all the information provided to them as factually correct when it may contain mistakes, including geographically inaccurate emergency or mental health crisis resources. Individuals are also susceptible to scams that use AI to generate voice cloning, deepfakes, or phishing emails in an attempt to obtain personal and sensitive information. This can put an individual’s financial and psychological wellbeing at risk.
  • AI companion bots are designed to be supportive and validating. This risks inadvertently encouraging delusional or paranoid thinking or unsafe actions. Current models are not designed to help the user challenge false beliefs or question potentially unsafe or risky behaviors. Persistent memory features can also reinforce delusions by carrying them across sessions.
  • There is no current threshold for what defines a potential behavioral health crisis or when human intervention is needed. For example, the statement "I want to go to sleep and not wake up" should lead to further suicide and risk assessment by a clinician or crisis worker. Depending on the large language model being used, certain idiomatic expressions and figures of speech may not be identified as alarming and in need of human intervention.
  • While AI companion bots may help prevent isolation and decrease loneliness, individuals at risk of manipulation or dependency may experience a decreased desire to have social relationships with others. This can reduce their actual support networks.
  • Some individuals may be more vulnerable to these risks, including individuals who have or are at risk of psychosis, are socially isolated, have autism or an intellectual disability, are having or are at risk of a behavioral health crisis, and/or are cognitively impaired.

Tips for AI Users: 
Many people use AI to address loneliness, stress, or mental health concerns. Here are some tips for AI users:

💳 Do not share sensitive personal or financial information with an AI companion bot.

❎ Remember that information provided by an AI companion bot may not be correct.

⏰ Set limits around your use of AI such as time limits.

🗣️ Discuss your use of AI with a trusted person in your life.

🩺  If your healthcare provider plans to use an AI tool, they should explain it to you first and ask for your consent.

💜 AI companion bots should not replace seeking treatment.

📱 If you are in a mental health crisis, call or text 988.