
AI chatbots posing as therapists could unleash devastating mental health consequences on vulnerable patients, especially teenagers, as experts sound the alarm on this largely unregulated technology.
And the technology is moving far faster than the safeguards meant to contain it.
At a Glance
- AI chatbots lack proper suicide prevention expertise and emergency protocols that human therapists provide
- Recent cases link AI therapy interactions to a 14-year-old’s suicide and a 17-year-old’s violent behavior
- The American Psychological Association is urging FTC investigation of chatbots claiming to be mental health professionals
- Experts warn AI cannot replace human empathy, intuition, and clinical judgment necessary for proper mental health care
- While AI can increase mental health access, it should only supplement human therapy under strict ethical oversight
Dangerous Gaps in AI Mental Health Technology
The growing trend of artificial intelligence chatbots dispensing mental health advice has created serious risks for vulnerable patients. These systems operate without clinical guardrails built into their algorithms, most critically around suicide prevention. Despite their sophisticated appearance, they fundamentally lack the human judgment needed to assess psychological distress or recognize a crisis.
“The problem with these AI chatbots is that they were not designed with expertise on suicide risk and prevention baked into the algorithms. Additionally, there is no helpline available on the platform for users who may be at risk of a mental health condition or suicide, no training on how to use the tool if you are at risk, nor industry standards to regulate these technologies,” said Christine Yu Moutier, M.D., chief medical officer of the American Foundation for Suicide Prevention.
AI systems struggle with a fundamental limitation: distinguishing between literal and metaphorical language. This critical shortcoming severely hampers their ability to accurately gauge suicide risk levels or properly interpret emotional nuance – skills that human therapists develop through years of training and clinical experience. This inability to comprehend context could lead to catastrophic misinterpretations when dealing with patients in crisis.
And yet people are entrusting these chatbots with their most intimate thoughts and problems.
If 2020 felt like a hard year, 2025 could prove even more turbulent as this technology spreads unchecked.
This isn’t all just theory, either.
Alarming incidents have already demonstrated the dangers of relying on AI for mental health support. In Belgium, a man died by suicide after extensive conversations with an environmentally focused chatbot that reportedly amplified his climate anxiety. A 14-year-old similarly took his own life after conversations with an AI companion, and a 17-year-old exhibited violent behavior following similar interactions. These tragic cases show how AI can inadvertently worsen rather than alleviate mental health struggles.
I have a new paper in @JAMAPediatrics "AI as a Mental Health Therapist for Adolescents" w/ Brent Kious @UofUHealth and Doug Opel @UWMedicine looking at opportunities to expand who gets access but legal and ethical issues with the use of these chatbots https://t.co/TPaXdoyeqS.
— I. Glenn Cohen (@CohenProf) October 17, 2023
The American Psychological Association has taken notice, urging the Federal Trade Commission to investigate chatbots that deceptively present themselves as mental health professionals. Its concerns extend beyond bad advice to privacy: sensitive mental health information shared with these platforms may not receive the protections that medical privacy laws normally guarantee. Without proper oversight, patient confidentiality remains vulnerable.
While AI chatbots can achieve impressive scores for cultural competence and even empathy in controlled studies, they fundamentally lack the essential human qualities that make therapy effective. True empathy requires lived experience and emotional intelligence that programming simply cannot replicate. The human connection between therapist and patient creates a therapeutic alliance that remains beyond the capabilities of even the most advanced algorithms.
Will people realize this in time?