Inconsistencies in AI Chatbots’ Responses to Suicide-Related Questions Raise Concerns

A recent study reveals that popular AI chatbots exhibit inconsistent responses when answering suicide-related questions, highlighting the need for safer and more reliable mental health support tools.
Recent research highlights the variable performance of popular AI chatbots when addressing questions related to suicide, a critical concern in mental health support. The study, published in the academic journal Psychiatric Services, evaluated three widely used chatbots: ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. While all three demonstrated a capacity to respond appropriately to questions about very high and very low suicide risks, their answers to intermediate-risk queries proved inconsistent.
Specifically, ChatGPT and Claude generally avoided giving direct responses to high-risk questions that could encourage self-harm, such as those asking about methods of suicide. Gemini, by contrast, was less inclined to answer questions directly at any risk level, including straightforward factual inquiries such as annual suicide statistics in the U.S.
The research involved posing 30 carefully selected questions to each chatbot; clinicians scored each question according to the perceived risk that a response could be misused for self-harm. Each question was run through the chatbots 100 times to assess the reliability of their responses. Answers to intermediate-risk questions, such as a request for recommendations for someone experiencing suicidal thoughts, proved highly variable across the platforms.
Lead researcher Ryan McBain of the RAND Corporation emphasized that while current AI models align well with expert assessments for questions at the extremes, they falter at intermediate risk levels, potentially compromising safety in sensitive contexts. This inconsistency underscores the urgent need for further refinement, including reinforcement learning guided by clinical input, to ensure chatbots provide safe, responsible guidance in mental health crises.
Given the increasing popularity of AI chatbots in healthcare and mental health support, experts warn that they could deliver dangerous advice to vulnerable users. The study advocates for enhanced fine-tuning of these models to minimize such risks, especially for questions about suicidal ideation, and to improve their ability to provide reliable and safe information.
source: https://medicalxpress.com/news/2025-08-ai-chatbots-inconsistent-suicide.html