Inconsistencies in AI Chatbots’ Responses to Suicide-Related Questions Raise Concerns

A recent study reveals that popular AI chatbots exhibit inconsistent responses when answering suicide-related questions, highlighting the need for safer and more reliable mental health support tools.
Recent research points to variable performance by popular AI chatbots when they address questions related to suicide, a critical concern in mental health support. The study, published in the academic journal Psychiatric Services, evaluated three widely used chatbots: ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. While all three responded appropriately to questions at the extremes of very high and very low suicide risk, their answers to intermediate-risk queries proved inconsistent.
Specifically, ChatGPT and Claude generally declined to answer high-risk questions that could encourage self-harm, such as those about suicide methods. Gemini, by contrast, was less inclined to answer questions directly at any risk level, including straightforward factual inquiries such as annual suicide statistics in the U.S.
The researchers posed 30 carefully selected questions to each chatbot; clinicians had scored each question by the perceived risk that an answer could be misused for self-harm. Each question was run through the chatbots 100 times to assess response reliability. Responses to intermediate-risk questions, such as requests for recommendations for someone experiencing suicidal thoughts, varied widely across the platforms.
Lead researcher Ryan McBain of RAND Corporation emphasized that while current AI models align well with expert assessments for extreme questions, they falter at intermediate levels, potentially compromising safety in sensitive contexts. This inconsistency underscores the urgent need for further refinement, including reinforcement learning with clinical input, to ensure chatbots provide safe, responsible guidance in mental health crises.
Given the increasing use of AI chatbots in healthcare and mental health support, experts warn of the potential for harm from dangerous advice. The study advocates further fine-tuning of these models to minimize such risks, particularly around questions about suicidal ideation, and to improve the reliability and safety of the information they provide.
source: https://medicalxpress.com/news/2025-08-ai-chatbots-inconsistent-suicide.html