
Inconsistencies in AI Chatbots’ Responses to Suicide-Related Questions Raise Concerns


A recent study reveals that popular AI chatbots exhibit inconsistent responses when answering suicide-related questions, highlighting the need for safer and more reliable mental health support tools.


Recent research highlights the variable performance of popular AI chatbots when addressing questions related to suicide, a critical concern in mental health support. The study, published in the academic journal Psychiatric Services, evaluated three widely used chatbots: ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. While all three demonstrated a capacity to respond appropriately to questions about very high and very low suicide risks, their answers to intermediate-risk queries proved inconsistent.

Specifically, ChatGPT and Claude generally avoided giving direct responses to high-risk questions that could promote self-harm, such as those asking about suicide methods. Gemini, by contrast, was less inclined to answer questions directly at any risk level, including straightforward factual inquiries such as annual suicide statistics in the U.S.

The research involved posing 30 carefully selected questions to each chatbot; clinicians had scored each question by the perceived risk that an answer could be misused for self-harm. Each question was run through each chatbot 100 times to assess response reliability. Responses to intermediate-risk questions, such as what to recommend to someone experiencing suicidal thoughts, proved highly variable across the platforms.

Lead researcher Ryan McBain of RAND Corporation emphasized that while current AI models align well with expert assessments for extreme questions, they falter at intermediate levels, potentially compromising safety in sensitive contexts. This inconsistency underscores the urgent need for further refinement, including reinforcement learning with clinical input, to ensure chatbots provide safe, responsible guidance in mental health crises.

Given the increasing popularity of AI chatbots in healthcare and mental health support, experts warn that inconsistent or dangerous advice could cause real harm. The study advocates enhanced fine-tuning of these models to minimize risks, particularly for questions touching on suicidal ideation, and to improve their ability to provide reliable and safe information.

source: https://medicalxpress.com/news/2025-08-ai-chatbots-inconsistent-suicide.html

