
Inconsistencies in AI Chatbots’ Responses to Suicide-Related Questions Raise Concerns


A recent study reveals that popular AI chatbots exhibit inconsistent responses when answering suicide-related questions, highlighting the need for safer and more reliable mental health support tools.


Recent research highlights the variable performance of popular AI chatbots when addressing questions related to suicide, a critical concern in mental health support. The study, published in the academic journal Psychiatric Services, evaluated three widely used chatbots: ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. While all three demonstrated a capacity to respond appropriately to questions about very high and very low suicide risks, their answers to intermediate-risk queries proved inconsistent.

Specifically, ChatGPT and Claude generally avoided giving direct responses to high-risk questions that could encourage self-harm, such as those asking about suicide methods. Conversely, Gemini was less inclined to answer questions directly at any risk level, including straightforward factual inquiries such as annual suicide statistics in the U.S.

The research involved posing 30 carefully selected questions, each scored by clinicians for the risk that a direct answer could be misused for self-harm, to each chatbot. Every question was run through the chatbots 100 times to assess the reliability of their responses. Responses to moderate-risk questions, such as asking for recommendations for someone experiencing suicidal thoughts, proved highly variable across the platforms.

Lead researcher Ryan McBain of RAND Corporation emphasized that while current AI models align well with expert assessments for extreme questions, they falter at intermediate levels, potentially compromising safety in sensitive contexts. This inconsistency underscores the urgent need for further refinement, including reinforcement learning with clinical input, to ensure chatbots provide safe, responsible guidance in mental health crises.

Given the increasing use of AI chatbots in healthcare and mental health support, experts warn of the potential for harm from unsafe advice. The study's authors advocate further fine-tuning of these models to reduce that risk, particularly for questions about suicidal ideation, and to improve their ability to provide reliable and safe information.

source: https://medicalxpress.com/news/2025-08-ai-chatbots-inconsistent-suicide.html

