Inconsistencies in AI Chatbots’ Responses to Suicide-Related Questions Raise Concerns

A recent study reveals that popular AI chatbots exhibit inconsistent responses when answering suicide-related questions, highlighting the need for safer and more reliable mental health support tools.
Recent research highlights the variable performance of popular AI chatbots when addressing questions related to suicide, a critical concern in mental health support. The study, published in the academic journal Psychiatric Services, evaluated three widely used chatbots: ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. While all three demonstrated a capacity to respond appropriately to questions about very high and very low suicide risks, their answers to intermediate-risk queries proved inconsistent.
Specifically, ChatGPT and Claude generally avoided giving direct responses to high-risk questions that could promote self-harm, such as those asking about suicide methods. Conversely, Gemini was less inclined to directly answer questions across all risk levels, including more straightforward, factual inquiries such as annual suicide statistics in the U.S.
The research involved posing 30 carefully selected questions to each chatbot; each question was rated by clinicians according to the risk that a direct answer could be misused for self-harm. Each question was run through the chatbots 100 times to assess response reliability. Responses to more moderate questions, such as asking for recommendations for someone experiencing suicidal thoughts, proved highly variable across the platforms.
Lead researcher Ryan McBain of RAND Corporation emphasized that while current AI models align well with expert assessments for extreme questions, they falter at intermediate levels, potentially compromising safety in sensitive contexts. This inconsistency underscores the urgent need for further refinement, including reinforcement learning with clinical input, to ensure chatbots provide safe, responsible guidance in mental health crises.
Given the increasing popularity of AI chatbots in healthcare and mental health support, experts warn about the risks of harm from potentially dangerous advice. The study advocates for enhanced fine-tuning of these models to minimize risks, especially concerning questions about suicidal ideation, and to improve their ability to provide reliable and safe information.
source: https://medicalxpress.com/news/2025-08-ai-chatbots-inconsistent-suicide.html