AI Tools Fall Short in Predicting Suicide Risk, New Study Shows

A comprehensive new study finds that current AI and machine learning tools cannot accurately predict suicidal behavior, indicating that more research is needed before clinical application.
Recent research highlights the limitations of artificial intelligence (AI) tools in predicting suicidal behavior. Published in the open-access journal PLOS Medicine on September 11, 2025, the study is a systematic review and meta-analysis of 53 previous investigations covering more than 35 million medical records and nearly 250,000 cases of suicide or hospital-treated self-harm. The findings reveal that, despite growing interest in machine learning algorithms designed to identify high-risk individuals, these tools perform only modestly. They are reasonably good at correctly classifying people who will not go on to self-harm (high specificity), yet they fall short in identifying those who will self-harm or die by suicide (low sensitivity), producing a significant number of false negatives.
Specifically, the algorithms classified more than half of the individuals who later died by suicide or presented with self-harm as low risk. Among those flagged as high risk, only 6% went on to die by suicide, and fewer than 20% re-presented to clinical services for self-harm, meaning the models' positive predictive value was low. The authors emphasize that the predictive accuracy of these machine learning models is comparable to that of traditional risk assessment scales, which have historically performed poorly. Furthermore, the overall quality of existing research in this field is subpar, with many studies showing a high risk of bias or unclear methodology.
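This pattern reflects the base-rate problem: when an outcome is rare, even a classifier with high specificity flags far more false positives than true positives, and low sensitivity means most eventual cases are missed. Below is a minimal sketch of that arithmetic; the population size, prevalence, sensitivity, and specificity are hypothetical values chosen for illustration, not figures reported in the study.

```python
# Sketch of confusion-matrix arithmetic for a rare outcome.
# All input numbers are hypothetical, not taken from the PLOS Medicine study.

def confusion_counts(population, prevalence, sensitivity, specificity):
    """Derive confusion-matrix counts from aggregate rates."""
    cases = population * prevalence      # people who will self-harm or die by suicide
    non_cases = population - cases
    tp = cases * sensitivity             # correctly flagged as high risk
    fn = cases - tp                      # missed: classified low risk despite being cases
    tn = non_cases * specificity         # correctly classified as low risk
    fp = non_cases - tn                  # wrongly flagged as high risk
    return tp, fp, tn, fn

# Illustrative values: a 0.5% outcome rate, a model that catches under half of
# eventual cases (low sensitivity) but rules out most non-cases (high specificity).
tp, fp, tn, fn = confusion_counts(
    population=1_000_000, prevalence=0.005, sensitivity=0.45, specificity=0.95
)

ppv = tp / (tp + fp)  # of those flagged high risk, what fraction are true cases?
print(f"Missed cases (false negatives): {fn:,.0f} of {tp + fn:,.0f}")
print(f"Positive predictive value: {ppv:.1%}")
```

With these made-up inputs, more than half of eventual cases are missed and only about 4% of high-risk flags are true positives, which shows qualitatively how a model can look acceptable on specificity while remaining clinically unhelpful.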
The report underscores that current machine learning tools do not justify changes to clinical practice guidelines, which generally discourage relying on risk assessments to decide treatment for suicide prevention. The authors caution against overestimating the potential of AI in this context, noting that the algorithms' high false positive and false negative rates undermine their clinical utility. They call for more rigorous research and improved methods before AI can be meaningfully integrated into mental health risk assessment.
This study is a reminder that while the technology holds promise, current AI solutions cannot reliably predict suicide and self-harm, and established clinical approaches remain essential for effective intervention.
Source: https://medicalxpress.com/news/2025-09-ai-tools-fall-short-suicide.html