New Study Shows Limitations of AI in Predicting Suicide Risk

A comprehensive new study finds that current AI and machine learning tools cannot accurately predict suicidal behavior, pointing to the need for more research before these tools are used in clinical practice.
Recent research highlights the limitations of current artificial intelligence (AI) tools in predicting suicidal behavior. In a study published in the open-access journal PLOS Medicine on September 11, 2025, researchers conducted a systematic review and meta-analysis of 53 previous investigations involving over 35 million medical records and nearly 250,000 cases of suicide or hospital-treated self-harm. Despite growing interest in machine learning algorithms designed to identify high-risk individuals, the findings show that these tools perform only modestly. They are reasonably good at ruling out low-risk cases, showing high specificity, yet they fall short in identifying those who go on to self-harm or die by suicide, producing a significant number of false negatives.
Specifically, the algorithms misclassified more than half of the individuals who later presented with self-harm or died by suicide as low risk. Among those identified as high risk, only 6% actually died by suicide, and fewer than 20% re-presented to clinical services for self-harm. The authors emphasize that the predictive accuracy of these machine learning models is comparable to that of traditional risk assessment scales, which have historically performed poorly. Furthermore, the overall quality of existing research in this field is subpar, with many studies exhibiting bias or unclear methodology.
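The pattern described above, high specificity alongside a low hit rate among flagged patients, follows from base-rate arithmetic. The sketch below uses purely hypothetical numbers (not figures from the study) to show how a tool that misses half of at-risk individuals and has 90% specificity can still produce mostly false alarms when the outcome is rare:

```python
# Illustrative sketch with hypothetical numbers (not from the study):
# why high specificity can coexist with a low positive predictive value
# when the predicted outcome is rare.

def screening_stats(n, base_rate, sensitivity, specificity):
    """Return (true positives, false negatives, false positives, PPV)
    for a screening tool applied to n people."""
    positives = n * base_rate                  # people who will self-harm
    negatives = n - positives                  # people who will not
    true_pos = positives * sensitivity         # correctly flagged high risk
    false_neg = positives - true_pos           # at-risk people missed
    false_pos = negatives * (1 - specificity)  # wrongly flagged high risk
    ppv = true_pos / (true_pos + false_pos)    # share of flagged who self-harm
    return true_pos, false_neg, false_pos, ppv

# Assume 100,000 patients, a 1% base rate, 50% sensitivity, 90% specificity.
tp, fn, fp, ppv = screening_stats(100_000, 0.01, 0.50, 0.90)
print(f"missed cases: {fn:.0f}")   # half of those at risk are missed
print(f"false alarms: {fp:.0f}")   # far outnumber the true positives
print(f"PPV: {ppv:.1%}")           # only ~4.8% of flagged patients self-harm
```

Even with these generous assumptions, roughly 20 people are flagged for every one who goes on to self-harm, which is the kind of false-positive burden the authors argue undermines clinical utility.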
The report underscores that current machine learning tools do not justify changes in clinical practice guidelines, which generally discourage reliance on risk assessments for deciding treatment plans for suicide prevention. The authors caution against overestimating the potential of AI in this context, asserting that these algorithms have significant false positive and false negative rates that undermine their clinical utility. They call for more rigorous research and improved methods before AI can be effectively integrated into mental health risk assessment.
This study serves as a reminder that while technology holds promise, current AI solutions are insufficient for reliably predicting suicide and self-harm, and traditional approaches remain critical for effective intervention.
Source: https://medicalxpress.com/news/2025-09-ai-tools-fall-short-suicide.html