New Study Shows the Limitations of AI in Predicting Suicide Risk

A comprehensive new study finds that current AI and machine learning tools cannot accurately predict suicidal behavior, indicating the need for more research before clinical application.
Recent research highlights the current limitations of artificial intelligence (AI) tools in accurately predicting suicidal behavior. Published in the open-access journal PLOS Medicine on September 11, 2025, the study is a comprehensive systematic review and meta-analysis of 53 previous investigations covering over 35 million medical records and nearly 250,000 cases of suicide or hospital-treated self-harm. The findings reveal that despite growing interest in machine learning algorithms aimed at identifying high-risk individuals, these tools perform only modestly. They are notably better at ruling out low-risk cases—showing high specificity—yet they fall short in identifying those who will go on to self-harm or die by suicide, producing a significant number of false negatives.
Specifically, the algorithms misclassified more than half of the individuals who later presented with self-harm or suicide as low risk. Among those identified as high risk, only 6% actually died by suicide, and less than 20% re-presented for self-harm in clinical settings. The authors emphasize that the predictive accuracy of these machine learning models is comparable to traditional risk assessment scales, which have historically shown poor performance. Furthermore, the overall quality of existing research in this field is subpar, with many studies exhibiting bias or unclear methodology.
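The tension between high specificity and low real-world predictive value comes down to base rates: when the outcome is rare, even a classifier that correctly rules out most low-risk people will flag far more false positives than true positives. The sketch below illustrates this with a hypothetical cohort and made-up counts (the numbers are not taken from the study; they are chosen only to show the arithmetic):

```python
# Illustrative confusion-matrix arithmetic (all counts are hypothetical)
# showing how high specificity can coexist with a low positive
# predictive value when the predicted outcome is rare.

def classification_metrics(tp, fn, fp, tn):
    """Compute standard metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # share of true cases correctly flagged
    specificity = tn / (tn + fp)   # share of non-cases correctly cleared
    ppv = tp / (tp + fp)           # share of flagged cases that are true cases
    return sensitivity, specificity, ppv

# Hypothetical cohort of 100,000 people with a 0.5% event rate.
events, non_events = 500, 99_500
tp, fn = 250, 250              # half of true cases missed (false negatives)
fp = 3_750                     # a small fraction of non-cases flagged anyway
tn = non_events - fp

sens, spec, ppv = classification_metrics(tp, fn, fp, tn)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.3f}")
```

With these invented numbers, specificity is about 0.96 yet only around 6% of flagged individuals are true cases, echoing the pattern the study describes: a tool can look accurate on paper while most of its high-risk flags are false alarms.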
The report underscores that current machine learning tools do not justify changes in clinical practice guidelines, which generally discourage reliance on risk assessments for deciding treatment plans for suicide prevention. The authors caution against overestimating the potential of AI in this context, asserting that these algorithms have significant false positive and false negative rates that undermine their clinical utility. They call for more rigorous research and improved methods before AI can be effectively integrated into mental health risk assessment.
This study serves as a reminder that while technology holds promise, current AI solutions are insufficient for reliably predicting suicide and self-harm, and traditional approaches remain critical for effective intervention.
Source: https://medicalxpress.com/news/2025-09-ai-tools-fall-short-suicide.html