AI Tools Used in Social Care May Undervalue Women's Health Needs

New research suggests that AI tools used in social care may downplay women's health issues, reflecting gender bias and risking unequal treatment. Learn how these models affect care quality and fairness.
Recent research from the London School of Economics indicates that large language models (LLMs), widely used by local authorities in England to support social workers, may inadvertently exhibit gender bias. These AI systems, including popular models such as Google's Gemma, are increasingly used to generate case summaries and ease administrative burdens. However, the findings suggest that such tools may systematically underrepresent or downplay women's physical and mental health concerns compared to men's.
The study analyzed real-world case notes and found that descriptions of men were more likely to include terms such as "disabled," "unable," or "complex," which are important indicators of health needs. Similar issues faced by women were often given less emphasis or described in less serious terms.
The researchers generated more than 29,600 pairs of summaries from real case notes, identical except for the gender of the person described, to directly compare how the same case was treated for men and women. This comparison revealed significant discrepancies in how health issues, particularly mental health and physical conditions, were described. Google's Gemma model showed more pronounced gender disparities than benchmark models such as Meta's Llama 3, which did not vary its language based on gender.
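The counterfactual design behind this comparison can be illustrated with a minimal sketch along the following lines. This is not the researchers' code: the `summarize` callable stands in for whichever LLM is being evaluated, and the gender-swap table and indicator terms are simplified assumptions for illustration.

```python
# Illustrative sketch of a gender-swap comparison of LLM-generated case summaries.
# The swap table, indicator terms, and `summarize` callable are assumptions, not
# the study's actual implementation.
import re
from collections import Counter
from typing import Callable, Iterable

# Naive pronoun/title swaps used to create a gender-flipped copy of a case note.
# NOTE: this is simplified; e.g. "her" is ambiguous between possessive and objective.
GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her", "hers": "his",
    "mr": "ms", "ms": "mr",
}

# Health-need indicator terms of the kind highlighted in the study's findings.
NEED_TERMS = {"disabled", "unable", "complex"}

def swap_gender(text: str) -> str:
    """Return a copy of the case note with gendered words swapped."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = GENDER_SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(GENDER_SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def term_counts(summary: str, terms: Iterable[str] = NEED_TERMS) -> Counter:
    """Count how often each health-need indicator term appears in a summary."""
    words = re.findall(r"[a-z']+", summary.lower())
    return Counter(w for w in words if w in terms)

def compare_pair(case_note: str, summarize: Callable[[str], str]) -> dict:
    """Summarize the original and gender-swapped note, then compare term counts."""
    original = summarize(case_note)
    flipped = summarize(swap_gender(case_note))
    return {"original": term_counts(original), "swapped": term_counts(flipped)}
```

Aggregating these counts across thousands of paired notes, as the study did at much larger scale, is what exposes whether a model's language systematically shifts when only the subject's gender changes.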
Dr. Sam Rickman, the lead researcher, emphasized the potential risks: reliance on biased AI summaries could lead social workers to assess identical cases differently based solely on gender. Since social care access is determined by perceived need, such biases might result in unequal treatment — disadvantaging women in particular.
While LLMs promise to streamline social work processes, their deployment must be transparent, thoroughly tested for bias, and subject to appropriate legal oversight to prevent systemic inequalities. This research is the first to quantitatively evaluate gender bias in AI-generated social care records, underscoring the need for urgent attention to fairness and equity in AI applications within the public sector.
Source: https://medicalxpress.com/news/2025-08-ai-tools-downplaying-women-health.html