Addressing Stigmatizing Language in Large Language Models and Healthcare Communication

Recent research reveals that large language models can unintentionally use stigmatizing language about individuals with alcohol and substance use disorders. Prompt engineering can significantly reduce this harmful language, enhancing healthcare communication and patient trust.
As artificial intelligence continues to integrate into healthcare communication, recent research highlights a critical concern: large language models (LLMs) may inadvertently perpetuate harmful stereotypes through the use of stigmatizing language. A study conducted by researchers at Mass General Brigham found that over 35% of responses related to alcohol and substance use conditions contained stigmatizing terminology. However, the study also demonstrated that strategic prompt engineering—adjusting input instructions—can significantly reduce such language, achieving an 88% decrease in stigmatizing responses.
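For readers curious what such a prompt adjustment can look like in practice, the sketch below shows one way to prepend a person-first-language instruction to a question sent to a chat model. It is a minimal illustration only, using the OpenAI Python client; the model name, the wording of the instruction, and the example question are assumptions for demonstration, not the actual prompts, models, or criteria used in the Mass General Brigham study.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Hypothetical instruction modeled on person-first language guidance;
# the study's actual engineered prompts are not reproduced here.
STYLE_INSTRUCTION = (
    "Use person-first, non-stigmatizing language. For example, say "
    "'person with alcohol use disorder' rather than 'alcoholic', and "
    "'person who uses substances' rather than 'addict' or 'abuser'."
)

def ask_with_style_guidance(question: str) -> str:
    """Send a question with a system-level instruction discouraging stigmatizing terms."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice; any chat model could be used
        messages=[
            {"role": "system", "content": STYLE_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_with_style_guidance(
        "How should I talk with a family member about their drinking?"
    ))
```

The design point is simply that the guidance travels with every request as a system message, so the model is steered toward respectful terminology before it generates a response, rather than relying on after-the-fact editing.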
Effective, patient-centered language is essential for building trust, improving patient engagement, and enhancing health outcomes. Dr. Wei Zhang, the study's corresponding author, emphasized that stigmatizing language, even when generated by AI, can make patients feel judged and can erode their confidence in healthcare providers. Because LLMs learn from everyday language, their responses can reproduce the biases and negative stereotypes about patients embedded in that language.
To address this, the researchers tested 14 different LLMs on 60 clinically relevant prompts concerning alcohol use disorder, alcohol-associated liver disease, and substance use disorder. Responses were evaluated by physicians using guidelines from prominent organizations, including the National Institute on Drug Abuse and the National Institute on Alcohol Abuse and Alcoholism. The findings showed that responses generated without prompt adjustments contained stigmatizing language 35.4% of the time, whereas prompt-engineered responses did so only 6.3% of the time. Longer responses were also more likely to include stigmatizing language.
Moving forward, the study advocates for the development of AI tools that inherently avoid stigmatizing language, which could significantly improve patient interactions. Clinicians are encouraged to review AI-generated content before use and to employ alternative, more inclusive language options. Future research will involve patients and families with lived experiences to refine definitions and ensure that language used by AI aligns with patient needs.
This research underscores the importance of prioritizing language quality in healthcare communication, especially as AI technologies become more prevalent. Ensuring that LLMs promote respectful and non-stigmatizing language is vital for fostering trust and improving health outcomes.