Addressing Stigmatizing Language in Large Language Models and Healthcare Communication

Recent research reveals that large language models can unintentionally use stigmatizing language about individuals with alcohol and substance use disorders. Prompt engineering can significantly reduce this harmful language, enhancing healthcare communication and patient trust.
As artificial intelligence continues to integrate into healthcare communication, recent research highlights a critical concern: large language models (LLMs) may inadvertently perpetuate harmful stereotypes through the use of stigmatizing language. A study conducted by researchers at Mass General Brigham found that over 35% of responses related to alcohol and substance use conditions contained stigmatizing terminology. However, the study also demonstrated that strategic prompt engineering—adjusting input instructions—can significantly reduce such language, achieving an 88% decrease in stigmatizing responses.
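As a rough illustration of what such prompt engineering can look like in practice, the sketch below simply prepends a person-first-language instruction to a clinical question before it is sent to a model. This is a minimal, hypothetical example: the `generate` callable, the instruction wording, and the sample question are illustrative stand-ins, not the actual prompts, models, or instructions used in the study.

```python
from typing import Callable

# Hypothetical instruction wording; the study's exact prompt text is not reproduced here.
NONSTIGMATIZING_INSTRUCTION = (
    "Use person-first, non-stigmatizing language. For example, say "
    "'person with alcohol use disorder' rather than 'alcoholic' or 'addict'."
)

def ask_with_prompt_engineering(question: str,
                                generate: Callable[[str], str]) -> str:
    """Prepend a language instruction to the question before calling the model.

    `generate` stands in for any LLM completion function (an API client, a
    local model, etc.); the wrapper only adjusts the input instructions,
    mirroring the idea described above.
    """
    prompt = f"{NONSTIGMATIZING_INSTRUCTION}\n\n{question}"
    return generate(prompt)

# Example usage with a placeholder model call:
# answer = ask_with_prompt_engineering(
#     "What treatments are available for alcohol use disorder?", my_llm_generate
# )
```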
Effective, patient-centered language is essential for building trust, improving patient engagement, and enhancing health outcomes. Dr. Wei Zhang, the study's corresponding author, emphasized that stigmatizing language, even when generated by AI, can make patients feel judged and erode their confidence in healthcare providers. Because LLMs learn from everyday language, their responses can reproduce the biases and negative stereotypes about patients that are embedded in that language.
To address this, the researchers tested 14 different LLMs on 60 clinically relevant prompts concerning alcohol use disorder, alcohol-associated liver disease, and substance use disorder. Responses were evaluated by physicians using guidelines from organizations including the National Institute on Drug Abuse and the National Institute on Alcohol Abuse and Alcoholism. Without prompt adjustments, responses contained stigmatizing language 35.4% of the time; with prompt engineering, the rate fell to 6.3%. Longer responses were also more likely to include stigmatizing language.
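Those percentages are simply the share of evaluated responses that physicians flagged as stigmatizing under each condition. A minimal sketch of that tally, assuming labels are stored per response (the field names below are illustrative, not taken from the study):

```python
from dataclasses import dataclass

@dataclass
class EvaluatedResponse:
    model: str                 # which LLM produced the response
    prompt_engineered: bool    # was the input adjusted with a language instruction?
    stigmatizing: bool         # physician judgment against NIDA/NIAAA guidance

def stigmatizing_rate(responses: list[EvaluatedResponse],
                      prompt_engineered: bool) -> float:
    """Fraction of responses in one condition flagged as stigmatizing."""
    subset = [r for r in responses if r.prompt_engineered == prompt_engineered]
    if not subset:
        return 0.0
    return sum(r.stigmatizing for r in subset) / len(subset)

# With the study's figures, stigmatizing_rate(..., False) would be about 0.354
# and stigmatizing_rate(..., True) about 0.063.
```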
Moving forward, the study advocates for the development of AI tools that inherently avoid stigmatizing language, which could significantly improve patient interactions. Clinicians are encouraged to review AI-generated content before use and to employ alternative, more inclusive language options. Future research will involve patients and families with lived experiences to refine definitions and ensure that language used by AI aligns with patient needs.
This research underscores the importance of prioritizing language quality in healthcare communication, especially as AI technologies become more prevalent. Ensuring that LLMs promote respectful and non-stigmatizing language is vital for fostering trust and improving health outcomes.