
Addressing Stigmatizing Language in Large Language Models and Healthcare Communication


Recent research reveals that large language models can unintentionally use stigmatizing language about individuals with alcohol and substance use disorders. Prompt engineering can significantly reduce this harmful language, enhancing healthcare communication and patient trust.


As artificial intelligence continues to integrate into healthcare communication, recent research highlights a critical concern: large language models (LLMs) may inadvertently perpetuate harmful stereotypes through the use of stigmatizing language. A study conducted by researchers at Mass General Brigham found that over 35% of responses related to alcohol and substance use conditions contained stigmatizing terminology. However, the study also demonstrated that strategic prompt engineering—adjusting input instructions—can significantly reduce such language, achieving an 88% decrease in stigmatizing responses.
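The study does not reproduce its exact instructions, but the core idea of prompt engineering here is straightforward: prepend a short directive asking the model to use person-first, non-stigmatizing wording. The sketch below is a hypothetical illustration assuming the OpenAI Python client; the instruction text, model name, and `ask_with_person_first_language` helper are assumptions, not the researchers' prompts.

```python
# Minimal sketch of the prompt-engineering idea (illustrative, not the study's prompts).
# Assumes the `openai` package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical instruction requesting person-first, non-stigmatizing wording.
STYLE_INSTRUCTION = (
    "Use person-first, non-stigmatizing language. For example, write "
    "'person with alcohol use disorder' rather than 'alcoholic', and "
    "'substance use disorder' rather than 'substance abuse'."
)

def ask_with_person_first_language(question: str) -> str:
    """Send a clinical question with the style instruction attached as a system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": STYLE_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_with_person_first_language(
        "How should I explain alcohol-associated liver disease to a patient?"
    ))
```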

Effective, patient-centered language is essential for building trust, improving patient engagement, and enhancing health outcomes. Dr. Wei Zhang, the study's corresponding author, emphasized that stigmatizing language, even when generated by AI, can make patients feel judged and erode confidence in healthcare providers. Because LLMs learn from everyday language, their responses can reproduce the biases and negative stereotypes about patients embedded in that language.

To address this, the researchers tested 14 different LLMs on 60 clinically relevant prompts concerning alcohol use disorder, alcohol-associated liver disease, and substance use disorder. Responses were evaluated by physicians using guidelines from prominent organizations, including the National Institute on Drug Abuse and the National Institute on Alcohol Abuse and Alcoholism. Without prompt adjustments, 35.4% of responses contained stigmatizing language; with prompt engineering, that figure fell to 6.3%. Longer responses were also more likely to include stigmatizing language.
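The evaluation itself relied on physician review against NIDA and NIAAA language guidance rather than automated checks, but a rough first-pass screen can be sketched in a few lines. The term list, helper names, and toy responses below are hypothetical and for illustration only.

```python
# Illustrative first-pass keyword screen for stigmatizing terms in model responses.
# The study used physician review against NIDA/NIAAA guidelines; this word list is
# a small hypothetical example, not the researchers' evaluation instrument.
import re

FLAGGED_TERMS = ["alcoholic", "addict", "substance abuse", "drug abuser", "junkie"]

def contains_flagged_term(response: str) -> bool:
    """Return True if any flagged term appears as a whole word or phrase."""
    text = response.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", text) for term in FLAGGED_TERMS)

def stigmatizing_rate(responses: list[str]) -> float:
    """Fraction of responses flagged by the keyword screen."""
    return sum(contains_flagged_term(r) for r in responses) / len(responses) if responses else 0.0

# Toy comparison of plain vs. prompt-engineered outputs.
plain = ["The alcoholic patient relapsed.", "Review treatment options with the patient."]
engineered = ["The patient with alcohol use disorder returned to drinking.",
              "Review treatment options with the patient."]
print(stigmatizing_rate(plain), stigmatizing_rate(engineered))  # 0.5 0.0
```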

Moving forward, the study advocates for the development of AI tools that inherently avoid stigmatizing language, which could significantly improve patient interactions. Clinicians are encouraged to review AI-generated content before use and to employ alternative, more inclusive language options. Future research will involve patients and families with lived experiences to refine definitions and ensure that language used by AI aligns with patient needs.

This research underscores the importance of prioritizing language quality in healthcare communication, especially as AI technologies become more prevalent. Ensuring that LLMs promote respectful and non-stigmatizing language is vital for fostering trust and improving health outcomes.

