
Typos and Informal Language in Patient Communications Can Disrupt AI-Driven Medical Advice

MIT research reveals that typos, slang, and formatting issues in patient messages can bias AI-driven medical advice, risking misdiagnosis and health disparities. Ensuring model robustness is vital for safe healthcare AI deployment.

Recent MIT research highlights a concerning challenge in deploying large language models (LLMs) for healthcare: nonclinical text variations such as typos, slang, and formatting inconsistencies can significantly influence the recommendations AI systems make. The study found that these seemingly minor differences (extra spaces, misspellings, colloquial language) push models toward self-management advice rather than in-person clinical care, even when hospitalization is warranted. Certain language styles also disproportionately affect recommendations for female patients, raising the risk of misguidance and widening health disparities.

The findings underscore the need for thorough auditing of AI tools before they are integrated into healthcare services. Because LLMs are already used to draft clinical notes and triage patient messages, the team warns that unvetted models could produce unsafe or biased recommendations. To test this, the researchers modified thousands of patient messages with realistic perturbations mimicking how vulnerable groups write, such as patients with language barriers or health anxiety, and observed how the models' responses changed.
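To illustrate the kind of perturbation pipeline the study describes, here is a minimal Python sketch. The function names and the specific substitutions are illustrative assumptions, not the researchers' published code; the point is that each transformation leaves the clinical content intact while changing only the nonclinical surface form.

```python
import random

# Illustrative perturbation functions mimicking the nonclinical text
# variations the study describes (extra spaces, typos, colloquial phrasing).
# Names and substitutions are assumptions, not the researchers' pipeline.

def add_extra_whitespace(text: str) -> str:
    """Insert stray double spaces between words, mimicking formatting noise."""
    return "  ".join(text.split())

def introduce_typo(text: str) -> str:
    """Swap two adjacent characters in one random word, mimicking a typo."""
    words = text.split()
    idx = random.randrange(len(words))
    w = words[idx]
    if len(w) > 3:
        i = random.randrange(len(w) - 1)
        words[idx] = w[:i] + w[i + 1] + w[i] + w[i + 2:]
    return " ".join(words)

def make_colloquial(text: str) -> str:
    """Substitute informal phrasings for clinical-sounding terms."""
    swaps = {"abdominal pain": "tummy ache",
             "experiencing": "having",
             "physician": "doctor"}
    for formal, casual in swaps.items():
        text = text.replace(formal, casual)
    return text

PERTURBATIONS = [add_extra_whitespace, introduce_typo, make_colloquial]

def perturb(message: str) -> str:
    """Apply one randomly chosen nonclinical perturbation; the clinical
    content is unchanged, only the surface form varies."""
    return random.choice(PERTURBATIONS)(message)

original = "I am experiencing severe abdominal pain. Should I see a physician?"
print(perturb(original))
```

In an audit, the original message and each perturbed variant would be sent to the same model, so any change in its recommendation can be attributed to the surface edit alone.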

The results showed a consistent pattern: disruptions in message formatting and language led to a 7-9% rise in recommendations that patients manage their conditions at home, even when medical intervention was needed. The models also made about 7% more errors for female patients, pointing to gender bias in their reasoning. Such errors are especially troubling in conversational settings, like health chatbots, where patients interact with the model directly.
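To make those numbers concrete, here is a minimal sketch of how such a shift could be scored. The triage labels, function names, and toy stub are assumptions for illustration; the paper's actual evaluation harness may differ.

```python
from typing import Callable

# Hypothetical interface: any callable mapping a patient message to a
# triage label such as "self-manage" or "see-clinician".
TriageModel = Callable[[str], str]

def self_management_rate(model: TriageModel, messages: list[str]) -> float:
    """Fraction of messages for which the model advises self-management."""
    labels = [model(m) for m in messages]
    return sum(label == "self-manage" for label in labels) / len(labels)

def audit_shift(model: TriageModel,
                originals: list[str],
                perturbed: list[str]) -> float:
    """Percentage-point rise in self-management advice after perturbation.
    The study reports rises of roughly 7-9 points on perturbed messages."""
    return 100 * (self_management_rate(model, perturbed)
                  - self_management_rate(model, originals))

# Toy stub standing in for the LLM under audit: it escalates messages that
# still contain the clinical term and advises self-management otherwise.
def toy_model(message: str) -> str:
    return "see-clinician" if "abdominal pain" in message else "self-manage"

originals = ["I have severe abdominal pain.",
             "Severe abdominal pain since this morning."]
perturbed = ["I have a severe tummy ache.",
             "Severe abdominal pain since this morning."]
print(audit_shift(toy_model, originals, perturbed))  # 50.0 points in this toy case
```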

Follow-up work indicates that human clinicians are far less affected by such text variations, underscoring how fragile current AI models are to subtle changes in wording. The researchers therefore advocate more robust training and evaluation practices that incorporate realistic communication styles, to ensure safe and equitable healthcare AI applications.

This work, presented at the ACM Conference on Fairness, Accountability, and Transparency, prompts a reevaluation of how AI is deployed in medical contexts. Ensuring that models interpret patient messages accurately, regardless of style, is essential to preventing misdiagnosis and treatment errors and to building fairer, safer healthcare AI.

