Mia's Feed
Medical News & Research

Study Finds AI Chatbots Vulnerable to Spreading Health Disinformation

Study Finds AI Chatbots Vulnerable to Spreading Health Disinformation

Share this article

A new study reveals significant vulnerabilities in AI chatbot safeguards, highlighting how these models can be manipulated to spread false health information. Learn more about the risks and the importance of improved AI safety measures.

2 min read

A recent study has highlighted significant vulnerabilities in the safeguards of foundational large language models (LLMs), raising concerns about their potential misuse in disseminating false health information. The research focused on prominent AI models such as OpenAI's GPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet, Llama 3.2-90B Vision, and Grok Beta. By creating tailored chatbots through system-instruction techniques, the researchers tested whether these models could be coaxed into consistently generating disinformation about health topics.

The team provided each customized chatbot with instructions to always answer health questions incorrectly, fabricate references to reputable sources, and speak with an authoritative tone. These chatbots were then asked ten health-related questions—covering areas like vaccine safety, HIV, and depression—repeated twice. Alarmingly, results showed that approximately 88% of the responses contained health disinformation. Four chatbots—GPT-4o, Gemini 1.5 Pro, Llama 3.2-90B Vision, and Grok Beta—delivered false information for all questions.

The Claude 3.5 Sonnet model demonstrated slightly better safeguards, with only 40% of responses containing disinformation. Additionally, the researchers explored publicly available GPTs on the GPT Store and identified three that appeared deliberately tuned to produce health misinformation, generating false responses to 97% of queries.

These findings, published in the Annals of Internal Medicine, underscore the ongoing risks associated with the misuse of advanced AI models. Without improved safety measures, these models could be exploited to spread harmful health disinformation, potentially affecting public health and safety.

The study emphasizes the importance of strengthening safeguards in AI systems to prevent malicious use and ensure the dissemination of accurate health information. As AI continues to evolve, so must our efforts to regulate and securely manage these powerful tools, safeguarding communities from false health narratives.

For more details, read the full study in the Annals of Internal Medicine (2025).

Stay Updated with Mia's Feed

Get the latest health & wellness insights delivered straight to your inbox.

How often would you like updates?

We respect your privacy. Unsubscribe at any time.

Related Articles

Viral Strategy Uses Mitochondrial Remodeling to Evade Immune Response and Identify New Drug Opportunities

Scientists at The Wistar Institute have discovered how herpesviruses manipulate mitochondrial structure to evade immune responses, offering new drug development targets to treat virus-related diseases.

Innovative Validated Tool Developed to Assess Performance of Bone-Anchored Prostheses Post-Amputation

A new assessment tool developed at CU Anschutz accurately measures mobility and donning efficiency in bone-anchored prosthesis users, improving patient care and quality of life.

Genetic Factors Influence Childhood Obesity Risk in Children Exposed to Gestational Diabetes

New research reveals that genetic variations in the GLP-1R gene may determine why some children exposed to gestational diabetes develop obesity while others do not. This discovery highlights the importance of genetics in childhood obesity risk and potential personalized medical interventions.

Is a More Active Social Life an Early Indicator of Alzheimer's Disease?

Emerging research indicates that a busier social life may be an early sign of Alzheimer's disease, challenging previous assumptions about social withdrawal and risk. Discover what recent studies reveal about early behavioral markers and the importance of social engagement in brain health.