
Evaluating ChatGPT's Diagnostic Capabilities: Insights, Limitations, and Future Directions

Emerging research evaluates ChatGPT's ability to assist in medical diagnosis, finding high accuracy in identifying disease and drug terms but also significant knowledge gaps and hallucination issues that must be addressed.


Recent research has explored the potential of ChatGPT, an advanced generative AI model, in medical diagnosis. As individuals increasingly turn to AI for health-related inquiries, understanding the accuracy and reliability of its responses is crucial. A study published in the journal iScience by researchers led by Ahmed Abdeen Hamed of Binghamton University assessed ChatGPT's performance in identifying disease terms, drug names, genetic information, and symptoms.

The study tested ChatGPT with a range of biomedical queries. The model achieved remarkably high accuracy: 88-97% for disease terms, 90-91% for drug names, and 88-98% for genetic information, significantly exceeding the researchers' initial expectations. For example, ChatGPT correctly identified cancer, hypertension, and fever as diseases, Remdesivir as a drug, and BRCA as a gene linked to breast cancer. Its performance on symptom identification was far weaker, however, with accuracy of only 49-61%. The discrepancy may stem from the model's training on informal, everyday language rather than standardized biomedical ontologies.
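
To make the evaluation setup concrete, here is a minimal sketch of how term-recognition accuracy of this kind might be scored. It is not the study's actual protocol: the labeled sample, the query_model stub, and the scoring function are all illustrative assumptions.

    # Minimal sketch of scoring term-recognition accuracy. Everything here
    # (the labeled sample, query_model, category_accuracy) is illustrative
    # and not the study's actual prompts, term lists, or protocol.

    # Tiny gold-standard sample: (term, expected category) pairs.
    LABELED_TERMS = [
        ("cancer", "disease"),
        ("hypertension", "disease"),
        ("remdesivir", "drug"),
        ("BRCA1", "gene"),
    ]

    def query_model(term: str) -> str:
        # Stand-in for a real LLM call that asks the model to categorize a
        # biomedical term; replace with an actual API request in practice.
        canned = {"cancer": "disease", "hypertension": "disease",
                  "remdesivir": "drug", "BRCA1": "gene"}
        return canned.get(term, "unknown")

    def category_accuracy(pairs) -> float:
        # Fraction of terms the model assigns to the gold-standard category.
        correct = sum(1 for term, gold in pairs if query_model(term) == gold)
        return correct / len(pairs)

    print(f"accuracy: {category_accuracy(LABELED_TERMS):.0%}")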

A notable issue was ChatGPT's tendency to 'hallucinate', generating fabricated genetic accession numbers such as made-up DNA sequence identifiers for genes like BRCA1. This highlights a critical risk in relying solely on the AI for factual biomedical information.
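
One practical safeguard against this failure mode is to verify any accession number a model produces against an authoritative database before trusting it. The minimal sketch below, assuming Python with the requests library, checks identifiers against NCBI's public E-utilities esummary endpoint (a real API); the accession_exists helper and the example identifiers are our own illustration, not part of the study.

    # Sketch: verify a model-supplied accession number against NCBI before
    # trusting it. The esummary endpoint is NCBI's real E-utilities API;
    # the helper function itself is our illustration.
    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

    def accession_exists(accession: str, db: str = "nucleotide") -> bool:
        # Ask NCBI for a document summary; fabricated identifiers fail to
        # resolve, returning an error or an empty UID list.
        resp = requests.get(EUTILS, params={"db": db, "id": accession,
                                            "retmode": "json"}, timeout=10)
        if resp.status_code != 200:
            return False
        try:
            data = resp.json()
        except ValueError:
            return False
        result = data.get("result", {})
        uids = result.get("uids", [])
        if "error" in data or not uids:
            return False
        return "error" not in result.get(uids[0], {})

    # NM_007294 is a genuine BRCA1 RefSeq accession; the second ID is invented.
    print(accession_exists("NM_007294"))     # expected: True
    print(accession_exists("NM_00000000X"))  # expected: False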

Hamed previously developed xFakeSci, a machine learning tool that detects approximately 94% of fake scientific papers. He suggests that integrating curated biomedical knowledge bases into large language models in a similar way could mitigate hallucinations, and he emphasizes the importance of incorporating biomedical ontologies into these models to improve accuracy and reduce misinformation.
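
As a toy illustration of that grounding idea (our sketch, not Hamed's implementation), one could check every term the model emits against a controlled vocabulary and flag anything the vocabulary cannot verify; a production system would load MeSH, SNOMED CT, or a similar ontology rather than the tiny in-memory set used here.

    # Toy sketch of ontology grounding: accept model-emitted terms only if a
    # controlled vocabulary can verify them, and flag the rest for review.
    # DISEASE_VOCAB stands in for a real ontology such as MeSH or SNOMED CT.
    DISEASE_VOCAB = {"cancer", "hypertension", "fever", "diabetes mellitus"}

    def ground_terms(model_terms, vocabulary):
        # Split model output into verified terms and unverified ones.
        verified = [t for t in model_terms if t.lower() in vocabulary]
        flagged = [t for t in model_terms if t.lower() not in vocabulary]
        return verified, flagged

    verified, flagged = ground_terms(["Hypertension", "Fever", "Glowflux"],
                                     DISEASE_VOCAB)
    print("verified:", verified)  # ['Hypertension', 'Fever']
    print("flagged:", flagged)    # ['Glowflux'], a made-up term, caught here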

While the findings show promising capabilities, the study underscores that ChatGPT and similar models are not yet ready to replace professional medical advice. Nonetheless, ongoing improvements could lead to powerful tools that assist with initial diagnostics or information validation, provided their limitations are addressed.

Source: https://medicalxpress.com/news/2025-07-chatgpt-reveals-knowledge-gaps-hallucination.html
