Evaluating ChatGPT's Diagnostic Capabilities: Insights, Limitations, and Future Directions

Emerging research evaluates ChatGPT's ability to assist in medical diagnosis, revealing high accuracy in identifying diseases and drugs but highlighting significant knowledge gaps and hallucination issues that need addressing.

Recent research has explored the potential of ChatGPT, an advanced generative AI model, in medical diagnosis. As more people turn to AI for health-related questions, understanding the accuracy and reliability of its responses is crucial. A study published in the journal iScience by researchers led by Ahmed Abdeen Hamed of Binghamton University assessed ChatGPT's performance in identifying disease terms, drug names, genetic information, and symptoms.

The study involved testing ChatGPT with various biomedical queries. The AI achieved high accuracy rates: 88-97% for disease terms, 90-91% for drug names, and 88-98% for genetic information, well above the researchers' initial expectations. For example, ChatGPT correctly identified cancer, hypertension, and fever as diseases, Remdesivir as a drug, and BRCA as a gene linked to breast cancer. Its performance was far weaker on symptom identification, however, with accuracy of only 49-61%. The researchers suggest this discrepancy may stem from the model's training on informal language rather than standardized biomedical ontologies.
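
As a rough illustration of this kind of evaluation (not the study's actual protocol), the sketch below asks a chat model to classify a handful of hand-labelled biomedical terms and scores its answers. The gold list, prompt wording, and model name are all illustrative assumptions.

# Minimal sketch of a term-classification accuracy check; the gold set,
# prompt, and model name are illustrative, not the study's protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

gold = {                      # tiny hand-labelled gold set
    "hypertension": "disease",
    "remdesivir": "drug",
    "BRCA1": "gene",
    "fatigue": "symptom",
}

def classify(term: str) -> str:
    """Ask the model to label a biomedical term with one category."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: disease, drug, gene, or symptom."},
            {"role": "user", "content": f"Classify the biomedical term: {term}"},
        ],
    )
    return reply.choices[0].message.content.strip().lower()

hits = sum(classify(term) == label for term, label in gold.items())
print(f"accuracy: {hits}/{len(gold)} = {hits / len(gold):.0%}")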

A notable issue uncovered was ChatGPT’s tendency to 'hallucinate' or generate fabricated genetic accession numbers, such as making up DNA sequence identifiers for genes like BRCA1. This highlights a critical challenge in relying solely on the AI for factual biomedical information.
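
One way to catch this kind of fabrication is to check any accession number a model returns against the authoritative database before trusting it. The sketch below queries NCBI's public E-utilities service to see whether an identifier actually resolves; the example accessions are illustrative, and the check only confirms that a record exists, not that it matches the gene in question.

# Sketch: verify that a model-supplied accession number exists in NCBI's
# nucleotide database via the public E-utilities esearch endpoint.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def accession_exists(accession: str) -> bool:
    """Return True if the accession resolves to at least one NCBI record."""
    params = {"db": "nucleotide", "term": accession, "retmode": "json"}
    data = requests.get(ESEARCH, params=params, timeout=10).json()
    return int(data["esearchresult"]["count"]) > 0

print(accession_exists("NM_007294"))   # a real BRCA1 mRNA accession -> True
print(accession_exists("XX_0000000"))  # a made-up identifier -> False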

The researchers also drew on xFakeSci, a machine-learning tool that detects approximately 94% of fabricated scientific papers, and argue that deeper integration of biomedical knowledge bases could likewise mitigate hallucinations in large language models. Hamed emphasizes that incorporating biomedical ontologies into these models would improve accuracy and reduce misinformation.
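
In the same spirit, one simple form of ontology grounding is to accept a model's term only when it appears in a curated vocabulary and flag everything else for review. The toy vocabulary below is a stand-in for a real resource such as MeSH or the Disease Ontology.

# Toy sketch of vocabulary grounding: model output is checked against a
# curated term list, and anything unknown is flagged instead of trusted.
CURATED_DISEASES = {"hypertension", "breast cancer", "influenza"}  # stand-in for a real ontology

def ground(term: str, vocabulary: set[str]) -> str:
    """Label a term as grounded, or flag it for human review."""
    return "grounded" if term.lower() in vocabulary else "unverified - flag for review"

for candidate in ["Hypertension", "fictitiousitis"]:
    print(candidate, "->", ground(candidate, CURATED_DISEASES))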

While the findings show promising capabilities, the study underscores that ChatGPT and similar models are not yet ready to replace professional medical advice. Nonetheless, ongoing improvements could lead to powerful tools that assist with initial diagnostics or information validation, provided their limitations are addressed.

Source: https://medicalxpress.com/news/2025-07-chatgpt-reveals-knowledge-gaps-hallucination.html
