Evaluating ChatGPT's Diagnostic Capabilities: Insights, Limitations, and Future Directions

Emerging research evaluates ChatGPT's ability to assist in medical diagnosis, revealing high accuracy in identifying diseases and drugs but exposing significant knowledge gaps and hallucination issues that must be addressed.
Recent studies have explored the potential of ChatGPT, an advanced generative AI model, in medical diagnosis. As individuals increasingly turn to AI for health-related inquiries, understanding the accuracy and reliability of its responses is crucial. A study published in the journal iScience by researchers led by Ahmed Abdeen Hamed of Binghamton University assessed ChatGPT's performance in identifying disease terms, drug names, genetic information, and symptoms.
The study tested ChatGPT with a range of biomedical queries. Remarkably, the AI achieved high accuracy rates: 88-97% for disease terms, 90-91% for drug names, and 88-98% for genetic information, significantly exceeding initial expectations. For example, ChatGPT correctly identified cancer, hypertension, and fever as diseases, Remdesivir as a drug, and BRCA as a gene linked to breast cancer. However, its performance was far weaker on symptom identification, with accuracy of only 49-61%. The discrepancy may stem from the AI's training on informal language rather than standardized biomedical ontologies.
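As a rough illustration of how such an evaluation can be scored, the sketch below compares a model's label for each term against a gold-standard category and reports per-category accuracy. The term list and labels are toy data for illustration only, not the study's actual benchmark or method.

```python
# Hypothetical scoring sketch: compare a model's category for each
# biomedical term against a gold-standard label and report per-category
# accuracy. All terms and labels here are illustrative examples.

from collections import defaultdict

# (term, gold_category, model_category) triples -- toy examples only
results = [
    ("cancer",       "disease", "disease"),
    ("hypertension", "disease", "disease"),
    ("Remdesivir",   "drug",    "drug"),
    ("BRCA1",        "gene",    "gene"),
    ("fatigue",      "symptom", "disease"),  # symptom terms proved hardest
]

correct = defaultdict(int)
total = defaultdict(int)
for term, gold, predicted in results:
    total[gold] += 1
    correct[gold] += (predicted == gold)

for category in total:
    accuracy = correct[category] / total[category]
    print(f"{category}: {accuracy:.0%} ({correct[category]}/{total[category]})")
```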
A notable issue uncovered was ChatGPT’s tendency to 'hallucinate' or generate fabricated genetic accession numbers, such as making up DNA sequence identifiers for genes like BRCA1. This highlights a critical challenge in relying solely on the AI for factual biomedical information.
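One practical guard against such fabrications is to validate any model-cited identifier before trusting it. The sketch below checks a claimed accession number against simplified patterns for common NCBI formats; the patterns are a rough approximation for illustration, and a well-formed but nonexistent identifier would still need to be verified against the database itself (for example via NCBI's E-utilities).

```python
import re

# Simplified patterns for common NCBI accession formats (RefSeq and
# GenBank nucleotide). These are approximations for illustration; a
# format check alone cannot catch a well-formed but nonexistent
# identifier -- that requires a live lookup against NCBI.
ACCESSION_PATTERNS = [
    re.compile(r"^(NM|NR|NP|NC|NG|XM|XP)_\d{6,9}(\.\d+)?$"),  # RefSeq
    re.compile(r"^[A-Z]\d{5}(\.\d+)?$"),                      # GenBank, 1 letter
    re.compile(r"^[A-Z]{2}\d{6,8}(\.\d+)?$"),                 # GenBank, 2 letters
]

def looks_like_accession(identifier: str) -> bool:
    """Return True if the identifier matches a known accession format."""
    return any(p.match(identifier) for p in ACCESSION_PATTERNS)

# A fabricated identifier that breaks the format is flagged immediately;
# a plausible-looking one would still need database verification.
for claimed in ["NM_007294.4", "BRCA1-SEQ-001"]:
    print(claimed, "->",
          "plausible format" if looks_like_accession(claimed) else "suspect")
```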
Hamed had previously developed xFakeSci, a machine-learning tool that detects approximately 94% of fabricated scientific papers. Building on that work, he argues that integrating biomedical ontologies and knowledge bases into large language models could similarly ground their output, improving accuracy and reducing misinformation.
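A minimal sketch of that ontology-grounding idea, assuming a toy vocabulary in place of a real resource such as MeSH or SNOMED CT, might look like this: each term the model asserts is checked against curated term sets, and anything unrecognized is flagged rather than accepted.

```python
# Ontology-grounding sketch: before trusting a model's biomedical claim,
# look the term up in a curated vocabulary. The term sets below are toy
# stand-ins; in practice one would load a real ontology such as MeSH,
# SNOMED CT, or the Human Phenotype Ontology.

CURATED_TERMS = {
    "disease": {"cancer", "hypertension", "fever"},
    "drug": {"remdesivir"},
    "gene": {"brca1", "brca2"},
}

def ground_term(term: str) -> str | None:
    """Return the ontology category for a term, or None if unrecognized."""
    normalized = term.strip().lower()
    for category, terms in CURATED_TERMS.items():
        if normalized in terms:
            return category
    return None  # unverified -- treat the model's claim as unconfirmed

for extracted in ["Remdesivir", "fluxomycin"]:
    print(extracted, "->", ground_term(extracted) or "not in ontology (flag)")
```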
While the findings show promising capabilities, the study underscores that ChatGPT and similar models are not yet ready to replace professional medical advice. Nonetheless, ongoing improvements could lead to powerful tools that assist with initial diagnostics or information validation, provided their limitations are addressed.
Source: https://medicalxpress.com/news/2025-07-chatgpt-reveals-knowledge-gaps-hallucination.html