Evaluating ChatGPT's Diagnostic Capabilities: Insights, Limitations, and Future Directions

Emerging research evaluates ChatGPT's ability to assist in medical diagnosis, revealing high accuracy in identifying disease and drug terms but also significant knowledge gaps and hallucination issues that still need to be addressed.
Recent research has explored the potential of ChatGPT, an advanced generative AI model, in medical diagnosis. As individuals increasingly turn to AI for health-related questions, understanding the accuracy and reliability of its responses is crucial. A study published in the journal iScience by researchers led by Ahmed Abdeen Hamed of Binghamton University assessed ChatGPT's performance in identifying disease terms, drug names, genetic information, and symptoms.
The study tested ChatGPT with a range of biomedical queries. The AI achieved remarkably high accuracy: 88-97% for disease terms, 90-91% for drug names, and 88-98% for genetic information, significantly outperforming initial expectations. For example, ChatGPT correctly identified cancer, hypertension, and fever as diseases, Remdesivir as a drug, and BRCA as a gene linked to breast cancer. Its performance on symptom identification was weaker, however, with accuracy of only 49-61%. The researchers suggest this discrepancy may stem from the model's training on informal everyday language rather than standardized biomedical ontologies.
A notable issue uncovered was ChatGPT's tendency to 'hallucinate': when asked for genetic accession numbers, it fabricated DNA sequence identifiers for genes such as BRCA1. This highlights a critical risk in relying on the AI alone for factual biomedical information.
The researchers also drew on xFakeSci, a machine learning tool that detects approximately 94% of fabricated scientific papers. Its success suggests that integrating curated biomedical knowledge bases could likewise mitigate hallucinations in large language models; Hamed emphasizes that incorporating biomedical ontologies into these models would improve accuracy and reduce misinformation.
While the findings show promising capabilities, the study underscores that ChatGPT and similar models are not yet ready to replace professional medical advice. Nonetheless, ongoing improvements could lead to powerful tools that assist with initial diagnostics or information validation, provided their limitations are addressed.
Source: https://medicalxpress.com/news/2025-07-chatgpt-reveals-knowledge-gaps-hallucination.html