Evaluating ChatGPT's Diagnostic Capabilities: Insights, Limitations, and Future Directions

Emerging research evaluates ChatGPT's ability to assist in medical diagnosis, finding high accuracy in identifying diseases, drugs, and genes, but weaker symptom recognition and a tendency to hallucinate genetic identifiers that still needs addressing.
Recent research has explored the potential of ChatGPT, an advanced generative AI model, in medical diagnosis. As more people turn to AI for health-related questions, understanding the accuracy and reliability of its answers is crucial. A study published in the journal iScience by a team led by Ahmed Abdeen Hamed of Binghamton University assessed ChatGPT's performance in identifying disease terms, drug names, genetic information, and symptoms.
The study tested ChatGPT with a range of biomedical queries. The AI achieved high accuracy: 88-97% for disease terms, 90-91% for drug names, and 88-98% for genetic information, well above the researchers' initial expectations. For example, ChatGPT correctly identified cancer, hypertension, and fever as diseases, Remdesivir as a drug, and BRCA as a gene linked to breast cancer. Its performance on symptom identification was markedly weaker, however, with accuracy of only 49-61%. The researchers suggest this discrepancy may stem from the model's training on informal language rather than on standardized biomedical ontologies.
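To make the evaluation concrete, here is a minimal Python sketch of how per-category accuracy figures like those above could be computed. The gold labels, model answers, and the accuracy_by_category helper are illustrative assumptions, not the study's actual data, prompts, or code.

```python
# Minimal sketch of per-category term-identification scoring.
# GOLD_LABELS and model_answers are illustrative placeholders, not the
# study's benchmark; prompting and response parsing are assumed done.

GOLD_LABELS = {
    "cancer": "disease",
    "hypertension": "disease",
    "fever": "disease",
    "remdesivir": "drug",
    "BRCA": "gene",
}

# Hypothetical parsed model responses to prompts such as
# "Is 'fever' a disease, a drug, a gene, or a symptom?"
model_answers = {
    "cancer": "disease",
    "hypertension": "disease",
    "fever": "symptom",  # an example miss
    "remdesivir": "drug",
    "BRCA": "gene",
}

def accuracy_by_category(gold, predicted):
    """Fraction of terms answered correctly within each gold category."""
    totals, correct = {}, {}
    for term, label in gold.items():
        totals[label] = totals.get(label, 0) + 1
        if predicted.get(term) == label:
            correct[label] = correct.get(label, 0) + 1
    return {label: correct.get(label, 0) / n for label, n in totals.items()}

print(accuracy_by_category(GOLD_LABELS, model_answers))
# e.g. {'disease': 0.667, 'drug': 1.0, 'gene': 1.0}
```

Scoring per category rather than overall is what exposes the gap the study reports: a model can look strong on diseases and drugs while still faring poorly on symptoms.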
A notable issue was ChatGPT's tendency to 'hallucinate': when asked for genetic accession numbers, it fabricated plausible-looking DNA sequence identifiers for genes such as BRCA1. This highlights a critical risk in relying on the AI alone for factual biomedical information.
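One practical guard against such fabrications is to validate any accession number a model emits before trusting it. The Python sketch below is an illustration of that idea, not part of the study: it first checks the RefSeq identifier format, then asks NCBI's public E-utilities whether a matching record actually exists. NM_007294 is the commonly cited BRCA1 mRNA RefSeq accession; the second identifier is deliberately made up.

```python
# Sketch: flag possibly fabricated accession numbers. A well-formed ID can
# still be invented, so a live lookup against NCBI is the real test.
# This is an illustrative guard, not the study's methodology.
import json
import re
import urllib.request

# Common RefSeq prefixes (NM_ mRNA, NC_ chromosome, NP_ protein, ...);
# the lookup below targets nucleotide records specifically.
REFSEQ_PATTERN = re.compile(r"^(NC|NG|NM|NR|NP|XM|XR|XP)_\d+(\.\d+)?$")

def looks_like_refseq(accession: str) -> bool:
    """Format check only; passing it does not prove the record exists."""
    return bool(REFSEQ_PATTERN.match(accession))

def exists_in_ncbi(accession: str) -> bool:
    """Ask NCBI esearch whether any nucleotide record matches the accession."""
    url = (
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
        f"?db=nuccore&term={accession}&retmode=json"
    )
    with urllib.request.urlopen(url, timeout=10) as resp:
        count = int(json.load(resp)["esearchresult"]["count"])
    return count > 0

# NM_007294 is the commonly cited BRCA1 mRNA RefSeq; NM_9999999 is made up.
for acc in ["NM_007294", "NM_9999999"]:
    print(acc, looks_like_refseq(acc), exists_in_ncbi(acc))
```

The two-step design matters: a hallucinated identifier often passes the format check, which is precisely why grounding model output in an authoritative database, rather than trusting well-formed text, is the safeguard the researchers advocate.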
The researchers also used xFakeSci, a machine-learning tool that detects approximately 94% of fabricated scientific papers; its success suggests that tighter integration of biomedical knowledge bases could mitigate hallucinations in large language models. Hamed emphasizes that incorporating biomedical ontologies into these models would improve accuracy and reduce misinformation.
While the findings show promising capabilities, the study underscores that ChatGPT and similar models are not yet ready to replace professional medical advice. Nonetheless, ongoing improvements could lead to powerful tools that assist with initial diagnostics or information validation, provided their limitations are addressed.
Source: https://medicalxpress.com/news/2025-07-chatgpt-reveals-knowledge-gaps-hallucination.html