AI Chatbots in Healthcare: Outperforming Doctors in Diagnosis but Requiring Safeguards Against Overprescribing

Emerging AI chatbots are outperforming doctors in diagnosis but pose risks of overprescribing and inequality. Responsible deployment with safeguards is crucial for safe healthcare innovation.
Artificial intelligence (AI) chatbots are an increasingly common part of the healthcare landscape, in some settings surpassing human doctors in diagnostic accuracy. These tools, powered by advanced large language models such as ChatGPT, ERNIE Bot, and DeepSeek, are being tested and implemented across hospitals, clinics, and even personal devices. They are designed to assist with clinical decision-making, especially where medical staffing is limited, by assessing symptoms and suggesting possible diagnoses.
Recent research published in npj Digital Medicine highlights both the potential and the pitfalls of AI in healthcare. The study used simulated patient interactions to compare AI chatbot performance with that of human primary care physicians. It found that while the chatbots achieved high diagnostic accuracy, they also recommended unnecessary tests in over 90% of cases and unnecessary medications in over 50%. For instance, in cases of asthma symptoms, some AI systems suggested antibiotics or expensive scans that are not supported by clinical guidelines.
Moreover, the research uncovered disparities in AI recommendations based on patient characteristics like age, income, or health status. Older and wealthier patients often received more tests and prescriptions, raising concerns about inequality and overuse. These findings underscore the importance of implementing safeguards such as equity checks, audit trails, and human oversight — particularly for high-stakes decisions — before deploying AI tools widely.
AI’s role in healthcare is promising, especially in regions with limited access to medical professionals, but it also presents risks: overprescribing, increased costs, and reinforcement of existing inequalities. As AI continues to evolve, the emphasis must be on responsible development and deployment that prioritizes safety, fairness, and transparency.
The call to action from these findings is clear: co-design AI systems with safety and justice in mind. By understanding both their capabilities and limitations, policymakers, developers, and health providers can foster AI solutions that enhance healthcare access while safeguarding against harm. This ongoing research aims to guide the responsible integration of AI into healthcare systems globally, ensuring equitable benefits for all communities.