AI Chatbots in Healthcare: Outperforming Doctors in Diagnosis but Requiring Safeguards Against Overprescribing

Emerging AI chatbots can outperform doctors in diagnostic accuracy but risk overprescribing and unequal care. Safeguards are essential for their responsible deployment in healthcare.
Artificial intelligence (AI) chatbots are becoming an increasingly common part of the healthcare landscape, often surpassing human doctors in diagnostic accuracy. These tools, powered by advanced large language models such as ChatGPT, ERNIE Bot, and DeepSeek, are being tested and deployed in hospitals, clinics, and even on personal devices. They are designed to support clinical decision-making, especially where medical staffing is limited, by assessing symptoms and suggesting possible diagnoses.
Recent research published in npj Digital Medicine highlights both the potential and the pitfalls of AI in healthcare. The study used simulated patient interactions to compare AI chatbot performance with that of human primary care physicians. It found that while chatbots achieved high diagnostic accuracy, they also recommended unnecessary tests and medications at alarmingly high rates (over 90% and 50% of cases, respectively). For instance, when presented with asthma symptoms, some AI systems suggested antibiotics or expensive scans that are not supported by clinical guidelines.
Moreover, the research uncovered disparities in AI recommendations based on patient characteristics like age, income, or health status. Older and wealthier patients often received more tests and prescriptions, raising concerns about inequality and overuse. These findings underscore the importance of implementing safeguards such as equity checks, audit trails, and human oversight, particularly for high-stakes decisions, before deploying AI tools widely.
AI’s role in healthcare is promising, especially in regions with limited access to medical professionals, but it also presents risks: overprescribing, increased costs, and reinforcement of existing inequalities. As AI continues to evolve, the emphasis must be on responsible development and deployment that prioritizes safety, fairness, and transparency.
The call to action from these findings is clear: co-design AI systems with safety and justice in mind. By understanding both the capabilities and the limitations of these tools, policymakers, developers, and health providers can foster AI solutions that expand healthcare access while guarding against harm. Ongoing research aims to guide the responsible integration of AI into healthcare systems globally, ensuring equitable benefits for all communities.