Cybersecurity Risks of Large Language Models in Radiology: A Special Report

A new report explores cybersecurity vulnerabilities in large language models used in radiology and highlights the importance of implementing security measures to protect patient data and clinical workflows.
In a comprehensive new report published in Radiology: Artificial Intelligence, experts highlight the growing cybersecurity threats associated with the use of large language models (LLMs) in radiology and healthcare. As AI tools like OpenAI's GPT-4 and Google's Gemini become more embedded in medical workflows, ensuring their security is paramount.
LLMs, which are capable of understanding and generating human language, are transforming many aspects of healthcare. They assist in clinical decision support, patient data analysis, drug discovery, and improving communication between healthcare providers and patients by simplifying medical terminology. Across the industry, efforts are underway to integrate these advanced language models into daily medical practices.
However, the report emphasizes that the rapid adoption of LLMs brings significant security challenges. Malicious actors can exploit vulnerabilities to access sensitive patient information, manipulate AI outputs, or disrupt clinical workflows through techniques such as data poisoning or inference attacks. These vulnerabilities are not confined to the AI models themselves; they extend to the broader ecosystem, including insecure deployment environments, and could lead to data breaches, manipulated outputs, or even the installation of malicious software on radiology systems.
Risks specific to healthcare LLM deployment include data poisoning, in which malicious data is introduced during training; circumvention of security protocols; and attacks on supporting infrastructure that could interfere with image analysis or compromise patient data. The report underscores the importance of thorough risk assessment and robust security measures before LLMs are integrated into clinical settings.
To mitigate these threats, healthcare institutions are advised to adopt strong cybersecurity practices. These include using strong passwords, enabling multi-factor authentication, maintaining updated software, deploying secure environments, encrypting data, and continuously monitoring model interactions. It is also crucial to vet and approve AI tools through established IT protocols and anonymize sensitive information used during interactions.
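One of the practices recommended above, anonymizing sensitive information before it is sent to an AI tool, can be sketched in a few lines of Python. This is a minimal illustration only: the `scrub_phi` helper and its regex patterns are hypothetical examples, and a real de-identification pipeline would need to cover the full range of patient identifiers (for instance, the 18 categories in HIPAA's Safe Harbor standard) and be clinically validated.

```python
import re

# Minimal sketch: mask a few common patient identifiers in free text
# before it is passed to an external LLM. Real de-identification must
# cover far more identifier types and be validated before clinical use.
PHI_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub_phi(text: str) -> str:
    """Replace matched identifiers with bracketed placeholder labels."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient MRN: 00123456, seen 03/14/2025, callback 555-867-5309."
print(scrub_phi(note))  # → Patient [MRN], seen [DATE], callback [PHONE].
```

Pattern-based scrubbing like this is only a first line of defense; institutions typically pair it with vetted de-identification services and the IT approval processes the report describes.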
Regular cybersecurity training for healthcare staff is vital, much as radiation safety training is routine in radiology. Patients should be informed about potential risks but can be reassured that ongoing efforts and regulations aim to safeguard their data.
As Dr. Tugba Akinci D'Antonoli, a neuroradiology fellow, notes, "While the integration of LLMs offers great potential to enhance patient care, understanding and addressing cybersecurity vulnerabilities is essential. Proactive measures today will help secure the future of AI in healthcare."
In conclusion, as the healthcare sector increasingly relies on AI, especially LLMs in radiology, it is imperative to prioritize cybersecurity to protect sensitive data and ensure safe, effective patient care. Ongoing advancements and stricter regulations are key to addressing these evolving threats.
More information: Cybersecurity Threats and Mitigation Strategies for Large Language Models in Healthcare, Radiology: Artificial Intelligence (2025).