Clinicians using public AI tools without approval—cyber risks ahead


Clinicians are increasingly turning to readily available, public-facing AI tools and large language models (LLMs), such as general-purpose chatbots, for clinical assistance without formal institutional approval. This phenomenon, known as "Shadow AI," poses immediate and severe cybersecurity and compliance risks. When patient data is pasted into a public LLM, it is transferred to a third-party server that is not covered by a Business Associate Agreement (BAA) and is not HIPAA-compliant.

This practice results in an instant HIPAA violation and leads to accidental data exposure without triggering firewalls or alerts. Experts warn that Shadow AI has quickly become the most overlooked threat in digital health. Health systems must urgently implement clear governance policies to restrict the use of non-approved AI tools and instead offer validated, secure, and institutionally monitored LLM resources. The solution lies not in banning AI, but in integrating approved, privacy-compliant AI systems that keep all processing within secure data centers.
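As a thought experiment, here is a minimal Python sketch of what "restrict and replace" could look like at the network edge: an egress check that lets prompts reach only the institution's approved LLM endpoint and blocks anything that looks like PHI. The hostname and regex patterns are illustrative assumptions on our part, not the article's recommendation or a production DLP ruleset.

import re

# Hypothetical allowlist: only the institution's approved, BAA-covered
# LLM endpoint is permitted as a destination for AI prompts.
APPROVED_AI_HOSTS = {"llm.internal.example-hospital.org"}

# Rough PHI indicators (assumed patterns; real DLP rules are far richer).
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-like
    re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.I),    # medical record number
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),        # date-of-birth-like
]

def outbound_prompt_allowed(host: str, body: str) -> bool:
    """Allow a prompt only if it targets an approved host and
    contains none of the PHI indicator patterns."""
    if host not in APPROVED_AI_HOSTS:
        return False
    return not any(p.search(body) for p in PHI_PATTERNS)

# A prompt pasted into a public chatbot fails both checks: wrong host,
# and an MRN in the text.
print(outbound_prompt_allowed("chat.public-ai.example.com",
                              "Summarize the note for MRN: 8675309"))  # False
print(outbound_prompt_allowed("llm.internal.example-hospital.org",
                              "Summarize this de-identified note"))    # True

The point of the sketch is the order of controls: destination first, content second, so that even traffic headed to the approved endpoint is screened for obvious PHI before it is forwarded.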

Read the original article at https://www.insurancejournal.com/news/international/2025/11/21/848596.htm


Our Opinion: This shows that clinicians are desperate for better, easily accessible tools, and the easiest option at hand is the nearest publicly available chatbot. The solution is to make an AI-enabled, secure, and HIPAA-compliant alternative just as readily available. That gives clinicians the convenience they crave while ensuring all data remains behind the approved institutional firewall.


Follow us on Instagram, Twitter, and Facebook to stay up to date with what's new in healthcare all around the world.
 
