What if your doctor’s AI agent leaked your private health information directly to the internet?
What if that information contained more than just administrative data–what if it contained your name, Social Security number, date of birth, medical record number, health insurance information, and more–not to mention your email, username and password?
What if you didn’t find out your information had been leaked for six months?
It may sound like a dystopian movie plot, but that is exactly the nightmare that nearly half a million patients are living right now–after an Agentic AI vendor caused their sensitive personal information, including health data, to be publicly leaked.
An AI Agent vendor called Serviceaide has been reported to be at the center of a data leak affecting roughly 483,000 people. Almost half a million people–patients of a single healthcare provider–are being notified that their most sensitive and personally identifiable data has been thrown carelessly into public view.
Serviceaide said that it “can’t rule out” the possibility that the information had been copied or otherwise exfiltrated.
These people aren’t just numbers–they’re human beings who trusted their healthcare providers to keep their information safe.
And this isn’t just a password breach. While a password can be changed, identifiers like Social Security and medical record numbers are virtually permanent. Personal diagnosis and prescription information publicly linked to these identifiers is a privacy nightmare scenario.
No one assumes that doctors are writing their own software. In fact, most people would probably prefer that doctors spend their time diagnosing and treating conditions rather than managing the day-to-day aspects of running a business.
What patients do assume is that third-party vendors who handle their data are vetted, technically sound, and legally compliant.
And that they are only granted access when absolutely necessary.
But Catholic Health–the healthcare provider that used Serviceaide, and whose patients trusted it with their most personal data–didn’t contract with the Agentic AI vendor because it was necessary.
They did it to save money.
From Serviceaide’s LinkedIn page:
“Serviceaide leads the way in Agentic AI-powered Agents designed to transform enterprises through automated first-contact resolution…By deflecting and resolving routine tasks, businesses can substantially reduce costs, empower employees to tackle high-value projects, and enhance overall productivity and operational efficiency.”
AI Agents are often billed as a means of increasing productivity. The hype is real, and Agentic AI startups are selling promises of “enhanced efficiency” wrapped in FOMO.
Catholic Health bought it.
For nearly half a million people, the results have been devastating.
And as businesses race to deploy Agentic AI, it’s only going to get worse.
If you’re paying attention, you’ve probably already thought of (at least) one very big question: how did Serviceaide get access to such comprehensive–and deeply personal–information to begin with?
Serviceaide isn’t a medical diagnostic tool. It wasn’t supposed to be able to read patients’ health data, was it? And even if it could, according to industry best practices, there should have been no way for it to publicly expose these records, right?
So how could something like this happen?
Non-Human Identities (NHI) Meet Human Data
Secure data management is all about resources. A human working with data would need to access various resources, like tools and services, in order to do their job effectively. These can include databases, developer tools, cloud services, and more.
These resources often have powerful effects on the systems they interact with, and sometimes deep access into critical data that is processed and stored by the enterprise. In order to control these effects, access to the resources has to be effectively managed. Identity plays a tremendous role in managing who has access to which resources, at what times, and to what degree.
Identity management isn’t an easy task, but at least with humans, the concept itself is intuitive. We know that humans have unique identities, and we can use these real-world identifiers to tie into software-based identity management systems.
AI Agents don’t have inherent identities–they aren’t human. So managing their permissions, access, and ability to interact with systems in general becomes a more theoretical, and completely software-based, problem.
In order to access the resources needed to complete their assigned tasks, AI Agents often must operate under what are called Non-Human Identities (NHIs).
These NHIs allow agents to interact with tools, databases, and other system resources.
[Image: How agents access tools. Source: OWASP Agentic AI Threats and Mitigation Guide]
According to the OWASP Agentic AI Threats and Mitigation Guide, “Non-Human Identities (NHI)—such as machine accounts, service identities, and agent-based API keys—play a key role in agentic AI security.”
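To make that concrete, here’s a minimal Python sketch of what least-privilege scoping for an agent’s NHI might look like. The class, the scope names, and the tools are all invented for illustration; this is a generic pattern, not Serviceaide’s implementation or any specific library’s API.

```python
from dataclasses import dataclass

# Illustrative sketch only: an agent's Non-Human Identity (NHI)
# carries an explicit, minimal set of scopes. All names here are
# hypothetical; nothing reflects any vendor's actual system.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset  # e.g., {"tickets:read", "tickets:write"}

def invoke_tool(identity: AgentIdentity, tool: str, required_scope: str) -> None:
    """Gate every tool call on the scopes granted to the NHI."""
    if required_scope not in identity.scopes:
        raise PermissionError(
            f"{identity.agent_id} lacks scope '{required_scope}' for {tool}"
        )
    print(f"{identity.agent_id} invoked {tool}")

# An agent provisioned for help-desk triage should never hold a
# scope like "patients:read" in the first place.
helpdesk_agent = AgentIdentity("helpdesk-bot-01", frozenset({"tickets:read"}))

invoke_tool(helpdesk_agent, "ticket_db", "tickets:read")  # allowed
try:
    invoke_tool(helpdesk_agent, "patient_records", "patients:read")
except PermissionError as err:
    print(f"Blocked: {err}")
```

The point of the sketch: if the identity itself carries only the scopes a task requires, an agent that goes off-script hits a hard wall instead of a patient database.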
But while traditional, human-centered authentication methods are often poorly implemented, NHIs can be even worse.
In fact, according to OWASP, “NHIs may lack session-based oversight, increasing the risk of privilege misuse or token abuse if not carefully managed.”
In authentication terms, sessions are a primary means of controlling access.
As an example, think about logging on to your online banking system. Even though you may have provided your username and password once, a secure banking app will still make you log in again on subsequent visits. In fact, if the app is open but inactive for too long, it may log you out and force you to authenticate again before you can access your banking data.
This ensures that a malicious user can’t access or even take over your account just because you forgot to end your session by logging out or closing the app. These security measures help ensure that it’s really you accessing your information.
This is the power of sessions.
Sessions protect both data and system resources–in other words, they help to guard both the data itself, and what can be done with it.
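To sketch the mechanics, here’s a minimal idle-timeout session in Python, mirroring the banking example above. The timeout value and names are assumptions for illustration, not any particular framework’s API.

```python
import time

# Illustrative sketch: a session that expires after a fixed idle
# period, forcing re-authentication (like the banking app above).

IDLE_TIMEOUT_SECONDS = 15 * 60  # assumed policy: 15 minutes of inactivity

class Session:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.last_activity = time.monotonic()

    def is_valid(self) -> bool:
        """A session is only valid while the idle window hasn't elapsed."""
        return (time.monotonic() - self.last_activity) < IDLE_TIMEOUT_SECONDS

    def touch(self) -> None:
        """Record activity, resetting the idle clock."""
        self.last_activity = time.monotonic()

def access_account(session: Session) -> None:
    if not session.is_valid():
        raise PermissionError("Session expired; please log in again.")
    session.touch()
    print(f"Account data served to {session.user_id}")

session = Session("alice")
access_account(session)  # succeeds while the session is fresh
```

Every access is re-checked against a clock, so a forgotten open tab, or a credential quietly held by a long-running process, can’t keep the door open forever.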
Without robust identity or session management, Agents become empowered to potentially access things they never should–in ways that have real-world consequences.
The Compounding Factor: Unsupervised Agency
Even with access to sensitive information and system resources, a deterministic system can still be controlled.
But Agents are non-deterministic by design.
Also by design: Agentic AI is highly unsupervised.
The point of Agentic deployments is to allow the software to function with minimal-to-no human oversight. After all, that’s a core part of the Serviceaide value proposition: to remove costly human labor from the equation.
Removing human supervision from any system with non-deterministic outcomes should ring alarm bells for anyone concerned with system security and integrity. Adding in potentially unregulated access to data is just rolling the dice.
Poor session and privilege management, combined with unsupervised agency, is a recipe for a privacy disaster in the making.
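One standard countermeasure, sketched below under assumed names, is a human-in-the-loop gate: any action an agent proposes that could expose sensitive data must be explicitly approved before it executes. This is a generic pattern, not a description of any vendor’s product.

```python
# Hypothetical sketch of a human-in-the-loop gate for agent actions.
# The action names and risk policy are assumptions for illustration.

HIGH_RISK_ACTIONS = {"export_records", "change_permissions", "publish_data"}

def execute_agent_action(action: str, approver=None) -> None:
    """Run low-risk actions directly; block high-risk ones pending approval."""
    if action in HIGH_RISK_ACTIONS:
        if approver is None or not approver(action):
            raise PermissionError(
                f"High-risk action '{action}' requires human approval"
            )
    print(f"Executed: {action}")

def human_reviewer(action: str) -> bool:
    # In practice this would route to a review queue; here we deny by default.
    return False

execute_agent_action("summarize_ticket")  # runs unsupervised
try:
    execute_agent_action("export_records", human_reviewer)
except PermissionError as err:
    print(f"Blocked: {err}")
```

The gate doesn’t make the agent deterministic; it just ensures the non-deterministic part can’t reach the highest-consequence actions alone.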
There’s no evidence in the Serviceaide report to indicate that an external (or internal) malicious actor was involved in the Catholic Health data exposure.
The insider threat was the Agents themselves.
The Threat Model
- Identity management becomes even harder when Agentic non-human identities enter the mix.
- Poorly secured Agentic deployments risk exposure of sensitive data, with potentially devastating consequences.
- When Agentic deployments fail, users are the most vulnerable.
Resources To Go Deeper
Singh, Chetanpal, Rahul G. Thakkar and Jatinder Warraich. “IAM Identity Access Management—Importance in Maintaining Security Systems within Organizations.” European Journal of Engineering and Technology Research (2023): n. pag.
Yu, Miao, Fanci Meng, Xinyun Zhou, Shilong Wang, Junyuan Mao, Linsey Pang, Tianlong Chen, Kun Wang, Xinfeng Li, Yongfeng Zhang, Bo An and Qingsong Wen. “A Survey on Trustworthy LLM Agents: Threats and Countermeasures.” ArXiv abs/2503.09648 (2025): n. pag.
Khan, Raihan, Sayak Sarkar, Sainik Kumar Mahata and Edwin Jose. “Security Threats in Agentic AI System.” ArXiv abs/2410.14728 (2024): n. pag.
He, Feng, Tianqing Zhu, Dayong Ye, Bo Liu, Wanlei Zhou and Philip S. Yu. “The Emerged Security and Privacy of LLM Agent: A Survey with Case Studies.” ArXiv abs/2407.19354 (2024): n. pag.
Go Even Deeper
Li, Yuanchun, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zi-Liang Ye, Guanjing Xiong, Fan Zhang, Xiang Li, Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Yaqiong Zhang and Yunxin Liu. “Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security.” ArXiv abs/2401.05459 (2024): n. pag.
Executive Analysis, Policy Angle, & Talking Points
AI Agents are often presented to executives as cost-cutting measures. It’s tempting to believe that expensive human labor can be automated away with software.
And that’s a fair belief to hold: historically, countless processes that once required human input have eventually been handed over to machines. So it makes sense.
But what vendors aren’t telling the C-suite can be incredibly costly: