Replacing Federal Agencies With AI Agents: What Could Possibly Go Wrong?
AI Agents have been proposed as a stand-in for federal workers let go in the recent cuts to government workforces. What’s the threat model–and was this the plan all along? | Edition 9
I’m making this post available & free to everyone because I believe this story is important. I hope you enjoy it.
It started with Presidential Executive Order 14179, signed shortly after Trump’s return to office. The EO was more than a revocation of the 2023 Biden-era order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”; it was a direct rebuttal. With a stated purpose of revoking “certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence”, the EO was an unmistakable shot at the Biden AI policy–and a signal of things to come under the Trump administration.
Then came DOGE. The meme-monikered quasi-agency was tasked with improving government efficiency, ostensibly via the use of sophisticated tech, including AI. Initially this AI-driven approach seemed to be a reality–sort of. What got documented, though, weren’t major successes at reducing overhead, but the program’s sometimes comical, always dystopian, algorithmically driven failures.
Regardless, the DOGE tech effort was viewed by many as a first test of the new administration’s innovation-friendly AI policies.
High-Tech Promises, Low-Tech Results
But after its many high-tech promises, the Department of Government Efficiency ultimately settled on the low-tech, low-hanging fruit that corporations have used to cut costs since time immemorial: firing workers. Or, in corporate lingo, “workforce reductions.”
Claiming the layoffs were now AI-flavored was the most 2025 part of the whole situation.
For all the forecasts of AI change on the horizon, the firings and buyouts felt so old-school. Wasn’t AI supposed to improve efficiency? If AI workflows had actually been implemented, shouldn’t government output have increased with the same staffing? Logically speaking, that is.
We’ll never know, though–at least not for the foreseeable future. Strangely, the AI systems supposedly in use weren’t being implemented to increase worker capabilities.
They were being used to fire people.
Is peak organizational effectiveness really just workforce minimization? Is that what the almighty AI said?
A recent article written by a Forrester VP & Principal Analyst goes even further, suggesting AI as a replacement for the workers themselves. The piece, “US Federal Workforce Layoffs: Can AI Agents Step In?” makes its title a question seemingly begged by AI’s very existence: Now that Agentic AI apparently exists, how long before we can use it to get rid of humans?
And maybe that was the DOGE gambit all along: Why use AI to simply trim the workforce, when you could actually replace it?
The Use Case Mirage
While many felt–and argued–that using untested Generative AI applications to fire government workers on a systemic level might, you know, turn out badly, to others it was a validation of several societal themes at once: the perceived inefficiency of government, the ability of business and capitalism culture to just “get it done”, and the beneficent concentration of technocratic power.
AI, for better or worse, has come to be associated in the popular consciousness with all three of these themes. How you feel about them most likely relates to how you feel about AI.
But regardless of moral and philosophical leanings, Agentic AI remains (in many cases) a technology in search of a use case–a feature the article tacitly concedes in the acknowledgement that “Finding the app that drives value” will be the “most important thing” for government agencies looking to deploy AI Agents.
In this framing, Agentic AI isn’t a solution whose time has come, but rather an opportunity to find applications.
Since the applications aren’t obvious, they have to be created, or even sought out.
This subtle but absolutely critical narrative shaping sets up a scenario where the earliest adopters–those who start the search for Agentic use cases sooner rather than later–might obtain a decisive advantage over those who don’t.
Or they might waste a ton of money on deployments that never materialize, suffer a devastating security breach when they actually do, or some combination of the two. But hey, that’s part of the adventure, right?
It’s FOMO meets Silicon Valley’s version of a slot machine. Except instead of gambling with your own cash, you’re betting VC money and the personal data of trusting users.
[Chart: the top 11 AI agent use cases for the US federal government, plotted by risk and deployment difficulty. Source: Forrester.]
Agentic Threat Modeling Is Hard
The Forrester article gives a list of the “top 11 AI agent use cases for the US federal government”, organized along two axes: risk (low to high) and difficulty of deployment.
The choice to use two axes produces a strictly two-dimensional understanding of Agentic AI threat vectors. One dimension attempts to quantify a project’s risk, while the other is all about ease of implementation. (The models we choose matter; but more on that in another post.)
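To make that framing concrete, here’s a minimal sketch of what a two-axis model boils down to in code. The use cases and scores are hypothetical stand-ins, not Forrester’s actual data; the point is how little the model can see.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-axis framing. The entries and scores below
# are illustrative stand-ins, not Forrester's data.
@dataclass
class UseCase:
    name: str
    risk: int        # 1 = low risk, 5 = high risk
    difficulty: int  # 1 = easy to deploy, 5 = hard to deploy

CANDIDATES = [
    UseCase("Customer information retrieval", risk=1, difficulty=1),
    UseCase("Fraud detection", risk=3, difficulty=3),
    UseCase("Emergency response", risk=5, difficulty=4),
]

def recommend(candidates: list[UseCase]) -> list[UseCase]:
    """'Aim for low risk' reduces to sorting on two numbers."""
    return sorted(candidates, key=lambda u: (u.risk, u.difficulty))

for u in recommend(CANDIDATES):
    print(f"{u.name}: risk={u.risk}, difficulty={u.difficulty}")

# Note what has no column here: the new attack surface the agent itself
# introduces once it's wired into real systems and real data.
```

Everything that doesn’t fit into those two numbers simply disappears from the recommendation.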
Not surprisingly, the author recommends what they must have felt was a conservative approach suited to the newness of the technology: easy implementations, lowest-risk applications. In the words of the article: “Aim for low risk and high value until you’ve gained experience.”
The application recommended with (supposedly) both the lowest risk and lowest deployment difficulty is “Customer Information Retrieval”. At first glance, this may seem like a completely reasonable proposal. If looking up customer information is all that’s required of an AI Agent, what could possibly go wrong?
Maybe they should ask Catholic Health, which saw nearly half a million sensitive patient records–including personally identifiable information (PII) like Social Security numbers and personal health information–leaked onto the open internet by its Agentic AI customer service provider.
Maybe they should look into the multiple confirmed, transferable and unpatchable vulnerabilities that can allow AI Agents to exfiltrate sensitive data, and worse.
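To give a sense of what “just looking up customer information” actually demands, here is a minimal, hypothetical sketch of one single guardrail: scrubbing obvious PII from a retrieved record before it ever reaches the model or gets echoed back to a caller. The field names and regex patterns are illustrative assumptions, not a vetted data-loss-prevention policy.

```python
import re

# Hypothetical guardrail sketch: mask obvious PII in a retrieved record before
# it is handed to an agent (or logged, or returned in a response). The patterns
# below are illustrative assumptions, not a real DLP policy -- production
# systems need classification, access control, and audit logging, not regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def safe_record(record: dict[str, str]) -> dict[str, str]:
    """Return a copy of the record with obvious PII masked in every field."""
    return {key: redact(value) for key, value in record.items()}

leaked = {
    "name": "Jane Doe",
    "note": "SSN 123-45-6789, reach at jane.doe@example.com or 555-867-5309",
}
print(safe_record(leaked))
# Names, addresses, and free-text case notes sail right past a filter like
# this -- which is part of the point.
```

And even a filter like this only helps if every path from the data store to the model is mediated by it, a guarantee that gets very hard to make once an agent is choosing its own tool calls.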
If it’s really true that the “next generation of AI agents is defined by autonomous goal setting, dynamic reasoning, real-time learning, enhanced model control, and the ability to execute complex actions” as the article posits, then we need to take a sober look at what all that autonomy and complexity mean in practice.
It turns out that threat modeling for Agentic AI isn’t as easy as it sounds.
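One way to see why: sketch even a back-of-the-envelope threat model for an agentic deployment and count how many of its vectors have no analogue in a static lookup service. The entries below are illustrative assumptions only; the Narajala and Narayan paper listed in the resources at the end of this post offers an actual framework.

```python
from dataclasses import dataclass

# Illustrative sketch only: a handful of agent-specific threat vectors and
# whether a traditional perimeter control (WAF, RBAC, network segmentation)
# meaningfully addresses them. These entries are discussion aids, not a
# complete or authoritative threat model.
@dataclass
class Threat:
    vector: str
    example: str
    covered_by_traditional_controls: bool

THREATS = [
    Threat("Prompt injection",
           "a retrieved document quietly rewrites the agent's instructions", False),
    Threat("Tool misuse",
           "the agent calls a legitimate API with attacker-chosen arguments", False),
    Threat("Data exfiltration via outputs",
           "PII gets summarized into a response or log an attacker can read", False),
    Threat("Memory poisoning",
           "bad data persisted today steers decisions weeks from now", False),
    Threat("Credential theft",
           "the agent's service-account keys are stolen", True),
]

uncovered = [t for t in THREATS if not t.covered_by_traditional_controls]
print(f"{len(uncovered)} of {len(THREATS)} sketched vectors fall outside traditional controls:")
for t in uncovered:
    print(f"- {t.vector}: {t.example}")
```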
Agentic Crisis Management Is Its Own Crisis In The Making
Although the author admits that “Ambition must also be tempered with reality”, the “critical design challenges” enumerated curiously do not include security.
Mentioning explainability, bias mitigation, reasoning guardrails, and data governance is a nice start, but ignores the elephant in the room: that AI Agents are autonomous, buggy threat vectors, and that as their autonomy increases, they may become threat actors.
That despite all the best intentions, AI Agents aren’t securable by traditional methods–or maybe even at all.
Even worse, AI Agents are presented as a security improvement when deployed. Suggested applications include fraud detection (an arena where Predictive AI has already found excellent success), cybersecurity, and emergency response–a mission-critical domain where a nondeterministic tool like current Agentic AI should never, ever be deployed.
Applying AI agents to “crisis management” may sound good on paper, but what happens when the system starts hallucinating? Decisions where lives are on the line should always be made by humans.
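A toy simulation makes the nondeterminism point without involving any real model. In the sketch below, a seeded random choice stands in for an LLM sampling at nonzero temperature; the “crisis” report, the action list, and the rule are all hypothetical.

```python
import random

# Toy illustration of nondeterminism: a seeded random choice stands in for an
# LLM sampling tokens at nonzero temperature. No real model is involved; the
# point is only that the same input need not produce the same decision.
ACTIONS = ["dispatch_fire_crew", "dispatch_medical", "escalate_to_human", "log_and_wait"]

def agentic_decision(report: str, run: int) -> str:
    """Stand-in for a sampled agent: same report, different runs, different calls."""
    rng = random.Random(hash(report) + run)
    return rng.choice(ACTIONS)

def rule_based_decision(report: str) -> str:
    """A boring, auditable rule: same report, same answer, every time."""
    return "dispatch_fire_crew" if "fire" in report.lower() else "escalate_to_human"

report = "Structure fire reported at a federal records facility"
print([agentic_decision(report, run) for run in range(5)])   # typically varies
print({rule_based_decision(report) for _ in range(5)})       # always one answer
```

The second function is dull, auditable, and gives the same answer every time; the first is what we are being asked to put in the emergency-response loop.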
These potential applications are presented as “bolstering” human security through Agentic deployments. Absolutely zero mention is made of the attack vectors introduced by the AI Agents themselves.
Agent-Washing, Federal Edition
Agentic AI may be a technology in search of a use case–but when it does find an application, it’s often a tech in search of an outcome, too. And sometimes, it’s not even real at all.
Gartner recently predicted that over 40% of Agentic AI projects will be cancelled by the end of 2027. The same research estimated that of the thousands of vendors selling so-called “agentic” applications, only about 130 offer true AI Agents.
Take a moment with that: more than 40% of Agentic AI projects are expected to be scrapped before they ever deliver long-term ROI. Meanwhile, companies selling “agentic” applications are heavily incentivized to agent-wash existing workflows, repackaging them as new technology for Federal orgs desperate both to modernize and to fill the gaps left by employees who are gone.
The crux of the problem is simple, and devastating: Federal agencies are expected to find Agentic AI use cases by trial and error, deploy AI Agents with limited budgets and reduced staff, sort out legitimate vendors from the fakers in a market where only a few technologists truly understand the systems they’re selling–oh, and ensure the security of vast networks of citizen & national security data that will need to be connected for the Agents to operate. Did I miss anything? Probably; there are far more moving parts to a deployment like this than could be listed here.
So it’s a bit fantastical to assume that cash-strapped, understaffed government agencies will simply acquire this knowledge on the fly. Or that the totally unregulated, AI-gold-rush market will altruistically offer to remedy an information gap that favors its side.
The market incentives, as they say, simply aren’t there.
The real question is this: when government agencies in the US and elsewhere attempt to address staffing shortages and budget constraints with some flavor of Agentic AI, who will protect their most vulnerable citizens–and national secrets–from the security and privacy chaos that ensues?
Threat Model
Many federal agencies are historically under-funded and now acutely understaffed; the temptation to see AI as a panacea has never been stronger.
Agentic AI has significant, intractable security issues that aren’t being adequately addressed in an industry that’s just barely learning how to red team these systems.
With the vast, overwhelming majority of so-called “Agentic” AI applications being repackaged old workflows oversold as new technologies, government agencies will have to sort out fact from fiction when contracting for AI services–and accepting the liabilities they bring.
Talking Points
Using a government appointment to fire Federal workers with the intent of replacing them with your own proprietary AI would walk a very fine line between smart business and a national-security-level conflict of interest.
If we expect Federal agencies to implement Agentic AI, we need to seriously consider how to support these efforts with real expertise–or risk wasting taxpayer money on fake “Agentic” startups.
Without a serious emphasis on securing Agentic AI systems now, asking understaffed, under-resourced Federal agencies to deploy untested (and possibly fake) AI Agents with access to citizen data and/or mission-critical functions is a recipe for disaster.
Resources To Go Deeper
This edition’s research focus is on technical threat modeling, understanding Agentic protocols, and redefining the security perimeter for AI Agents:
Habler, Idan, Ken Huang, Vineeth Sai Narajala, and Prashant Kulkarni. “Building A Secure Agentic AI Application Leveraging A2A Protocol.” ArXiv abs/2504.16902 (2025): n. pag.
Hu, Botao Amber, Yuhan Liu, and Helena Rong. “Trustless Autonomy: Understanding Motivations, Benefits and Governance Dilemma in Self-Sovereign Decentralized AI Agents.” ArXiv abs/2505.09757 (2025): n. pag.
Narajala, Vineeth Sai, and Om Narayan. “Securing Agentic AI: A Comprehensive Threat Model and Mitigation Framework for Generative AI Agents.” ArXiv abs/2504.19956 (2025): n. pag.