How is building software like building a house?
It all starts with architecture.
Just like architects and structural engineers work together to produce a plan for construction, software architects and engineers design systems that guide the development of the applications we use every day.
The process typically starts with technical drawings that help communicate ideas, which are then translated into concrete code.
Not just any code.
Code that operates in the real world.
For all types of engineering, operating in the real world means working within real-world constraints–like rain.
If you’ve ever wondered why houses in rainy climates have sloped roofs, it’s for a simple reason: so that when it rains, water runs off harmlessly to the ground instead of pooling and damaging the structure.
When we engineer systems that operate in real life, patterns start to emerge. Sloped roofs, while not universal, are a recognizable design pattern. And software has architectural patterns of its own.
Agentic AI is no different. Just like with any other software, patterns allow architects and engineers to ensure that the designs they create and implement are capable of operating under real-world constraints.
Or at least, that’s the goal.
Image: A Multi-Agent deployment architecture, from the OWASP Agentic AI – Threats and Mitigations guide
Agentic Patterns
With businesses and governments racing to implement Agentic AI applications, it’s no wonder that practitioners are scrambling to understand the underlying patterns that allow autonomous AI Agents to operate in real-world business scenarios.
Recently, the OWASP Agentic AI Initiative published their Agentic AI – Threats and Mitigations Guide, which “provides a definition of Agentic terms, capabilities, and architecture” for engineers implementing these systems.
There are two broad categories: Single-Agent and Multi-Agent architectures.
The differences between the two may seem obvious at first glance–after all, how much could an Agentic deployment change just because we’ve added more agents?
The answer is a lot.
Even a single-agent deployment introduces non-deterministic behavior into the system. Deploy multiple agents, and that non-determinism compounds: every handoff between agents adds another opportunity for the system to go off-script.
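A toy model makes the compounding effect concrete. The sketch below is purely illustrative: the per-step deviation probability is an assumed parameter, not a measured property of any real agent framework, and agents are treated as independent, which real deployments are not.

```python
# Toy model only: how non-determinism compounds across a chain of agents.
# The per-step deviation probability `p` is an assumed, made-up parameter.

def chain_deviation_probability(p: float, n_agents: int) -> float:
    """Chance that at least one agent in a chain of n deviates,
    assuming each agent independently deviates with probability p."""
    return 1 - (1 - p) ** n_agents

if __name__ == "__main__":
    p = 0.02  # assume a 2% chance any single agent step goes off-script
    for n in (1, 3, 5, 10):
        print(f"{n:>2} agent(s): {chain_deviation_probability(p, n):.1%} chance of deviation")
```

Under those assumptions, ten chained agents turn a 2% per-step risk into roughly an 18% chance that something, somewhere, deviates.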
The Coordinating Agent: Your Agentic Deployment’s Single Point of Failure
The OWASP guide lists ten different variations of the Multi-Agent category, including Hierarchical Agents, Self-Learning and Adaptive Agents, RAG-Based and Content-Aware Agents.
But one pattern that deserves special attention? The Coordinating Agent.
Many, if not most, multi-agent patterns require an agent to coordinate actions taken by other agents in the system. This makes sense if you think about it–for true autonomy, an agentic system needs a “manager” agent that can act as an intermediary for other agentic identities and ensure smooth performance.
There’s just one problem: When one agent coordinates, that agent becomes a single point of failure.
An attack on a coordinating agent can induce a cascading series of agentic errors or outright adversarial actions.
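To make the structural risk concrete, here is a deliberately minimal coordinator sketch. Nothing in it comes from the OWASP guide; the class names and dispatch loop are hypothetical. But the shape is common to many coordinating-agent designs: workers execute whatever the coordinator hands them, so whoever controls the coordinator controls every downstream action.

```python
# A minimal, hypothetical coordinator pattern. None of these classes come
# from the OWASP guide; they exist only to show the structural risk.

from dataclasses import dataclass

@dataclass
class Task:
    action: str
    target: str

class WorkerAgent:
    def __init__(self, name: str):
        self.name = name

    def execute(self, task: Task) -> str:
        # The worker trusts the coordinator implicitly: it runs whatever
        # task it is handed, with no independent validation.
        return f"{self.name} performed '{task.action}' on '{task.target}'"

class CoordinatingAgent:
    def __init__(self, workers: list[WorkerAgent]):
        self.workers = workers

    def dispatch(self, tasks: list[Task]) -> list[str]:
        # If an attacker can influence this plan (e.g. via prompt
        # injection), every worker faithfully executes the attacker's
        # intent. One compromised node, system-wide effect.
        return [w.execute(t) for w, t in zip(self.workers, tasks)]

if __name__ == "__main__":
    coordinator = CoordinatingAgent([WorkerAgent("billing"), WorkerAgent("email")])
    plan = [Task("refund", "order-123"), Task("send", "customer@example.com")]
    for result in coordinator.dispatch(plan):
        print(result)
```

Note what’s missing: nothing in the loop checks whether the plan the coordinator produced is the plan the operator intended. That implicit trust is the single point of failure.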
What’s the harm that can occur when one agent turns all the others into rogue operators?
The answer: it depends on the use case. The more agents are empowered to take actions with real-world consequences, the worse the potential outcomes.
These outcomes are magnified when mission- or safety-critical systems are at play.
For example, the more embedded AI agents are in the secure systems of a business, the greater the potential for security breach, disclosure of sensitive information–or worse.
When you deploy a multi-agent architecture, remember: your Coordinating Agent may become your biggest threat.
Plan accordingly.
The Threat Model
- Agentic deployments introduce uncertainty through non-deterministic behavior
- Multi-agent patterns introduce new failure modes and amplify failure effects
- Coordinating agents can become single points of failure not just for the agentic system, but for every system the agents are connected to
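What does planning accordingly look like? One illustrative posture, and only an assumption-laden sketch rather than anything the OWASP guide prescribes, is to push policy down to the workers: each worker holds its own allow-list and refuses coordinator instructions that fall outside it, so a compromised coordinator cannot cascade freely.

```python
# A hypothetical defensive sketch, not a recipe from the OWASP guide:
# workers hold their own allow-lists and refuse coordinator instructions
# outside them, limiting how far a compromised coordinator can cascade.

ALLOWED_ACTIONS = {
    "billing": {"refund", "invoice"},
    "email": {"send"},
}

def validated_execute(worker_name: str, action: str, target: str) -> str:
    # Zero-trust posture: the worker does not assume the coordinator is
    # honest. It checks every instruction against its own policy first.
    if action not in ALLOWED_ACTIONS.get(worker_name, set()):
        return f"{worker_name} REFUSED '{action}' (not in allow-list)"
    return f"{worker_name} performed '{action}' on '{target}'"

if __name__ == "__main__":
    print(validated_execute("billing", "refund", "order-123"))   # permitted
    print(validated_execute("email", "delete_mailbox", "ceo"))   # refused
```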
Resources To Go Deeper
Masterman, Tula, Sandi Besen, Mason Sawtell, and Alex Chao. “The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey.” ArXiv abs/2404.11584 (2024): n. pag.
Shavit, Yonadav, Sandhini Agarwal, Miles Brundage, Steven Adler, Cullen O’Keefe, Rosie Campbell, Teddy Lee, Pamela Mishkin, Tyna Eloundou, Alan Hickey, Katarina Slama, Lama Ahmad, Paul McMillan, Alex Beutel, Alexandre Passos, and David G. Robinson. “Practices for Governing Agentic AI Systems.” OpenAI (2023).
Go Even Deeper
Putta, Pranav, Edmund Mills, Naman Garg, Sumeet Ramesh Motwani, Chelsea Finn, Divyansh Garg, and Rafael Rafailov. “Agent Q: Advanced Reasoning and Learning for Autonomous AI Agents.” ArXiv abs/2408.07199 (2024): n. pag.
Executive Analysis, Policy Angle, & Talking Points
While teams struggle with how to handle the uncertainty Agentic deployments introduce through non-deterministic behavior, leaders need to recognize that multi-agent patterns both create and amplify these effects.
If coordinating agents can become single points of failure for any system they’re deployed into, it raises a pointed question: How do we trust AI Agents in a zero-trust environment?