Angles of Attack: The AI Security Intelligence Brief

Key Components of Agentic Architectures: Orchestration & Control Flow Mechanisms

Breaking down Orchestration & Control Flow Mechanisms of Agentic Systems, how they impact Agentic security, and how to think about modeling Agentic AI threats now–and into the future | Edition 14

Disesdi Susanna Cox
Aug 13, 2025

The OWASP GenAI Security Project released their Securing Agentic Applications Guide v1.0, and it’s full of valuable guidance on Agentic systems. There’s a ton of information–so I’m breaking it down here.

In this series, I’ll be laying out some of the ways that Key Components (KCs) of Agentic AI deployments expose Agentic systems to threats–and how we can think about and model these threats effectively.

There are six (6) KCs defined in the Guide, and together, they form a bedrock understanding of how Agentic architectural components function–from a security point of view.

We’re beginning this series with KC2 Orchestration (Control Flow Mechanisms), rather than starting with KC1 (LLMs), because understanding Agentic control mechanisms is foundational to Agentic security.

Here’s what the Guide has to say about KC2 Orchestration in its introduction: These components “Dictate the agent's overall behavior, information flow, and decision-making processes”.

A little more detail reveals that as we’ve seen time and again, in Agentic AI, architectures matter:

“The specific mechanism (e.g., “sequential” for layered architectures, “dynamic” for blackboard architectures, or “coordinated” for multi-agent systems) depends on the architecture and impacts the agent's responsiveness and efficiency.”

When it comes to securely deploying AI Agents, understanding their control and orchestration mechanisms is non-negotiable.

Workflows & Planning: Where It All Starts

Workflows and planning play an important role in Agentic deployments–in fact, you can consider them foundational. This makes sense from a process perspective: AI Agents need some way to understand the steps of a task, as well as a mechanism to plan individual increments of a task and responses.

In other words: AI Agents need to understand the steps involved in a task (which we call workflows), and they need to be able to plan dynamically–often coordinating many Agents–to accomplish these steps in order (this is called hierarchical planning).

Hierarchical planning enables many Agents to coordinate via an orchestrator, which ensures that workflows/tasks are completed in order.
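As a minimal sketch (all function and agent names here are hypothetical, not from the Guide), hierarchical planning can be pictured as an orchestrator that decomposes a goal into ordered sub-tasks and dispatches each one to a worker agent in sequence:

```python
# Illustrative sketch of hierarchical planning: an orchestrator decomposes
# a goal into ordered sub-tasks and dispatches each to a worker "agent".
# The decomposition is hard-coded purely for demonstration.

def decompose(goal: str) -> list[str]:
    """Break a goal into ordered sub-tasks (hard-coded for illustration)."""
    return [f"{goal}: step {i}" for i in range(1, 4)]

def worker_agent(subtask: str) -> str:
    """Stand-in for an LLM-backed agent completing one sub-task."""
    return f"done({subtask})"

def orchestrate(goal: str) -> list[str]:
    """Run sub-tasks strictly in order, collecting each result."""
    return [worker_agent(t) for t in decompose(goal)]

results = orchestrate("book travel")
```

The point of the sketch is the ordering guarantee: the orchestrator, not the individual agents, decides what runs and when.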

This is why Orchestration & Control Flow Mechanisms are the second Key Component in the Agentic Security Guide, right behind KC1, the LLMs which power Agentic systems and provide the “brain”. Without orchestration & process flow controls, any proposed Agentic deployment would, in practice, devolve into pure chaos.

Workflows: Architectural Dependencies, Systemic Brittleness, and Ease of Attack

Within OWASP’s KC2, there are two sub-components, KC2.1 (Workflows) and KC2.2 (Hierarchical Planning).

Let’s look at Workflows first. From the Guide:

KC2.1 Workflows: A structured, pre-defined sequence of tasks or steps that agents follow to achieve a goal. Workflows define the flow of information and actions within the agent's operation and among agents. They can be linear, conditional, or iterative, depending on the task’s complexity.

Several aspects of workflows are clarified here. They are structured, not chaotic, and pre-defined, versus ad-hoc. They don’t have to flow linearly, but they do need to provide a structure and definition of the Agentic system’s task processes.
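The three workflow shapes named in KC2.1 can be sketched as plain functions over step callables. This is an assumption-laden toy model, not how any particular framework implements it (real systems typically encode workflows as graphs):

```python
# Toy model of KC2.1's three workflow shapes: linear, conditional, iterative.
# Steps are plain callables that transform a state value.

def linear(steps, state):
    """Run each step in sequence, feeding each result to the next."""
    for step in steps:
        state = step(state)
    return state

def conditional(predicate, if_true, if_false, state):
    """Branch to one of two steps based on the current state."""
    return if_true(state) if predicate(state) else if_false(state)

def iterative(step, done, state, max_iters=10):
    """Repeat a step until a completion check passes (bounded for safety)."""
    for _ in range(max_iters):
        if done(state):
            break
        state = step(state)
    return state

# Usage: a linear pipeline of two steps.
out = linear([lambda s: s + 1, lambda s: s * 2], 3)  # (3 + 1) * 2
```

Note the `max_iters` bound on the iterative form: unbounded loops in Agentic workflows are themselves a control-flow risk.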

I would add that workflows are highly dependent on architectural patterns at play. Of course, this is also highly task-dependent, which points to a new reality that bears noting: in Agentic systems, we see that the shape of the system–its architecture–becomes highly dependent on the task.

Put another way, we are seeing a stronger, more contextually dependent relationship between the task and the solution architecture.

This gives rise to a couple of observations, speaking as a security researcher: systemic brittleness, and ease of attack.

Glass Houses

First, we can understand that the nature of the task will likely give us clues to the internals of the system in ways that traditional software never did. Recognizing these patterns quickly is what will set elite researchers apart.

In practice, this means that if I know the task an Agentic deployment is addressing, I can infer certain aspects of its architecture.

Remember that Agentic security is highly dependent on architectures–mitigations must be chosen specific to the deployment architecture.

This means that as an attacker, I already have an information advantage–because architecture holds the key to any specific deployment’s vulnerabilities.

But there’s more.

Because the mitigations are architecturally-determined as well, I know within certain bounds what the defense might be cooking up, too.

With this understanding, I now know (for most practical purposes) what security mitigations I can expect to run into.

Combine this with the smallest amount of domain knowledge and LinkedIn HUMINT research and I, as an attacker, may as well be looking at your system schematics.

Your Agentic system is now in a glass house. And all the attacker needed to know is what you’re using it for.

Brittle Systems Break Harder

The second side effect of task-dependent architecture is brittleness. Simply put, the more specialized the system, the more brittle it becomes to any data which falls outside expectations.

Presented with a new task, these systems break–but not in deterministic ways, like traditional software. Emergent and cascading effects of Agentic meltdowns will likely prove to be pretty interesting, if not spectacular.

This is because there is a certain tension in any Agentic deployment, and that is one of human expectations.

Humans want deterministic results from non-deterministic inputs. That’s the dream, isn’t it? To create order out of chaos?

Many humans want machines that can interpret the gray areas of natural language to produce outputs–which is easy enough, until you add the requirement that these humans want a certain, specific output.

Forging determinism from the soft boundaries of natural language via statistics was never going to go smoothly.

So when we deploy Agents, we desire some degree of autonomy–that’s kind of the whole point–but we also want the results to be correct.

The instructions might be akin to something like find me the best price on a flight to Atlanta, make sure it’s not too early in the morning, no stops, get me Business class or better but only if it’s within the price range, and make sure I can check 2 bags.

There are many steps and a lot of uncertainty in that request. You want a specific outcome: a specific experience flying to Atlanta. For an Agentic deployment, there are many, many places where things can go wrong.

Your financial information. Your personal information. Or just incorrectly completing the task. It’s a minefield for a piece of software to navigate; so the software becomes more specialized to the task.

The greater the degree of determinism desired or required, the more specialized the system tends to become–and consequently, the more brittle, and easily broken.
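One way to manage that tension is to make the request's constraints explicit and apply them deterministically to whatever the agent proposes. As a toy sketch (field names, values, and the flat budget rule are all hypothetical), the flight request above might reduce to:

```python
# Sketch: deterministic constraint checks applied to a non-deterministic
# agent's proposed flight. Field names and thresholds are illustrative.

def violations(flight: dict, budget: float) -> list[str]:
    """Return the names of every constraint the proposed flight fails."""
    checks = {
        "too early": flight["depart_hour"] >= 8,   # not too early in the morning
        "has stops": flight["stops"] == 0,         # no stops
        "over budget": flight["price"] <= budget,  # within the price range
        "bags": flight["checked_bags"] >= 2,       # can check 2 bags
    }
    return [name for name, ok in checks.items() if not ok]

# A hypothetical proposal from the agent: departs at 6 AM and exceeds budget.
proposal = {"depart_hour": 6, "stops": 0, "price": 900.0, "checked_bags": 2}
problems = violations(proposal, budget=800.0)
```

The "Business class or better, but only if it's within the price range" clause would add a conditional check on top of these; even this simplified version shows how quickly the validation surface grows with the task.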

Hierarchical Planning: Single Points of Failure (SPOFs) and the Orchestration Problem

This brings us to KC2.2: Hierarchical Planning.

If we think of any task as a series of sub-tasks, we can break down each of these steps, and assign it an order–a place in the line of things that need to get done.

Almost every task imaginable can be broken down this way. If you think about it, even something as simple as reaching for your keys can involve many steps.

If you had to tell a naive robot how to retrieve your keys from a briefcase or a counter, how would you do it?

This thought experiment can reveal a subtle and deceptively simple truth about human vs. AI worldviews: even the basic tasks we take for granted are informed by our human experience.

From this principle, a second concept emerges: the degree of experience an entity has with a task will affect how easy or difficult the task is–and with AI Agents, we can’t assume task familiarity.

Let’s take the thought experiment with the keys a little bit further.

Would the order of steps matter? It wouldn’t make sense to pick up the keys from the briefcase before the briefcase was opened, would it?

From this we can observe that even seemingly simple tasks contain multiple steps, often with ordered dependencies, and may require dynamic adaptation in the moment.

Human brains do much of this work under the hood, without bothering the conscious thought process. For AI Agents, everything has to be spelled out. Agentic control flow mechanisms are how this process is implemented in practice.
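The ordered dependencies in the keys example can be made concrete with a dependency graph and a topological sort (the step names are my own illustration; `graphlib` is in Python's standard library):

```python
# Sketch: the "retrieve keys" steps with explicit ordering dependencies,
# resolved with a topological sort. Step names are illustrative.
from graphlib import TopologicalSorter

# Each step maps to the set of steps that must complete before it.
deps = {
    "locate briefcase": set(),
    "open briefcase": {"locate briefcase"},
    "locate keys": {"open briefcase"},
    "grasp keys": {"locate keys"},
}

# static_order() yields the steps in a dependency-respecting order.
order = list(TopologicalSorter(deps).static_order())
```

A cycle in `deps` (a step that transitively depends on itself) raises `graphlib.CycleError`, which is exactly the kind of planning failure an orchestrator has to detect before dispatching work.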

All Agentic tasks require control flow mechanisms to direct the processes which ultimately create a completed task. In many cases, creating value from an Agentic deployment requires multiple Agents within the system, in order to accomplish the task at hand. Such multi-Agent deployments require a router–or orchestrator–to coordinate among multiple steps, and multiple Agents.

The orchestrator itself has steps that it must follow, in order to effectively route tasks and necessary communications between/among Agents.

These range from intaking the task, decomposing it into sub-tasks, routing to specific Agents, and monitoring the process. In this respect, process & flow control are equally important for the orchestrator.

Here are the Guide’s five processes of the orchestration component:

  • Understanding the Task: The orchestrator receives the initial complex task or request.

  • Decomposing the Task: The orchestrator then analyzes the task and breaks it down into a series of sub-tasks.

  • Routing to Specialized Agents: The orchestrator identifies which specialized agent is best suited to handle each sub-task and assigns the sub-task accordingly.

  • Monitoring and Optimizing: The orchestrator monitors agent performance, identifies inefficiencies, and autonomously adjusts workflows to optimize efficiency.

  • Interfacing: The orchestrator can interface with the user directly or with a “master” agent that helps coordinate the various agents.
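The five processes above can be sketched as a single orchestration loop. Everything here is a hedged toy (the agent registry, routing rule, and `;`-based decomposition are my own placeholders), meant only to show where each of the Guide's five processes sits in the control flow:

```python
# Toy orchestration loop mapping the Guide's five processes onto code.
# The agent registry, routing rule, and decomposition are placeholders.

AGENTS = {
    "search": lambda sub: f"searched:{sub}",
    "book":   lambda sub: f"booked:{sub}",
}

def route(subtask: str) -> str:
    """Pick the specialized agent best suited to the sub-task (toy rule)."""
    return "book" if "pay" in subtask else "search"

def orchestrator(task: str) -> dict:
    # 1. Understanding the Task: receive the initial complex request.
    # 2. Decomposing the Task: split into sub-tasks (naively, on ';').
    subtasks = [s.strip() for s in task.split(";")]
    results, failures = [], 0
    for sub in subtasks:
        # 3. Routing to Specialized Agents: assign each sub-task.
        agent = AGENTS[route(sub)]
        out = agent(sub)
        # 4. Monitoring: track failures so workflows could be adjusted.
        failures += out is None
        results.append(out)
    # 5. Interfacing: report back to the user (or a "master" agent).
    return {"results": results, "failures": failures}

report = orchestrator("find flight; pay for ticket")
```

Every one of those numbered comments is also a place an attacker can aim at: poison the decomposition, confuse the routing, or blind the monitoring, and the whole deployment inherits the failure.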

For Agentic AI, orchestration components are necessary for mission success.

What does this mean, from a practical, threat-oriented perspective?

It means that in Agentic systems, orchestration/control flow components are critical to the deployment–which makes them ideal targets for attack.

The Threat Model

  • Orchestration components are necessary to deploy Agentic AI successfully, but they come with their own attack surface.

  • System specialization can increase task effectiveness, but it always comes with a trade-off: increased system brittleness.

  • Workflow control vulnerabilities introduce attacks at the very beginning of the process–where planning happens–creating cascading effects throughout the entire system.

Resources To Go Deeper

  • Noothigattu, Ritesh, Djallel Bouneffouf, Nicholas Mattei, Rachita Chandra, Piyush Madan, Kush R. Varshney, Murray Campbell, Moninder Singh and Francesca Rossi. “Teaching AI Agents Ethical Values Using Reinforcement Learning and Policy Orchestration.” International Joint Conference on Artificial Intelligence (2019).

  • Zhuge, Mingchen, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin and Jürgen Schmidhuber. “Language Agents as Optimizable Graphs.” arXiv abs/2402.16823 (2024).

  • Yu, Chaojia, Zihan Cheng, Hanwen Cui, Yishuo Gao, Zexu Luo, Yijin Wang, Hangbin Zheng and Yong Zhao. “A Survey on Agent Workflow – Status and Future.” 2025 8th International Conference on Artificial Intelligence and Big Data (ICAIBD) (2025): 770-781.

Executive Analysis, Research, & Talking Points

An Intro To The Control Flow Attack Surface

The OWASP Guide lists seven threats associated with orchestration/control flow components.

They’re grouped under vulnerabilities in workflow control, including: Goal manipulation, Lack of auditability, Identity confusion, Overwhelming human oversight, and Multi-agent attacks.

Taking a closer look reveals some interesting patterns.
