Angles of Attack: The AI Security Intelligence Brief

Key Components of Agentic Architectures: Operational Environments

The new skill gap between Agentic AI leaders and the ones who get left behind: Those who can spot multi-leveled agencies, and those who can’t | Edition 28

Disesdi Susanna Cox
Nov 16, 2025

The OWASP GenAI Security Project released their Securing Agentic Applications Guide v1.0, and it’s full of valuable guidance on Agentic systems. There’s a ton of information–so I’m breaking it down here.

In this series, I’m laying out some of the ways that Key Components (KCs) of Agentic AI deployments expose Agentic systems to threats–and how we can think about and model these threats effectively.

There are six (6) KCs defined in the Guide, and together, they form a bedrock understanding of how Agentic architectural components function–from a security point of view.

You can find the previous edition, on Agentic Key Component Group 5 (KC5), here.

One major contribution of the OWASP Guide is in the area of Secure Agentic Architectures. These are architectural patterns which, theoretically, might represent the maximally securable state of such a deployment.

What does the maximally securable state of a deployment mean?

As most security professionals will tell you, there is nothing in existence that is totally securable. Agentic deployments are no different.

Agentic systems in fact present their own unique obstacles to security, including novel attack surfaces and cascading failures, along with the dual challenges of often-unmanageable permissions and the single-channel vulnerabilities inherited from LLMs.

When industry deploys Agents, those Agents need to be built to the highest level of security design possible. That doesn’t make them secure.

But it does require engineering to the highest possible standards. So there is good reason to define threat modeling, secure development practices across the lifecycle, and architectural patterns that support secure deployment.

In order to understand the core defining principles of these patterns, we need to continue defining the Key Components of Agentic architectures.

In previous editions, we’ve covered four of the first five Key Components, KC2 through KC5. You can find each of these linked below.

  • Key Components of Agentic Architectures: Orchestration & Control Flow Mechanisms (KC2)

  • Key Components of Agentic Architectures: Reasoning and Planning Paradigms (KC3)

  • Key Components of Agentic Architectures: Memory (KC4)

  • Key Components Of Agentic Architectures: Tool Integration Frameworks (KC5)

Why start with KC2 and not KC1? Because KC1 deals with the LLM capabilities of the Agents themselves–a topic which, while important, is both a better-known vector and, in practice, a slightly different engineering concern.

So we started with KC2, and we’re finishing this series with a deep dive into KC6: A look into the Operational Environments of Agents, or Agencies, themselves. How time flies.

Let’s dive in.

Key Component 6 (KC6): Operational Environment (Agencies)

Key Component 6 (KC6) is different from the previous five in its granularity. It’s more detailed in many respects, providing a glimpse into the complexity of deploying Agents, with their ability to interact with tools, data, or even the outside world.

And this makes sense–KC6 focuses on the places where Agents become Agentic: Their operational environments. The Guide also refers to these operational environments as agencies.

There are seven subcomponents within KC6. Several of these are broken down further, providing even greater detail. The seven KC6 subcomponents are listed below:

  • KC6.1 API Access

    • KC6.1.1 Limited API Access

    • KC6.1.2 Extensive API Access

  • KC6.2 Code Execution

    • KC6.2.1 Limited Code Execution Capability

    • KC6.2.2 Extensive Code Execution Capability

  • KC6.3 Database Execution

    • KC6.3.1 Limited Database Execution Capability

    • KC6.3.2 Extensive Database Execution Capability

    • KC6.3.3 Agent Memory or Context Data Sources

  • KC6.4 Web Access Capabilities

  • KC6.5 Controlling PC Operations

  • KC6.6 Operating Critical Systems

  • KC6.7 Access to IoT Devices

You may have noticed that KC6.1, KC6.2, and KC6.3 all break down into deeper levels of granularity. These distinctions are important–and it’s also important to understand why they need to be made in the first place.

In other words, what makes the first subcomponents of KC6 so different from the rest?

Defining The Agencies

The Operational Environments of KC6, otherwise known as Agencies, represent the specific capabilities that let Agents reach beyond the limits of their underlying models’ training data.

The underlying systems that power Agents, Large Language Models (LLMs), do not spontaneously learn. They are “restricted to the data available up to their last training update” (per the Guide), and this limitation cannot be circumvented.

However, Agents extend the data-restricted, text-bound nature of LLMs with their ability to interface with external environments. These abilities can include the use of tools, function calls, and more.

These agencies are the capabilities that facilitate Agentic interaction with external environments. They allow Agents to gather, process, and make use of data, and to operate in, and interact with, a variety of environments.
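
To make this concrete, here is a minimal, hypothetical sketch (in Python) of what an agency can look like in code: a registry of vetted tools that an orchestrator invokes on the LLM’s behalf. The class and tool names are illustrative assumptions on my part, not anything defined in the Guide.

```python
# Minimal, hypothetical sketch of an agency: a registry of callable tools
# that extend an LLM beyond its training data. Names are illustrative only.
from typing import Callable, Dict

class ToolRegistry:
    """Maps tool names the LLM may request to vetted Python callables."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def invoke(self, name: str, **kwargs) -> str:
        # Only registered tools are reachable; anything else is refused.
        if name not in self._tools:
            raise PermissionError(f"Tool '{name}' is not an allowed agency")
        return self._tools[name](**kwargs)

def get_weather(city: str) -> str:
    # Stand-in for an external lookup the underlying LLM cannot perform on its own.
    return f"(stub) current weather for {city}"

registry = ToolRegistry()
registry.register("get_weather", get_weather)

# An orchestrator would translate the LLM's tool request into this call:
print(registry.invoke("get_weather", city="Lisbon"))
```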

KC6.1 API Access

Access to the outside world, for all Agentic intents and purposes, more often than not requires some form of API interaction. The Guide divides Agentic API interaction levels into two classes: KC6.1.1 Limited API Access, and KC6.1.2 Extensive API Access.

The difference is crucial–and right in the name. The effects might be more subtle.

The central difference lies in the portion of the API call parameters that are, by design, LLM-generated. In a Limited API Access pattern, an Agent uses its LLM capabilities to generate only some of the parameters to a predefined API call–for instance, filling in a single parameter for a particular request. Contrast this with an Agent drawing on its LLM-backed capabilities to generate the entire URL to a REST API, or to interactively traverse a GraphQL API–the Guide gives both as examples of Extensive API Access patterns for Agents.
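
Here is a rough sketch of that contrast. The endpoint, parameter names, validation rules, and use of the requests library are all assumptions for illustration, not patterns taken from the Guide.

```python
# Hypothetical sketch contrasting Limited vs. Extensive API Access (KC6.1).
import requests

ALLOWED_BASE = "https://api.example.com/v1/orders/"

def limited_api_call(llm_generated_order_id: str) -> requests.Response:
    """KC6.1.1: the LLM supplies only one parameter to a predefined call."""
    if not llm_generated_order_id.isalnum():
        raise ValueError("Rejected: order id must be alphanumeric")
    # Endpoint, method, and request structure are fixed by the developer.
    return requests.get(ALLOWED_BASE + llm_generated_order_id, timeout=10)

def extensive_api_call(llm_generated_url: str) -> requests.Response:
    """KC6.1.2: the LLM generates the entire URL.

    A compromised or manipulated Agent can now reach any endpoint its
    credentials allow, so authorization scoping becomes the last line of defense.
    """
    return requests.get(llm_generated_url, timeout=10)
```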

In deployments with Extensive API Access capabilities, a compromised Agent could exploit any excessive authorization, giving it the ability to generate unwanted API calls, and to launch attacks on the API.

KC6.2 Code Execution

Similar to the patterns in KC6.1, Agentic Code Execution capabilities come in two varieties: Limited and Extensive.

Knowing the architectural differences between these two is an important first step towards avoiding costly mistakes.

Parameters play a role in this definition as well, but really only with regard to Limited Code Execution Capability, a pattern in which Agents might generate some parameters of a predefined function, but not the code itself. In this case, the parameters become attack vectors, introducing the possibility of injection attacks via the Agent/LLM.
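
Here is a hedged sketch of what that Limited pattern can look like, with the predefined function, the filename parameter, and the validation rules all assumed for illustration:

```python
# Hypothetical sketch of KC6.2.1 Limited Code Execution: the Agent supplies
# only a parameter to a predefined operation, yet that parameter is still an
# injection vector.
import subprocess

def count_lines(llm_generated_filename: str) -> str:
    """Predefined operation; only the filename comes from the LLM."""
    # A value like "; rm -rf /" would be dangerous if this command were
    # assembled by string concatenation and run with shell=True.
    if any(ch in llm_generated_filename for ch in ";|&$`\n"):
        raise ValueError("Rejected: suspicious characters in filename")
    # Passing an argument list (not a shell string) also avoids shell injection.
    result = subprocess.run(
        ["wc", "-l", llm_generated_filename],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```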

An Agent with Extensive Code Execution Capability is capable of running code that is generated via the LLM. These extensive capabilities introduce the possibility of the Agent running arbitrary code, potentially uncommanded.
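
And here is a minimal sketch of the Extensive pattern, assuming a resource-limited subprocess purely as a stand-in; a real deployment would want container- or VM-level isolation around anything that executes LLM-generated code.

```python
# Hypothetical sketch of KC6.2.2 Extensive Code Execution: the Agent runs
# code the LLM generated.
import subprocess
import sys
import tempfile

def run_llm_generated_code(code: str, timeout_s: int = 5) -> str:
    """Write LLM-generated code to a temp file and run it in a separate process."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    # -I puts Python in isolated mode (no env vars, no user site-packages);
    # it is NOT a sandbox, just a way to shrink the obvious blast radius.
    proc = subprocess.run(
        [sys.executable, "-I", path],
        capture_output=True, text=True, timeout=timeout_s,
    )
    return proc.stdout

# Benign example; a compromised Agent could just as easily emit code that
# exfiltrates data, which is why worst-case access must be threat modeled.
print(run_llm_generated_code("print(sum(range(10)))"))
```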

KC6.3 Database Execution

If you’re sensing a pattern with how the Execution Agencies are defined, you’re correct: Limited patterns allow the LLM to modify or create certain parameters, while Extensive capabilities generally deploy fully LLM-created fields.

In architectural patterns with database capabilities, KC6.3.1 Limited Database Execution Capability refers to allowing the LLM-based Agent to run specific queries or commands, and only these queries or commands, against a database.

In practice, these systems work by allowing the Agent a carefully limited permission set, such as read-only permissions at the table or row level, or restricting write access to specific parameters, pre-constructed queries, etc. In such a deployment, a compromised Agent is limited in its abilities–but could still exfiltrate data from a database, or write malformed or malicious data to a table or database.
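
As an illustrative sketch, a Limited Database Execution pattern might look something like the following; the allow-listed queries, table names, and use of sqlite3 are entirely assumed:

```python
# Hypothetical sketch of KC6.3.1 Limited Database Execution: only
# pre-constructed, parameterized queries can be run.
import sqlite3

# The SQL text is fixed by the developer; the LLM may only supply the bound parameter.
ALLOWED_QUERIES = {
    "order_status": "SELECT status FROM orders WHERE order_id = ?",
    "customer_orders": "SELECT order_id FROM orders WHERE customer_id = ?",
}

def run_limited_query(conn: sqlite3.Connection, name: str, param: str):
    if name not in ALLOWED_QUERIES:
        raise PermissionError(f"Query '{name}' is not in the allow-list")
    # Parameter binding keeps the LLM-supplied value from altering the SQL itself.
    return conn.execute(ALLOWED_QUERIES[name], (param,)).fetchall()
```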

When an Agent possesses KC6.3.2 Extensive Database Execution Capability, it has the ability to use LLM capabilities to generate and run many operations against a set of tables or complete database.

Compromising an Agent with KC6.3.2 capabilities looks different: The Agent might have the ability to “alter any record in the database, delete a database/table, or access and leak any information stored within the database.”

This, obviously, must be a consideration in threat modeling.
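
For contrast, here is a hypothetical sketch of the Extensive pattern, where the Agent submits fully LLM-generated SQL. The crude statement check shown is only to illustrate the mitigation thinking; real protection would come from database-level permissions and isolation.

```python
# Hypothetical sketch of KC6.3.2 Extensive Database Execution: the Agent
# submits fully LLM-generated SQL.
import sqlite3

def run_extensive_query(conn: sqlite3.Connection, llm_generated_sql: str):
    statement = llm_generated_sql.strip().lower()
    # Without a check like this (and, more importantly, database-level
    # read-only permissions), the Agent could DROP tables, UPDATE any
    # record, or exfiltrate everything it can reach.
    if not statement.startswith("select"):
        raise PermissionError("Only SELECT statements are permitted")
    return conn.execute(llm_generated_sql).fetchall()
```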

The real future challenges will be in transparency and explainability: Agentic deployments in the wild have become so complex that determining which Agency Capability Pattern(s) are actually in use is already an enterprise operational challenge.

Soon it might be an existential one.

Stay frosty.

The Threat Model

  • Agents with Limited Agency capability patterns allow the LLM to modify or create certain parameters; failure modes depend on the application and the tooling access levels, and the parameters themselves.

  • Agents with Extensive Agency capability patterns generally deploy fully LLM-created fields–threat modeling for these deployments should always include worst-case scenarios for full access to anything the Agent touches.

  • Being able to spot the difference between these two Capability Patterns can mean the difference between successful deployment–and disaster.

Resources To Go Deeper

  • Fourney, Adam, Gagan Bansal, Hussein Mozannar, Cheng Tan, Eduardo Salinas, Erkang (Eric) Zhu, Friederike Niedtner, Grace Proebsting, Griffin Bassman, Jack Gerrits, Jacob Alber, Peter Chang, Ricky Loynd, Robert West, Victor Dibia, Ahmed M. Awadallah, Ece Kamar, Rafah Hosn and Saleema Amershi. “Magentic-One: A Generalist Multi-Agent System for Solving Complex Tasks.” ArXiv abs/2411.04468 (2024): n. Pag.

  • Lai, Hanyu, Xiao Liu, Iat Long Iong, Shuntian Yao, Yuxuan Chen, Pengbo Shen, Hao Yu, Hanchen Zhang, Xiaohan Zhang, Yuxiao Dong and Jie Tang. “AutoWebGLM: A Large Language Model-based Web Navigating Agent.” Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2024): n. Pag.

  • Kumar, Priyanshu, Elaine Lau, Saranya Vijayakumar, Tu Trinh, Scale Red Team, Elaine T. Chang, Vaughn Robinson, Sean M. Hendryx, Shuyan Zhou, Matt Fredrikson, Summer Yue and Zifan Wang. “Refusal-Trained LLMs Are Easily Jailbroken As Browser Agents.” ArXiv abs/2410.13886 (2024): n. Pag.

Executive Analysis, Research, & Talking Points

Smart Leaders Don’t Look At Code, They Look At Patterns

What sets the first three KC6 Agencies apart? In other words, when Agentic deployments become too big to manage by hand–which any Agentic deployment at scale definitionally is–what patterns can leaders look for to identify threats on an architectural level, before they impact the business?

Here’s what makes the first KC6 subcomponents different, and how to spot where they apply:
