Angles of Attack: The AI Security Intelligence Brief

Angles of Attack: The AI Security Intelligence Brief

Key Components of Agentic Architectures: Memory

How memory, sharing, sessions and more combine to introduce new threat vectors for Agentic AI systems | Edition 18

Disesdi Susanna Cox's avatar
Disesdi Susanna Cox
Oct 06, 2025
∙ Paid
6
2
Share

The OWASP GenAI Security Project released their Securing Agentic Applications Guide v1.0, and it’s full of valuable guidance on Agentic systems. There’s a ton of information–so I’m breaking it down here.

In this series, I’ll be laying out some of the ways that Key Components (KCs) of Agentic AI deployments expose Agentic systems to threats–and how we can think about and model these threats effectively.

There are six (6) KCs defined in the Guide, and together, they form a bedrock understanding of how Agentic architectural components function–from a security point of view.

You can find the previous edition, on Agentic Key Component Group 3 (KC3), here.

—

What makes a system Agentic?

When we think about the definition of Agentic within the context of AI, it’s inseparable from the same root concept as agency: the ability to act, especially with autonomy.

You might also recognize the core auto from words like autonomy and automate.

The language we choose to describe AI systems exposes two core principles in building them: First, that AI systems must provide a degree of autonomy in function; second, that this autonomy needs to be implemented at scale.

Everything about AI–from productionizing it, to securing it–depends on autonomy and scale. This is why MLOps is inseparable from MLSecOps in practice: if you can’t operationalize, you can’t scale.

In principle, a predictive AI model or a GenAI system that mostly works is good enough to productize. The reality is much messier: the underlying technical support systems that enable AI to exist in production require skilled talent, and capital investment–much to the disappointment of many a would-be vibe coded billionaire.

Back to the original question, what makes one of these autonomous systems Agentic? And how can we understand this from a first principles perspective?

If we define Agentic AI as systems capable of taking autonomous actions in the real world which would normally be undertaken by humans, we can now go back to an underlying principle component that makes it all possible: memory.

Memory is a core component of Agentic systems, giving them the ability to act in the real world. Without memory, something as basic as ordering tasks would be impossible. Memory is a bedrock component of Agentic architectures. Without persistence of some type, there is no ‘Agentic’ function.

So memory plays a key role–and thus forms a Key Component (KC)--of Agentic systems.

The Memory Modules of an Agentic system are plural for a reason: memory isn’t just one concept in Agentic deployments, but really a network of interconnected systems that allow Agents to integrate context, specialize, communicate, and most importantly, act.

Defining Memory More Formally: Context And Retrieval Windows

Humans tend to have a gradient of retrieval windows for memories; you can likely remember events from very recently, to years ago. Contrast this with AI Agents, where memory as an application has to be coded–and, as a result, their memory contexts divided more granularly.

For Agents, there are two main contexts for memory: short and long-term. This isn’t to say that a memory gradient doesn’t exist for AI Agents in practice, but for engineering purposes, it makes sense to divide these into two main areas.

Short-term memory typically refers to the immediate context, whereas long-term memory might include aspects like past interactions (for use cases where personalization is desired) or relevant situational information.

How to think about it: The long-term memory group is all part of a larger context than an immediate interaction. This is a useful distinction to remember.

An example of memory in use is Retrieval-Augmented Generation (RAG), where a vector database is engineered as a kind of external simulation of longer-term, contextual memory. RAG integrations are intended to provide Agents with access to external knowledge sources via semantic search, with the goal of improving their ability to interact and perform tasks.

This isn’t true long-term memory in the way that a human experiences it–RAG is more or less a mechanism for inserting context into the Agentic workflow on an as-needed basis.

A Deeper Understanding Of Agentic Memory: Classifying Interactions

Diving deeper into the core memory components, patterns emerge. One such pattern is that Agentic memory can be understood by Agentic interactions.

The relationship of interactions and memory in Agentic systems becomes incredibly important for mission-critical engineering, for both security and general deployment. It’s also important to understand that these interactions occur across many dimensions, involving more than one party: user interactions, Agent-to-Agent communications, and session logs for the Agent itself, which might contain information on the Agent’s interactions with outside systems as well.

There are six core memory components defined in the OWASP Guide:

  • KC4.1 In agent session memory

  • KC4.2 Cross-agent session memory

  • KC4.3 In agent cross-session memory

  • KC4.4 Cross-agent cross-session memory

  • KC4.5 In agent cross-user memory

  • KC4.6 Cross-agent cross-user

It’s worth noting that these groupings represent sets of components; there are in fact many memory components of AI Agents in practice, with their own vulnerabilities, from user error to supply chain.

No Possibility Of Quarantine

When it comes to Agentic memory risk there’s an elephant in the room: Because they are so heavily interconnected with interactions, the most effective memory attacks, by definition, cannot be quarantined.

In most Agentic deployments, memory is shared across multiple Agents, and multiple sessions. This is important to remember, because it recalls the cascading effects of security impacts in all Agentic systems: If an agent is compromised in a specific session, the downstream effects can include compromising more sessions, or even other agents.

One Agent’s security failure can affect other Agents, or even an entire system.

For AI Agents, there’s no separating context and memory from the engineering realities of sessions and interactions.

Context As Memory, Memory As Attack Vector

In practice, deploying AI Agents means treating context and memory as interchangeable.

They are of course not interchangeable in human intelligence. But for engineering purposes the practical reality is that this is the nomenclature–and methodology–of deploying Agentic systems.

Another application of the concept of context as memory: Defending Agentic systems, and the sensitive data they usually have access to, against accidental disclosure.

Agentic support systems may use techniques like classification or compartmentalization to prevent leaking sensitive or private data. These systems are intended to inform an AI Agent what it shouldn’t say or reveal–a “guardrail” against accidental disclosure of information. They provide a form of readily-available context for what disclosures are appropriate for the task, user authorization level, etc.

With context/memory playing such critical roles in Agentic systems, they also become an important attack vector.

To attack an Agent’s memory modules is to attack a core part of what makes it Agentic.

Memory and context being treated as the same engineering problem opens up a critical security vector: If an attacker can disrupt an Agentic system’s contextual references, they’ve affected its ability to protect its critical data, defend other Agents in the network, and most critically, perhaps even to act.

The Threat Model

  • Agentic systems rely on memory as context to work well and prevent data disclosure–making them an important attack vector.

  • Compromising the memory components of one Agent can compromise other Agents, and even lead to system-wide privacy failure.

  • Attacking any of an Agent’s many memory components means attacking its ability to act–the very thing that makes it Agentic–providing both a wide attack surface and a convenient vector for total system disruption.

Resources To Go Deeper

  • Chen, Zhaorun, Zhen Xiang, Chaowei Xiao, Dawn Xiaodong Song and Bo Li. “AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases.” ArXiv abs/2407.12784 (2024): n. pag.

  • Wei, Qianshan, Tengchao Yang, Yaochen Wang, Xinfeng Li, Lijun Li, Zhenfei Yin, Yi Zhan, Thorsten Holz, Zhiqiang Lin and XiaoFeng Wang. “A-MemGuard: A Proactive Defense Framework for LLM-Based Agent Memory.” (2025).

  • Xu, Wujiang, Zujie Liang, Kai Mei, Hang Gao, Juntao Tan and Yongfeng Zhang. “A-MEM: Agentic Memory for LLM Agents.” ArXiv abs/2502.12110 (2025): n. Pag.

  • Wang, Yu and Xi Chen. “MIRIX: Multi-Agent Memory System for LLM-Based Agents.” ArXiv abs/2507.07957 (2025): n. Pag.

Executive Analysis, Research, And Talking Points

Modeling Threats To The Six Core Memory Components

Agentic memory components comprise key technical architectures that make Agentic actions possible.

Here are some of their biggest attack vectors–and how their shared interactions impact threat modeling exercises:

Keep reading with a 7-day free trial

Subscribe to Angles of Attack: The AI Security Intelligence Brief to keep reading this post and get 7 days of free access to the full post archives.

Already a paid subscriber? Sign in
© 2025 Disesdi Susanna Cox
Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture