Angles of Attack: The AI Security Intelligence Brief

Why Last Week’s Joint Advisory On AI Security Is The Biggest News You May Have Missed

Angles of Attack Edition 3: MLOps & Lifecycle Data Security Come To The Fore

Disesdi Susanna Cox
May 26, 2025

When the NSA, CISA, and DOJ come together to issue a public security advisory–it means something.

When that advisory is about AI security, it’s time to pay attention.

Last week, three major US agencies issued a joint advisory on best practices for securing the data used in AI systems: the National Security Agency (NSA), the Cybersecurity and Infrastructure Security Agency (CISA), and the Department of Justice (DOJ).

This advisory comes during the same week that the US House of Representatives narrowly voted to approve a budget bill with an amendment that could restrict AI regulation for the next decade.

Some analysts & proponents of the amendment posit that the goal is to give the US federal government time to study and address the AI issue cohesively, rather than continuing to allow a patchwork of state regulations to arise. Others feel that it’s a giveaway to Big Tech, and an opportunity for large model providers to operate unregulated.

While the latter may be true for Congress, whose members depend on financial support for reelection–and for whom alliances with Big Tech’s deep pockets may be particularly alluring–this joint NSA/CISA/DOJ advisory signals that the government as a whole is paying increasingly close attention to the security of AI systems.

With the DOJ et al. signaling an interest in AI security, what might this mean for future enforcement?

Much AI Overlap, Little Agreement

The advisory runs to more than 20 pages and contains a wealth of information relevant to securing AI/ML data systems.

But three key aspects bear noting, especially in the current political climate.

First, it might help to back up and look at the recent history of AI regulation alongside AI security taxonomies.

What do US state-level AI regulations and AI security taxonomies have in common?

Up until now, there has been overlap, but not a lot of agreement.

What does overlap without agreement mean? To be fair, it’s a subjective comparison.

In this case, it’s meant to evoke a shared tendency: AI security taxonomies, like the emergent patchwork of AI regulations, cover largely the same subject matter, but with very different language and potentially different applications.

For a concrete example, we can look at three specific (and widely adopted) approaches to understanding AI security.

If someone wants to understand AI security threats through the lens of the Confidentiality, Integrity and Availability (CIA) model, it’s probably a good idea to reference the NIST AI 100-2 E2025 adversarial machine learning taxonomy.

For practical application of AI security controls, mapped to the AI security lifecycle, they may find it helpful to check the OWASP AI Exchange.

If they’re a fan of MITRE ATT&CK, they may find MITRE ATLAS to be an accessible resource.

And so forth.

There are of course other taxonomies, frameworks, and more, intended for various use cases and audiences.

In this light, it’s easy to see why varying approaches to AI security can be a good thing–and why they exist in the first place.

So it’s notable that three US agencies, along with allied international agencies, jointly issued an advisory agreeing on both a taxonomic and a practical approach to AI data security.

Image: Agencies issuing last week’s joint advisory on AI data security.

Three Areas Of Alignment, Key Changes Signaled

AI security concerns are no longer theoretical. As businesses and governments race to implement AI-driven solutions, they require practical & immediately applicable security methods and controls.

We need our AI security taxonomies and our controls to align.

Three notable areas of agreement in the advisory likely signal a sea change in the ways we will understand and apply AI security:

  1. The central role of data in AI systems, not just as an asset but as an attack vector (see the sketch after this list)

  2. The need for operationalization, as a necessary condition for AI security

  3. The focus on the AI lifecycle, and understanding it as a whole
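
To make the first point concrete, here is a minimal, purely illustrative sketch of data as an attack vector: randomly flipping a small fraction of training labels, a crude stand-in for a data poisoning attack, and comparing model accuracy before and after. The dataset, model, and 10% flip rate are arbitrary choices for demonstration, not anything prescribed by the advisory.

```python
# Purely illustrative: a toy label-flipping experiment, a crude stand-in for
# data poisoning. All choices here (dataset, model, flip rate) are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels: np.ndarray) -> float:
    """Train on (X_train, labels) and report held-out accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# "Poison" the training set by silently flipping 10% of the labels.
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[flip] = 1 - poisoned[flip]

print(f"clean accuracy:    {train_and_score(y_train):.3f}")
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```

Even this crude corruption can shift a model’s behavior without any change to the code or infrastructure, which is exactly why the advisory treats the data itself as an attack surface.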

For many of us in the field who have been talking about AI lifecycles and the centrality of AIMLOps to AI security for a while now, this focus comes as hopeful news that these agencies–and perhaps the broader US government–are taking the practical realities of securing AI systems seriously.

But where does this focus come from, and what could it mean for AI security in industry?

The advisory gives three goals for its guidance:

  1. Raise awareness of the potential risks related to data security in the development, testing, and deployment of AI systems

  2. Provide guidance and best practices for securing AI data across various stages of the AI lifecycle

  3. Establish a strong foundation for data security in AI systems by promoting the adoption of robust data security measures and encouraging proactive risk mitigation strategies

Item two deserves particular emphasis, specifically its focus on the AI development lifecycle.

The advisory’s list of best practices also covers a number of techniques, ranging from data encryption and secure storage to data provenance tracking and beyond.

The message is clear: when data becomes a means of attack, simply storing data securely is no longer enough.
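
As one hedged illustration of what that could look like in practice, the sketch below records a SHA-256 hash of each dataset file at ingestion and verifies the hashes before training, a simple form of the provenance and integrity tracking the advisory encourages. The directory layout, file names, and manifest format are hypothetical, not taken from the advisory.

```python
# Illustrative only: a simple provenance/integrity gate for training data.
# File names, paths, and the manifest format are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(data_dir: Path, manifest: Path) -> None:
    """Record a hash for every dataset file at ingestion time."""
    hashes = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> None:
    """Fail closed before training if any file drifted since ingestion."""
    recorded = json.loads(manifest.read_text())
    for name, expected in recorded.items():
        if sha256_of(data_dir / name) != expected:
            raise RuntimeError(f"integrity check failed for {name}")

# e.g., at ingestion:            write_manifest(Path("data"), Path("manifest.json"))
# e.g., before each training run: verify_manifest(Path("data"), Path("manifest.json"))
```

Failing closed on a hash mismatch is the design point: the pipeline refuses to train on data whose provenance cannot be confirmed.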

The Threat Model

  • It’s significant that in an era of uncertainty around AI regulation, US government agencies are stepping up with a united front to codify what matters in AI security.

  • With government agencies signaling the importance of data security to AI systems, treating data merely as an asset, rather than as an attack surface, is no longer enough. Smart organizations will need to build out MLOps and MLSecOps to stay ahead of the curve.

  • Whether this move signals a shift in future regulation, industry expectation, or both, leaders would be wise to build in compliance & best practices from the ground up, before technical debt makes the situation more costly.

Go Deeper

  • Kreuzberger, Dominik, Niklas Kühl and Sebastian Hirschl. “Machine Learning Operations (MLOps): Overview, Definition, and Architecture.” IEEE Access 11 (2022): 31866-31879.

  • Wazir, Samar, Gautam Siddharth Kashyap and Parag Saxena. “MLOps: A Review.” (2023).

  • Symeonidis, Georgios, Evangelos Nerantzis, Apostolos Kazakis and George A. Papakostas. “MLOps - Definitions, Tools and Challenges.” 2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC) (2022): 0453-0460.

Go Even Deeper

  • Martínez-Fernández, Silverio, Justus Bogner, Xavier Franch, Marc Oriol, Julien Siebert, Adam Trendowicz, Anna Maria Vollmer and Stefan Wagner. “Software Engineering for AI-Based Systems: A Survey.” ACM Transactions on Software Engineering and Methodology (TOSEM) 31 (2021): 1-59.

Executive Analysis, Policy Angle, & Talking Points

We can–and should–talk about the importance of increasing awareness of the potential risks to AI systems, where data moves from being the “new oil” to being a new attack vector.

We also can–and should–talk about what it means that these particular agencies came to the consensus that AI data security is a matter of such importance to national security that it warranted addressing publicly.
