Last week at the RSA Conference, one theme came through loud and clear: the security industry is working hard to get its arms around AI agents. From CrowdStrike's Falcon Data Security to Cisco's DefenseClaw to Microsoft Entra Agent ID, vendors are racing to govern and secure AI agents: to discover them, give them identities, monitor their behavior, and shut them down when something goes wrong.
It's meaningful progress. But there's a gap that largely went unaddressed - and it's worth talking about.
The industry is building fences around agents. But what about the data itself?
Most of what was announced at RSAC this year follows the same basic logic: find the agent, identify it, watch what it does, and stop it if it misbehaves. That's perimeter security, applied to a new problem. And it works, up to a point.
The challenge is that this approach assumes you can enumerate all your agents, that they behave predictably, AND that you can catch problems before real damage is done. The data from the conference tells a different story. According to TechRepublic, 63% of organizations can't enforce purpose limitations on their AI agents, and 60% can't terminate a misbehaving one fast enough to matter.
What does this point to? You can't rely on controlling the agent alone; you need to control the data as well.
There are clear gaps in data security that the industry needs to solve, including:
- Data is still moving through agents. The dominant architecture - DSPM tooling, browser-level DLP, agentic MDR - assumes data will be copied, staged, or fetched into agent workflows. This creates a new governance problem with every copy.
- ID isn’t enough. Identity-based tools like Microsoft Entra Agent ID assign an ID to an agent. But an ID alone doesn't restrict what fields that agent can actually see on any given query. Identity and access are still two different things for most of these tools.
- Permissions are coarse and static. Traditionally, permissions are set at deployment time and remain fixed, but AI agents are dynamic and somewhat unpredictable. The "right" data access permissions can change moment to moment. Static configuration and behavioral monitoring leave a gap in between.
- Runtime containment is insufficient. TechRepublic reported that 60% of organizations still can't terminate a misbehaving agent in time to limit damage. If stopping harm means killing the agent, that's a slow, blunt instrument.
- Compliance is after the fact. Every vendor that mentioned compliance at RSAC framed it as logging and reporting layered on top of an existing architecture - not governance that's native to the data request itself. Real-time policy enforcement at the point of access is notably absent.
- Shadow AI governance is reactive. Discovery, by definition, lags deployment. Vendors like Nudge Security, Astrix, and CrowdStrike are doing important work discovering unknown agents before they cause harm, but the real battlefield is delivering policy enforcement where the data lives. Enforcing policy directly at the data source - rather than at the agent - means even an undiscovered agent gets the same treatment as a sanctioned one.
- MCP is expanding the attack surface, fast. Several RSAC sessions flagged MCP as a new class of risk that gives agents broad reach across enterprise tools. The protocol is being adopted much, much faster than the security and governance frameworks are being adapted to contain it.
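To make the data-layer idea above concrete, here is a minimal sketch of what "enforcing policy where the data lives" could look like. All names here (the `Policy` class, the `enforce` function, the `support_triage` purpose) are hypothetical illustrations, not any vendor's actual API: the point is only that the filter sits on the data path, so a request is reduced to its permitted fields whether or not the calling agent was ever discovered or issued an identity.

```python
# Hypothetical sketch of field-level policy enforcement at the data source.
# Any caller - sanctioned agent, shadow agent, or unknown - passes through
# the same gate, so enforcement doesn't depend on agent discovery.

from dataclasses import dataclass


@dataclass(frozen=True)
class Policy:
    # Maps a declared purpose to the set of fields it may read.
    # Anything not listed is denied by default.
    allowed_fields: dict


def enforce(policy: Policy, purpose: str, record: dict) -> dict:
    """Filter a record down to the fields the stated purpose may see."""
    allowed = policy.allowed_fields.get(purpose, set())  # default deny
    return {k: v for k, v in record.items() if k in allowed}


policy = Policy(allowed_fields={
    "support_triage": {"ticket_id", "summary"},
})

record = {"ticket_id": 42, "summary": "login failure", "ssn": "123-45-6789"}

# A sanctioned agent and an undiscovered one get identical treatment.
print(enforce(policy, "support_triage", record))   # sensitive field stripped
print(enforce(policy, "unknown_purpose", record))  # default deny: empty
```

The design choice worth noting: because the check runs per request at the point of access, the "coarse and static permissions" problem above shrinks - the purpose can be evaluated moment to moment rather than fixed at deployment time.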
The Bigger Picture
What RSAC 2026 put on full display is that the industry has made real strides on agent identity, discovery, and monitoring. But as the pattern across every major announcement last week suggests, the industry still lacks a governance model that operates at the data layer - one that doesn't depend on knowing which agents are running, who built them, or whether they've been discovered yet.
That's the conversation the industry needs to be having. And based on what we saw last week, it's just getting started.