There's a particular kind of clarity that emerges at RSAC when the entire industry finds itself working through the same structural problem at the same time, even if the people in the room haven't quite agreed on what to call it yet. RSAC 2026 was that kind of year. Jen Easterly presided for the first time, AI banners covered every surface of Moscone, and the show floor had more agentic AI positioning than any previous conference I can remember. The energy was real, but so was the frustration underneath it. Every serious conversation I had throughout the week, whether in investor roundtables, customer meetings, or the late-night sidebar conversations that tend to be more honest than anything on stage, eventually arrived at the same place.
People weren't just talking about AI agents. They were trying to figure out how to secure them, gain visibility into what they were doing, and act on what they found, at a speed that actually matched the environment in which they were operating.
And in almost every case, the conversation hit a wall, because the foundational layer that all three of those problems depend on, the identity tooling, wasn't able to answer.
How the Agents Arrived
Before you can talk about securing AI agents, you have to be honest about why the problem grew this fast, and through how many doors the agents actually came in. The first and most visible door is developer tooling. Engineering teams across every enterprise are now purchasing AI coding platforms like Cursor, Windsurf, and GitHub Copilot, and these tools do far more than autocomplete a function. They spin up agents that read entire repositories, query databases to understand schema, call internal APIs, and make architectural recommendations in the background. Every team that adopts one of these platforms has effectively onboarded a fleet of non-human identities (NHIs) into their environment, each with access to source code, secrets, and infrastructure, and none of them went through a security review to get there.
The second door is embedded AI in SaaS. Salesforce shipped Agentforce. ServiceNow has AI agents woven into its workflows. HubSpot, Notion, Slack, and dozens of other platforms are doing the same. The important thing to understand here is that organizations aren't choosing to deploy these agentic identities. The agents are arriving with tools the organization already bought, and each one inherits whatever level of access the SaaS platform already has, without separate onboarding, a separate identity, or a separate risk assessment attached to it.
The third door is experimentation. Every team with a credit card and an API key is building custom agents, connecting LLMs to internal knowledge bases, wiring up MCP servers to enterprise data, and building agentic workflows that pull from CRMs, ticketing systems, and cloud infrastructure. Most of this work is happening entirely outside the security team's line of sight.
The result of all three doors opening simultaneously is that every enterprise now has a population of AI agents that is growing faster than its human workforce. And these agents are fundamentally different from traditional software. They are non-deterministic, reasoning and making decisions based on context rather than fixed instructions, and they operate at a speed and scale that human-centric security infrastructure was never designed to handle. That mismatch between how agents actually behave and how most enterprise security is structured was the tension underneath almost everything at RSAC 2026, and it surfaced most clearly in three overlapping conversations that kept pulling people back.
What Everyone Was Actually Talking About
The first conversation was about access. Specifically, how do you enforce just-in-time access, just-enough access, and task-based access control for agentic identities that operate at machine speed, across thousands of simultaneous sessions, in environments that span multiple clouds, SaaS platforms, IDPs, and even on-premises systems? The controls we have today were designed for a world where a human opens a ticket, waits for approval, and logs into a system. Agents don't work that way, and the access models built around human workflows are architecturally unable to keep pace.
The second conversation was about visibility. Which agents are running in my environment, what data are they touching, which APIs are they calling, and what risks are they introducing? The question I kept hearing from CISOs and CIOs, sometimes phrased carefully and sometimes bluntly: are these agents bypassing my existing security controls? In most environments today, the honest answer is yes, because the controls were designed for a world where every identity was a person.
The third conversation was about what happens after you find something. The traditional posture management model, where you surface findings, rank them by severity, display them on a dashboard, and wait for a human to decide what to do, was built for a world that moved at human speed. In an environment where thousands of agents are making access decisions in parallel around the clock, that model introduces a gap between detection and response that is measured in hours or days while the agents operate in seconds. The conversations that gained the most traction at RSAC were the ones focused on automated prioritization, contextual remediation, and the ability to fix issues at runtime without requiring a human in the loop for every alert.
At first glance, these three conversations seem like they belong to different teams and different vendor categories. Access management on one side, security posture and visibility on another, and security operations handling the response. Three separate budget line items, three separate evaluation processes. But that framing is exactly what's holding the industry back, because all three conversations depend on the same underlying foundation: knowing what identities exist in your environment, understanding what each one can access, and having the context to act on that information in runtime rather than after the fact.
The Foundation That Wasn't There
Every agent, regardless of where it came from or what it's doing, has to access something to be useful, and the moment it accesses anything, a privilege decision has already been made. The question is whether that decision was made deliberately or whether it happened by default because someone spun up a tool and it inherited whatever permissions the underlying service account already had.
That privilege decision, made at the identity layer, is the first domino in the chain. If an agent has broader access than its task requires, the exposure exists before any data security tool enters the picture. If there's no visibility into what an agent accessed, when, and under what context, the ability to prioritise findings or detect anomalous behaviour becomes guesswork rather than analysis. And if the identity layer can't provide contextual information about what an agent normally does versus what it just did differently, automated remediation has nothing to work with.
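To make the point concrete, here is a minimal sketch of the kind of behavioral baseline that gives remediation something to work with. The `AgentBaseline` class, its learning window, and the never-seen-before rule are illustrative assumptions for this article, not a real product API; production systems would use richer statistics than a simple count.

```python
# Illustrative sketch (hypothetical API): flag agent actions that deviate
# from an observed baseline of normal behavior.
from collections import Counter

class AgentBaseline:
    def __init__(self):
        self.seen = Counter()  # (resource, action) -> observed count

    def observe(self, resource: str, action: str) -> None:
        """Record a routine interaction during a learning window."""
        self.seen[(resource, action)] += 1

    def is_anomalous(self, resource: str, action: str) -> bool:
        """An action never seen in the baseline is flagged for review."""
        return self.seen[(resource, action)] == 0

baseline = AgentBaseline()
baseline.observe("staging-db", "read_schema")
baseline.observe("repo/api", "read")

routine = baseline.is_anomalous("staging-db", "read_schema")  # False: normal
novel = baseline.is_anomalous("prod-db", "export_table")      # True: new behavior
```

The point of the sketch is the dependency, not the detection logic: without an identity layer recording what each agent normally touches, there is nothing to compare the new action against.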
The access conversation, the visibility conversation, and the remediation conversation all trace back to the same dependency: a unified identity layer that spans AI agents and NHIs across every cloud, every SaaS application, every identity provider, and every on-premises system in the environment. When that layer exists and covers every identity type in one place, access governance becomes enforceable, visibility becomes actionable, and automated response has the context it needs to operate intelligently. When that layer is fragmented across multiple tools that each cover a piece of the environment, every downstream control inherits the gaps.
What struck me most at RSAC was how many organizations are now evaluating a third category of identity tooling specifically for agentic identities, on top of the tools they already have for human identities and for service accounts and API keys. Three consoles, three policy engines, three incomplete views of the same environment. The governance discipline that already works for human identities (inventory, access review, anomaly detection, and automated cleanup) needs to extend to every identity type through the same fabric that covers everything in one place, rather than through a separate product bolted onto the stack.
The Layer Nobody Is Governing
One technical reality kept surfacing underneath every one of these conversations, whether people used the term or not: Model Context Protocol.
MCP is how AI agents connect to external data sources, tools, and enterprise systems at runtime. It's the integration layer that makes agentic AI functional, the mechanism through which agents query databases, call APIs, and interact with enterprise tools dynamically and in context. And it is precisely because MCP is the access layer for agents that it is the most consequential ungoverned surface in most enterprise environments today. Consider what happens in practice. A developer spins up an AI coding agent with access to a code repository. That agent, through MCP, connects to the CI/CD pipeline, queries a staging database to understand the schema, and pulls context from an internal wiki. Each of those connections represents a privilege decision that was made implicitly rather than explicitly, with no review, no approval, and no record of what data moved through the connection afterward. Multiply that pattern by every agent across every team in an organization, and the scale of what is happening outside the visibility of traditional security controls becomes difficult to overstate.
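One way to picture what "explicit rather than implicit" would look like at this layer is an allow-list gate in front of tool calls, where every decision is recorded whether it succeeds or not. This is a hypothetical sketch; the agent and tool names, the allow-list shape, and the audit record format are assumptions for illustration, and real MCP servers expose far richer metadata than this.

```python
# Illustrative sketch (hypothetical names): an explicit authorization gate
# in front of MCP-style tool calls, with an audit trail of every decision.
from datetime import datetime, timezone

ALLOWED_TOOLS = {
    "coding-agent": {"repo.read", "ci.status"},  # deliberately scoped
}

audit_log = []

def call_tool(agent: str, tool: str, **params):
    """Make the privilege decision explicit, and record it either way."""
    allowed = tool in ALLOWED_TOOLS.get(agent, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent, "tool": tool, "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent} is not authorized for {tool}")
    return f"executed {tool}"

call_tool("coding-agent", "repo.read", path="src/")
try:
    call_tool("coding-agent", "staging-db.query", sql="SELECT *")
except PermissionError:
    pass  # denied and logged, rather than implicitly inherited
```

The contrast with the developer scenario above is the point: each connection becomes a reviewed, recorded decision instead of an inherited one.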
The challenge is that you can have the best identity governance in the world for your human users and still have a completely ungoverned path at the MCP layer. Addressing that gap requires an AI governance control plane that understands how agents authenticate, what they are authorized to do, and how those decisions are enforced at runtime, as part of the same identity fabric that governs every other identity in the environment, rather than as a standalone product that adds yet another console to the stack.
The Path Forward
The CISOs I spoke with at RSAC weren't looking for a magic button, and they weren't interested in three-year transformation roadmaps. They were looking for a rational sequence of steps they could begin executing now, with the understanding that full-scale agentic identity proliferation is likely just a few months away for most enterprises, and the architectural groundwork has to happen before the buying does. The sequence that kept emerging in conversations was consistent. Start with discovery and visibility, because you cannot govern what you cannot see, and most organizations are genuinely surprised by the number of service accounts, API keys, OAuth tokens, and unmanaged agent identities that exist in their environment the first time they run a comprehensive scan across clouds, SaaS platforms, identity providers, and on-premises systems. That visibility alone changes every subsequent decision about access policy and remediation strategy.
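The mechanics of that first discovery step are worth sketching. The core of it is deduplication: the same identity often surfaces in several sources, and the inventory is only useful once those records collapse into one view. The source names and record fields below are illustrative assumptions; real connectors would pull from provider APIs.

```python
# Illustrative sketch (hypothetical sources and fields): merge identity
# inventories pulled from several systems into one deduplicated view.
def merge_inventories(*sources):
    """Key each identity by (type, name) so duplicates across sources collapse."""
    merged = {}
    for source_name, records in sources:
        for rec in records:
            key = (rec["type"], rec["name"])
            entry = merged.setdefault(key, {**rec, "seen_in": set()})
            entry["seen_in"].add(source_name)
    return merged

cloud = ("aws", [{"type": "service_account", "name": "ci-deployer"}])
saas = ("salesforce", [{"type": "agent", "name": "support-agent"}])
idp = ("okta", [{"type": "service_account", "name": "ci-deployer"}])

inventory = merge_inventories(cloud, saas, idp)
# "ci-deployer" appears once, attributed to both aws and okta
```

The "seen_in" attribution matters as much as the count: an identity that exists in the cloud but not in the identity provider is exactly the kind of unmanaged account the scan is meant to surface.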
From there, enforce access at runtime. Just-in-time access, task-based access control, and least-privilege enforcement have to operate at the speed agents operate, which means static policies and manual approval workflows become structural bottlenecks the moment agent adoption scales. And once visibility and access governance are running on the same identity fabric, the context needed for intelligent prioritization and automated remediation finally becomes available, because the system knows what the identity is, what it normally does, and what it just did differently. Each step depends on the one before it. You can't automate what you can't govern, and you can't govern what you can't see.
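What just-in-time, task-scoped access looks like mechanically can be sketched in a few lines: a grant is issued per task with a short time-to-live, and it is checked at use time rather than assumed. The class, the identifiers, and the TTL policy here are illustrative assumptions, not a reference implementation.

```python
# Illustrative sketch (hypothetical API): just-in-time grants that are
# task-scoped, short-lived, and fail closed when checked at runtime.
import time

class GrantStore:
    def __init__(self):
        self._grants = {}  # (agent, resource) -> expiry timestamp

    def issue(self, agent: str, resource: str, ttl_seconds: float) -> None:
        """Grant exactly one resource for the duration of one task."""
        self._grants[(agent, resource)] = time.monotonic() + ttl_seconds

    def check(self, agent: str, resource: str) -> bool:
        """Checked at use time: missing or expired grants are denied."""
        expiry = self._grants.get((agent, resource))
        return expiry is not None and time.monotonic() < expiry

store = GrantStore()
store.issue("reporting-agent", "crm.accounts.read", ttl_seconds=0.05)
store.check("reporting-agent", "crm.accounts.read")  # True while the task runs
time.sleep(0.06)
store.check("reporting-agent", "crm.accounts.read")  # False after expiry
```

The design choice embedded here is the one the paragraph above argues for: the default is no access, and nothing outlives the task that justified it, which is what lets enforcement keep pace with machine-speed sessions.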
The Takeaway
If I had to distill what RSAC 2026 told me into a single working thesis, it would be this: the organizations that deploy AI agents at scale without creating years of security debt are the ones that treat identity as the foundation rather than one category among several. Every conversation on the floor about securing agent access, gaining visibility into agent behavior, and operationalizing the response traced back to the same dependency: a unified identity architecture that covers every identity type in one place. The hybrid workforce of humans and agents is already being built. Architectural decisions that will determine whether those organizations are ready are being made right now. The practitioners who left RSAC with the clearest path forward were the ones who recognized that access, visibility, and remediation are expressions of the same challenge, and that challenge starts and ends with identity.