According to CRN, Okta President and COO Eric Kelleher says the security of AI agents is now the number one identity-related threat worrying the company's customers. This concern has dominated conversations over the past quarter, which ended October 31. Financially, Okta is thriving, with Q3 revenue hitting $742 million, a 12% year-over-year increase that beat analyst estimates of $730.4 million. Earnings also surpassed expectations, coming in at 82 cents per share versus the anticipated 76 cents. In response, Okta is pushing its "Okta for AI Agents" offering, unveiled in September, which treats agents as "first-class citizens" in its identity stack, allowing them to be managed, provisioned, and secured under policies similar to those applied to human users.
The Rush and the Risk
Here's the thing: everyone's deploying AI agents, but almost no one is confident they're secure. Okta's own survey spells it out: 90% of organizations are using agents, but a mere 10% feel confident in how those agents are governed. That's a staggering gap. It paints a picture of a frantic, "deploy now, figure out security later" mindset that's all too common in tech. Companies are scared of being left behind, so they're giving these autonomous software entities access to sensitive data and systems before they have a real handle on the risks. Basically, we're building the plane while it's already in the air, and the passengers are starting to get nervous.
What Securing an Agent Even Means
So what does Okta mean by securing an agent? It's not just about a login box. Kelleher talks about vaulting credentials, rotating them automatically, and using "just-in-time" provisioning so an agent holds permissions only while it's actively performing a task. The goal is to eliminate "standing permissions": the constant, always-on access that's a goldmine for attackers if compromised. The idea is to apply the same identity governance rules you'd use for a human employee: who can access what, when, and for how long. It makes sense conceptually. But the devil is in the implementation. How do you define an agent's "job role"? What does "least privilege" look like for a piece of code that's designed to act autonomously?
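To make the "just-in-time" idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the CredentialBroker, the POLICY table, and the role and scope names are illustrations of the pattern, not Okta's actual API. The point is simply that an agent receives a fresh, narrowly scoped, short-lived credential when a task starts and loses it when the task ends, so there's no standing access to steal.

```python
import secrets
import time
from dataclasses import dataclass

# Hypothetical policy table: which scopes each agent "role" may request,
# and the maximum lifetime of any credential issued for that role.
POLICY = {
    "invoice-agent": {"scopes": {"billing:read", "billing:write"}, "max_ttl": 300},
    "report-agent": {"scopes": {"analytics:read"}, "max_ttl": 900},
}

@dataclass
class Credential:
    token: str
    scopes: frozenset
    expires_at: float

    def is_valid(self, scope: str) -> bool:
        # A credential is honored only while unexpired and within scope.
        return time.time() < self.expires_at and scope in self.scopes

class CredentialBroker:
    """Issues short-lived, task-scoped credentials instead of standing keys."""

    def __init__(self):
        self._issued: dict[str, Credential] = {}

    def issue(self, agent_role: str, scopes: set, ttl: int) -> Credential:
        policy = POLICY.get(agent_role)
        if policy is None:
            raise PermissionError(f"unknown agent role: {agent_role}")
        if not scopes <= policy["scopes"]:
            raise PermissionError(f"scopes {scopes - policy['scopes']} not allowed")
        ttl = min(ttl, policy["max_ttl"])  # never exceed the policy ceiling
        cred = Credential(
            token=secrets.token_urlsafe(32),  # fresh secret per task, never reused
            scopes=frozenset(scopes),
            expires_at=time.time() + ttl,
        )
        self._issued[cred.token] = cred
        return cred

    def revoke(self, cred: Credential) -> None:
        # Explicit revocation when the task finishes, so nothing lingers.
        self._issued.pop(cred.token, None)

if __name__ == "__main__":
    broker = CredentialBroker()
    cred = broker.issue("invoice-agent", {"billing:read"}, ttl=120)
    print(cred.is_valid("billing:read"))   # True: in scope, unexpired
    print(cred.is_valid("billing:write"))  # False: never requested for this task
    broker.revoke(cred)
```

Even in this toy version, the hard questions show up: the POLICY table is where you'd have to define an agent's "job role," and the scope allowlist plus TTL ceiling is one possible answer to what "least privilege" means for autonomous code.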
A Market Opportunity in a Problem
Let’s be clear: this is a massive market opportunity for Okta. They’re framing a new, complex problem as a natural extension of their existing identity solution. By saying they can make agents “first-class citizens,” they’re telling customers, “You don’t need a whole new security paradigm; you just need us to do what we already do, but for bots.” It’s smart positioning. Their strong quarterly earnings give them the runway to invest in this narrative and try to own the category early. The risk for the industry, however, is treating this as just a checkbox. Securing autonomous AI agents that can make decisions and take actions is fundamentally different from managing user logins. It requires a deeper rethink of trust and control. Okta’s approach is a necessary first step, but is it sufficient for the long term? I think we’re just at the beginning of figuring that out.
