Enterprise identity was built for humans — not AI agents

Presented by 1Password


Integrating agentic capabilities into enterprise environments is transforming the threat landscape by introducing a new class of actors into identity systems. The core challenge is that AI agents take actions within sensitive enterprise systems without the visibility or control that traditional identity and access systems were designed to provide.

AI tools and autonomous agents are rapidly spreading across enterprises, outpacing security teams’ ability to monitor or regulate them. Meanwhile, existing identity systems are ill-equipped to handle delegated human authority, short-lived execution contexts, or agents operating in tight decision loops.

This necessitates a reevaluation of the trust layer itself. NIST’s Zero Trust Architecture (SP 800-207) makes the point directly: all subjects, including applications and other non-human entities, must be treated as untrusted until they are authenticated and authorized.

In an agentic world, AI systems must have distinct, verifiable identities of their own, rather than relying on shared or inherited credentials.

“Enterprise IAM architectures assume all system identities are human, relying on consistent behavior, clear intent, and direct human accountability to establish trust,” explains Nancy Wang, CTO at 1Password and Venture Partner at Felicis. “Agentic systems challenge these assumptions, requiring a shift in how we represent agents’ authority and accountability.”
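One way to make delegation and accountability concrete is to issue each agent its own short-lived, narrowly scoped credential that records the human who delegated the authority. The sketch below is illustrative only: the names (`AgentCredential`, `mint_agent_credential`, the scope strings) are hypothetical and not drawn from any specific IAM product.

```python
import secrets
import time
from dataclasses import dataclass, field

# Illustrative sketch, not a real product API: a credential that gives an
# agent a distinct identity, records the delegating human, grants explicit
# scopes, and expires quickly so authority must be re-established.

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str         # the agent's own distinct, verifiable identity
    delegated_by: str     # the accountable human principal
    scopes: frozenset     # explicit, minimal permissions
    expires_at: float     # short TTL forces periodic re-authorization
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))

    def allows(self, scope: str, now: float = None) -> bool:
        """A scope is usable only while the credential is unexpired."""
        now = time.time() if now is None else now
        return now < self.expires_at and scope in self.scopes

def mint_agent_credential(agent_id, human_id, scopes, ttl_seconds=300):
    """Issue a credential tied to both the agent and its delegating human."""
    return AgentCredential(
        agent_id=agent_id,
        delegated_by=human_id,
        scopes=frozenset(scopes),
        expires_at=time.time() + ttl_seconds,
    )

cred = mint_agent_credential("agent:code-review-bot", "user:alice", ["repo:read"])
print(cred.allows("repo:read"))    # granted scope, within TTL -> True
print(cred.allows("repo:write"))   # scope never granted -> False
```

Because the credential names both the agent and the delegating human, every action it authorizes can be traced back to an accountable person, and the short TTL limits how far a compromised or misbehaving agent can drift from its original mandate.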

Impact of AI Agents on Development Environments

The modern development environment faces a breakdown in identity assumptions when AI agents are introduced. Traditional IDEs were not designed with AI agents in mind, leading to new risks that conventional security models are unprepared for.

AI agents can inadvertently breach trust boundaries, with seemingly benign elements like documentation or project content influencing their behavior in unexpected ways. Input sources now extend beyond executable files, encompassing various data points that impact agent decision-making processes.

Challenges of Agents Acting Without Intent or Accountability

Highly autonomous agents with elevated privileges pose a significant risk: they often lack the context to judge whether an authentication request is legitimate, or under whose authority they are acting. Placing explicit constraints on agent actions becomes essential to prevent unauthorized activity.

Rethinking Security Architecture for Agentic Systems

Securing agentic AI entails a fundamental rethink of enterprise security architecture: a shift toward identity-centric controls, context-aware access policies, zero-knowledge credential handling, comprehensive auditability, and clear trust boundaries across humans, agents, and systems.
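A deny-by-default policy check that considers identity, scope, and execution context together, and writes every decision to an audit trail, illustrates how several of these shifts combine. This is a minimal sketch under stated assumptions; the rule set, field names, and the `"approved-runtime"` context label are all hypothetical, not a description of any vendor's policy engine.

```python
import time

# Illustrative sketch of a context-aware, deny-by-default policy check
# with an audit trail. All rules and field names are assumptions.

AUDIT_LOG = []

def evaluate(request: dict) -> bool:
    """Allow only when identity, granted scope, and context all check out."""
    decision, reason = "deny", "no matching rule"
    if not request.get("agent_id"):
        reason = "unidentified subject"                 # zero trust: no anonymous actors
    elif request.get("scope") not in request.get("granted_scopes", ()):
        reason = "scope not granted"                    # identity-centric control
    elif request.get("source") != "approved-runtime":
        reason = "untrusted execution context"          # context-aware policy
    else:
        decision, reason = "allow", "identity, scope, and context verified"
    # Every decision is recorded, allowed or not, for later accountability.
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent_id": request.get("agent_id"),
        "delegated_by": request.get("delegated_by"),
        "scope": request.get("scope"),
        "decision": decision,
        "reason": reason,
    })
    return decision == "allow"

ok = evaluate({
    "agent_id": "agent:deploy-bot",
    "delegated_by": "user:alice",
    "scope": "db:read",
    "granted_scopes": ("db:read",),
    "source": "approved-runtime",
})
print(ok)  # True: identity, scope, and context all verified
```

The key design choice is that denial is the default and every evaluation, including denials, lands in the audit log, so reviewers can reconstruct what an agent attempted and under whose delegated authority.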

“The future of enterprise security lies in adapting identity systems to encompass context, delegation, and accountability for both humans and AI agents,” Wang emphasizes. “This evolution is critical for governing autonomous agents effectively.”


Sponsored articles on VentureBeat are produced by companies with a business relationship with the platform and are clearly labeled as such. For more information, contact sales@venturebeat.com.