Credential management for AI agents

by Rachel Sudbeck
May 7, 2026 - 13 min

The proliferation of credentials outside centralized visibility and control is known as “credential sprawl,” and attackers are eager to take advantage of it.
Unfortunately, credential management is a broad problem that only grows in complexity as organizations add new tools, employees, and partners. Today’s companies have to manage an ever-growing number of credentials that go well beyond traditional passwords, such as developer secrets, passkeys, shared logins, API keys, SSH keys, service accounts, and SSO access tokens. Each of these, if exposed in an attack or breach, can have severe consequences, and developer secrets pose particular, systemic risk.
Addressing credential sprawl has become especially urgent due to the rise of AI-based tools and agents. AI agents are a primary driver of credential sprawl because they create, use, and replicate credentials at machine scale. They have unique access needs and can behave both autonomously and unpredictably. Companies that want to integrate AI-based tools must carefully consider how to mitigate these risks to avoid an exponential rise in unmanaged and vulnerable credentials.
How do AI agents increase credential risk?
AI agents increase credential security risks through their reliance on non-human identities like API keys and service accounts, which are frequently overprivileged, long-lived, and poorly audited. Agents create and use these credentials at machine scale, beyond centralized oversight, leading to rapidly expanding credential sprawl with limited oversight for security teams. And while AI tools and agents pose new and distinct risks, they’re also expanding on credential problems that have existed for years, stemming from SaaS sprawl, shadow IT, and unsafe developer practices.
Traditional security tools are falling behind
As security analyst Francis Odum shared in his enterprise identity security report, “As organizations increasingly adopted SaaS applications, the need for enterprise-grade password management became more pronounced. Employees frequently relied on personal credentials for work accounts, increasing the risk of credential reuse and security incidents. While Single Sign-On (SSO) and Multi-Factor Authentication (MFA) became standard controls, they often failed to cover the full range of enterprise applications, leaving visibility gaps…”
In its research report, 1Password found that the average company has a third of its apps outside SSO’s protection. Our report also noted that, “One major indicator of how SSO is falling short is the amount of access that comes from employees whom IT believed to have been successfully offboarded. Over one-third (38%) of employees have successfully accessed a prior employer’s account, data, or applications after leaving the company.”
Now, AI is accelerating SaaS sprawl even further beyond what SSO was built for. 1Password’s research also found that 1 in 4 employees has used AI applications that weren’t approved by their company, and over a third of employees admit to having knowingly disregarded their company’s AI policies.
Employees are experimenting with AI coding tools, browser extensions, writing assistants, data analysis tools, and agent platforms, often before IT has evaluated or approved them. Many of these tools don’t integrate cleanly with enterprise SSO, and even when they do, adoption frequently begins outside official onboarding processes. Shadow AI poses serious risks, as even innocuous apps can contain security flaws that expose company data and credentials.
Each unmanaged app and AI tool represents at least one unmanaged credential that an organization can’t secure. And as the number of unmanaged credentials grows, so does the likelihood that one is exposed, overprivileged, forgotten, or used to create a direct path to unauthorized access. The result is an ever-expanding layer of applications and credentials that exist outside centralized governance.
Unmanaged AI agent access
AI agents represent an entirely new class of identities; they require varying levels of access, and they operate in ways that are frequently invisible to security tools.
As The Hacker News put it, “AI agents don't operate in isolation. To function, they need access to data, systems, and resources. This highly privileged, often overlooked access happens through non-human identities: API keys, service accounts, OAuth tokens, and other machine credentials.”
All non-human identities (NHIs) pose credential risk – over-privileged service accounts, for example, have been putting CI/CD pipelines at risk for years – but the way that AI agents use them has drastically increased their sprawl. Figures vary, but in 2025 estimates ranged from 82 to as many as 144 NHIs for every human identity in the average enterprise environment. Regardless, that number is growing fast.
More concerning is the fact that many of these machine identities carry highly privileged levels of access, often without the scrutiny that would typically be applied to highly privileged users. In fact, a recent study found that 1 in 20 NHIs carried full-admin privileges, while only 38% of all NHIs had been active within the last 9 months.
What this means is:
AI agents are being given access to these highly-privileged NHIs.
That access is often going unmanaged by security teams, who may not be able to differentiate it from normal activity.
Agents can retain this access after it is needed, use it in ways that are harmful, or expose it via prompt injection or other forms of compromise.
Together, these behaviors create a rapidly expanding layer of credentials that exist outside centralized identity systems.
Agentic applications and capabilities are evolving at unprecedented speed, and new tools are often being adopted before their risks are understood. Jason Meller, VP and Security Strategist at 1Password, wrote two blog posts on how powerful – and frightening – these tools can be.
“The short version: agent gateways that act like OpenClaw are powerful because they have real access to your files, your tools, your browser, your terminals, and often a long-term ‘memory’ file that captures how you think and what you’re building. That combination is exactly what modern infostealers are designed to exploit.”
–Jason Meller, Vice President and Security Strategist, 1Password
While OpenClaw certainly garnered some attention, its issues aren’t isolated to one tool alone. In MIT’s “AI Agent Index,” researchers found that the majority of agent developers share little about their tool’s security: “25/30 agents disclose no internal safety results, and 23/30 agents have no third-party testing information.” OpenClaw is an indicator of how severe the security risks can be when AI agents are given unmanaged levels of access. Its popularity, and its security risks, have quickly forced security teams to reckon with the fact that the standard enterprise perimeter is not equipped to handle agentic AI.
AI worsens credential security practices
AI-based tools are also exacerbating credential sprawl by replicating poor credential security practices.
Vibe coding (using generative AI to write code) tends to reproduce poor security habits. For example, one largely vibe-coded platform, Moltbook, was quickly found to have a misconfigured database that exposed over a million API authentication tokens, along with email addresses and private messages.
Again, this isn’t exclusive to a single platform. GitGuardian analyzed the use of Copilot – Microsoft’s AI assistant (used for vibe coding, among other things) – and found that repositories with Copilot active are 40% more likely to have at least one leaked secret.
Vibe coding can also enable employees with less coding experience, and therefore less coding security training, to push through code that hasn’t received the standard checks and scrutiny.
Developer secrets, meanwhile, pose their own security challenges. Secrets sprawl is a particularly dangerous subset of credential sprawl; developer credentials tend to live outside of traditional identity security systems, and developers often hardcode secrets into code for simplified access during their workflows. If these hardcoded secrets aren’t discovered during security or access reviews, they pose serious threats to company security, as seen in a recent Uber breach, which began when the hacker “...located a PowerShell script with hard-coded privileged credentials for Uber’s Privileged Access Management (PAM) solution…”
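The standard remedy for hard-coded secrets is to keep them out of source entirely and inject them at runtime. A minimal sketch in Python (the `PAM_SERVICE_TOKEN` variable name and helper function are hypothetical, assuming a secrets manager or deployment tool places the value in the process environment):

```python
import os

def get_pam_credential(env_var: str = "PAM_SERVICE_TOKEN") -> str:
    """Fetch a privileged credential from the environment at runtime.

    Because the secret never appears in source code, a leaked script
    exposes only a reference to the credential, not the credential itself.
    """
    token = os.environ.get(env_var)
    if token is None:
        raise RuntimeError(
            f"{env_var} is not set; inject it at runtime from a secrets manager"
        )
    return token
```

Failing loudly when the variable is missing also surfaces misconfigured environments immediately, rather than letting a script silently fall back to an embedded default.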
Unfortunately, hardcoded secrets are only growing as a problem. GitGuardian’s 2025 report, The State of Secrets Sprawl, shows how rapidly this problem is accelerating. “In 2024, we found 23,770,171 new hardcoded secrets added to public GitHub repositories. This figure represents a 25% surge in the total number of secrets from the previous year.” As they put it, “secrets sprawl is steadily worsening over time.”
Secrets sprawl can spread in a number of ways, including when developers accidentally expose secrets in public-facing code. However, GitGuardian’s report highlights a more basic concern: “[while] source code management tools have been the primary focus of secrets detection… secrets appear wherever teams collaborate, often in collaboration and project management tools like Slack, Jira, or Confluence.”
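Pattern-based scanning is the core of most secrets detection, and it applies to chat exports and wiki pages just as readily as to source code. A minimal sketch in Python (the two patterns are illustrative stand-ins; production scanners such as GitGuardian’s use hundreds of rules plus entropy and validity checks):

```python
import re

# Illustrative patterns only: an AWS-style access key ID and a generic
# prefixed API key. Real rulesets are far broader.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"\b(?:sk|tok)-[A-Za-z0-9]{20,}\b"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in a message or document."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Running a scanner like this over collaboration-tool exports, not just repositories, is what closes the gap the report describes.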
Plaintext secrets being sent through apps like Slack represents a dangerously lax approach to secrets hygiene. Unfortunately, cybercriminals are aware of this trend. Dark Reading reports that “...cybercriminals and nation-state actors alike are following a proven playbook and capitalizing on ‘bad secret hygiene’ to further their campaigns.”
AI is now accelerating this dynamic. As developers use AI copilots to generate code, spin up infrastructure, or automate workflows, machine credentials are created and reused at greater speed. All of this is expanding the identity surface far beyond what traditional identity and access management (IAM) and privileged access management (PAM) systems were designed to govern.
Traditional identity security is falling behind
Monitoring how employees use and store credentials has always been challenging. But AI fundamentally changes the identity security model.
AI tools and agents don’t authenticate, store, or use credentials the way humans do. They rely on embedded tokens, API keys, service accounts, and programmatic access patterns. They operate continuously, duplicate easily, and often persist long after their original purpose has ended.
Traditional identity security tools were designed for human behavior, with interactive logins, session-based authentication, and clearly defined privilege tiers. They were not designed to govern autonomous software identities that scale and authenticate programmatically without supervision.
In a way, this is almost by design. As Saumitra Das put it in an article for Corporate Compliance Insights, “By nature, autonomous agents are trained to find the easiest and most efficient way to complete the assigned job. This means that they can often identify ways around guardrails…”
Traditional access control methods are quickly proving to be inadequate, as AI and event-driven automation create NHIs at a scale we haven’t seen before. As TechTarget reported, “Most legacy IAM and privileged access management (PAM) tools were never designed to handle that level of volume and churn.”
The article goes on to point out some of the issues related to how NHIs use credentials, including:
NHIs use a broad array of authentication methods, like JSON Web Tokens (JWTs), cloud IAM roles, OAuth2 secrets, and API keys. Each of these has its own unique security needs.
NHIs are often given outsized access and long-lived credentials so that teams can ensure the tool will have the access needed to automate various business processes.
Anomaly detection can’t always notice when something has gone wrong with an AI agent, since they don’t really have “normal” behavioral patterns to deviate from.
Each of these factors can seriously damage the efficacy of a company’s security stack.
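One mitigation for the long-lived credential problem described above is to issue short-lived, verifiable tokens instead of permanent secrets. A minimal sketch in Python (an HMAC-signed token with an expiry claim, loosely modeled on how JWTs work; the helper names are hypothetical):

```python
import base64
import hashlib
import hmac
import json
import time

def mint_token(key: bytes, subject: str, ttl_seconds: int) -> str:
    """Issue a short-lived, signed token for an agent or service."""
    payload = json.dumps({"sub": subject, "exp": time.time() + ttl_seconds})
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload.encode()).decode() + "." + sig

def verify_token(key: bytes, token: str) -> bool:
    """Reject tokens that are tampered with or past their expiry."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded.encode()).decode()
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(payload)["exp"] > time.time()
```

Because every token expires on its own, a credential that leaks from an agent’s memory or logs has a bounded window of usefulness, which is exactly what long-lived NHI credentials lack.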
How can teams manage credential sprawl for agents?
Traditional IAM tools and strategies cannot manage credential sprawl on their own, especially in a world where so much access isn’t coming from people at all. Instead, teams need a multi-pronged effort that approaches the problem from multiple angles.
Unified access management for AI agents and humans
AI-related credential sprawl reflects a fundamental change in how authority is delegated inside the enterprise. AI systems are no longer tools that assist humans; agents increasingly act with independent access to applications, data, and workflows. Yet most access controls still assume a human at the keyboard.
Employees, and developers in particular, are encouraged to adopt AI to improve productivity, but without purpose-built tools to safely delegate access to agent and machine identities, workers resort to unsafe workarounds outside the reach of traditional security tools. Addressing AI-related credential sprawl requires tools that govern non-human access without slowing down workflows.
1Password® Unified Access helps teams create a framework to:
Discover risk: Identify unmanaged AI tools and agents running on developer and end-user devices, and detect credentials and secrets stored in local files and developer environments.
Secure credentials: Vault exposed credentials and remove access for risky AI tools and agents. Deliver credentials to agents, automation, and CI/CD at runtime to reduce long-lived secrets and ensure they’re used only when needed.
Audit agent actions: Gain clear attribution for every action, showing when and how credentials are being used and who’s using them across humans, agents, and machines.
SaaS management
Credential sprawl and SaaS sprawl are inextricably intertwined. For IT and security teams to effectively determine where and how credentials are being stored, they need to know what applications their employees are using.
The unfortunate nature of SaaS sprawl, though, is that it’s next to impossible for teams to find the time or resources to take control of it manually.
1Password SaaS Manager solves this problem through automation. With over 40,000 app integrations, it lets teams build and maintain a complete inventory of the apps their employees use – including the apps that can’t be secured behind SSO. That includes capabilities for continuous app discovery to illuminate the use of shadow IT – and shadow AI apps – across an organization.
With automated onboarding and offboarding workflows, teams can also ensure that employee access to apps is provided only when needed, without running the risk of unapproved access from improperly offboarded employees.
Identifying which applications are in use, whether they’re company approved or not, is a critical step to making sure that every credential is being used and stored securely. A team cannot achieve wall-to-wall credential security if any part of their application surface is going unmanaged.
AI is here: do you know where your credentials are?
Credential sprawl is far from a new problem. But rather than improving, it only seems to be getting worse, as teams are faced with an ever-growing number of credentials across an ever-growing number of endpoints and apps. Credentials are hidden in codebases, Slack messages, AI chatbots, spreadsheets – and they probably still find a home on a sticky note or two.
An updated and enforceable credential management strategy has never been more crucial. In blunt terms: every unmanaged credential puts your ecosystem at risk. 1Password is the critical solution for companies to rein in and control how credentials are used across their ecosystems. By building on the strong security of our password manager, we’re building systems that let teams manage credentials wherever they may be, from the spreadsheet to the AI agent.
Want to learn more? Read the full ebook on AI credential risk management. Ready to start managing credential sprawl? Reach out for a demo.
