Non-Human Identities: The Access Governance Blind Spot

The number you need to know: non-human identities are growing at 44% year-over-year, and in most organizations they now outnumber human users by a significant margin.

That number matters because almost every access governance platform on the market was built for human workflows. A person requests access. A manager approves. A quarterly review certifies it. Rinse, repeat.

That model doesn't work for a service account. It doesn't work for an API key. It definitely doesn't work for an AI agent running autonomously at 3 AM with write access to your production database.

The result is a massive blind spot — and every year it gets bigger.


What "Non-Human Identity" Actually Means

Non-human identities (NHIs) are credentials that authenticate workloads, services, or automated processes rather than individual people. The three major categories:

Service accounts — machine identities used by applications and internal services to authenticate with each other. A Jenkins pipeline that needs to push to GitHub. A monitoring daemon that needs to read CloudWatch logs. An ETL job that writes to the data warehouse.

API keys — bearer credentials that grant programmatic access to external services. Your Salesforce integration. Your Stripe webhook receiver. The third-party analytics SDK that calls home on every page load.

AI agents — autonomous workloads with real permissions in production systems. Your coding assistant with access to every repo. Your ops agent that can query databases. Your customer success bot that sends emails from your domain with no human in the loop.

Three different shapes. The same underlying problem: none of them go through a human approval workflow, and most governance platforms treat them as invisible.


Why the Standard Governance Model Fails Here

Human access governance is built around a few assumptions:

1. An identity maps to a person
2. That person's manager can certify whether the access is appropriate
3. Quarterly reviews are frequent enough to catch stale permissions
4. Access requests are initiated by someone who can explain why they need it

Every one of these assumptions breaks for non-human identities.

No person, no manager. Service accounts don't have managers. Nobody is personally accountable for what a Jenkins pipeline can access six months after the engineer who created it left the company.

No self-certification. An API key can't respond to an access review. Most governance workflows require a human to click a button. The key just sits there, active, until someone explicitly revokes it — which almost never happens.

Quarterly is too slow. Non-human identities are created constantly — every new integration, every new deployment, every new AI feature. By the time a quarterly review runs, the environment has changed completely. Stale service account credentials accumulate faster than any manual process can clear them.

No access requests. Engineers provision service accounts and API keys directly. They don't go through an approval queue. They grant the permissions needed to make something work, move on, and never revisit the scope.


The Real Attack Surface

The governance gap isn't theoretical. Non-human identity abuse is consistently among the top attack vectors in real-world incidents.

The attack surface grows with every new integration, every new service, every new AI feature. And it grows silently — none of it shows up in your human access review.


What Other Platforms Are (and Aren't) Doing

This isn't a space where solutions are mature. Most platforms are playing catch-up.

ConductorOne has started mentioning AI agents in its marketing, which is worth noting. But its governance model is built around human request-and-approval workflows. Bolting on language about agents doesn't change the underlying architecture.

Vanta and Drata are compliance platforms. Their job is to collect evidence that your controls exist — not to enforce those controls. They'll tell you whether you have a policy for service account rotation. They won't enforce least-privilege on the accounts themselves.

Opal is a solid human access management platform. Its entire workflow model is built around human identities, manager approval, and access requests. Non-human identities aren't in scope.

The honest summary: non-human identity governance is a problem nobody in the access governance space has actually solved. Most platforms acknowledge the problem exists. None of them have rebuilt their governance model to treat machine identities as first-class.


What Proper Non-Human Identity Governance Looks Like

The answer isn't to shoehorn service accounts into human workflows. It's to govern all identities — human, service, agent — through the same underlying model: least-privilege, time-bound, policy-enforced, and audited.

What that means in practice:

Continuous discovery, not quarterly reviews. Every service account, API key, and agent credential gets enumerated continuously. When a new one appears, it's flagged immediately. When one goes stale (no activity, tied to a decommissioned system, or orphaned from an employee departure), it surfaces automatically.
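In practice, the discovery step reduces to a continuous reconciliation loop over the credential inventory. A minimal Python sketch of the staleness check, with hypothetical field names (`last_used`, `owner_active`) standing in for whatever your inventory actually exposes:

```python
from datetime import datetime, timedelta, timezone

# Illustrative inventory records -- these field names are assumptions for
# this sketch, not any specific platform's schema.
CREDENTIALS = [
    {"id": "svc-jenkins",
     "last_used": datetime.now(timezone.utc) - timedelta(days=200),
     "owner_active": False},   # creator left the company -> orphaned
    {"id": "svc-etl",
     "last_used": datetime.now(timezone.utc) - timedelta(days=2),
     "owner_active": True},
]

def stale_credentials(creds, max_idle_days=90):
    """Flag credentials that are idle past the threshold, or orphaned
    because the employee who created them has departed."""
    now = datetime.now(timezone.utc)
    flagged = []
    for cred in creds:
        idle_days = (now - cred["last_used"]).days
        if idle_days > max_idle_days or not cred["owner_active"]:
            flagged.append(cred["id"])
    return flagged
```

The point of the sketch is the cadence: this runs on every inventory sync, not once a quarter, so a new orphan surfaces within hours of the departure instead of months later.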

Policy-driven scoping. Instead of ad-hoc permission grants ("give it what it needs to work"), non-human identities get scoped by policy at creation: minimum permissions for the declared function, time-bounded by default, expiring unless explicitly renewed.

Automatic rotation enforcement. API keys and service account credentials that haven't been rotated in 90 days get flagged. Rotation can be triggered automatically for supported integrations, or routed to the owning team with a deadline.
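The rotation rule above is simple enough to sketch directly: check credential age against the window, then branch on whether the integration supports automated rotation. Field names like `auto_rotate_supported` and `owner_team` are assumptions for this example.

```python
from datetime import datetime, timedelta, timezone

def rotation_actions(keys, max_age_days=90, now=None):
    """For each key past the rotation window: rotate automatically when the
    integration supports it, otherwise route to the owning team."""
    now = now or datetime.now(timezone.utc)
    actions = []
    for key in keys:
        if (now - key["last_rotated"]).days <= max_age_days:
            continue  # within policy, nothing to do
        if key.get("auto_rotate_supported"):
            actions.append(("rotate_now", key["id"]))
        else:
            actions.append(("notify_owner", key["id"], key["owner_team"]))
    return actions
```

Example: a 120-day-old key on an integration that supports rotation yields a `rotate_now` action; the same age on one that doesn't yields a `notify_owner` action with a team to chase.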

Audit parity with human identities. Every non-human identity gets the same audit trail as a human user: when it was created, by whom, what permissions it holds, what it has actually accessed, and when. "What can this service account do?" has the same one-query answer as "what can this engineer do?"

Lifecycle tied to workload status. When a service is decommissioned, its credentials are revoked automatically. When an AI agent is shut down, its access disappears. Identity lifecycle follows workload lifecycle — not a manual cleanup ticket.
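The lifecycle rule can be sketched as a reconciliation pass: take the set of workloads still running, and revoke any credential bound to something that isn't in it. The `workload_id` binding and `status` values are illustrative assumptions.

```python
def reconcile(workloads, credentials):
    """Revoke credentials whose bound workload is no longer running.
    Identity lifecycle follows workload lifecycle -- no cleanup ticket."""
    live = {w["id"] for w in workloads if w["status"] == "running"}
    revoked = []
    for cred in credentials:
        if cred["workload_id"] not in live:
            cred["revoked"] = True
            revoked.append(cred["id"])
    return revoked
```

Run on every deploy or decommission event, this is what makes "the AI agent is shut down, its access disappears" automatic rather than aspirational.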


Why Vigil Was Built for This

Most access governance platforms started with human workflows and are now trying to extend them to cover machines. That's a retrofit. The underlying data models, review workflows, and policy engines were designed for people — and they show the strain when applied to service accounts and agents.

Vigil was built from the start around the principle that identity is identity. A service account is a first-class identity. An API key is a first-class identity. An AI agent is a first-class identity. Each one gets governed the same way: least-privilege access, time-bounded grants, policy-enforced scope, continuous monitoring, and a full audit trail.

We don't have a "human governance" product with a "machine accounts" tab added later. The governance model is unified from day one.


The Shift That's Already Happening

The 44% YoY growth in non-human identities isn't slowing down. Every new SaaS integration adds API keys. Every new microservice adds service accounts. Every new AI feature adds agents. The non-human identity footprint compounds.

The question isn't whether you have a non-human identity problem. You do — every organization with more than a handful of SaaS tools does. The question is whether you can see it, and whether your governance program covers it.

Quarterly reviews of human users, run in spreadsheets, don't answer that question. A governance platform that treats machine identities as second-class doesn't answer it either.

Book a demo with Vigil to see your full identity footprint — human and non-human — in a single view. See the service accounts, the API keys, the AI agents, and what each one can actually access. No pre-audit prep. No manual enumeration. Just the real picture.


Related Reading

Non-human identity governance is part of a broader access governance problem. If AI agents are already in your stack, Why AI Agents Need Access Governance covers the agentic sprawl pattern in detail. If your organization is still running manual quarterly reviews, 5 Signs Your Company Has Outgrown Manual Access Reviews covers the signals that the manual model has already broken. And if offboarding is the specific pain — and it often surfaces the machine identity problem — The Hidden Cost of Manual Offboarding covers what gets missed when identity lifecycle isn't automated.


Ready to govern every identity — not just the human ones? See Vigil live or explore the dashboard to see what your access posture actually looks like.

See it in action

Vigil's live dashboard shows real access flags across a 10-user org — right now.

Explore Live Dashboard →

Book a Demo