Automated Access Reviews: Why SOC2 Teams Still Use Spreadsheets
You know what happens three months before your SOC2 audit? Your compliance team opens a spreadsheet, adds a new tab, and sends an email to every manager: "Please certify who should have access to what."
Then the waiting begins.
Managers don't respond for two weeks. You send a reminder. One manager delegates to their skip-level, who delegates to a senior engineer, who sends back a reply that's half-guessed and three weeks late.
By the time you've collected responses from everyone, two things are true:
1. The spreadsheet reflects what should be true, not what actually is
2. Your actual environment has changed three times since you started the review
The result is a quarterly nightmare: hundreds of rows, dozens of emails, week-long delays, and a final "access review" that's three weeks stale before you print it for the auditor.
And it's still the industry standard.
Why Spreadsheets Became the Access Review Tool
Before SOC2, before compliance mandates, there was no standardized way to track access at scale. Most companies didn't track it at all — engineers added permissions as needed, and nobody asked questions until something broke.
SOC2 changed that. Auditors now require a documented quarterly access review. So teams did what made sense in 2015: they built a spreadsheet.
You can see why. A spreadsheet is:
- Universally accessible — anyone can open Excel
- Zero-cost — no new tools to buy or configure
- Visible — you can see every row, every permission, every decision
- Backwards-compatible — it works with however your team is currently structured
So the spreadsheet became the standard tool for access reviews. It still is.
The problem: spreadsheets were never designed to do what compliance actually requires.
Where the Spreadsheet Model Breaks
Manual data entry is the first failure point. Someone has to manually pull a list of users, or systems, or roles from each platform you use. Active Directory. AWS. GitHub. Jira. Salesforce. Google Workspace. Then they have to manually map those to the spreadsheet.
By the time the data is entered, it's already a day old. And it's incomplete — you've captured what you know about, not everything that actually exists. Service accounts that were provisioned for a one-time integration. API keys that developers created and forgot about. Cloud IAM roles that someone granted two years ago and never revisited.
Response bias is the second failure point. The review asks: "Do these people still need access?" Managers respond based on memory, not data. Most will say yes because removing access is uncomfortable — there's always a chance someone might need it later, and asking them to restore it is friction.
You end up certifying access that nobody is using. You miss access that shouldn't exist.
Speed of change outpaces review cycles. Between access reviews, your environment doesn't stay frozen. Engineers join and leave. New integrations get added. Roles change. Permissions get granted for specific projects and never revoked when the project ends.
A quarterly review captures a snapshot of March. In June, when the audit happens, your actual environment is unrecognizable. And in September, when the next review starts, you're reviewing permissions that don't match reality again.
Audit risk is the result. Your documentation shows one picture. Reality shows another. An auditor could interpret this two ways: either your documentation is incomplete (your controls don't work), or your environment doesn't match your documented controls (you're not enforcing what you say you do).
Either way, it's a finding.
What Non-Human Identities Add to the Problem
Service accounts and API keys make the spreadsheet problem worse.
When you're reviewing human access, you can at least ask the person's manager: "Does Alice still need admin access to GitHub?" The manager can think through the question.
When you're reviewing a service account — say, a Jenkins pipeline with production database write access — there's no manager to ask. You look at the account. You see it was created two years ago. You have no idea what it's for or who owns it.
Is it still needed? Nobody knows.
Is it being used? You might have logs, but reviewing logs for a service account across all your systems is thousands of rows. Most teams skip it.
So the spreadsheet cells stay checked. The service account stays active. And your audit evidence shows you reviewed it — even though the review was just "we saw the row and didn't uncheck it."
This is especially risky because non-human identities are exactly the ones that get forgotten. A service account created for a temporary project that shipped to production. An API key left over from a proof of concept. A shared database password that half the ops team knows.
Non-human identities are growing 44% year-over-year, and most of them live in the gaps between your quarterly reviews.
How Automated Access Reviews Actually Work
Instead of sending a spreadsheet to managers and waiting, automated access reviews work backwards — they show you what actually exists, flag what shouldn't, and let you enforce policy instead of just documenting it.
Real-time enumeration, not point-in-time snapshots. The system continuously discovers what access actually exists — users, roles, service accounts, API keys, everything. When you run a review, you're reviewing what's actually true today, not what someone guessed at last week.
Data-driven decisions, not memory-based guesses. Instead of asking "do they still need this?", the system shows actual usage: when was this access last used? For human users, that might be "they logged in yesterday." For a service account, it might be "this key made 47 API calls in the last 30 days." For a rarely-used role, it might be "last used 180 days ago."
With that data, the decision stops being a guess.
Automatic flagging of risky patterns. The system identifies things that always deserve review: stale access (no usage in 90 days), overprivileged access (more permissions than the role actually needs), shared credentials (multiple people know the password), access without an active manager (the person who requested it left).
Instead of reviewing everything with equal effort, compliance teams focus on the access that actually matters.
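The four patterns above are simple enough to express as rules over a usage record. This is a sketch under stated assumptions — the record's field names (`last_used`, `granted`, `used`, `holders`, `owner`) are hypothetical, and the 90-day threshold is the one the article uses, not a universal standard:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)  # threshold from the article; tune per policy

def flag(record, now=None):
    """Return the review flags this access record triggers, if any.
    `record` is a dict; its field names here are illustrative."""
    now = now or datetime.now(timezone.utc)
    flags = []
    last_used = record.get("last_used")
    if last_used is None or now - last_used > STALE_AFTER:
        flags.append("stale")               # no observed usage in 90+ days
    if set(record.get("granted", [])) - set(record.get("used", [])):
        flags.append("overprivileged")      # granted more than actually used
    if record.get("holders", 1) > 1:
        flags.append("shared_credential")   # more than one person knows it
    if record.get("owner") is None:
        flags.append("orphaned")            # no active owner or manager
    return flags
```

An access record that triggers no flags can be fast-tracked or auto-certified; everything flagged goes to a human reviewer with the evidence attached.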
Faster enforcement than spreadsheet approvals. In a spreadsheet workflow, someone approves removal and you manually go remove the access. In an automated system, the approval happens once and enforcement is instant — the access is revoked in real-time across all systems.
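Structurally, that enforcement step is a fan-out: one approved batch of removals dispatches to a per-system revocation handler. The sketch below uses string-returning stubs in place of real API calls (in practice these would be boto3 calls for AWS, GitHub REST API calls, and so on); the dispatcher shape is the point, not the stubs.

```python
# Illustrative stubs standing in for real per-system revocation calls.
def revoke_aws(identity, permission):
    return f"aws: removed {permission} from {identity}"

def revoke_github(identity, permission):
    return f"github: removed {permission} from {identity}"

REVOKERS = {"aws": revoke_aws, "github": revoke_github}

def enforce(approved_removals):
    """Fan one batch of approved removals out to every affected system.
    Returns (actions taken, removals with no known revoker)."""
    actions, unknown = [], []
    for system, identity, permission in approved_removals:
        handler = REVOKERS.get(system)
        if handler:
            actions.append(handler(identity, permission))
        else:
            unknown.append((system, identity, permission))
    return actions, unknown
```

Returning the unhandled removals matters for audit evidence: anything the system could not revoke automatically still needs a documented manual follow-up.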
Continuous compliance, not quarterly noise. The spreadsheet approach creates a compliance theater cycle: prep for three weeks, collect responses for two weeks, argue about rows for one week, submit the final spreadsheet, and don't think about it again until next quarter.
With automation, reviews run continuously. Every day, the system checks your access posture. Every day, it surfaces what has changed, what has been added, what is stale. Your quarterly audit submission isn't a panicked sprint — it's a 30-second report of documented controls that have been running continuously.
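The daily check is essentially a diff between two snapshots of access posture. A minimal sketch, assuming each snapshot is a set of `(identity, system, permission)` tuples:

```python
def diff_posture(yesterday, today):
    """Compare two daily access snapshots and report what changed.
    Each snapshot is a set of (identity, system, permission) tuples."""
    return {
        "added": sorted(today - yesterday),     # new grants to review
        "removed": sorted(yesterday - today),   # revocations to confirm
    }
```

Surfacing only the delta is what keeps continuous review tractable: reviewers see the handful of changes per day instead of the full access matrix every quarter.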
Why Compliance Teams Are Stuck with Spreadsheets (For Now)
The biggest reason teams stay with spreadsheets isn't that they're good — it's that building access review automation is harder than it looks.
Identity sprawl is the core problem. You have user directories (Active Directory, Okta, Google Workspace). You have cloud platforms (AWS, Azure, GCP) with their own IAM systems. You have SaaS (GitHub, Jira, Salesforce) where access looks completely different. You have databases with different permission models. You have Kubernetes clusters, Lambda functions, serverless APIs.
Every system represents identity differently. Every system requires different API calls to enumerate access. Every system has different update latency.
Building an access review tool that covers all of these is not a spreadsheet problem. It's an integration problem.
Non-human identity discovery is the second problem. Service accounts hide everywhere. API keys are created outside your identity system. Shared credentials are documented nowhere. Discovering all the non-human identities across your environment — across Active Directory, cloud platforms, git repos, CI/CD systems, databases, third-party APIs — is a research project for every organization.
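Part of that discovery work is pattern-matching for credentials that live outside any identity system — in config files, CI variables, and repos. A minimal sketch: the AWS pattern below follows the published `AKIA` access-key-ID format, while the generic pattern is a simplified stand-in (real secret scanners ship hundreds of vendor-specific rules).

```python
import re

# The AWS pattern matches the documented AKIA access-key-ID format;
# "generic_api_key" is a simplified illustrative rule, not production-grade.
KEY_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9]{20,}"),
}

def scan_text(text):
    """Return the credential types found in a config or CI file's contents."""
    return sorted(name for name, pat in KEY_PATTERNS.items() if pat.search(text))
```

Every hit is a candidate non-human identity that needs an owner and a place in the review — which is exactly the inventory step that spreadsheet-based reviews never perform.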
Most access review tools are built for human identities first. Non-human identities are retrofitted, which means they're never complete.
Proving it works is the third problem. Your auditor needs to see that your controls are actually enforced. A spreadsheet is obviously enforced — someone reviewed it and signed it. An automated system needs to prove that its recommendations are actually implemented, that access really was removed when flagged, that the system is resilient to edge cases.
Building that proof requires a lot of work that a spreadsheet just...doesn't need to do.
What Vigil Does Here
Vigil was built to eliminate the spreadsheet access review cycle.
We continuously enumerate access across your entire identity footprint — human users, service accounts, API keys, cloud IAM, database roles, everything. When you run an access review, you're reviewing what actually exists today, not a stale snapshot.
We show you usage data for every identity — when they logged in, what they accessed, which permissions they actually used. Reviews become data-driven decisions, not guesses.
We automatically flag the access that deserves scrutiny: stale access, overprivileged grants, shared credentials, orphaned service accounts, non-human identities without ownership.
We route everything through your documented access review process, so you have continuous audit evidence that reviews are happening. Quarterly audits aren't a sprint — they're a 30-second export of documented controls.
And we handle non-human identities properly — service accounts, API keys, and agents are first-class identities in the review process, not an afterthought or a separate tool.
The result: SOC2 compliance that doesn't require a spreadsheet sprint every quarter.
The Compliance Timeline Question
Here's what usually happens:
Now: You're in the four months before your SOC2 audit. Your compliance team is asking about access reviews. It's not urgent yet, so spreadsheets seem fine.
Q+2 months: Audit notices arrive. Access review prep starts. Spreadsheets are sent out. Chaos begins.
Q+3 months: The audit happens. You submit your access review evidence. It's three weeks stale, incomplete, and evidence of spot-checks by managers instead of continuous controls. Auditor asks questions.
Q+6 months: Next audit preparation begins. Same cycle repeats.
Automated access reviews shift that timeline. Instead of a panicked sprint before the audit, you have continuous documentation of what access exists, why it exists, and that it's being reviewed.
When the auditor asks about your access review process, you show them a system that runs continuously, not a spreadsheet from three weeks ago.
Related Reading
Access reviews are one part of an access governance program. If non-human identities are a blind spot for you, Non-Human Identities: The Access Governance Blind Spot covers why service accounts and API keys are invisible to most access governance platforms. If you're managing dozens of access requests and approvals, Why Access Approval Queues Are Killing Security Productivity covers how to eliminate that work. And if you're facing an upcoming audit and need to prove your access controls work, 5 Signs Your Company Has Outgrown Manual Access Reviews covers the audit readiness signals.
Ready to automate your access reviews instead of racing through a spreadsheet sprint? See Vigil live to watch continuous access governance in action, or explore the dashboard to see what your actual access posture looks like.
See it in action
Vigil's live dashboard shows real access flags across a 10-user org — right now.