Deepsec (Vercel Security Harness): a practical guide to setup, troubleshooting, and triaging findings
A complete guide to using deepsec — Vercel's security harness — covering what each command does, how to configure it correctly to use your Claude Code subscription (instead of routing through Vercel AI Gateway), and how to triage the findings it produces.
Context: deepsec is a security analysis tool that uses coding agents (Claude or Codex) to investigate vulnerabilities in your codebase. It uses Opus 4.7 at maximum thinking by default, which makes it expensive — but also remarkably capable.
This guide covers deepsec ≥ 2.0.2. Earlier versions had a critical billing bug fixed in PR #43. If you're on a previous version, update with `pnpm update deepsec` before continuing.
TL;DR — the correct flow
# 1. Clean setup
claude login # your Pro/Max subscription
# 2. Init and bootstrap
cd ~/projects/my-app
npx deepsec init
cd .deepsec
pnpm install
# 3. Pipeline (in order)
pnpm deepsec scan # free, no AI
pnpm deepsec process --project-id my-app # EXPENSIVE, uses AI
pnpm deepsec revalidate # EXPENSIVE, uses AI
pnpm deepsec export --format md-dir --out ./findings # free, no AI
📝 Requires deepsec ≥ 2.0.2. Earlier versions had a bug that routed traffic through Vercel AI Gateway even when the user hadn't opted in. If you're on an older version, update with `pnpm update deepsec`.
Historical context: behavior change in v2.0.2
✅ If your deepsec version is ≥ 2.0.2, you can skip this section. It's documented only for users coming from earlier versions and to explain why some repos may have leftover artifacts like a `.vercel/stub` inside `.deepsec/`.
In pre-2.0.2 versions, the local authentication logic had counterintuitive behavior. The README stated that deepsec would automatically use local claude or codex subscriptions, but in practice the priority order meant that having Vercel CLI logged in (common in Next.js projects) directed all traffic to Vercel AI Gateway, billing the user's account without explicit opt-in.
The internal flow was roughly:
- Deepsec called `getVercelOidcToken()` during preflight
- That function walked up from `.deepsec/` looking for a `.vercel/project.json`
- If found, it read the Vercel CLI auth
- Requested a fresh OIDC token
- Expanded it to `ANTHROPIC_AUTH_TOKEN` + `ANTHROPIC_BASE_URL=https://ai-gateway.vercel.sh`
Typical result for users without Vercel AI Gateway credit:
Agent SDK error: Claude Code returned an error result:
API Error: 402 Insufficient funds. Please add credits to your account...
Visit https://vercel.com/d?to=...
Additionally, trying to avoid the Gateway by setting ANTHROPIC_BASE_URL=https://api.anthropic.com without a token produced a silent failure mode: each file completed in ~5 seconds with 0 tokens, 0 turns, 0 findings, marked as analyzed. The tool reported the scan as successful without ever calling the AI.
Both behaviors changed in PR #43 ("Use the vercel OIDC token for the gateway if no primary API token is present"), merged on May 5, 2026 and shipped in deepsec 2.0.2.
Starting in v2.0.2, the OIDC token is only used as an explicit fallback when no other credential is present, and the local claude CLI subscription takes precedence. This means you can have Vercel CLI logged in and deepsec will correctly use your Claude Code subscription without diverting traffic to the Gateway.
How to configure deepsec to use your Claude Code subscription
Step 1: Clean environment setup
# Verify there are no conflicting environment variables
env | grep -iE "anthropic|openai|gateway"
# Should return nothing relevant
# Verify claude CLI is authenticated with your subscription
claude --version
claude --print "test"
# Should respond normally
Step 2: Remove previous installations if any
If you've tried running deepsec before and it failed, residual state may contaminate the setup:
cd your-project
rm -rf .deepsec
Step 3: Init from scratch
npx deepsec init
cd .deepsec
pnpm install
Don't run `cp .env.example .env` or edit `.env.local`. The init will tell you:
# Set AI_GATEWAY_API_KEY in .env.local (or skip if claude/codex CLI is logged in)
Skip that step. Your logged-in claude CLI should be sufficient.
Step 4: Verify it's using your subscription
Before running the full pipeline, do a small test:
pnpm deepsec process --project-id my-app --filter apps/api/src/middleware --limit 2
You should see something like:
Investigating 2 file(s) with Claude Agent SDK (claude-opus-4-7)
Turn 1 (15s, 4 tool calls)
Turn 2 (22s, 3 tool calls)
What you DON'T want to see:
- `Turn 1 (4s, 0 tool calls)` repeated (silent failure — would indicate an old version)
- `402 Insufficient funds` with a Vercel link (Gateway active — would indicate an old version)
If you see either of those, you're probably on an old deepsec version. Update with pnpm update deepsec and verify with cat node_modules/deepsec/package.json | grep version.
Pipeline commands: what each one does
scan — Static analysis with regex
pnpm deepsec scan
| Aspect | Detail |
|---|---|
| Uses AI? | ❌ No |
| Cost | Free |
| Time | ~15s for 2k files |
| Output | Files marked as pending with candidates |
What it does: Runs ~110 predefined regex patterns across your codebase and flags suspicious files. Doesn't produce final findings — just "candidates" worth investigating with AI.
What to expect: A list of files with candidates (e.g., "this file has template literals in HTML, possible XSS"). No verdict, no severity.
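Conceptually, this stage is plain pattern matching over file contents. A minimal sketch of the idea — the two patterns below are illustrative stand-ins, not deepsec's actual ~110 rules:

```python
import re

# Illustrative patterns only; deepsec's real rule set is larger and more
# nuanced. A match makes a file a "candidate", never a final finding.
PATTERNS = {
    "possible-xss": re.compile(r"innerHTML\s*="),
    "possible-sql-injection": re.compile(r"\bexecute\(\s*f?[\"'].*\{"),
}

def candidates(source: str) -> list[str]:
    """Return the slugs of every pattern that matches the file contents."""
    return [slug for slug, rx in PATTERNS.items() if rx.search(source)]

candidates("el.innerHTML = userInput")       # ['possible-xss']
candidates('cursor.execute(f"SELECT {c}")')  # ['possible-sql-injection']
candidates("const x = 1")                    # []
```

This is why `scan` is free and fast: no model call happens, just regex passes whose hits are queued for the AI stage.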
process — AI-powered investigation
pnpm deepsec process --project-id my-app
| Aspect | Detail |
|---|---|
| Uses AI? | ✅ Yes (Opus 4.7 at max thinking) |
| Cost | $$$ — burns subscription tokens |
| Time | Minutes to hours depending on size |
| Output | Files with findings[] and analyzed status |
What it does: For each candidate file from scan, it launches an AI agent that reads the code, traces data flows, looks for mitigations, considers context and produces findings with severity (CRITICAL, HIGH, MEDIUM, LOW, BUG).
What to expect: A list of findings per file, each with:
- Vulnerability category (`xss`, `sql-injection`, `auth-bypass`, etc.)
- Severity
- Reasoning
- Affected lines
Useful flags:
# Process only files in a specific path (useful for testing)
pnpm deepsec process --project-id my-app --filter apps/api/src
# Process at most N files
pnpm deepsec process --project-id my-app --limit 10
# Re-analyze everything from scratch (DELETES previous findings, use carefully)
pnpm deepsec process --project-id my-app --reinvestigate
# Change concurrency (default is usually fine)
pnpm deepsec process --project-id my-app --concurrency 5 --batch-size 5
Idempotency: process without --reinvestigate only processes files in pending state. If interrupted (rate limit, dead session), just rerun the command and it resumes where it left off.
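The resume behavior boils down to filtering on tracked status. A conceptual sketch (not deepsec's actual code; the status names mirror the ones `status` reports):

```python
# Conceptual sketch of idempotent resume: "process" only picks up files
# whose tracked status is still "pending"; "analyzed" files are skipped,
# so rerunning after an interruption continues where the last run stopped.
def files_to_process(tracked: dict[str, str]) -> list[str]:
    return [path for path, status in tracked.items() if status == "pending"]

state = {
    "apps/api/auth.ts": "analyzed",   # finished before the interruption
    "apps/api/users.ts": "pending",   # still to do
    "apps/web/form.tsx": "pending",
}
files_to_process(state)  # ['apps/api/users.ts', 'apps/web/form.tsx']
```

`--reinvestigate` is the escape hatch that resets everything back to pending, which is why it destroys previous findings.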
revalidate — Reduces false positives
pnpm deepsec revalidate --project-id my-app
| Aspect | Detail |
|---|---|
| Uses AI? | ✅ Yes (same model, similar cost to process) |
| Cost | $$$ — comparable to process |
| Time | Similar to process |
| Output | Findings tagged with verdict |
What it does: For each finding generated, launches the agent to re-evaluate it: is it actually exploitable? Is it mitigated upstream? Was it fixed in recent commits? Assigns a verdict:
- TP (True Positive) — real bug, worth fixing
- FP (False Positive) — the `process` agent got it wrong
- Fixed — already fixed in some commit
- Uncertain — can't determine without more context
What to expect:
Revalidation complete. Run: 20260506180136-...
TP: 116 FP: 4 Fixed: 0 Uncertain: 0
Useful flags:
# Re-revalidate specific findings (by path)
pnpm deepsec revalidate --filter apps/api/src/routes/auth
# Force re-validation of already revalidated findings
pnpm deepsec revalidate --force
Idempotency: revalidate without --force only processes findings without a verdict yet. If it ended with "120/122 revalidated", running it again only processes the 2 missing ones.
export — Export findings to readable files
pnpm deepsec export --format md-dir --out ./findings
| Aspect | Detail |
|---|---|
| Uses AI? | ❌ No |
| Cost | Free |
| Time | Seconds |
| Output | Folder with one .md per finding |
What it does: Reads findings + verdicts from data/<id>/files/ and exports them in a human-readable format. No AI, no cost.
Available formats:
- `md-dir` — one Markdown per finding (recommended for human review)
- `json` — single JSON with all findings (for pipelines / tooling)
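The `json` format is convenient for quick scripted filtering. A sketch, assuming the export is a JSON array of finding objects carrying the `verdict` field that `revalidate` assigns — verify the exact key names against your own export before relying on them:

```python
import json

def true_positives(findings: list[dict]) -> list[dict]:
    # Keep only findings that revalidate confirmed as real (verdict == "TP").
    return [f for f in findings if f.get("verdict") == "TP"]

# Normally you'd do: findings = json.load(open("findings.json"))
export = json.loads(
    '[{"file": "a.ts", "verdict": "TP"}, {"file": "b.ts", "verdict": "FP"}]'
)
true_positives(export)  # [{'file': 'a.ts', 'verdict': 'TP'}]
```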
status — View project state
pnpm deepsec status --project-id my-app
| Aspect | Detail |
|---|---|
| Uses AI? | ❌ No |
| Cost | Free |
Shows:
- How many files in each status (`analyzed`, `pending`, `processing`)
- Finding counts by severity
- Revalidation results
- Recent runs with their equivalent cost
Expected output when pipeline finishes:
Project: my-app
Files tracked: 650
Status
analyzed: 650 ← 100%
pending: 0
Findings
CRITICAL: 1 | HIGH: 11 | MEDIUM: 41 | BUG: 64
Revalidated: 122/122 TP: 116 FP: 4 Fixed: 0 Uncertain: 0
Important: The USD costs shown ($24.10, $10.93) are the equivalent value if you were paying the API directly, not what Anthropic actually charged you. If you use a subscription, everything comes from your quota — $0 actually billed.
Verifying the analysis was real (not a Gateway bug zombie)
After process, especially if you had previous Gateway issues, verify that files were actually analyzed:
# Count files marked as "analyzed" but with 0 tokens (zombies from the bug)
python3 - <<'EOF'
import json, pathlib

count = 0
for p in pathlib.Path("data/<project-id>/files").rglob("*.json"):
    try:
        d = json.loads(p.read_text())
    except (OSError, ValueError):
        continue  # unreadable or malformed file, skip it
    history = d.get("analysisHistory", [])
    if (d.get("status") == "analyzed" and history
            and history[-1].get("usage", {}).get("inputTokens", 0) == 0):
        print(p)
        count += 1
print(f"{count} zombie file(s)")
EOF
If this count is > 0, those files were marked as analyzed without actually going through the AI. Re-process the project:
pnpm deepsec process --project-id my-app --reinvestigate
Estimated cost
For a medium codebase (~650 files):
| Stage | Tokens (approx) | $ equivalent |
|---|---|---|
| `scan` | 0 | $0 |
| `process` | 300k - 500k | $20 - $30 |
| `revalidate` | 100k - 200k | $8 - $15 |
| `export` | 0 | $0 |
| Total | ~500k - 700k | ~$30 - $45 |
If you use a Claude Max ($100/month) subscription, this pipeline fits comfortably and leaves you headroom for normal use the rest of the month. Pro ($20/month) will likely max out partway through a large project.
Triaging findings: what to do after exporting
With 116 confirmed TPs (true positives), you need a systematic review plan:
Prioritize by severity
- CRITICAL → review today, doesn't wait
- HIGH → this week
- MEDIUM → next two weeks
- BUG / HIGH_BUG → code quality backlog (don't block security)
Group by category, not by file
Many findings are the same bug repeated across multiple files. Grouping by vulnSlug (e.g., all xss together) lets you fix 5 findings with one change (e.g., adding sanitization to a shared helper).
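A sketch of that grouping over the `json` export, assuming each finding object carries the `vulnSlug` mentioned above plus a file path — confirm the field names against your actual export:

```python
from collections import defaultdict

def group_by_slug(findings: list[dict]) -> dict[str, list[str]]:
    # One entry per vulnerability category listing every affected file,
    # so one shared fix (e.g., a sanitizing helper) closes a whole group.
    groups = defaultdict(list)
    for f in findings:
        groups[f["vulnSlug"]].append(f["file"])
    return dict(groups)

findings = [
    {"vulnSlug": "xss", "file": "a.tsx"},
    {"vulnSlug": "xss", "file": "b.tsx"},
    {"vulnSlug": "sql-injection", "file": "db.ts"},
]
group_by_slug(findings)
# {'xss': ['a.tsx', 'b.tsx'], 'sql-injection': ['db.ts']}
```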
Filter by real attack surface
A finding in apps/api/src/routes/public.ts (receives input from the internet) weighs much more than one in apps/web/src/components/InternalDashboard.tsx. Critical question for each finding: is the input arriving here from an attacker?
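One way to encode that question during triage is a path-prefix check. The prefixes below are hypothetical examples for a typical monorepo layout; adjust them to your own routing structure:

```python
# Hypothetical internet-facing prefixes; tune these to your repo layout.
EXTERNAL_PREFIXES = ("apps/api/src/routes/", "apps/web/src/pages/api/")

def on_attack_surface(path: str) -> bool:
    # True when the file plausibly receives attacker-controlled input.
    return path.startswith(EXTERNAL_PREFIXES)

on_attack_surface("apps/api/src/routes/public.ts")                  # True
on_attack_surface("apps/web/src/components/InternalDashboard.tsx")  # False
```

A crude heuristic, but sorting exported findings with it puts the internet-reachable ones at the top of the review queue.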
For each TP, three questions
- Is it really exploitable, not theoretical?
- Is there upstream mitigation the agent didn't see? (middleware validation, rate limiting, auth)
- Does the fix introduce regressions? (are there tests covering the path?)
Common errors and solutions
"402 Insufficient funds" with vercel.com link
Cause: You're on an old deepsec version (pre-2.0.2) where the OIDC token fallback was active by default.
Solution: Update deepsec:
pnpm update deepsec
cat node_modules/deepsec/package.json | grep version
# Should be >= 2.0.2
"0 tool calls, 0 tokens" in every batch
Cause: The SDK has no valid credentials and is failing silently. Common in pre-2.0.2 versions when trying to avoid the Gateway with ANTHROPIC_BASE_URL=https://api.anthropic.com without a token.
Solution:
- Update to deepsec ≥ 2.0.2 (`pnpm update deepsec`)
- Verify `claude --print "test"` responds normally
- Make sure you DON'T have `ANTHROPIC_BASE_URL` or `ANTHROPIC_AUTH_TOKEN` in `.env.local` or your shell (unless you actually want to use the Gateway or a direct API key)
Claude rate limit during process
Cause: Your plan hit its usage limit (5h on Pro, longer windows on Max).
Solution: Wait for the window to reset and re-run pnpm deepsec process. It's idempotent and resumes where it left off.
Files stuck in processing forever
Cause: A previous run died without releasing locks.
Solution:
# Clean up zombie runs
rm -rf data/<project-id>/runs/*
# Re-run process — files in "processing" will be reprocessed
pnpm deepsec process --project-id my-app
What to commit and what not to
.deepsec/ contains a mix of configuration files (which DO go to git) and run output (which should NOT go to git for security reasons). The init already generates a reasonable .gitignore, but you need to verify it covers exports too.
What DOES get committed
These files are the "scan recipe" and let any team member replicate the setup:
- `deepsec.config.ts` — project config (matchers, paths, plugins)
- `package.json` + `pnpm-lock.yaml` — for reproducibility
- `pnpm-workspace.yaml`
- `AGENTS.md` — instructions for coding agents
- `README.md`
- `.gitignore`
- `data/<id>/INFO.md` — project context injected into prompts (the most valuable file!)
- `data/<id>/SETUP.md` — per-project instructions
- `data/<id>/config.json` (if it exists) — `priorityPaths`, `promptAppend`, `ignorePaths`
What does NOT get committed
# .deepsec/.gitignore
node_modules/
.env*.local
.vercel/
# Scan output — regenerated by `deepsec scan` / `process`. INFO.md
# and SETUP.md (manually edited) stay tracked.
data/*/files/
data/*/runs/
data/*/reports/
data/*/project.json
# Auto-detected metadata (regenerated each run, contains absolute paths)
data/*/tech.json
# Exported findings (regenerated by `deepsec export`)
findings/
exports/
init generates the first sections automatically, but does NOT include tech.json, findings/, or exports/. Add them yourself:
cat >> .deepsec/.gitignore << 'EOF'
# Auto-detected metadata (regenerated each run, contains absolute paths)
data/*/tech.json
# Exported findings (regenerated by `deepsec export`)
findings/
exports/
EOF
💡 Why ignore `tech.json`: It contains the absolute `rootPath` of the machine where the scan was run (e.g., `/Users/jdoe/projects/my-app`), which reveals the developer's username and local structure. It also regenerates on every run, so versioning it just creates noise in diffs.
Why findings and exports should NOT go to git
The files in data/<id>/files/ and the export folder contain:
- Exact lines of vulnerable code with snippets
- Detailed agent reasoning about how to exploit each vuln
- Verdicts from revalidate
If your repo is public, that's basically a step-by-step attack guide for your app. Even in private repos, consider that git history is forever — a finding that's fixed today still shows in the commit history how the bug looked.
Verify nothing leaked into tracking
If you committed something before configuring the gitignore properly:
git ls-files .deepsec/ | grep -E "findings/|/files/|/runs/|/reports/"
If it returns anything, untrack them:
git rm -r --cached .deepsec/findings/ 2>/dev/null
git rm -r --cached .deepsec/data/*/files/ 2>/dev/null
git rm -r --cached .deepsec/data/*/runs/ 2>/dev/null
git rm -r --cached .deepsec/data/*/reports/ 2>/dev/null
git commit -m "chore: untrack deepsec output files"
⚠️
git rm --cachedonly removes them from future tracking, not from history. If findings expose unfixed vulns and the repo has many collaborators, consider cleaning history withgit filter-repoor BFG.
Where do findings live then?
Exported findings are local and temporary. Three common patterns to manage them:
- Local without persisting — for one-off scans, each person reviews their own
- Separate private repo — `my-project-security/` with restricted access
- Ticket system (recommended) — relevant TP findings become Linear/Jira/GitHub issues with assignees and deadlines; FPs are discarded
For serious teams, option 3 is the standard: deepsec generates the list, humans triage, and important findings flow into normal workstreams.
Resources
- Official repo: https://github.com/vercel-labs/deepsec
- Docs: https://github.com/vercel-labs/deepsec/tree/main/docs
- Launch blog post: https://vercel.com/blog/introducing-deepsec-find-and-fix-vulnerabilities-in-your-code-base
Best practices for teams
General recommendations based on the experience of implementing deepsec on a real project.
Designate an owner
Although anyone on the team can run deepsec following this guide (the files in .deepsec/ are versioned and the setup is reproducible with pnpm install), it helps to designate someone as process owner to avoid duplication and keep triaging consistent.
When to scan
Typical cases where running a full scan makes sense:
- After merging a large feature touching auth, input handling, or payment/sensitive data flows
- Before a production release with significant changes
- When a vuln is reported in one of the stack's critical dependencies
- Periodically (every 3-6 months) as general hygiene
- When deciding to re-evaluate previous findings after applying fixes
For small or isolated changes, scanning the modified directory is enough:
pnpm deepsec process --project-id my-app --filter apps/api/src/routes/<module>
Findings management
We recommend integrating critical and high findings into the team's ticket system (Linear, Jira, private GitHub Issues, internal system, etc). After each scan, the owner triages the TPs and creates entries for the ones requiring action, assigning them to the corresponding devs.
Exported findings (./findings/) should stay only local on the machine of whoever scanned — they don't get committed to the repo (see "What to commit and what not to" section).
Expected costs
For a medium codebase (~650 files), a full pipeline (process + revalidate) consumes between $30-50 in API equivalent and takes 15-30 minutes of wall-clock.
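As a rough sanity check you can convert token counts to API-equivalent dollars yourself. The per-million-token rates below are placeholders, not Anthropic's actual pricing — check the current price list before budgeting:

```python
# Placeholder rates only; substitute the real per-million-token prices
# for the model you run.
INPUT_PER_M = 15.00   # $ per 1M input tokens (assumed)
OUTPUT_PER_M = 75.00  # $ per 1M output tokens (assumed)

def api_equivalent(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6 * INPUT_PER_M
            + output_tokens / 1e6 * OUTPUT_PER_M)

round(api_equivalent(400_000, 300_000), 2)  # 28.5 with these placeholder rates
```

With the assumed rates, a run in the 400k-input / 300k-output range lands near the middle of the $30-50 band quoted above.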
Options to cover the cost:
- Personal Claude Max subscription ($100/month) — viable for ad-hoc team scans. $0 actually billed if it fits in your quota.
- Vercel AI Gateway with `AI_GATEWAY_API_KEY` — recommended for CI/CD or frequent scans. Allows spending caps and attribution to the corporate account.
- Direct Anthropic API key — if the team already pays for direct API for other purposes.
Escalation policy
For CRITICAL or HIGH severity findings, it helps to have a direct escalation channel to the security owner before any release. MEDIUM and BUG findings can follow the normal ticket flow.
Final notes
- If you find secrets in findings, rotate them immediately. The presence of a secret in a finding implies it was in the source code.
- The USD cost report that deepsec shows is always the API equivalent, not what was charged to your account. If you use a subscription, the `$` figures are reference values.
- `--reinvestigate` deletes previous work. Only use it when you really want to start over.
This guide covers deepsec ≥ 2.0.2 — last updated: May 2026.

