Dev Community — JasperNoBoxDev

Kenji Sato

We watched Claude Code include a Stripe secret key in a debug log. It was trying to help — we had asked it to figure out why a payment integration was failing, and it printed the full HTTP request, headers and all. Authorization: Bearer sk_live_... , right there in the conversation context. Stored on Anthropic's servers, in the conversation history, visible in the terminal scrollback. That is when we built the DLP guard. AI coding assistants are the most productive tools we have ever used.

They are also the biggest credentials risk most developers are not thinking about. Here are six ways your secrets leak through AI agents — with reproduction steps, severity ratings, and fixes for each one.

**6** leak vectors identified · **30s** to install the DLP guard · **0** false positives in production

## 1. AI agents read your .env file

**Severity: Critical**

Every AI coding assistant with file access can read your .env file. It is a plaintext file in your project directory.

The agent reads project files to understand context. There is no access control, no authentication, no prompt asking "should this tool see your Stripe key?"

### How to reproduce it

1. Create a .env file with a test secret.
2. Open any AI coding assistant.
3. Ask it to "explain the project structure" or "help me debug the API integration."
4. Watch the agent's tool calls — it will read the .env file as part of understanding your codebase.

```shell
# Create a test .env
echo 'TEST_SECRET=this-value-should-not-appear-in-chat' > .env

# In Claude Code:
> "What API services does this project use?"
# Agent reads .env → TEST_SECRET value is now in context

# In Cursor (0.45+):
# Open the project → Cmd+L → "Summarize the project config"
# Cursor indexes .env as part of the workspace

# In GitHub Copilot:
# Open .env in a tab → Copilot Chat has access to open file contents
```

The agent is not being malicious. It is doing what you asked: understand the codebase.

Your .env is part of the codebase.

**The fix:** Do not have a .env file. Store secrets in the macOS Keychain instead. Load them at runtime with `eval "$(noxkey get org/project/KEY)"`. There is no file for the agent to read.
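The runtime-load pattern is the whole trick: the value exists only in the process environment, never on disk. A minimal, portable sketch of the same idea — here `fake_secret_store` is a stand-in for a real secret-store CLI like `noxkey get`, since the real one talks to the Keychain:

```shell
#!/bin/sh
# Sketch: load a secret at runtime instead of from a .env file.
# fake_secret_store stands in for a real secret-store CLI.
fake_secret_store() { printf 'sk_live_example'; }

STRIPE_KEY="$(fake_secret_store)"
export STRIPE_KEY

# No .env file exists; the value lives only in this process's environment.
echo "loaded ${#STRIPE_KEY}-character secret"   # → loaded 15-character secret
```

There is nothing for an agent's file-reading tool to find; the secret only exists while the process that asked for it is running.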

## 2. Secrets in debug output

**Severity: Critical**

"The API call is failing, can you debug this?" The agent helpfully prints the full request to show you what is happening:

```
# You ask Claude Code:
> "The Stripe charge endpoint is returning 401, can you debug?"

# Agent output:
Here's the failing request:

curl -X POST https://api.stripe.com/v1/charges \
  -H "Authorization: Bearer sk_live_51ABC...xYz" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "amount=2000&currency=usd"

The issue is that your API key has been rotated. The key starting with
sk_live_51ABC is no longer valid...
```

That key is now in your conversation history. On the AI provider's servers. In your terminal scrollback. If you are sharing your screen or recording a demo, it is in the recording too. The agent saw the value in `process.env.STRIPE_SECRET_KEY` (referenced in your code) and included it because showing the full request seemed helpful for debugging. Right call for debugging. Wrong call for security.

**The fix:** A DLP guard that scans agent output before it enters the conversation.

NoxKey's guard matches against 8-character fingerprints of your stored secrets:

```shell
# Set up the DLP guard (one-time)
noxkey guard install

# What it catches:
# Agent output: "Authorization: Bearer sk_live_51ABC..."
# Guard: "BLOCKED — output contains value matching myorg/project/STRIPE_KEY (peek: sk_live_5)"
# The output never enters the conversation context.
```

## 3. Credentials stored in conversation logs

**Severity: High**

"Here is my config, help me fix this deployment." You paste your .env into ChatGPT.

Here is where that data goes:

- Transmitted over HTTPS to the API endpoint
- Stored in the conversation database
- Logged for abuse detection and safety monitoring
- Potentially queued for human review if it triggers filters
- Retained per the data retention policy (which can change)
- Backed up across the provider's infrastructure

Six copies minimum, on infrastructure you do not control, with retention policies you did not agree to read. Even if you delete the conversation in the UI, the data was transmitted and stored.

Deletion from the frontend does not guarantee deletion from logs, backups, or training pipelines.

### How to reproduce it

```
# In ChatGPT, Claude, or any AI chat:
> "Here's my .env file, can you help me debug?"
> DATABASE_URL=postgresql://admin:s3cretP@ss@db.example.com/prod
> STRIPE_KEY=sk_live_...

# That data is now stored on the provider's servers.
# You cannot un-send it. You cannot verify deletion.
# If you used a wrapper app or browser extension, add another copy.
```

**The fix:** Never paste credentials into any chat interface.

If the agent needs access to a secret, it should flow through the OS — Keychain to encrypted handoff to process environment — not through the conversation.

## 4. API keys hardcoded in generated code

**Severity: High**

You ask the agent to "set up the Stripe integration." It generates a config file.

Because it saw your API key in the environment (or in a file it read earlier), it hardcodes the value:

```typescript
// You ask Cursor:
// > "Set up Stripe with the charge endpoint"

// Cursor generates config/stripe.ts:
export const stripeConfig = {
  secretKey: "sk_live_51ABC...xYz", // hardcoded — the live key is now in your source tree
};
```

## 5. Process inheritance

**Severity: High**

AI agents that can execute code spawn subprocesses. Environment variables are inherited by child processes by default:

```
Your shell                       STRIPE_KEY=sk_live_...
  └─ claude                      inherits STRIPE_KEY
      └─ node                    inherits STRIPE_KEY
          └─ bash -c "curl ..."  inherits STRIPE_KEY
              └─ curl            has full access to STRIPE_KEY
```

Every process in that tree has your Stripe key. The agent did not steal it — it inherited it, the same way every child process inherits environment variables on Unix. When the agent spawns a `curl` command to test your API, that process can access `$STRIPE_KEY`.
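You can watch the inheritance happen — and cut it for a single child — with nothing but `env`. A sketch using the GNU/BSD `env -u` option (the variable name is illustrative; this is a narrow stopgap, not a substitute for keeping the secret out of your shell in the first place):

```shell
#!/bin/sh
# Demonstrate environment inheritance, then strip one variable
# from a single child's environment with `env -u`.
export STRIPE_KEY="sk_live_example"

sh -c 'echo "child sees: ${STRIPE_KEY:-<unset>}"'
# → child sees: sk_live_example

env -u STRIPE_KEY sh -c 'echo "scrubbed child sees: ${STRIPE_KEY:-<unset>}"'
# → scrubbed child sees: <unset>
```

The parent shell still holds the value; `env -u` only removes it from that one child's environment, which is why per-process scrubbing does not scale to every subprocess an agent might spawn.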

### How to reproduce it

```shell
# Set a secret in your shell
export TEST_SECRET="this-is-sensitive"

# In Claude Code:
> "Run: echo $TEST_SECRET"

# Output: this-is-sensitive
```

The agent accessed an inherited environment variable and printed it to the conversation context. Most agents do not actively exfiltrate credentials this way. But the capability is there. An agent with code execution and inherited environment variables has everything it needs to make authenticated API calls on your behalf.

**The fix:** [Process-tree detection](https://noboxdev.com/blog/process-tree-agent-detection) identifies when an AI agent — not a human — is requesting secrets.

When an agent is detected, secrets are delivered via encrypted handoff instead of raw values. Bulk export commands are blocked. The agent gets scoped access instead of full environment inheritance.

## 6. Credentials exposed through MCP tool-use

**Severity: High**

MCP (Model Context Protocol) tools and function-calling plugins let agents make HTTP requests, query databases, and interact with external services. When those tools need authentication, the credentials often flow through the agent's context.

MCP server config (e.g., in `.cursor/mcp.json` or `claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "database": {
      "command": "npx",
      "args": [
        "@modelcontextprotocol/server-postgres",
        "postgresql://admin:s3cretP@ss@db.example.com/prod"
      ]
    }
  }
}
```

That connection string — with the password — sits in a JSON config file. The agent reads it. The MCP server process inherits it. If the agent logs the connection for debugging, the password is in the conversation. It gets worse with HTTP-based tools. An agent calling a REST API through an MCP tool might construct the request with an `Authorization` header.

The tool call and its parameters — including the auth header — are part of the conversation context. Logged, stored, and visible in the conversation history.

### How to reproduce it

1. Set up an MCP server with credentials in the config.
2. Ask the agent to "query the database for recent users."
3. Watch the tool call — the connection string (including password) appears in the agent's tool invocation log.

Or:

1. Configure an API tool with an auth header.
2. Ask the agent to "fetch my account details from the API."
3. The Authorization header appears in the tool call parameters.

**The fix:** Never put credentials in MCP config files as plaintext. Use environment variable references that resolve at runtime.
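What that looks like is client-dependent: some MCP clients (VS Code's MCP support, for example) expand `${env:VAR}` references in config values, and many support an `env` map for the server process. A sketch under those assumptions — the server name and variable are illustrative, so check your client's documentation for which reference forms it actually resolves:

```json
{
  "mcpServers": {
    "database": {
      "command": "npx",
      "args": ["@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "${env:DATABASE_URL}"
      }
    }
  }
}
```

The config now names the variable instead of containing the password; the value is resolved at launch from the parent environment rather than sitting in a file the agent can read.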

Better yet, have the MCP server pull credentials from the Keychain directly, so the agent never sees or transmits the auth values.

## The root cause behind every AI agent secret leak

All six leaks share one root cause: **secrets and agents occupy the same space**. The secret is in a file the agent reads, an environment it inherits, a config it parses, or a conversation it participates in.

The fix is separation.

Secrets flow through secure channels — Keychain to encrypted handoff to process environment. Agents operate in their context — text, conversation, code generation. The two never mix.

## The DLP guard: catch leaked API keys automatically

The NoxKey DLP guard scans agent output against 8-character fingerprints of every secret in your Keychain. If any output contains a value matching a stored secret, it blocks the output before it enters the conversation.

Install the guard:

```shell
noxkey guard install
```

It runs automatically as a PostToolUse hook in Claude Code. No config. No setup beyond the install command.

What it looks like when it catches something:

```
Agent runs:            curl -v https://api.stripe.com/v1/charges
Agent output contains: "Authorization: Bearer sk_live_51ABC..."
Guard:                 BLOCKED — matches myorg/project/STRIPE_KEY
                       Output is redacted before entering context.
You see:               [REDACTED — credential detected]
```

The agent sees: nothing. The secret never entered the conversation. This catches leaks 2, 3, and 4 — debug output, conversation logs, and generated code that includes credential values.

It does not prevent the agent from accessing `.env` files or inheriting environment variables (leaks 1 and 5), which is why [eliminating .env files](https://noboxdev.com/blog/stop-putting-secrets-in-env-files) and using encrypted handoff are still necessary. The guard matches on 8-character prefixes, so extremely short secrets or secrets that share prefixes with common strings could theoretically produce false positives. In practice, API keys and tokens are long enough that this is not an issue. We have been running it for months with zero false positives across ~200 stored secrets.
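The matching step at the heart of such a guard is small. Here is an illustrative sketch in shell — the `fingerprints.txt` file and the fixed-string `grep` are my stand-ins, not NoxKey's actual implementation:

```shell
#!/bin/sh
# Sketch: block output that contains any stored-secret fingerprint.
# fingerprints.txt holds the first 8 characters of each stored secret.
printf '%s\n' 'sk_live_' 'ghp_A1b2' > fingerprints.txt

agent_output='curl -H "Authorization: Bearer sk_live_51ABCxYz"'

# -F: fixed strings (no regex), -f: read patterns from file, -q: quiet.
if printf '%s\n' "$agent_output" | grep -qFf fingerprints.txt; then
  echo 'BLOCKED - output matches a stored secret fingerprint'
else
  printf '%s\n' "$agent_output"
fi
# → BLOCKED - output matches a stored secret fingerprint
```

Fixed-string matching is what keeps false positives rare: the guard never interprets the fingerprint as a pattern, it only looks for the literal prefix.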

## How to secure your API keys from AI agents right now

The leaks are real. The fixes exist. Here is the priority order:

- **Delete your .env files.** Move secrets to the Keychain. This eliminates leak #1 entirely. ([Here is how we did it.](https://noboxdev.com/blog/why-i-deleted-every-env-file))
- **Install the DLP guard.** One command. Catches leaks #2, #3, and #4 automatically.
- **Use encrypted handoff.** Process-tree detection + encrypted responses prevent leak #5.
- **Audit your MCP configs.** Remove hardcoded credentials from tool server configurations.

- **Stop pasting credentials into chats.** There is no technical fix for voluntarily sending secrets to a third party.

Install NoxKey and the DLP guard — 30 seconds total:

```shell
brew install no-box-dev/noxkey/noxkey
noxkey guard install
```

## Key Takeaway

AI agents leak secrets through six vectors: reading .env files, debug output, conversation logs, generated code, process inheritance, and MCP tool-use. All six share the same root cause — secrets and agents occupy the same space.

The fix is separation: [store secrets in the Keychain](https://noboxdev.com/blog/stop-putting-secrets-in-env-files), deliver them via encrypted handoff, and install the DLP guard to catch any values that slip through. [Touch ID](https://noboxdev.com/blog/touch-id-api-keys) ensures no agent can access a secret without your physical confirmation.

---

*NoxKey is free and open source. `brew install no-box-dev/noxkey/noxkey` — [GitHub](https://github.com/No-Box-Dev/Noxkey) | [Website](https://noxkey.ai)*
