bitsalt
MCP Security Threat Model — Part 1 of 8

How 24,000 Secrets Ended Up in MCP Configurations

Jeff Moser, BitSalt

In early 2026, researchers scanning public MCP configurations found over 24,000 exposed secrets: API keys, database credentials, OAuth tokens, cloud provider keys. Not from a single breach. Not from a vulnerability in the MCP protocol itself. From configuration files that engineers wrote, committed, and deployed without thinking much about what was in them.

This is the first post in the MCP Security Threat Model series. We’re going to work through the full attack surface of MCP-enabled AI agents in developer environments. Credential exposure is where we start because it’s the most immediate, most reproducible, and most underappreciated risk in the current landscape.

What actually happened

MCP (Model Context Protocol) servers are configured with JSON files that tell the client which servers to launch and how to connect to them; the servers then expose their tools to the agent. A typical configuration for a database tool might look like this:

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://user:password@host:5432/db"],
      "env": {
        "PGPASSWORD": "your-actual-password"
      }
    }
  }
}

That PGPASSWORD value is sitting in a config file. In most setups, that config file lives at ~/.cursor/mcp.json, ~/.claude/mcp.json, or somewhere similar in the user’s home directory. Engineers share these files in onboarding docs. They commit them to internal repos to help teammates get set up. They include them in dotfiles repos that happen to be public.

The 24,000 figure comes from systematic scanning of public repositories and shared configuration examples. It’s not sophisticated. It’s grep.
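“Grep” is barely an exaggeration. Here is a sketch of the attacker’s side of it, run against a fabricated config file; the directory, filename, and password below are invented for illustration:

```shell
# Fabricated dotfiles directory and secret, standing in for a public repo.
mkdir -p /tmp/leaked-dotfiles
cat > /tmp/leaked-dotfiles/mcp.json <<'EOF'
{"mcpServers":{"postgres":{"env":{"PGPASSWORD":"hunter2"}}}}
EOF

# One recursive pattern match is all it takes to surface a hardcoded secret.
grep -REn '"(PGPASSWORD|[A-Z_]*API_KEY|[A-Z_]*TOKEN)"[[:space:]]*:[[:space:]]*"[^"]+"' \
  /tmp/leaked-dotfiles
```

Scale that pattern across every public repository and shared config example, and you get a number like 24,000.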

Why this keeps happening

The MCP ecosystem moved fast. The protocol was announced by Anthropic in November 2024. By early 2025, hundreds of MCP servers existed. By late 2025, major AI coding tools including Cursor, Claude Code, and Cline had native MCP support. Engineers were connecting agents to their databases, their GitHub accounts, their cloud providers, their internal tooling.

The onboarding pattern for almost every MCP server looks like this:

  1. Here’s the server you install
  2. Here’s the configuration JSON
  3. Here’s where to put your API key in the config

Step 3 is where it goes wrong. The path of least resistance is to put credentials directly in the configuration file. It works immediately, with no friction, and then that configuration file ends up somewhere it shouldn’t be.
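The frictionless alternative exists. Some clients expand environment variable references inside the config file itself; Claude Code’s .mcp.json supports ${VAR} syntax, and support varies by client, so check your client’s documentation. A sketch of the earlier postgres config with the literal externalized:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://user:${PGPASSWORD}@host:5432/db"],
      "env": {
        "PGPASSWORD": "${PGPASSWORD}"
      }
    }
  }
}
```

The secret now lives in the shell environment that launches the client, not in the file that gets committed and shared.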

The root cause isn’t ignorance. It’s that the ecosystem grew faster than the security tooling, the documentation, and the organizational awareness needed to match it.

The three patterns we see

Hardcoded credentials in config files. The most common case. An API key, a database password, a cloud provider secret embedded directly in the MCP server configuration. Often committed to version control because the config file is needed for the project to work.

Environment variable leakage. Many MCP configurations use environment variables, which is the right instinct. But those variables have to come from somewhere. In development, they often come from .env files that get committed. In CI/CD, they end up in configuration that gets logged. The credential still leaks. It just takes one more hop.

Overpermissioned credentials. Even when credentials aren’t directly exposed, the ones used in MCP configurations are often scoped too broadly. A GitHub token with full repository write access when only read is needed. A database connection string with admin privileges when the agent only needs SELECT. When those credentials do get exposed, the blast radius is larger than it needed to be.

What to actually do

Credential scanning in CI. Tools like truffleHog, gitleaks, and GitHub’s built-in secret scanning will catch credentials before they land in main. This should be a blocking CI check for any repository that might contain MCP configuration files.
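What “blocking” means, sketched with plain grep and a fabricated secret. In real CI the scan step would be gitleaks (for example, `gitleaks detect --source .` exits nonzero on findings) or truffleHog; the shape of the check is the same:

```shell
# Fabricated checkout with a planted secret (illustration only).
checkout=$(mktemp -d)
printf '{"env":{"PGPASSWORD":"hunter2"}}\n' > "$checkout/mcp.json"

# The blocking shape: scan, and fail the pipeline when anything matches.
# Swap this grep for your real scanner; keep the nonzero exit on findings.
if grep -REq '"(PGPASSWORD|[A-Z_]*API_KEY|[A-Z_]*TOKEN)"[[:space:]]*:' "$checkout"; then
  echo "secret pattern found; failing the check" >&2
  status=1
else
  status=0
fi
```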

Environment variable management. Don’t put credentials in configuration files. Put them in environment variables, and manage those variables through a secrets manager (AWS Secrets Manager, HashiCorp Vault, 1Password Secrets Automation). Yes, this adds friction. That’s the point.
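In practice the pattern looks like this: resolve the secret at launch time, hand it to the process through its environment, and keep every file free of the literal. `fetch_secret` below is a placeholder, not a real API; the commented commands are examples of what it would call in a real setup.

```shell
# Placeholder for a secrets-manager lookup. In practice this would run, e.g.:
#   aws secretsmanager get-secret-value --secret-id mcp/pg \
#     --query SecretString --output text
#   op read "op://infra/mcp-postgres/password"   # 1Password CLI
fetch_secret() {
  printf 'example-password'   # fabricated value for this sketch
}

# The credential exists only in this process's environment, never on disk.
export PGPASSWORD="$(fetch_secret)"

# Launch the MCP client from this shell; its config can reference the
# variable (or the server can read PGPASSWORD directly) with no literal.
```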

Principle of least privilege on the credentials themselves. Whatever credential you use for an MCP server, scope it as tightly as possible. A read-only GitHub token. A database user with access only to the tables the agent actually needs. When credentials are scoped correctly, exposure is less catastrophic.
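For the database case, “access only to the tables the agent actually needs” is a handful of statements. A Postgres sketch; the role, database, and table names are placeholders:

```sql
-- Read-only role for the agent. All names here are illustrative.
CREATE ROLE mcp_reader LOGIN PASSWORD 'managed-by-your-secrets-manager';
GRANT CONNECT ON DATABASE app TO mcp_reader;
GRANT USAGE ON SCHEMA public TO mcp_reader;
GRANT SELECT ON orders, customers TO mcp_reader;
-- Nothing else: no writes, no other tables, no DDL.
```

If that credential leaks, the blast radius is two readable tables instead of an admin connection string.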

Audit what you’ve already shipped. Run truffleHog or gitleaks against your internal repositories now. If you’ve been deploying MCP servers for any length of time, there’s a reasonable chance something has already slipped through.

The bigger picture

Credential exposure is threat category 1 in the MCP Threat Model because it’s the most straightforward and the most common. It’s also a signal of something larger: the MCP ecosystem is at the same inflection point cloud infrastructure hit in 2015, when engineers were committing AWS keys to public GitHub repos because the tooling and the patterns to handle it differently didn’t exist yet.

That problem got solved. This one will too. But the first step is being clear about what it actually is: not a protocol vulnerability, but a deployment practice problem that compounds as the ecosystem scales.

The next post in this series covers tool permission scope: how MCP servers grant AI agents capabilities they don’t need, and what happens when an agent is connected to a tool with more access than it should have.


This is Part 1 of the MCP Security Threat Model series. The compiled document, covering all 8 threat categories with structured checklists and a reference architecture, will be available when the series is complete. Get notified when it’s ready.