How 24,000 Secrets Ended Up in MCP Configurations
I put my database credentials in an MCP config file within the first five minutes of setting up my postgres server. Not because I didn’t know any better. Because it worked, and I was trying to get something done.
That’s probably the most honest summary of why researchers scanning public MCP configurations in early 2026 found over 24,000 exposed secrets — API keys, database credentials, OAuth tokens, cloud provider keys. Not from a protocol vulnerability. Not from a sophisticated attack. From engineers doing exactly what I did, at scale, across thousands of repos.
This post is about credential exposure specifically: what’s actually happening, why it keeps happening, and what a realistic response looks like. It’s the first thing I want to cover because it’s the most immediate and the most reproducible risk in the current MCP landscape. It’s also the easiest to dismiss — “just don’t hardcode credentials” — which is exactly why it keeps happening.
How did 24,000 secrets end up in public repos?
MCP (Model Context Protocol) clients are configured with JSON files that tell them which servers to launch and how to connect; the servers then expose the actual tools. A typical configuration for a database tool looks like this:
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://user:password@host:5432/db"],
      "env": {
        "PGPASSWORD": "your-actual-password"
      }
    }
  }
}
That PGPASSWORD value is sitting in plain text in a config file, and so is the password embedded in the connection string one line above it. In most setups, that config file lives at ~/.claude/mcp.json, or somewhere similar in the user’s home directory.
Engineers share these files. That’s often the point — get a working configuration in front of your teammate so they don’t have to figure it out from scratch. Those files end up in onboarding docs. They end up in internal repos. They end up in the dotfiles of repos that happen to be public.
The 24,000 figure came from systematic scanning of public repositories and shared configuration examples. The method wasn’t sophisticated. It was grep.
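For a sense of how low the bar is, this is roughly the shape of query involved. The patterns here are my own illustration, not the researchers’ actual tooling:

# Illustrative: connection strings with inline passwords, in JSON configs
grep -rnE --include='*.json' 'postgresql://[^"]*:[^"]*@' .

# Illustrative: obvious secret-bearing keys in MCP config files
grep -rn --include='*.json' '"PGPASSWORD"' .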
Why does this keep happening?
The MCP ecosystem moved fast. The protocol was announced by Anthropic in November 2024. By early 2025, hundreds of MCP servers existed. By late 2025, Cursor, Claude Code, and Cline all had native MCP support. Engineers were connecting agents to databases, GitHub accounts, cloud providers, internal tooling — essentially every system they used.
The onboarding pattern for almost every MCP server followed the same arc:
1. Here’s the server you install
2. Here’s the configuration JSON
3. Here’s where to put your API key
Step 3 is where it goes wrong. The path of least resistance is to put credentials directly in the config file. It works immediately. No extra steps. No friction. And then that config file travels somewhere it shouldn’t.
The root cause isn’t ignorance — most engineers know that hardcoding credentials is a bad habit. It’s that the ecosystem grew faster than the security tooling, the documentation, and the organizational norms needed to keep up. The patterns that would make the right thing the easy thing didn’t exist yet.
Three ways credentials end up exposed
Hardcoded credentials in config files. The most common case. An API key or database password embedded directly in the MCP server configuration, often committed to version control because the config file is required for the project to run.
Environment variable leakage. Many configurations use environment variables, which is the right instinct. But those variables have to come from somewhere. In development, they often come from .env files that get committed. In CI/CD, they end up in logs. The credential still leaks — it just takes one more hop.
Overpermissioned credentials are a separate category, though they show up alongside the other two. Even when credentials aren’t directly exposed, the ones used in MCP configurations are frequently scoped too broadly: a GitHub token with full repository write access when read-only would do, a database connection string with admin privileges when the agent only needs SELECT. When those credentials are exposed, the blast radius is larger than it had to be.
What’s a realistic response?
There’s no clean solution here. The ecosystem is already deployed. Credentials are already in repos. The question is what you do from here, not what you would have done differently.
Credential scanning in CI. Tools like truffleHog, gitleaks, and GitHub’s built-in secret scanning will catch credentials before they land in main. This should be a blocking CI check for any repository that might contain MCP configuration files — which, at this point, is most repos where engineers are using AI coding tools.
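As a sketch, the blocking step can be as small as one command per tool. The flags below match recent releases at the time of writing; verify them against your installed versions:

# gitleaks exits non-zero when it finds a secret, which fails the build
gitleaks detect --source . --redact

# trufflehog: scan git history, report only verified secrets, fail the build on findings
trufflehog git file://. --only-verified --fail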
Environment variable management, done properly. Don’t put credentials in configuration files. Put them in environment variables, and manage those variables through a secrets manager — AWS Secrets Manager, HashiCorp Vault, 1Password Secrets Automation. Yes, this adds friction. That friction is load-bearing. It’s what prevents the casual commit.
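As a sketch of what that looks like, the config references a variable rather than containing the value. Everything here is illustrative: POSTGRES_URL is a name I picked, and the ${VAR} expansion syntax is supported by some clients (Claude Code’s .mcp.json among them) but not all, so check your client’s documentation:

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "${POSTGRES_URL}"]
    }
  }
}

The variable itself gets populated from the secrets manager when your shell or CI session starts, for example export POSTGRES_URL="$(op read op://dev-vault/postgres/url)" with the 1Password CLI, where the vault and item names are placeholders.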
Least-privilege credentials. Whatever credential you use for an MCP server, scope it as tightly as possible. A read-only GitHub token. A database user with access only to the tables the agent actually needs. When credentials are scoped correctly, an exposure is recoverable. When they’re not, it often isn’t.
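For the Postgres example above, a minimal sketch of that scoped-down user, assuming the agent only needs to read tables in the public schema (role, database, and password are placeholders):

-- Run once as an admin user. The role can log in and read, nothing else.
CREATE ROLE mcp_agent LOGIN PASSWORD 'generate-me';
GRANT CONNECT ON DATABASE mydb TO mcp_agent;
GRANT USAGE ON SCHEMA public TO mcp_agent;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_agent;
-- Make sure tables created later are covered too
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_agent;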
Audit what you’ve already shipped. Run truffleHog or gitleaks against your internal repositories now, before you do anything else. If you’ve been deploying MCP servers for any length of time, there’s a reasonable chance something has already slipped through. Better to find it yourself.
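A reasonable first pass over an existing checked-out repo, scanning the full history rather than just the working tree (same caveat about flags and versions as above):

# gitleaks: walk the entire git history, not just HEAD
gitleaks detect --source /path/to/repo --log-opts="--all"

# trufflehog: same idea, reporting only findings it could verify against live services
trufflehog git file:///path/to/repo --only-verified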
2015 again
My sense is that the MCP ecosystem is at the same inflection point cloud infrastructure hit in 2015, when engineers were committing AWS keys to public GitHub repos because the tooling and the patterns to handle it differently didn’t exist yet. That problem got solved — not overnight, but it got solved. This one is on the same trajectory.
What makes this moment feel more urgent is the speed of adoption. AI coding tools went from experimental to company-wide deployments in most engineering orgs in under two years. The configurations are already in repos. The credentials are already out there. The window for establishing better defaults is closing fast.
Credential exposure is where I’m starting because it’s the most straightforward. There are more threat categories to work through: tool permission scope, prompt injection, server trust boundaries, among others. I’ll cover them as I dig further into this. If there’s something specific you’re wrestling with in your own MCP setup, I’d genuinely like to hear it.
Part 1 of an ongoing series on MCP security threat modeling.