About
Who I am
I'm Jeff Moser. I've spent the last decade working at the intersection of platform engineering and security — building the infrastructure and tooling that engineering organizations depend on, and then making sure those systems are defensible when something inevitably goes wrong.
I've done this work at companies ranging from Series B startups to enterprises with thousands of engineers. I've built CI/CD pipelines, designed cloud security controls, run incident response, and tried to explain to product teams why that public S3 bucket was a problem. I know how this work actually gets done — which is usually messier than any framework suggests.
In the last two years, I've been focused on what AI coding agents are doing inside engineering organizations: what they can access, what they can execute, and what happens when someone figures out how to make them do something they shouldn't. This is the problem I find most interesting right now, and it's the problem BitSalt is built around.
What BitSalt is building
BitSalt is a security engineering practice focused on governing AI agents in developer workflows. The honest version: we're in the early stages of building tooling and frameworks for this problem. The research and writing come first.
Right now, that means publishing the MCP Security Threat Model — a structured framework for understanding the attack surface of MCP-enabled AI agents — and building an open-source scanner that helps teams understand their exposure. No serious version of either exists yet, which is why we're building them.
The longer-term direction is governance tooling: the things platform and security teams need to actually manage AI agents in production at scale. We'll build toward that as the space matures. But first things first: the threat model and the scanner.
How to reach me
If you're a platform engineer or security architect grappling with AI agent governance, I want to talk to you. Not to sell you anything — to understand the problem as you're experiencing it.