MCP Security: What Enterprises Need to Know Before Deploying
MCP security risks are real. Cisco's OpenClaw found critical vulnerabilities. Here's how to lock down MCP servers for enterprise production use.
Chase Dillingham
Founder & CEO, TrainMyAgent
MCP is a protocol, not a security framework.
Read that again.
97 million monthly SDK downloads (source). Every major AI provider on board. Enterprise teams deploying MCP servers connected to production databases, CRMs, and internal APIs.
And the protocol itself has no built-in authentication, no standardized authorization, no audit logging.
That’s not a flaw. It’s a design decision — MCP provides the transport and tool interface; security is the deployer’s responsibility. But most teams deploying MCP don’t realize that until something goes wrong.
Here’s what the security landscape actually looks like, what the real risks are, and how to lock it down before you expose your production systems to AI-powered access.
The Threat Surface
MCP gives AI models the ability to execute actions on your systems. That’s the whole point. An AI agent connected to an MCP server can query databases, modify records, create tickets, send messages, and interact with any service the server exposes.
Every one of those capabilities is an attack vector if not properly secured.
The threat model includes:
- Prompt injection: A malicious input tricks the model into calling tools in unintended ways (source)
- Tool poisoning: A compromised MCP server advertises malicious tool descriptions that manipulate model behavior
- Data exfiltration: An agent extracts sensitive data through tool calls and leaks it via the model’s response
- Privilege escalation: An agent uses tool chaining to access resources beyond its intended scope
- Server impersonation: A malicious server masquerades as a legitimate one
- Rug pulls: A server changes its tool descriptions after initial approval to introduce malicious behavior
These aren’t theoretical. Researchers have demonstrated all of them.
Cisco’s OpenClaw: The Wake-Up Call
In early 2026, Cisco’s security research team released OpenClaw, a comprehensive security analysis framework for MCP deployments (source). Their findings were blunt:
- Tool poisoning attacks are practical and reproducible. Malicious tool descriptions can instruct models to exfiltrate data or perform unauthorized actions without the user’s knowledge.
- Lack of server authentication means clients have no standardized way to verify they’re connecting to a legitimate server.
- No permission boundaries between tools within the same server. If an agent can call one tool, it can call all tools on that server.
- Session persistence risks. MCP connections maintain state, meaning a compromised session can continue executing malicious actions across multiple interactions.
- Cross-server data leakage. When an agent connects to multiple MCP servers, data from one server can flow to another through the model’s context.
Cisco’s OpenClaw framework provides automated scanning for these vulnerabilities. If you’re deploying MCP in production, run it against your servers (source).
Let me be blunt: if you haven’t audited your MCP servers for these attack vectors, you have an open security hole.
The Permission Model Problem
MCP’s current permission model is binary. A client either connects to a server or it doesn’t. Once connected, it can call any tool that server exposes.
There’s no built-in concept of:
- Role-based access: Different users having different tool permissions
- Tool-level authorization: Allowing read tools but blocking write tools
- Data-level filtering: Restricting which records a tool can return based on user identity
- Rate limiting: Preventing excessive tool calls
- Approval workflows: Requiring human approval for high-impact actions
All of these are your responsibility to implement at the server level.
Compare this to a REST API where you’d never dream of deploying without auth middleware, rate limiting, and audit logging. MCP servers need the same treatment, but the protocol doesn’t enforce it.
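Rate limiting, for instance, takes only a few lines at the server layer. A minimal sliding-window sketch (the RateLimiter class and its parameters are illustrative, not part of MCP or any library):

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: allow at most max_calls per window_seconds."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # timestamps of recent calls

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True
```

Check the limiter before dispatching each tool call, and return a clear "rate limit exceeded" message so the model can relay it to the user.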
SurePath AI Policy Controls
In March 2026, SurePath AI launched MCP Policy Controls, the first enterprise-grade policy layer for MCP deployments (source). This addresses the gap between MCP’s raw protocol and what enterprises actually need.
What SurePath provides:
- Gateway-level policy enforcement: All MCP traffic routes through a policy gateway that evaluates rules before tool calls reach servers
- SSO-integrated authentication: Users authenticate via existing identity providers (Okta, Azure AD, etc.) and their identity carries through to MCP tool calls
- Tool-level authorization policies: Define which users, roles, or groups can access which tools on which servers
- Audit trails: Every tool call is logged with user identity, parameters, response, and timestamp
- Data loss prevention: Scan tool responses for sensitive data patterns (PII, credentials, financial data) before they reach the model
- Anomaly detection: Flag unusual tool call patterns that might indicate prompt injection or misuse
This is the kind of infrastructure enterprises need before putting MCP anywhere near production data. SurePath isn’t the only option — you can build similar controls in-house — but they’re the first to ship a turnkey solution.
How to Lock Down MCP in Production
Whether you use a third-party policy layer or build your own, here’s the security checklist for enterprise MCP deployment:
1. Authenticate Every Connection
Never run MCP servers without authentication. Even in “internal only” environments.
Implementation:
- For stdio transport (local): Rely on OS-level user permissions. The server runs as a specific user with specific access.
- For HTTP/SSE transport (network): Implement token-based auth. Validate JWT tokens or API keys on every connection.
- Integrate with your existing identity provider. If you use Okta, your MCP server should validate Okta tokens.
import os
from functools import wraps

def require_auth(func):
    """Reject tool calls that don't carry a valid token."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        # stdio transport: the client can pass the token via environment.
        # For HTTP/SSE, read it from the request's Authorization header instead.
        token = os.environ.get("MCP_AUTH_TOKEN")
        if not validate_token(token):  # validate_token: your JWT/API-key check
            return "Unauthorized. Authentication required."
        return func(*args, **kwargs)
    return wrapper

@mcp.tool()
@require_auth
def get_customer(customer_id: str) -> str:
    """Get customer details. Requires authentication."""
    # ... implementation
2. Implement Tool-Level Authorization
Not every user should have access to every tool. A support agent needs read access to customer records. They don’t need the ability to delete accounts.
Implementation:
- Map MCP tools to permission scopes
- Check user permissions before executing any tool
- Return clear “access denied” messages so the model can inform the user
TOOL_PERMISSIONS = {
    "search_customers": ["read:customers"],
    "get_customer": ["read:customers"],
    "create_ticket": ["write:tickets"],
    "delete_customer": ["admin:customers"],  # restricted
}

def check_permission(tool_name: str, user_scopes: list) -> bool:
    required = TOOL_PERMISSIONS.get(tool_name, [])
    return all(scope in user_scopes for scope in required)
3. Log Everything
Every tool call. Every parameter. Every response. Every error. With user identity attached.
This isn’t optional for regulated industries. And even if you’re not regulated, you need this for debugging, incident response, and understanding how your agents actually behave in production.
What to log:
- Timestamp
- User identity (from auth token)
- Tool name
- Input parameters (redact sensitive fields)
- Response (redact sensitive data)
- Execution time
- Success/failure status
- Source IP (for network transport)
Ship these logs to your existing SIEM or observability platform. MCP tool calls should be treated with the same gravity as database queries or API calls to sensitive services.
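A minimal shape for those log entries, as a sketch (the field names and the REDACTED_FIELDS set are illustrative; match them to your own schema):

```python
import json
import logging
import time
import uuid

audit = logging.getLogger("mcp.audit")

REDACTED_FIELDS = {"ssn", "password", "credit_card"}  # illustrative field names

def redact(params: dict) -> dict:
    """Replace sensitive fields before they hit the log stream."""
    return {k: ("[REDACTED]" if k in REDACTED_FIELDS else v)
            for k, v in params.items()}

def log_tool_call(user: str, tool: str, params: dict,
                  status: str, duration_ms: float) -> str:
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user,
        "tool": tool,
        "params": redact(params),
        "status": status,
        "duration_ms": duration_ms,
    }
    line = json.dumps(entry)  # one JSON object per line: easy SIEM ingestion
    audit.info(line)
    return line
```

One JSON object per line keeps ingestion trivial for Splunk, Datadog, or any log shipper you already run.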
4. Validate and Sanitize All Inputs
The AI model generates tool call parameters. Those parameters are user-influenced (through the prompt) and can be manipulated through prompt injection.
Treat every tool input as untrusted. Same rules as any web application:
- Validate types, ranges, and formats
- Parameterize database queries — never interpolate tool inputs into SQL strings
- Sanitize file paths — prevent directory traversal
- Validate IDs against allowlists when possible
- Reject unexpected parameters
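Here is what that looks like for a database-backed tool, as a sketch using sqlite3 (the `cus_` ID format is an illustrative assumption; adapt the pattern to your own identifiers):

```python
import re
import sqlite3

# Illustrative ID shape: reject anything that doesn't match before touching the DB
CUSTOMER_ID = re.compile(r"^cus_[A-Za-z0-9]{1,32}$")

def get_customer(conn: sqlite3.Connection, customer_id: str):
    if not CUSTOMER_ID.match(customer_id):
        raise ValueError(f"invalid customer_id: {customer_id!r}")
    # Parameterized query: the driver handles escaping.
    # Never interpolate tool inputs into the SQL string.
    cur = conn.execute(
        "SELECT id, name FROM customers WHERE id = ?", (customer_id,)
    )
    return cur.fetchone()
```

The regex check fails fast on injection payloads, and the parameterized query means even a value that slips past validation can't change the query structure.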
5. Scope Server Access Narrowly
Each MCP server should have the minimum permissions necessary.
- Database MCP server: Read-only access to specific tables. Not SELECT * on everything.
- CRM MCP server: API token scoped to specific objects and operations.
- File system MCP server: Restricted to specific directories. Never root access.
If your MCP server connects to a database, create a dedicated database user with only the permissions that server’s tools need. If a tool only reads customer records, the database user shouldn’t have write access.
6. Isolate Servers
Run each MCP server in its own isolated environment:
- Separate containers or processes
- No shared credentials between servers
- Network segmentation — the database MCP server shouldn’t be able to reach the Slack MCP server
- Independent failure domains
This limits blast radius. If one server is compromised, the attacker doesn’t get access to everything.
7. Monitor for Anomalies
Set up alerting for:
- Unusual tool call volume: A sudden spike in database queries might indicate data exfiltration
- Access pattern changes: A user who normally searches customers is suddenly deleting records
- Error rate spikes: Repeated auth failures or invalid inputs might indicate an attack
- Cross-server data flows: Data from a sensitive server appearing in calls to a less-sensitive server
- After-hours activity: Tool calls outside normal business hours from unexpected users
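Volume anomalies in particular are cheap to detect in-process. A sketch of a per-user spike detector (the VolumeMonitor class and its thresholds are illustrative, not a library API; real deployments would back this with your metrics platform):

```python
class VolumeMonitor:
    """Flag users whose hourly tool-call count far exceeds their historical mean."""

    def __init__(self, spike_factor: float = 5.0, min_baseline: int = 10):
        self.spike_factor = spike_factor
        self.min_baseline = min_baseline  # ignore noise from tiny baselines
        self.history = {}  # user -> list of past hourly counts

    def record_hour(self, user: str, count: int) -> bool:
        """Record this hour's count; return True if it looks anomalous."""
        past = self.history.setdefault(user, [])
        anomalous = False
        if past:
            baseline = max(sum(past) / len(past), self.min_baseline)
            anomalous = count > self.spike_factor * baseline
        past.append(count)
        return anomalous
```

A user averaging a handful of queries per hour who suddenly issues hundreds trips the flag; that is exactly the shape a data-exfiltration attempt takes.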
8. Implement Human-in-the-Loop for High-Impact Actions
Some actions shouldn’t be fully automated. Deleting records. Sending external communications. Modifying financial data. Processing refunds.
Build approval gates into your MCP tools:
@mcp.tool()
def delete_customer(customer_id: str) -> str:
    """Delete a customer record. Requires manual approval."""
    # Don't execute — create an approval request
    approval_id = create_approval_request(
        action="delete_customer",
        params={"customer_id": customer_id},
        requester=get_current_user(),
    )
    return f"Approval required. Request {approval_id} sent to admin team. Customer will be deleted upon approval."
The model gets a clear response. The user understands what happened. And your data doesn’t get deleted by a prompt injection.
Data Sovereignty Considerations
For enterprises in regulated industries (healthcare, finance, government), data sovereignty is non-negotiable.
MCP servers run in your infrastructure — that’s good. Your data doesn’t leave your network just because you’re using MCP. But the model that’s calling the tools might be a cloud API.
The flow:
- User sends prompt to cloud LLM
- LLM decides to call a tool
- Tool call goes to your local MCP server
- MCP server queries your database
- Results go back to the LLM (in the cloud)
- LLM generates a response
Step 5 is where data leaves your perimeter. The tool results — which might contain customer records, financial data, PII — are sent to the model provider’s API.
Mitigations:
- Use on-premise or VPC-deployed models for sensitive data
- Implement data masking in your MCP server responses — redact PII before returning results
- Use the DLP capabilities in tools like SurePath to scan outbound data
- Define clear policies about which data can flow through cloud models
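Masking can start as simply as regex redaction in the server before results leave your perimeter. A sketch (the patterns are illustrative; a production deployment should use a vetted DLP ruleset rather than hand-rolled regexes):

```python
import re

# Illustrative patterns only; real DLP needs a maintained, tested ruleset
PII_PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Redact recognizable PII from a tool response before it reaches the model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

Apply this to every tool response in servers that touch sensitive tables, and the cloud model only ever sees the redacted form.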
The 2026 Roadmap Helps (But Don’t Wait)
The 2026 MCP roadmap includes significant security improvements:
- Standardized authentication and authorization built into the protocol
- Enterprise readiness features including audit and compliance tooling
- Metadata format for describing server security properties
These are coming. But they’re not here yet. If you’re deploying MCP in production today, you need to build security into your implementation now, not wait for the protocol to catch up.
The teams that get this right will have a head start when standardized security features land. The teams that don’t will be the case studies in the next security research paper.
Frequently Asked Questions
Does MCP encrypt data in transit?
MCP over stdio (local) inherits the security of the local process. MCP over HTTP/SSE should use TLS — but this is your responsibility to configure, not a protocol guarantee. Always deploy network-facing MCP servers behind TLS termination.
Can I use my existing WAF with MCP?
If you’re using HTTP transport, yes. Route MCP traffic through your WAF. However, most WAF rules are designed for REST API patterns and may need tuning for MCP’s JSON-RPC format.
How do I prevent prompt injection attacks on MCP tools?
Input validation is your primary defense. Treat all tool parameters as untrusted user input. Use parameterized queries, validate against schemas, and implement output filtering. There is no silver bullet — defense in depth is required.
Is MCP SOC 2 compliant?
MCP is a protocol, not a service — SOC 2 compliance applies to your deployment, not the protocol itself. You need audit logging, access controls, and monitoring to meet SOC 2 requirements for your MCP infrastructure.
Can I restrict which tools a user can access?
Not at the protocol level (yet). Implement tool-level authorization in your MCP server code. Check the user’s identity and permissions before executing any tool. See the authorization examples above.
What’s the risk of connecting to third-party MCP servers?
High, if unvetted. Third-party servers can expose malicious tool descriptions that manipulate model behavior. Only connect to servers from trusted sources, review their code, and run them in sandboxed environments.
How does MCP handle secrets management?
It doesn’t. MCP servers need credentials to connect to downstream services. Use environment variables, secret managers (Vault, AWS Secrets Manager), or your existing secrets infrastructure. Never hard-code credentials in server code.
Should I run MCP servers in production today?
Yes, with proper security controls in place. MCP is production-ready from a protocol standpoint. The security gap is in the deployment layer, which you control. Follow the guidelines in this article and you’ll be ahead of most deployments.
Three Ways to Work With TMA
Need an agent built? We deploy production AI agents in your infrastructure. Working pilot. Real data. Measurable ROI. → Schedule Demo
Want to co-build a product? We’re not a dev agency. We’re co-builders. Shared cost. Shared upside. → Partner with Us
Want to join the Guild? Ship pilots, earn bounties, share profit. Community + equity + path to exit. → Become an AI Architect
About the Author
Chase Dillingham
Founder & CEO, TrainMyAgent
Chase Dillingham builds AI agent platforms that deliver measurable ROI. Former enterprise architect with 15+ years deploying production systems.