Business | SaaS | Technology
Dec 11, 2025

Model Context Protocol Security Architecture from OAuth to Client Identity Verification

AUTHOR

James A. Wondrasek

In April 2025, Invariant Labs published an easy-to-reproduce example of what they called a “tool poisoning attack” using MCP. Malicious instructions could be embedded inside MCP server documentation, tricking AI into behaving maliciously without users knowing. The user would never see the malicious instruction because the documentation is sent directly to the LLM upon connection.

What followed was a deluge of blog posts with titles like “Everything Wrong with MCP” and “The ‘S’ in MCP Stands for Security.” For security teams across the industry, MCP quickly became something to be concerned about and block.

The primary security objection organisations face is straightforward: “How does MCP handle authorisation and authentication?” In this article, which is part of our comprehensive guide on understanding Model Context Protocol, we’re going to walk through how MCP evolved from vulnerable early implementations to enterprise-grade security. We’ll cover OAuth 2.0 integration, Client ID Metadata Documents, and Registry vetting. By the end you’ll understand what security architecture you need for informed adoption decisions and production deployment.

What is Model Context Protocol and why does it need security architecture?

MCP is an open-source protocol that standardises how AI models interact with external data sources, tools, and services through client-server architecture using JSON-RPC messaging.

Why does it need security architecture? Because MCP servers operate differently from traditional APIs. They bridge AI agents with diverse data sources, including sensitive enterprise resources. A compromise doesn’t just expose data—it enables attackers to manipulate AI behaviour and access connected systems.

Without proper authentication and authorisation, untrusted MCP servers could expose credentials, leak sensitive data, or enable unauthorised actions.

The protocol evolved from early authentication approaches using API keys and basic auth to standardised OAuth 2.0 integration after security vulnerabilities emerged in early deployments. MCP connections are stateful, so they require careful session management to keep unauthorised users out and ensure sensitive data is properly cleaned up.

Transport mechanisms include Stdio transport for local process communication and Streamable HTTP transport supporting bearer tokens, API keys, custom headers, and OAuth for remote servers.

How does OAuth 2.0 work with MCP servers?

To understand how OAuth integrates with MCP, it helps to be familiar with the core protocol concepts and architecture. MCP servers function as OAuth Resource Servers requiring valid access tokens for operations. Authorization Servers issue tokens after authenticating users and validating client requests. The Authorization Code Flow enables users to grant MCP clients delegated access to servers without sharing credentials. Resource Indicators prevent access token reuse across different MCP servers, ensuring tokens are scoped to specific resources.

MCP uses OAuth 2.1 as its default authorisation approach, which lets it leverage existing identity infrastructure. Servers implement OAuth 2.0 Protected Resource Metadata (RFC 9728) to advertise supported authorisation servers.

Here’s how the complete authorisation flow works:

1. The MCP client attempts access without credentials.
2. The server returns HTTP 401 with a metadata URL in the WWW-Authenticate header.
3. The client fetches the Protected Resource Metadata and discovers the authorisation endpoints.
4. The client registers with the authorisation server.
5. The OAuth flow initiates with PKCE and the resource parameter.
6. The user authorises access.
7. The client exchanges the authorisation code for an access token.
8. Subsequent requests include the Bearer token.

Dynamic client registration eliminates manual setup when agents connect dynamically. Resource indicators mandate token binding to specific MCP servers, preventing token reuse attacks.

Multi-tenant scenarios require additional security measures. Least-privilege access principles apply here: every operation must be scoped to the current user to prevent accidental exposure of one tenant's data to another.

What are Client ID Metadata Documents (CIMD) and why do they matter?

CIMD is a simplified client registration mechanism where clients publish metadata documents at public, trusted URLs instead of using dynamic registration. It was introduced in November 2025 to solve Dynamic Client Registration complexity and reduce impersonation risks. CIMD enables URL-based client identity verification without requiring pre-registration with every Authorization Server. This approach reduces attack surface by eliminating the need for OAuth proxies and simplifying client authentication.

The problem CIMD solves is significant. Dynamic Client Registration required AS support for clients to register themselves via public API. Without DCR support, developers needed to build an OAuth proxy manually registered with an AS, mapping its own issued tokens to tokens from downstream AS—a complex, time-consuming, error-prone task.

SEP-991 introduced URL-based client registration. Instead of generating and storing clients dynamically, a client now publishes a metadata document at a public, trusted URL. Clients provide their own client ID that is a URL pointing to a JSON document describing properties of the client.
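The core invariants are easy to illustrate: the client_id must be an HTTPS URL, and the document fetched from that URL must claim the same identity. The sketch below is a simplified check, with hypothetical field names modelled on standard OAuth client metadata; a real Authorization Server would fetch the document over HTTPS from the client_id URL itself.

```python
from urllib.parse import urlparse

def validate_cimd_client(client_id: str, metadata: dict) -> bool:
    """Check basic CIMD invariants: the client_id is an HTTPS URL, and the
    metadata document published at that URL claims the same client_id."""
    parsed = urlparse(client_id)
    if parsed.scheme != "https" or not parsed.netloc:
        return False
    return metadata.get("client_id") == client_id

# Hypothetical metadata document a client might publish at its client_id URL.
doc = {
    "client_id": "https://client.example.com/.well-known/client-metadata.json",
    "client_name": "Example MCP Client",
    "redirect_uris": ["https://client.example.com/callback"],
}
print(validate_cimd_client(doc["client_id"], doc))  # True
```

Because the identity is the URL, an admin reviewing access logs can see at a glance which vendor's client connected, rather than an opaque dynamically generated ID.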

Each client gets a unique, URL-based identity. This makes it easier for admins and servers to understand exactly which client is connecting. Trusted domain names reduce impersonation risks.

When to use CIMD versus traditional pre-registration depends on your deployment scenario. Use CIMD when you need to support multiple authorisation servers without manual registration at each, when client identity verification through trusted URLs is acceptable, and when reducing OAuth proxy complexity is a priority. Use traditional pre-registration when you require tighter control over which clients can access your authorisation server.

What happened in the April 2025 MCP security incident?

The April 2025 incident involved unauthorised access through improperly configured MCP server authentication. The root cause: servers implemented custom authentication instead of standard OAuth, creating credential exposure vulnerabilities. The fallout highlighted the need for standardised security practices and formal OAuth integration in the MCP specification.

On April 1, 2025, Invariant Labs published their easy-to-reproduce example of a tool poisoning attack. On April 10, 2025, HiddenLayer published “MCP: Model Context Pitfalls in an Agentic World”, discussing related security concerns. And on April 30, 2025, further research demonstrated how MCP prompt injection can be used for both attack and defence.

The discussion that followed surfaced other MCP security concerns: tool mimicry, rug pulls, and indirect prompt injection. Articles appeared with titles like “Why MCP’s Disregard for 40 Years of RPC Best Practices Will Burn Enterprises.”

MCP observability was weak to nonexistent at the time, and AI could be tricked into hiding its tracks. Many security teams responded by blocking MCP outright.

But the open, public nature of the standard meant vulnerabilities were out in daylight and could be collaboratively addressed. Every identified flaw and its fix became part of the community’s understanding of how to safely connect AI to tools. The June 2025 spec update added new security best practices.

What security vulnerabilities exist in MCP implementations?

Common vulnerabilities include credential exposure in MCP client configurations, insufficient token validation, cross-server token reuse, and inadequate access controls. Token management risks involve improper storage, missing rotation policies, and inadequate revocation mechanisms. Client-side risks include trusting untrusted servers, insufficient verification of server identity, and exposing sensitive prompts. Server-side risks involve accepting invalid tokens, overly permissive scopes, and inadequate audit logging.

Here’s a sobering fact: Equixly’s security assessment found command injection vulnerabilities in 43% of tested MCP implementations. Another 30% were vulnerable to server-side request forgery attacks, and 22% allowed arbitrary file access.

Credential exposure happens where credentials are stored in config files or environment variables. Token validation failures include common mistakes in JWT verification, missing expiration checks, and inadequate signature validation. Access control failures involve over-permissioned scopes and missing role-based access control.
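The token validation failures above are worth making concrete. The sketch below implements the three checks a resource server must make on a bearer token: signature, expiry, and audience (the resource indicator binding that prevents cross-server token reuse). It is a stdlib-only HS256 illustration, not a production validator; real deployments should use a maintained JWT library, and the shared secret and audience URL here are placeholders.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def sign_hs256(payload: dict, secret: bytes) -> str:
    """Mint a minimal HS256 JWT (illustration only)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def validate_token(token: str, secret: bytes, expected_aud: str) -> dict:
    """Reject tokens with a bad signature, an expired `exp` claim, or an
    `aud` claim bound to a different MCP server (resource indicator check)."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("invalid signature")
    claims = json.loads(b64url_decode(body))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    if claims.get("aud") != expected_aud:
        raise ValueError("token issued for a different resource")
    return claims

secret = b"shared-secret"  # placeholder; never hardcode secrets in practice
token = sign_hs256({"aud": "https://mcp.example.com", "exp": time.time() + 300}, secret)
claims = validate_token(token, secret, "https://mcp.example.com")
```

Skipping any one of these three checks reproduces a failure mode from the list above: no expiry check accepts stale tokens, and no audience check enables cross-server token reuse.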

Token-based authentication uses well-known standards like JWT and OAuth 2.0 but requires proper implementation. Stateful MCP connections require careful session management, and their long-lived nature gives attackers a larger surface than stateless request-response APIs.

What is the MCP Registry and how does it vet servers?

MCP Registry is a centralised or enterprise-managed catalogue of vetted MCP servers with governance controls. The vetting process validates servers meet security standards, implement proper OAuth, maintain documentation, and follow best practices. Enterprise registries enable organisations to maintain approved server lists, enforce security policies, and control server provisioning.

The MCP Registry team established an ecosystem vision helping enterprises adopt their own MCP registries with self-managed governance controls. Over 1,000 community-built servers exist in the growing ecosystem.

Enterprise observability solutions are emerging. New Relic launched a solution to observe MCP communications. MCP Manager introduced a dedicated MCP gateway with enterprise controls: team provisioning, security policies, identity management, audit logging, and server provisioning guardrails.

The concept of an MCP gateway emerged as a solution to observing and governing MCP anywhere. Obot and similar companies focused on policy enforcement, tool filtering, and safe agent action guarantees. ToolHive brought MCP into the cloud native world by managing and securing MCP servers as Kubernetes resources.

Independent projects and open source tools began offering threat models, validation tools, server hardening guides, and security-focused checklists. The conversation around MCP shifted from “How do you connect an agent?” to “How do we operate this responsibly at organisational scale?”

Building an internal registry requires technical architecture for server cataloguing, policy enforcement, and integration with existing governance frameworks.

What does OAuth implementation involve for MCP servers?

OAuth implementation for MCP servers involves several key components. Servers must be configured as OAuth Resource Servers, which requires selecting an Authorization Server and implementing token validation.

Secure MCP server implementations integrate security from the initial design phase. Leveraging OAuth 2.1 and existing identity providers forms the foundation. For practical guidance on implementing OAuth in MCP servers, including code examples and testing strategies, refer to our implementation guide.

Your implementation needs to choose between Authorization Code Flow for user-facing scenarios or Client Credentials Flow for machine-to-machine communication. Common challenges include insufficient validation, accepting expired tokens, and overly permissive scopes.

SEP-1046 added OAuth client credentials support for machine-to-machine authorisation. SEP-990 (Cross App Access) provides enterprise IdP policy controls for MCP OAuth flows enabling users within an enterprise to sign in to an MCP client once and immediately get access to every authorised MCP server without additional authorisation prompts.

URL mode elicitation lets you send users to a proper OAuth flow in their browser where they can authenticate securely. Credentials are directly managed by the server; the client only worries about its own authorisation flow to the server. This enables secure credential collection where API keys and passwords never transit through the MCP client.

What security controls are needed for production MCP deployments?

Production deployments require OAuth-based authentication, network segmentation, audit logging, token management policies, and continuous monitoring. Infrastructure security includes TLS encryption, network isolation, least privilege access, and secrets management. Operational security involves monitoring for anomalous access patterns, automated token rotation, and incident response procedures. For detailed guidance on secure server implementation, including testing strategies and security best practices during development, see our implementation tutorial.

Production implementations require strict data isolation in multi-tenant scenarios, AI gateways to centralise security policies, workload identities for secrets management, comprehensive observability, and least-privilege principles throughout deployment. For comprehensive coverage of production security monitoring and operational security best practices, see our deployment operations guide.

AI gateways sit between clients and servers, similar to API gateways. They handle rate limiting, validate JWT tokens before requests reach servers, add security headers, and transform between protocol versions.

Environment variables in production create security risks: rotation challenges, potential log exposure, and static attack targets. Dedicated secrets management services like Azure Key Vault, AWS Secrets Manager, and HashiCorp Vault provide encrypted storage, access control, and audit trails. Workload identities eliminate the “bootstrap secret” problem—applications receive secure identities from cloud platforms with limited permissions to retrieve secrets at runtime.

Zero trust architecture principles assume no entity inside or outside the network should be trusted by default. The model requires continuous verification of every request, least-privilege access for every identity, and the working assumption that a breach may already have occurred.

Integration with enterprise tools includes New Relic for monitoring, Auth0 or Okta for identity, and SIEM platforms for audit logs.

Incident response requires detection, containment, remediation, and post-incident analysis. Merge MCP server includes enterprise-grade authentication, data encryption, and trusted infrastructure with built-in privacy safeguards. Avoid raw MCP or self-managed setups for sensitive workloads unless you have dedicated security engineering resources.

FAQ

Can I use my existing enterprise identity provider with MCP?

Yes, MCP integrates with enterprise Identity Providers like Azure AD, Okta, and Auth0 through standard OAuth 2.0 federation. Your Authorization Server connects to your IdP, enabling single sign-on across MCP servers and existing enterprise applications. Auth0 published joint work with Cloudflare showing how to secure remote MCP servers with Auth0 as the OAuth provider.

What’s the difference between OAuth 2.0 and OAuth 2.1 for MCP?

OAuth 2.1 consolidates best practices from OAuth 2.0 extensions, mandating PKCE (Proof Key for Code Exchange) and eliminating less secure flows. MCP implementations should follow OAuth 2.1 guidelines for enhanced security, though core OAuth 2.0 with best practices is sufficient for most deployments.

How do I know if an MCP server from GitHub is safe to use?

Evaluate GitHub MCP servers by checking OAuth implementation (reject servers using API keys only), documentation quality, active maintenance, security disclosures, community adoption, and whether it appears in vetted registries. Consider running internal security audits before production use. Equixly’s security assessment found command injection vulnerabilities in 43% of tested MCP implementations, so verification is important.

Do I really need OAuth for my MCP implementation?

OAuth is strongly recommended for production deployments accessing sensitive resources or operating in enterprise environments. API keys may suffice for local development or internal tools with low security requirements, but OAuth provides authorisation controls, token scoping, and audit capabilities needed for production.

How long does it take to implement proper MCP security?

Implementation timeline varies: basic OAuth integration takes 1-2 weeks, enterprise deployment with full security controls takes 4-8 weeks, including Authorization Server setup, registry configuration, monitoring integration, and security testing. Factor in additional time for compliance validation.

What tools are available for monitoring MCP server security?

Enterprise monitoring tools include New Relic for observability, MCP Manager for gateway security, ToolHive for Kubernetes-based deployments, and standard SIEM platforms for audit logging. Open-source options include custom Prometheus exporters and ELK stack integrations.

Can MCP work with SOC 2 compliance requirements?

Yes, MCP can meet SOC 2 requirements through proper implementation: OAuth-based authentication, comprehensive audit logging, encrypted communications using TLS, access controls, token management policies, and regular security reviews. Document your MCP security architecture in SOC 2 audit materials.

What’s the minimum security setup for MCP in production?

Minimum production security requires OAuth 2.0 authentication, TLS encryption for all communications, token validation on MCP servers, audit logging of access events, secrets management with no hardcoded credentials, and network isolation for sensitive servers.

How does Cross App Access extension improve security?

Cross App Access (SEP-990) enables single sign-on across multiple MCP servers within an enterprise, reducing authentication friction while maintaining security. Users authenticate once with their IdP, and the Authorization Server issues tokens valid across approved servers, improving usability without compromising security.

What are the security implications of local vs remote MCP servers?

Local servers have a smaller attack surface with no network exposure but may access sensitive local resources like files and databases. Remote servers require robust authentication, TLS encryption, and network security but enable centralised monitoring and access control. Choose based on your threat model and data sensitivity.

How do I migrate from API keys to OAuth for MCP?

Migration strategy: implement OAuth alongside existing API key authentication, test OAuth thoroughly in staging, gradually transition clients to OAuth, deprecate API keys with advance notice, monitor for authentication errors during transition, and document OAuth configuration for teams.

What is the role of PKCE in MCP security?

PKCE (Proof Key for Code Exchange) prevents authorisation code interception attacks in OAuth flows. MCP clients should implement PKCE by generating code verifiers and challenges, ensuring that only the client that initiated authorisation can exchange the code for tokens.
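Generating the verifier/challenge pair is short enough to show in full. This is a minimal sketch of the S256 method from RFC 7636: a random verifier, and a challenge that is the base64url-encoded SHA-256 of it, both without padding.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code verifier and its S256 code challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier), len(challenge))  # 43 43
```

The client sends the challenge in the authorisation request and the verifier in the token exchange; since only the initiating client knows the verifier, an intercepted authorisation code is useless on its own.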
