Before the Model Context Protocol arrived, connecting AI agents to your tools meant custom integrations everywhere. Five AI platforms? Ten internal systems? You’re looking at fifty bespoke connectors to build and maintain. Engineers call this the N×M problem: multiplicative scaling at its worst, expensive to build and worse to maintain.
That changed with MCP. Within twelve months of its November 2024 launch, every major platform adopted it: Claude (with 75+ connectors), ChatGPT, Cursor, Gemini, VS Code, Microsoft Copilot. Then, in December 2025, Anthropic donated the protocol to the Linux Foundation’s Agentic AI Foundation, and vendor-neutral governance arrived. AWS, Azure, Google Cloud, and Cloudflare now provide infrastructure for deploying MCP servers at scale.
If you’re evaluating production AI deployments, MCP reduces integration complexity from O(N×M) to O(N+M) while preventing vendor lock-in. This standardisation plays a crucial role in enabling safe production deployment of AI agents, addressing one of the key challenges preventing widespread adoption.
What Is the Model Context Protocol?
MCP is an open standard for connecting AI applications to external tools through a universal adapter pattern. Think USB-C for AI agents—one interface that works everywhere.
The protocol uses JSON-RPC 2.0 over stateful connections, similar to how Microsoft’s Language Server Protocol standardised developer tool integration. Before LSP, every editor needed custom plugins for every language. Sound familiar? That’s exactly what MCP solves for AI agents.
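Concretely, here’s roughly what one of those messages looks like on the wire. The envelope is standard JSON-RPC 2.0; the `get_weather` tool and its arguments are invented for illustration, shown here as Python dicts:

```python
# A hypothetical "tools/call" request an MCP client sends to a server.
# The envelope is plain JSON-RPC 2.0; "get_weather" is an illustrative tool name.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "London"},
    },
}

# The server's response carries the same id, so every call is easy to
# correlate and log within the session.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "14°C, light rain"}],
    },
}
```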
The maths is simple. Without a standard, you need N×M integrations: every agent platform multiplied by every tool. That’s why, before MCP, developers connected LLMs through a patchwork of incompatible APIs and bespoke extensions that broke with each model update. With MCP, you need N+M integrations: one per platform plus one per tool.
There are now more than 10,000 active public MCP servers. Official SDKs exist in eleven programming languages, and the Python and TypeScript packages alone see 97 million monthly downloads. That’s adoption.
MCP provides four core capabilities. Tools are functions the AI executes—database queries, API calls, that sort of thing. Resources are data the agent references—file contents, knowledge bases. Prompts are templated workflows that guide behaviour. Sampling lets servers recursively invoke LLMs, which enables multi-step workflows where one tool’s output becomes input to another agent.
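Here’s what the first three capabilities look like in code. This is a minimal sketch using the official Python SDK’s FastMCP helper; the tool, resource, and prompt bodies are placeholders, not a real integration:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-tools")

# A tool: a function the agent can execute.
@mcp.tool()
def query_orders(customer_id: str) -> str:
    """Look up recent orders for a customer."""
    return f"orders for {customer_id}"  # placeholder: real code would hit a database

# A resource: data the agent can reference, addressed by URI template.
@mcp.resource("docs://runbook/{service}")
def runbook(service: str) -> str:
    """Return the operational runbook for a service."""
    return f"runbook for {service}"  # placeholder content

# A prompt: a templated workflow that guides agent behaviour.
@mcp.prompt()
def incident_review(summary: str) -> str:
    """Template for reviewing a production incident."""
    return f"Review this incident and list likely root causes: {summary}"

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport for local use
```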
The session-based design matters for production. Unlike stateless REST APIs that forget everything between calls, MCP supports complex interactions that can reference previous activity. This enables comprehensive logging—what data was accessed, which tools were called, why the agent made each decision. When something goes wrong, you have an audit trail. And in production, something always goes wrong.
Why Did Anthropic Donate MCP to the Linux Foundation?
Vendor-neutral governance. That’s the short answer.
When a single company controls a protocol, you worry about what happens if they change direction, get acquired, or decide to monetise features you depend on. The Linux Foundation provides proven stewardship for infrastructure like Kubernetes, Node.js, and PyTorch. You know, projects that run the world.
The Agentic AI Foundation was co-founded by Anthropic, OpenAI, and Block in December 2025. Platinum members include AWS, Bloomberg, Cloudflare, Google, and Microsoft. When competitors join the same foundation, it signals something—the protocol matters more than competitive advantage.
Before this, the landscape was fragmented. OpenAI had its function-calling API, ChatGPT plugins required vendor-specific connectors, each platform built proprietary frameworks. Nick Cooper, an OpenAI engineer on the MCP steering committee, put it bluntly: “All the platforms had their own attempts like function calling, plugin APIs, extensions, but they just didn’t get much traction.”
And when you built integrations around a proprietary system and the vendor pivoted? You’re rebuilding from scratch. With Linux Foundation governance, changes go through open review processes where your engineering team can participate. That’s the difference.
Bloomberg’s involvement is telling. As a platinum member, they view MCP as foundational infrastructure for financial services. When financial services companies—where compliance isn’t optional—bet on a standard, that’s validation.
You’re not betting on Anthropic’s roadmap or OpenAI’s priorities. You’re betting on an open standard maintained by a foundation that’s stewarded infrastructure for decades.
Which AI Platforms and Infrastructure Providers Support MCP?
Within twelve months of MCP’s November 2024 launch, every major AI platform integrated MCP clients.
Claude launched with 75+ connectors. OpenAI adopted MCP in March 2025 across ChatGPT. Google followed with Gemini in April 2025. Developer tools joined quickly—Cursor, Replit, Sourcegraph, Zed, Visual Studio Code, Microsoft Copilot, GitHub Copilot. That’s the major platforms sorted.
The infrastructure layer matters too. AWS, Azure, Google Cloud, and Cloudflare provide enterprise deployment support. You can deploy MCP servers on AWS Lambda, Cloudflare Workers, Azure Functions, or Google Cloud Run. Pick your poison.
Enterprise adoption follows a pattern: private MCP registries running curated, vetted servers. Fortune 500 companies maintain internal catalogues of approved integrations that security teams have reviewed. That’s how enterprise adoption works.
This level of support means you’re not locked into a single vendor’s ecosystem. That’s the point.
How Does MCP Standardisation Reduce Security Attack Surface?
Standardised implementations receive centralised security reviews. One well-audited MCP server is more secure than fifty custom integrations built by different developers. It’s basic security hygiene.
The protocol enables standardised security controls: authentication, role-based access control, version pinning, and trust domain isolation. These aren’t optional features—they’re built into how the protocol works.
Private registries are the enterprise deployment pattern. You expose only a curated list of trusted servers. Your security team reviews each integration once, pins the version, and controls updates. Simple.
OAuth 2.0 support was added within weeks, enabling secure remote authentication. Having a familiar security model makes MCP easier to adopt inside existing enterprise authentication stacks. It just fits.
Role-based access control provides granular permissions. Development agents might query databases and trigger builds. Customer service agents access CRM systems but nothing else. RBAC lets you implement least-privilege principles per agent. You know, the way security is supposed to work.
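The protocol doesn’t dictate how you implement RBAC; one common pattern is a policy check in the gateway that sits between agents and your MCP servers. A minimal sketch, with every role and tool name hypothetical:

```python
# Hypothetical per-agent tool allowlists, enforced before a call reaches a server.
AGENT_PERMISSIONS: dict[str, set[str]] = {
    "dev-agent": {"query_database", "trigger_build"},
    "support-agent": {"lookup_crm_record"},
}

def authorise_tool_call(agent_role: str, tool_name: str) -> None:
    """Reject any tool call outside the agent's allowlist (least privilege)."""
    allowed = AGENT_PERMISSIONS.get(agent_role, set())
    if tool_name not in allowed:
        raise PermissionError(f"{agent_role} may not call {tool_name}")

authorise_tool_call("support-agent", "lookup_crm_record")  # passes silently

try:
    authorise_tool_call("support-agent", "trigger_build")
except PermissionError as err:
    print(err)  # support-agent may not call trigger_build
```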
Version pinning prevents automatic updates from introducing security issues. Your security team controls when servers update, tests changes in staging, rolls out updates deliberately. This matters in regulated environments.
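There’s a natural hook for enforcing this: MCP servers report a name and version during session initialisation, so a client or gateway can refuse anything that isn’t on the approved list at the pinned version. A sketch, with the server names and versions invented:

```python
# Hypothetical private-registry allowlist: server name -> pinned version.
PINNED_SERVERS = {"crm-connector": "1.4.2", "ci-connector": "0.9.0"}

def verify_server(server_info: dict) -> None:
    """Refuse to proceed unless the server is approved at its pinned version.

    server_info is the name/version block a server reports during
    MCP session initialisation.
    """
    name, version = server_info["name"], server_info["version"]
    if PINNED_SERVERS.get(name) != version:
        raise RuntimeError(f"{name}@{version} is not an approved, pinned server")

verify_server({"name": "crm-connector", "version": "1.4.2"})  # approved
```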
Now, MCP doesn’t eliminate security risks. The protocol doesn’t solve prompt injection, tool poisoning, or data exfiltration—those are inherent to agentic AI. Best practices still require defence-in-depth: sandboxing, least privilege, user consent, and monitoring. MCP just makes consistent security controls easier to implement. It’s not a silver bullet, but it helps.
What Are the Compliance Benefits of MCP Audit Trails?
The stateful session design enables comprehensive logging. When an agent makes a decision affecting customers, regulators want to know what data was accessed, which tools were called, and why. Fair enough.
MCP’s session-based interactions capture the complete execution history—session initiation, tool calls, data access, reasoning steps, outcomes. This structured audit trail matters for regulated industries where transparency in automated decision-making isn’t optional.
GDPR requires data access logs. Financial services need automated trading oversight. Healthcare has HIPAA audit requirements. All benefit from comprehensive, structured logging.
Debugging production failures becomes tractable with complete audit trails. When an agent malfunctions, you need to understand what went wrong. With MCP’s session history, you can trace the exact sequence of events. No more guessing.
Pre-MCP custom integrations treated logging as an afterthought. Standardised logging through MCP means your audit and debugging tools work consistently across all integrations. Consistency makes life easier.
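The spec gives you the session structure; what you actually write to your logs is up to you. A minimal sketch of one structured audit record per tool call, keyed by session id (the field names are illustrative, not mandated by the protocol):

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("mcp.audit")

SESSION_ID = str(uuid.uuid4())  # one id per MCP session ties events together

def log_tool_call(tool: str, arguments: dict, outcome: str) -> None:
    """Emit one structured audit record per tool call within the session."""
    audit.info(json.dumps({
        "session": SESSION_ID,
        "timestamp": time.time(),
        "event": "tools/call",
        "tool": tool,
        "arguments": arguments,
        "outcome": outcome,
    }))

log_tool_call("lookup_crm_record", {"customer_id": "C-1042"}, "success")
```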
Audit trails are only valuable if you implement proper log retention and analysis. MCP provides the mechanism, but you need the operational discipline to store logs securely, retain them per regulatory requirements, and review them when issues arise. The tool doesn’t replace the process.
MCP vs Custom APIs – When Should I Standardise?
Choose MCP when building agents that should work across multiple AI platforms (Claude, ChatGPT, Cursor), because it eliminates platform-specific connector development. Choose custom APIs only when you have unique integration requirements or when ultra-low latency is critical.
The adoption threshold is roughly three platforms connecting to five tools. Above that, MCP’s return on investment becomes obvious: eight standard connectors instead of fifteen bespoke integrations that break whenever a platform updates. Do the maths.
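Doing that maths literally, in two throwaway functions:

```python
def custom_integrations(platforms: int, tools: int) -> int:
    return platforms * tools  # one bespoke connector per platform-tool pair

def mcp_integrations(platforms: int, tools: int) -> int:
    return platforms + tools  # one client per platform, one server per tool

# At the rough adoption threshold from the text:
print(custom_integrations(3, 5))  # 15 bespoke connectors to maintain
print(mcp_integrations(3, 5))     # 8 standard pieces to maintain
```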
Before MCP, connecting a model to the web, a database, a ticketing system, or a CI pipeline required bespoke code that often broke with the next model update. With MCP, platform updates don’t break your integrations: the protocol remains stable while platforms evolve independently. That’s the benefit of standardisation.
MCP prevents vendor lock-in, enabling platform switching without re-engineering integrations. If pricing changes or a better model becomes available, migration doesn’t require rebuilding every tool connector. You just switch.
You don’t need to go all-in immediately. Hybrid approaches work—new integrations use MCP while existing ones remain custom. MCP SDKs make it straightforward to wrap existing APIs, providing incremental migration without big-bang rewrites. Sensible.
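The wrapping itself tends to be thin. A sketch using the Python SDK’s FastMCP helper around a hypothetical internal ticketing endpoint (the URL and tool name are stand-ins):

```python
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("legacy-api-wrapper")

# Hypothetical internal REST endpoint being wrapped rather than rewritten.
TICKETS_URL = "https://internal.example.com/api/tickets"

@mcp.tool()
def get_ticket(ticket_id: str) -> str:
    """Fetch a ticket from the existing REST API and return it verbatim."""
    response = httpx.get(f"{TICKETS_URL}/{ticket_id}", timeout=10.0)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    mcp.run()
```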
AWS, Bloomberg, and Microsoft all chose MCP despite having the capability to build proprietary solutions. When companies that can afford custom infrastructure standardise anyway, it tells you something about the interoperability benefits.
How Does Linux Foundation Governance Compare to CNCF or Vendor Control?
Linux Foundation governance provides neutral stewardship ensuring no single vendor controls protocol evolution. That’s the core value.
The AAIF uses the same governance model as CNCF projects like Kubernetes. Technical Steering Committees and transparent roadmap processes give you confidence the protocol evolves to meet real needs rather than one vendor’s goals. It’s democratic, in a technical sense.
Before standardisation, each platform had proprietary approaches—OpenAI’s function calling, ChatGPT plugins, custom frameworks. These required platform-specific implementations. When the vendor changed direction, your integrations broke. Not ideal.
Kubernetes thrived under CNCF neutrality while vendor-specific container orchestration systems failed. MCP is on the same path as Kubernetes, SPDX, GraphQL, and the CNCF stack—infrastructure maintained in the open.
Platinum member diversity demonstrates credibility. AWS and Google are co-members despite competitive tensions. Anthropic and OpenAI collaborate despite competing on AI models. Nick Cooper: “I don’t meet with Anthropic, I meet with David. And I don’t meet with Google, I meet with Che. The work was never about corporate boundaries. It was about the protocol.” That’s how standards work when they work properly.
Neutral governance reduces “what if the vendor abandons the project” concerns. When you standardise on MCP, you’re not betting on any single company’s continued investment—you’re betting on an industry consortium maintaining infrastructure they all depend on. Much safer bet.
What Does MCP Adoption Mean for Avoiding Vendor Lock-In?
MCP servers work identically across Claude, ChatGPT, Cursor, and Gemini. Switching AI platforms requires zero integration re-engineering—only LLM-specific prompt tuning.
Before MCP, migrating from Claude to ChatGPT meant rebuilding all tool connectors using OpenAI’s function calling API. Different platforms, different approaches, completely incompatible. The switching cost made vendor lock-in real.
With MCP, a single connector works everywhere. This changes contract negotiations. When platforms know switching costs are low, you have leverage. Basic economics.
Multi-agent orchestration becomes practical. You can run different agents on different platforms sharing common MCP server infrastructure. Development team uses Cursor, customer service runs ChatGPT, data analysis uses Claude—all connecting to the same internal tools. No need to standardise on one platform when the integration layer is already standardised.
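Each platform ships its own MCP client, but the shape is always the same. Here’s the official Python SDK client talking to a shared connector over stdio; the `crm_server.py` script and the tool name are stand-ins for your own server:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# The same shared connector, regardless of which agent platform sits on top.
server = StdioServerParameters(command="python", args=["crm_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # discover what's on offer
            result = await session.call_tool(
                "lookup_crm_record", {"customer_id": "C-1042"}
            )
            print(result.content)

asyncio.run(main())
```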
Infrastructure portability compounds the benefit. MCP servers deploy identically to AWS, Azure, Google Cloud, or Cloudflare. You’re not locked into cloud vendor-specific services either.
The real-world scenario: your company uses Claude for coding assistance, ChatGPT for customer service, and Cursor for IDE integration. All three share an MCP-based CRM connector. When one platform raises prices or a better model launches elsewhere, you evaluate based on model quality and cost—not on migration work. That’s freedom.
Block’s goose framework demonstrates local-first deployment. It’s an open-source agent framework combining language models with MCP-based integration. You can run entirely local deployments using open models while maintaining the same tool integrations you’d use with cloud-based Claude. Options matter.
MCP solves integration lock-in, not model lock-in. Different models require different prompt engineering. You still evaluate each platform on its merits. But at least you’re evaluating on actual differentiators—model quality, pricing, latency—rather than migration cost.
Strategic flexibility matters as AI evolves quickly. Betting on one vendor is risky when the tech landscape changes this fast. MCP lets you hedge—adopt multiple platforms where they’re strongest, switch when better options emerge, negotiate from a position where alternatives exist. Combined with the right approach to production AI agent deployment, this standardisation provides the foundation for safely running AI agents at scale. Smart business.
FAQ Section
How does MCP relate to sandboxing AI agents?
MCP standardises the interface between agents and tools but doesn’t provide sandboxing. You deploy MCP servers within sandboxed environments—containers, VMs, Firecracker microVMs—to isolate tool execution. The standardisation enables consistent security policies across sandbox implementations, which is part of addressing the broader AI agent sandboxing challenge.
What is the relationship between MCP and AGENTS.md?
AGENTS.md, contributed by OpenAI to AAIF, provides project-specific guidance for agents through Markdown conventions. Over 60,000 open source projects have adopted it. MCP and AGENTS.md are complementary—AGENTS.md tells agents what to do, MCP gives agents standardised ways to do it.
Can I use MCP with locally-run open-source models?
Yes. MCP is model-agnostic. Block’s goose framework demonstrates local-first agent deployment using open models with MCP-based integrations. The protocol works identically for cloud-based Claude or locally-run Llama models.
How does MCP handle authentication for remote servers?
MCP supports OAuth 2.0 for remote server authentication, enabling enterprise-grade security for cloud-deployed MCP servers. Local MCP servers typically don’t require authentication. The protocol is transport-agnostic, allowing custom authentication schemes when needed.
What happens if an MCP server becomes unavailable during agent execution?
MCP clients handle server unavailability through standard error handling. Agents receive error responses and can retry, fall back to alternative tools, or escalate to human operators. The stateful session design enables graceful degradation—partial progress is preserved even if specific tools fail.
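The retry policy itself lives in your client code, not the protocol. A generic sketch of the pattern:

```python
import time

def call_with_retry(call, attempts: int = 3, backoff: float = 1.0):
    """Retry a failing tool call with exponential backoff, then escalate."""
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # escalate: fall back to another tool or a human operator
            time.sleep(backoff * 2 ** attempt)

# Usage: wrap any tool invocation, e.g.
# call_with_retry(lambda: session.call_tool("get_ticket", {"ticket_id": "T-7"}))
```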
Does MCP introduce latency compared to direct API calls?
MCP adds minimal overhead—the JSON-RPC transport is lightweight. Remote MCP servers introduce latency comparable to any remote API call. For most enterprise use cases, this is acceptable. For ultra-low-latency requirements like high-frequency trading, direct API integration might still be preferable.
How do I discover available MCP servers for my use case?
The MCP Registry provides searchable discovery of public MCP servers. Enterprises run private registries with curated, vetted servers. Claude’s 75+ built-in connectors demonstrate common patterns. SDK documentation includes examples for building custom servers.
Can MCP servers call other MCP servers?
Yes, through the sampling capability—MCP servers can recursively invoke LLMs, which can call other MCP servers. This enables complex multi-step workflows. Design such chains carefully to prevent infinite loops or runaway resource consumption.
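At the protocol level, sampling is just another JSON-RPC message, this time flowing from server back to client. A sketch of the request shape (the message text is invented; real chains should also carry an explicit depth limit):

```python
# Hypothetical "sampling/createMessage" request a server sends to the client,
# asking the client's LLM to process an intermediate result.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [{
            "role": "user",
            "content": {"type": "text", "text": "Summarise these query results..."},
        }],
        "maxTokens": 200,
    },
}
```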
What’s the difference between MCP tools, resources, and prompts?
Tools are functions the agent executes—database queries, API calls. Resources are data the agent references—file contents, knowledge bases. Prompts are templated workflows that guide agent behaviour—code review procedures, analysis frameworks. All three use the same protocol with consistent authentication and access control.
How does MCP compare to Google’s A2A standard?
A2A (Agent-to-Agent) focuses on agent coordination and communication, while MCP focuses on agent-to-tool integration. They’re complementary rather than competitive. An agent might use MCP to access tools while using A2A to coordinate with other agents.
Do I need to rewrite existing custom integrations to use MCP?
Not immediately. Organisations adopt hybrid approaches—new integrations use MCP, existing ones remain custom during transition. MCP SDKs make it straightforward to wrap existing APIs in MCP servers, providing incremental migration paths.
What security vulnerabilities does MCP introduce?
MCP doesn’t eliminate prompt injection, tool poisoning, or data exfiltration risks inherent to agentic AI. Standardisation enables consistent security controls (RBAC, audit trails, version pinning), but you still need defence-in-depth: sandboxing, least privilege, user consent, and monitoring. The protocol makes those controls easier to implement consistently; it isn’t a security silver bullet.