MCP momentum: every major vendor has shipped one
Six months ago MCP was Anthropic's protocol that nobody had implemented. Now it's a category every major vendor ships against.
The Model Context Protocol turned eighteen months old last week. Six months ago I was walking through what it took to build a server for it and the answer included "wait for the tooling to catch up." That wait is over. As of mid-April: Anthropic ships SDKs and reference servers, OpenAI shipped MCP support in the Responses API, Microsoft baked it into Copilot Studio, Google added it to Gemini's tool-use surface, every major IDE supports it, and the tooling registry has grown past a thousand community servers.
The momentum is real and the adoption is durable. The thing nobody is asking is what that does to the protocol itself.
What "every major vendor has shipped one" actually means
The current state, to be precise:
- Anthropic. The original sponsor. SDKs in TypeScript, Python, and a few others; reference servers for filesystem, git, slack, postgres, sentry, and a long list of others. Claude Desktop has been a primary client surface since launch.
- OpenAI. Added MCP support to the Responses API in mid-March. The implementation is mostly compatible with the spec, with a few opinionated wrappers. Notable that they shipped support at all, given the protocol was a competitor's release.
- Google. Gemini's tool-use surface added MCP compatibility in early April. The integration is described as "support" rather than "first-class," which in practice means the rough edges show up faster than they do on Anthropic's path.
- Microsoft. Copilot Studio added MCP as a connection type for custom tool surfaces. Microsoft's messaging frames it as an "open standard," which is technically true but also strategically convenient when the alternative was building their own.
- The IDE tier. Cursor, Zed, Continue, Aider, Windsurf, JetBrains AI Assistant. All ship MCP client support. The user experience varies; the protocol underneath is the same.
- The platform tier. VSCode (via the official Anthropic extension and several third-party ones), GitHub Copilot Chat, Replit Agent. Adoption is uneven but real.
That's the state of "vendor support." The community-server side is where the volume is. The MCP server registry has grown from roughly 50 community servers at the start of the year to over a thousand by mid-April. Most of them are wrappers around an existing API. A meaningful minority of them are useful. The signal-to-noise ratio is what you'd expect from any protocol that hits the cool-tech-Twitter inflection point.
What the breadth of adoption is doing to the protocol
A protocol that's adopted by everyone develops the problems that come with adoption-by-everyone. A few are visible already:
Vendor-specific extensions are emerging. OpenAI's MCP wrapping has slightly different semantics around tool-call results than Anthropic's reference. Google's tool-use surface translates MCP capabilities into Gemini-shaped function calls in a way that loses some of the original metadata. Microsoft's Copilot Studio binding adds an authentication layer that the spec doesn't define. None of these are protocol breaks (your MCP server still works against all of them), but the behavior differs in ways that show up at the integration level.
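The divergence shows up concretely when you consume tool-call results from more than one client surface. The payload shapes below are illustrative stand-ins for this kind of drift, not any vendor's documented wire format (shape A resembles the MCP spec's `content`/`isError` result; shape B is a hypothetical flat wrapper); the point is the pattern: a thin adapter that maps each client's result shape onto one internal type.

```python
from typing import Any

def normalize_tool_result(payload: dict[str, Any]) -> dict[str, Any]:
    """Map client-specific tool-result shapes onto one internal form.

    The input shapes are hypothetical examples of the drift described
    above, not any vendor's documented format.
    """
    # Shape A: MCP-style {"content": [{"type": "text", "text": ...}], "isError": bool}
    if "content" in payload:
        return {
            "text": "".join(
                part.get("text", "")
                for part in payload["content"]
                if part.get("type") == "text"
            ),
            "is_error": bool(payload.get("isError", False)),
        }
    # Shape B: hypothetical flat {"output": ..., "error": ...} wrapper
    if "output" in payload or "error" in payload:
        return {
            "text": payload.get("output") or payload.get("error") or "",
            "is_error": payload.get("error") is not None,
        }
    raise ValueError(f"unrecognized tool-result shape: {sorted(payload)}")
```

Pushing the variance into one function like this means the rest of your integration code sees exactly one result type, and a new client quirk is a one-branch change.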
The spec is moving and the SDKs are catching up unevenly. Late-March additions to the spec around resource subscriptions and async tool calls aren't yet implemented in every SDK. If you build against the spec, you have to track which client surfaces support which version of which capability. That's the normal state of an evolving protocol; it's also a tax that the early adopters didn't pay because the spec was small.
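One way to keep that tracking honest is to gate features on the spec revision the client declares during the initialize handshake. MCP identifies spec revisions with date strings (e.g. "2024-11-05"), which makes lexical comparison work as chronological comparison; the feature-to-revision mapping below is illustrative, not copied from the spec changelog.

```python
# Spec revisions are date strings, so string comparison orders them
# chronologically. The revision each feature landed in is illustrative --
# check the spec changelog for the real mapping.
FEATURE_MIN_REVISION = {
    "tools/call": "2024-11-05",           # baseline for this sketch
    "resources/subscribe": "2025-03-26",  # illustrative placement
}

def client_supports(feature: str, client_revision: str) -> bool:
    """True if a client declaring `client_revision` should have `feature`."""
    required = FEATURE_MIN_REVISION.get(feature)
    if required is None:
        return False  # unknown feature: fail closed
    return client_revision >= required
```

Failing closed on unknown features is the conservative choice here: a server that quietly assumes support is the one that breaks on the older client surfaces your users actually run.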
The auth story is getting fragmented. MCP punts on authentication, which was reasonable when servers were single-user local processes. The remote-MCP pattern that's emerging (servers running on a host you don't control, accessed across the network) exposes the gap. Each vendor is adding its own auth approach. Some are OAuth-flavored, some are API-key-flavored, some are session-token-flavored. None are interoperable. The protocol-level fix is being discussed; the vendor-level workarounds are shipping faster than the discussion.
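Until the spec defines auth, a server aimed at remote clients can at least isolate the fragmentation behind one verification seam. The header names and credential set below are assumptions for illustration, not any vendor's convention; the point is keeping the per-vendor variance in a single function.

```python
import hmac

# Hypothetical accepted credentials -- in practice these come from your
# secret store, not a module-level constant.
VALID_KEYS = {"demo-key-123"}

def authenticate(headers: dict[str, str]) -> bool:
    """Accept either a bearer token or an API-key header.

    Header names are illustrative. Each client surface currently picks
    its own auth flavor, so this seam is where a server absorbs that.
    """
    auth = headers.get("authorization", "")
    if auth.startswith("Bearer "):
        token = auth[len("Bearer "):]
        # compare_digest avoids leaking key length/content via timing
        return any(hmac.compare_digest(token, k) for k in VALID_KEYS)
    api_key = headers.get("x-api-key", "")
    if api_key:
        return any(hmac.compare_digest(api_key, k) for k in VALID_KEYS)
    return False
```

When the protocol-level fix does land, a server built this way swaps one function; a server with auth checks scattered through its handlers rewrites all of them.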
Server quality is a real problem. A thousand community servers is impressive volume; it's also a thousand surfaces with varying degrees of input validation, error handling, and security thinking. The early-adopter experience was "the few servers that exist were built by people who cared." The current experience is "you have to evaluate before you trust." The registries don't yet have a quality signal that maps well onto whether you should connect a server to a model that can take destructive actions.
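The evaluation burden is easier to reason about against a concrete failure mode. A sketch of the minimum bar for a filesystem-flavored tool, with a hypothetical root directory: validate argument types and confine paths before the handler ever runs, which is exactly the check a surprising number of community servers skip.

```python
from pathlib import Path

ROOT = Path("/srv/mcp-data")  # hypothetical directory the tool may touch

def safe_read_args(args: dict) -> Path:
    """Validate tool arguments before the handler runs.

    Rejects missing or ill-typed fields and path traversal out of ROOT.
    """
    rel = args.get("path")
    if not isinstance(rel, str) or not rel:
        raise ValueError("'path' must be a non-empty string")
    resolved = (ROOT / rel).resolve()
    root = ROOT.resolve()
    # The resolved path must be ROOT itself or live underneath it.
    if resolved != root and root not in resolved.parents:
        raise ValueError("path escapes the allowed root")
    return resolved
```

A registry quality signal could be as simple as "does every tool argument pass through something like this before touching the handler" -- that one property separates most of the trustworthy servers from the rest.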
What this is good for and what it isn't
The "MCP everywhere" outcome is genuinely positive for the integration story. Six months ago the question was "will a non-Anthropic client surface ever support this." That question is settled. Building an MCP server now is a one-time investment that pays off across the major AI client surfaces, not a bet on a single vendor's adoption.
What it doesn't fix:
- The agent's tool-selection accuracy still degrades as the menu grows. Connecting twenty MCP servers to a single client doesn't make the agent better; it makes the tool descriptions compete for the model's attention, and the wrong tool gets picked more often. The right pattern is still a small number of well-described servers per agent, and that's a discipline the tooling-volume conversation tends to elide.
- Server-side observability is mostly DIY. When an MCP-driven workflow fails, the debugging story involves stitching together logs across the client surface, the protocol layer, and the server's own logs. There's no standard for tracing a request across that boundary. Each vendor has its own opinion. The lack of a standard is a real cost when you're trying to debug what an agent actually did.
- The "remote MCP" path is half-baked. Servers running on a host you don't control are an emerging pattern. The auth, the rate limiting, the trust model, the multi-tenancy story: all of these are being figured out in real-world deployments rather than in spec discussion. The first major remote-MCP security incident is probably the event that forces the spec to catch up.
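The observability gap in the list above has a cheap partial fix on the server side: make the server's own record self-sufficient, one structured log line per tool call, with an ID you can grep for when reconciling against whatever the client surface logged. A minimal stdlib sketch; the field names are my own convention, not any standard's.

```python
import json
import sys
import time
import uuid
from typing import Any, Callable

def traced(tool_name: str, handler: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a tool handler so every call emits one JSON log line to stderr."""
    def wrapper(**args: Any) -> Any:
        record: dict[str, Any] = {
            "call_id": str(uuid.uuid4()),  # grep for this across log sources
            "tool": tool_name,
            "args": args,
        }
        start = time.monotonic()
        try:
            result = handler(**args)
            record["status"] = "ok"
            return result
        except Exception as exc:
            record["status"] = "error"
            record["error"] = repr(exc)
            raise
        finally:
            record["duration_ms"] = round((time.monotonic() - start) * 1000, 2)
            print(json.dumps(record), file=sys.stderr)
    return wrapper
```

Wrapping at registration time means every tool gets the same record shape without touching handler code, and "what did the agent actually do" becomes a query instead of an archaeology project.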
Where I'd build now
If you're considering wiring something up against MCP today, the shape that's working in practice:
- Build the server, not the client. Client surfaces are commoditized; servers are where your specific integration value lives.
- Pin to a stable spec version. Don't chase the latest spec features unless you specifically need them; trailing-edge spec compatibility is what your users will actually have available.
- Treat auth as your problem from day one. Even if your server starts as local-only, design it as if it'll be reached from a remote client surface eventually. The retrofit is harder than the up-front design.
- Minimize tool surface area. One excellent tool beats five mediocre ones. The model picks better and the maintenance burden is smaller.
- Wire in observability that doesn't depend on the client. Log what your server does, with structured data, in a format you can query. Don't rely on the AI client surface to tell you what happened.
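Taken together, those recommendations produce a fairly small server core. A toy sketch of the shape, a JSON-RPC dispatcher with one well-described tool, a pinned revision, and room for the auth and logging hooks to wrap it. The method names follow MCP's `initialize`, `tools/list`, and `tools/call`, but this is a sketch under those assumptions, not a spec-complete transport; in practice you'd build on an official SDK.

```python
from typing import Any

PINNED_REVISION = "2024-11-05"  # pin deliberately; upgrade deliberately

# One small, well-described tool beats five mediocre ones.
TOOLS: dict[str, dict[str, Any]] = {
    "echo": {
        "description": "Return the input text unchanged.",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    },
}

def handle(request: dict[str, Any]) -> dict[str, Any]:
    """Dispatch one JSON-RPC request. Auth and logging hooks wrap this."""
    method, rid = request.get("method"), request.get("id")
    if method == "initialize":
        result: dict[str, Any] = {
            "protocolVersion": PINNED_REVISION,
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "sketch-server", "version": "0.1"},
        }
    elif method == "tools/list":
        result = {"tools": [{"name": n, **spec} for n, spec in TOOLS.items()]}
    elif method == "tools/call" and request["params"]["name"] == "echo":
        text = request["params"]["arguments"]["text"]
        result = {"content": [{"type": "text", "text": text}], "isError": False}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": f"unknown method {method!r}"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}
```

The dispatcher stays pure (dict in, dict out), which is what makes it testable without a client surface attached -- the transport, auth seam, and structured logging layer on around it.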
The protocol's adoption story is a clear win. The protocol's coherence story is the next year's work. Worth being clear-eyed about both as the tooling keeps growing and the rough edges get more visible.