Every time a new AI application needs to read a file, query a database, or call an internal API, somebody writes another adapter. Different vendors, different shapes, different auth flows. Model Context Protocol (MCP) is the open standard that replaces those one-off adapters with a single interface.
It’s the same move the Language Server Protocol (LSP) made for IDEs and language tooling in 2016 — instead of N×M integrations, you get N + M. MCP plays that role for AI applications and the world outside them.
Courtesy: Anthropic / Wikimedia Commons — Model Context Protocol logo
What MCP actually is
MCP is an open specification — originally introduced by Anthropic in late 2024 and since adopted by major AI vendors — that defines how an AI application discovers and calls capabilities provided by an external server. Those capabilities are exposed as tools (functions the model can call), resources (read-only data the model can fetch), and prompts (reusable templates).
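To make the tool abstraction concrete, here is a sketch of the shape a `tools/list` result takes on the wire, using a hypothetical "ticketing" server (the tool name and schema fields are invented for illustration; the `name`/`description`/`inputSchema` structure follows the spec):

```python
import json

# Sketch: the shape of a tools/list result for a hypothetical
# "ticketing" MCP server. Each tool advertises a name, a description,
# and a JSON Schema describing its input.
tools_list_result = {
    "tools": [
        {
            "name": "create_ticket",  # hypothetical tool
            "description": "Open a ticket in the internal tracker",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "priority": {"type": "string", "enum": ["low", "high"]},
                },
                "required": ["title"],
            },
        }
    ]
}

# The host hands this schema to the model, which can then emit a
# tools/call request whose arguments validate against inputSchema.
print(json.dumps(tools_list_result, indent=2))
```

The schema-first design is the point: the model never sees your implementation, only a typed contract it can call against.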
Three roles do the work:
- MCP host — the AI application the user actually interacts with (Claude Desktop, an IDE assistant, an internal agent runtime).
- MCP client — a component inside the host that maintains a single connection to one MCP server.
- MCP server — a program that exposes a specific set of tools, resources, and prompts. One server per capability domain (filesystem, GitHub, Postgres, internal ticketing).
A host can run many clients, and each client talks to exactly one server. That 1-to-1 client/server pairing is what gives MCP its security and isolation story — a misbehaving server can only see what its own client can see.
Courtesy: Wikimedia Commons — Model Context Protocol component diagram
The two layers
The specification splits MCP into two layers that you can reason about independently.
Data layer. A JSON-RPC 2.0 protocol that defines the message shapes — initialize, tools/list, tools/call, resources/read, and so on. This is the contract the model and the server actually agree on. It is transport-agnostic; the same messages flow regardless of how they’re delivered.
Transport layer. How the bytes move. The two common choices are stdio (the server runs as a subprocess of the host — the right answer for local developer tooling) and streamable HTTP (the server runs as a network service — the right answer for hosted, multi-user, or remote integrations). Streamable HTTP is where authorization, observability, and rate-limiting live.
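The stdio case is small enough to sketch end to end. Here the "server" is a stand-in one-liner that echoes a canned `tools/list` response — a real MCP server binary would go in its place — but the mechanics (spawn a subprocess, exchange newline-delimited JSON-RPC over stdin/stdout) are the actual transport:

```python
import json
import subprocess
import sys

# Stand-in for a real MCP server: reads one JSON-RPC request from stdin,
# writes one canned response to stdout.
STAND_IN_SERVER = (
    "import sys, json\n"
    "req = json.loads(sys.stdin.readline())\n"
    "resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': {'tools': []}}\n"
    "sys.stdout.write(json.dumps(resp) + '\\n')\n"
    "sys.stdout.flush()\n"
)

# The host side: spawn the server as a subprocess (this is the stdio transport).
proc = subprocess.Popen(
    [sys.executable, "-c", STAND_IN_SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
proc.stdin.write(json.dumps(
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
proc.wait()
print(reply["result"])  # {'tools': []}
```

Notice the transport knows nothing about the message contents — swap in streamable HTTP and the frames above are unchanged.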
Keeping the two layers separate is what lets the protocol stay small. The schema doesn’t change when you swap transport; the transport doesn’t change when you add a new tool.
Why platform teams should care
For cloud and platform engineers, MCP is the integration layer for the next five years of AI tooling. A few practical consequences:
- One inventory, many models. If your internal ticketing, observability, runbook search, and IAM are each behind an MCP server, you can swap the AI application — Claude today, an open-weights model tomorrow, a vertical agent the year after — without rewriting integrations.
- Identity is a transport concern. The authorization story lives in streamable HTTP. RFC 8693 token exchange and scope-down patterns belong here, not in the prompt.
- Audit is uniform. Every tool call is a structured JSON-RPC message. That’s the substrate for a provenance ledger — which conversation, which user, which prompt called what — that you can actually query.
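That last point can be sketched directly. Because every tool call arrives as the same structured frame, a provenance record is a small transformation, not a parsing project (the `user`/`conversation` fields are illustrative — they come from your gateway's context, not from MCP):

```python
import json
from datetime import datetime, timezone

def audit_entry(user: str, conversation_id: str, raw_message: str) -> dict:
    """Turn a tools/call JSON-RPC frame into a queryable audit record.
    user and conversation_id are supplied by the surrounding platform;
    they are not part of the MCP message itself."""
    msg = json.loads(raw_message)
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "conversation": conversation_id,
        "tool": msg["params"]["name"],
        "arguments": msg["params"]["arguments"],
        "rpc_id": msg["id"],
    }

entry = audit_entry(
    "alice", "conv-42",
    '{"jsonrpc":"2.0","id":7,"method":"tools/call",'
    '"params":{"name":"create_ticket","arguments":{"title":"Disk full"}}}',
)
print(entry["tool"])  # create_ticket
```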
The shape that emerges in production is rarely "agent talks directly to N servers." It’s "agent talks to a gateway, gateway talks to N servers." The gateway owns identity exchange, schema validation, audit, and policy; the servers stay simple. (That’s a separate post — The MCP gateway pattern — but it’s the natural next step once you’ve got more than two MCP servers in production.)
What MCP is not
- Not a model runtime. MCP doesn’t run the LLM. It only defines how the LLM’s host application reaches the outside world.
- Not a tool-use replacement. Function-calling APIs from individual model vendors still exist. MCP is the standardized layer above them — the host translates between the model’s native tool-use format and the MCP wire format.
- Not a permission system on its own. MCP gives you the structure to enforce policy (one server per domain, structured calls, transport-level auth). It does not enforce it for you. That’s still your platform’s job.
Where to start
If you’ve never touched MCP, the shortest path is to install Claude Desktop, point it at the official filesystem server (stdio transport, two-line config), and watch the model read and write files in a sandboxed directory. Five minutes of doing this is worth an hour of reading the spec.
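That config lives in Claude Desktop’s `claude_desktop_config.json` and looks something like this (shown expanded for readability; the sandbox path is yours to choose):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/sandbox"]
    }
  }
}
```

On restart, the host spawns the server over stdio and the filesystem tools appear in the conversation automatically.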
After that, the spec itself is the source of truth — small, readable, and versioned by date.