The short version
Imagine every AI assistant needed a custom-built adapter for every tool it wanted to use. One for your calendar, one for your database, one for your files. That's how things worked before MCP.
MCP (the Model Context Protocol) is an open protocol, created by Anthropic, that standardises how AI models interact with external systems. One interface, many tools. It's doing for AI what USB did for peripherals. Before USB, every device had its own connector. After USB, one plug worked for everything.
How it works
MCP has three parts:
- The host is the AI application (like Claude Code or an IDE extension) that wants to use tools
- The client is the connector inside the host that speaks the MCP protocol
- The server is a lightweight program that exposes specific capabilities (reading files, querying a database, searching the web)
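The division of labour between the three parts can be sketched as a toy in plain Python. This is an illustration only, not the real MCP SDK: the class names, the `today_events` tool, and its behaviour are all invented for clarity.

```python
# Toy sketch of the three MCP roles. Invented names, not the real SDK.

class Server:
    """Lightweight program exposing specific capabilities as named tools."""
    def __init__(self):
        self._tools = {
            "today_events": lambda user: f"No events for {user} today",
        }

    def call_tool(self, name, arguments):
        # The server does the actual work and returns the result.
        return self._tools[name](**arguments)


class Client:
    """Connector inside the host that speaks the protocol to one server."""
    def __init__(self, server):
        self._server = server

    def call(self, name, arguments):
        return self._server.call_tool(name, arguments)


class Host:
    """The AI application. It only ever talks to its client,
    never to the underlying system directly."""
    def __init__(self, client):
        self.client = client

    def use_tool(self, name, arguments):
        return self.client.call(name, arguments)


host = Host(Client(Server()))
print(host.use_tool("today_events", {"user": "ada"}))
# prints: No events for ada today
```

The point of the indirection is the last class: the host never reaches past its client, so swapping in a different server (a real calendar, a database) changes nothing on the host side.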
When an AI needs to do something (say, check your calendar) it calls a tool through MCP. The server handles the actual work and returns the result. The AI never touches the raw system directly.
Each MCP server defines its tools with clear schemas: what inputs they accept, what they return. The AI reads these schemas and knows how to use the tools. No custom integration code needed.
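Those schemas are JSON Schema documents. A sketch of what a tool declaration roughly looks like, again with a hypothetical `check_calendar` tool, plus a minimal check of the kind a client can run before calling it:

```python
# Roughly what a tool declaration looks like: a name, a description,
# and a JSON Schema for inputs. The specific tool here is hypothetical.
tool = {
    "name": "check_calendar",
    "description": "List events for a given date",
    "inputSchema": {
        "type": "object",
        "properties": {
            "date": {
                "type": "string",
                "description": "ISO date, e.g. 2025-06-01",
            },
        },
        "required": ["date"],
    },
}


def missing_required(schema, arguments):
    """Minimal validation: which required inputs are absent?"""
    return [k for k in schema.get("required", []) if k not in arguments]


print(missing_required(tool["inputSchema"], {}))                      # ['date']
print(missing_required(tool["inputSchema"], {"date": "2025-06-01"}))  # []
```

Because the schema travels with the tool, the model can read it at connection time and construct valid calls on its own, which is what makes the "no custom integration code" claim work.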
Why it matters
MCP means AI tools aren't locked into specific ecosystems. A calendar MCP server works with any MCP-compatible AI, not just one vendor's product. It makes AI assistants genuinely extensible, and it means builders can create one integration that works everywhere.
If you're building with AI, MCP is how your tools will talk to models.