April 12, 2026 AntHill Team 4 min read

How MCPs Work: A Practical Guide to Model Context Protocol

Understanding what Model Context Protocol is, why it matters for AI agents, and how it enables tool use in large language models.

AI · MCP · Agents · LLM

Introduction

Large language models are remarkably capable, but they live in a box. They can reason, summarize, and generate code, yet they can’t check your calendar, query a database, or read a file on your machine. Every integration requires custom glue code, and none of it is portable.

Model Context Protocol (MCP) was created to solve this. It’s an open standard that gives AI models a universal way to connect to external tools and data sources. Think of it as USB-C for AI: one protocol, any capability.

How the Protocol Works

MCP uses a client-server architecture. An MCP host (like Claude Desktop, an IDE plugin, or your own application) runs an MCP client that connects to one or more MCP servers. Each server exposes a set of capabilities that the model can discover and use at runtime.

The communication happens over a transport layer. For local integrations, MCP uses stdio: the server runs as a subprocess and communicates via standard input/output. For remote integrations, it uses HTTP; earlier revisions of the spec paired HTTP with Server-Sent Events (SSE) to stream responses back to the client, and newer revisions define a Streamable HTTP transport that serves the same purpose.

This is intentionally simple. The host doesn’t need to know what capabilities exist ahead of time. It discovers them by asking the server, “What can you do?” The server responds with a structured list of tools, resources, and prompts.
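Concretely, that discovery step is a JSON-RPC exchange. Here is a sketch of a `tools/list` request and a plausible response, with Python dicts standing in for the wire messages (the specific tool shown is illustrative):

```python
import json

# The client asks the server what tools it offers (JSON-RPC 2.0).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server might answer with a structured tool list like this.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Get current weather for a city.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# The host now knows, at runtime, which tools this server provides.
tool_names = [t["name"] for t in list_response["result"]["tools"]]
print(json.dumps(tool_names))  # -> ["get_weather"]
```

The same pattern repeats for resources and prompts: the host asks, the server answers with a structured list, and nothing needs to be hardcoded ahead of time.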

Key Concepts

MCP defines three core primitives that servers can expose.

Tools

Tools are functions the model can call. A tool has a name, a description, and an input schema (defined in JSON Schema). When the model decides it needs to use a tool, the host sends the request to the appropriate server, which executes it and returns the result.

Examples: running a database query, calling a REST API, searching a codebase, sending a Slack message.
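On the wire, an invocation is a `tools/call` request carrying the tool's name and arguments matching its input schema; the result comes back as a list of content blocks. A sketch, with an illustrative tool name and query:

```python
# JSON-RPC request the host sends when the model picks a tool.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "run_query",  # illustrative tool name
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# The server executes the tool and returns content blocks.
call_result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [{"type": "text", "text": "1042"}],
        "isError": False,
    },
}

# The host hands the text content back to the model as the tool output.
tool_output = call_result["result"]["content"][0]["text"]
```

Note the `isError` flag: tool failures are reported inside the result so the model can see them and recover, rather than surfacing as protocol-level errors.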

Resources

Resources represent data the model can read. Unlike tools, which perform actions, resources are passive. They expose files, database records, API responses, or any other structured data that the model might need for context.

A resource has a URI (like file:///path/to/doc.md or jira://PROJ-123) and returns content in a standardized format.
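Reading a resource follows the same request/response pattern, keyed by URI. A sketch of a `resources/read` exchange (the file contents are illustrative):

```python
# The host requests a resource by its URI.
read_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/read",
    "params": {"uri": "file:///path/to/doc.md"},
}

# The server returns the content along with its URI and MIME type.
read_result = {
    "jsonrpc": "2.0",
    "id": 3,
    "result": {
        "contents": [
            {
                "uri": "file:///path/to/doc.md",
                "mimeType": "text/markdown",
                "text": "# Project notes\n...",
            }
        ]
    },
}
```

Because the result is self-describing (URI plus MIME type), the host can decide how to present it to the model without knowing anything about the server's internals.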

Prompts

Prompts are reusable templates that servers can expose. They let server authors package common workflows into discoverable, parameterized prompts. For example, a code review server might expose a “review-pull-request” prompt that accepts a PR number and generates a structured review.
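Fetching a prompt works via `prompts/get`: the client supplies the prompt name and arguments, and the server expands the template into ready-to-use chat messages. A sketch using the review example above (the PR number and expanded text are illustrative):

```python
# The client requests a parameterized prompt by name.
prompt_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "prompts/get",
    "params": {
        "name": "review-pull-request",
        "arguments": {"pr_number": "481"},  # illustrative argument
    },
}

# The server expands the template into chat messages.
prompt_result = {
    "jsonrpc": "2.0",
    "id": 4,
    "result": {
        "description": "Structured review of a pull request",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Review PR #481. Cover correctness, tests, and style.",
                },
            }
        ],
    },
}
```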

A Real-World Example

Suppose a developer is using an AI assistant in their IDE and asks: “What’s the status of ticket PROJ-123?”

Without MCP, you’d need a custom plugin that knows how to talk to Jira, parse the response, and format it for the model. That plugin only works with one AI tool.

With MCP, here’s what happens:

  1. The host sees the model wants to fetch a Jira ticket.
  2. It routes the request to a Jira MCP server that’s already running.
  3. The server authenticates with Jira (using credentials configured once), fetches ticket PROJ-123, and returns structured data: status, assignee, description, comments.
  4. The model receives the data and responds: “PROJ-123 is In Progress, assigned to Sarah. Last update was two hours ago — she’s waiting on a design review.”

The Jira MCP server is written once and works with Claude Desktop, VS Code, Cursor, or any other MCP-compatible host. No custom integration code per client.
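The routing in step 2 is simple to picture: during discovery, the host builds a registry mapping tool names to the servers that advertised them, then dispatches each call through that table. A minimal sketch (the server and tool names are hypothetical):

```python
# Registry built during discovery: tool name -> server that advertised it.
tool_registry = {
    "get_ticket": "jira-server",
    "send_message": "slack-server",
}

def route_tool_call(tool_name: str) -> str:
    """Return the server a tool call should be dispatched to."""
    server = tool_registry.get(tool_name)
    if server is None:
        raise KeyError(f"No MCP server advertises tool {tool_name!r}")
    return server

print(route_tool_call("get_ticket"))  # -> jira-server
```

Real hosts also handle name collisions and permissions here, but the core dispatch logic is just this lookup.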

Why This Matters for Enterprise AI

MCP isn’t just a developer convenience. It has real implications for how organizations adopt AI.

Interoperability. MCP creates a standard interface. AI tools become interchangeable — you’re not locked into one vendor’s plugin ecosystem. Switch from one AI provider to another, and your MCP servers still work.

Reduced integration cost. Enterprises have dozens of internal systems: ticketing, CRM, data warehouses, monitoring dashboards. Instead of building custom integrations for each AI tool, you build one MCP server per system. Every MCP-compatible client can use it immediately.

Security at the protocol level. MCP supports capability-based permissions. Servers declare what they can do, and hosts can enforce policies about what the model is allowed to access. This gives IT teams a single control point for AI-to-system access.

Ecosystem growth. The network effect is powerful. Every new MCP server benefits every MCP client, and vice versa. The community is already building servers for GitHub, Postgres, Slack, Google Drive, and hundreds of other services.

Getting Started

Building an MCP server is surprisingly straightforward. Here’s a minimal example in Python that exposes a single tool:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Demo Server")

@mcp.tool()
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    # In production, call a real weather API
    return f"Weather in {city}: 72°F, sunny"

if __name__ == "__main__":
    mcp.run(transport="stdio")

That’s it. Install the mcp package, define your tools as decorated functions, and run. The SDK handles protocol negotiation, schema generation, and transport.

TypeScript has an equivalent SDK with the same developer experience. Both are open source and actively maintained.

To connect this server to a host like Claude Desktop, you add a few lines to your config file pointing to the server script. The model can then discover and call get_weather on its own.
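For Claude Desktop, that config entry generally takes a shape like the following, where the server name and script path are placeholders for your own:

```json
{
  "mcpServers": {
    "demo": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
```

The host launches the command as a subprocess, speaks MCP over stdio, and runs discovery automatically.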

What’s Next

MCP is still evolving. The specification is open, and the community is actively contributing new servers, clients, and extensions. If you’re building AI-powered tools or integrating AI into enterprise workflows, MCP is the standard to build on.

The gap between what AI models can reason about and what they can actually do is closing fast. MCP is a big part of why.
