
Building agent skills with MCP

The Model Context Protocol is how tools talk to AI models. Here's what it is, why it matters, and how to build your first MCP server.


Every AI platform has its own way of defining tools. Claude has tool_use, OpenAI has function calling, and Google has its own variant. They all work differently enough that building a tool means picking a platform and committing to it. MCP, the Model Context Protocol, is an attempt to fix that: you build a tool once, expose it through a standard protocol, and any MCP-compatible client can use it.

I’ve been building MCP servers for my own projects for a while now. The protocol has rough edges, but the core idea is sound, and the developer experience has gotten much better in the last year. Here’s what you need to know and how to build your first one.

What MCP actually is

MCP is a protocol, like HTTP for the web or LSP for code editors. It defines how an AI model discovers what tools are available and how it calls them.

The architecture has two sides:

  • An MCP server is your code. It exposes tools through a standard interface. You define what each tool does, what parameters it accepts, and what it returns.
  • An MCP client is the AI application. Claude Desktop, Claude Code, and other compatible applications connect to your server, discover its tools, and call them during conversations.

Communication happens over one of two transports: stdio (standard input/output, good for local tools) or HTTP with SSE (server-sent events, good for remote tools). For local development, stdio is simpler. Your MCP server is just a process that reads JSON from stdin and writes JSON to stdout.

The key thing to understand: MCP servers are not web services. They don’t listen on a port by default. When using stdio transport, the client spawns your server as a child process and talks to it through pipes. This is different from a REST API, and that distinction matters for how you think about deployment.

Building your first MCP server

Let’s build a weather lookup tool. It’s simple enough to fit in one file but real enough to show the important patterns. We’ll use TypeScript and the official MCP SDK.

Set up the project

mkdir weather-mcp && cd weather-mcp
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node

Create a tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "declaration": true
  },
  "include": ["src/**/*"]
}

Write the server

Create src/index.ts:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "weather",
  version: "1.0.0",
});

server.tool(
  "get_weather",
  "Get the current weather for a city. Returns temperature, " +
    "conditions, and humidity. Use this when someone asks about " +
    "weather in a specific location.",
  {
    city: z.string().describe("City name, e.g. 'Minneapolis' or 'London'"),
    units: z
      .enum(["fahrenheit", "celsius"])
      .default("fahrenheit")
      .describe("Temperature units"),
  },
  async ({ city, units }) => {
    // IMPORTANT: Never use console.log in MCP servers using stdio
    // transport — it corrupts the protocol stream. Use console.error
    // for debug logging instead.
    try {
      // In a real server, you'd call a weather API here.
      // This example uses a mock to keep the focus on MCP structure.
      const weather = await fetchWeather(city, units);

      return {
        content: [
          {
            type: "text" as const,
            text: JSON.stringify(weather, null, 2),
          },
        ],
      };
    } catch (err) {
      const message =
        err instanceof Error ? err.message : "Unknown error occurred";
      console.error(`Weather fetch failed for "${city}":`, message);
      return {
        content: [
          {
            type: "text" as const,
            text: JSON.stringify({
              error: `Failed to fetch weather for "${city}": ${message}`,
              suggestion:
                "Check the city name and try again. " +
                "Use a well-known city name like 'Minneapolis' or 'London'.",
            }),
          },
        ],
        isError: true,
      };
    }
  },
);

async function fetchWeather(
  city: string,
  units: string,
): Promise<{
  city: string;
  temperature: number;
  units: string;
  conditions: string;
  humidity: number;
}> {
  // Replace this with a real API call (OpenWeatherMap, WeatherAPI, etc.)
  // For now, return mock data so you can test the MCP plumbing.
  return {
    city,
    temperature: units === "celsius" ? 18 : 64,
    units,
    conditions: "Partly cloudy",
    humidity: 55,
  };
}

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);

A few things worth noting about this code.

The tool description is doing real work. It tells the model what the tool does, when to use it, and what to expect. This is the prompt engineering side of skill design in action. A vague description like “weather tool” would work sometimes, but a specific one works reliably.

The parameters use Zod schemas with .describe() calls. Those descriptions are passed to the model so it knows what to put in each field. Treat them like you’d treat good function documentation.
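For a sense of what the model actually receives, the SDK converts those Zod schemas into JSON Schema before advertising the tool to the client. The exact output depends on your SDK version, but it looks roughly like this (a sketch, not verbatim SDK output):

```json
{
  "type": "object",
  "properties": {
    "city": {
      "type": "string",
      "description": "City name, e.g. 'Minneapolis' or 'London'"
    },
    "units": {
      "type": "string",
      "enum": ["fahrenheit", "celsius"],
      "default": "fahrenheit",
      "description": "Temperature units"
    }
  },
  "required": ["city"]
}
```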

The return format uses the MCP content block structure. The content array can contain text, images, or other types. For most tools, a JSON text block is the simplest and most flexible option.

Build and test

Add these scripts to your package.json:

{
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}

Build it:

npm run build

You can test that it starts without errors by running npm start, but since it communicates over stdio, you won’t see much happen. It’s waiting for MCP messages on stdin. Press Ctrl+C to stop it.
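If you want to see it actually respond, you can hand-feed it the first message of the protocol. This is a sketch: the initialize message below follows the MCP spec, but the protocolVersion string should match whatever your SDK version supports, and the clientInfo values are illustrative.

```shell
# Build the JSON-RPC initialize message an MCP client sends first.
MSG='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}'
printf '%s\n' "$MSG"
# Pipe it into the built server; an initialize response should come back
# on stdout:
#   printf '%s\n' "$MSG" | node dist/index.js
```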

Connecting it to Claude

Claude Code

In your project directory, create or edit .mcp.json:

{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/absolute/path/to/weather-mcp/dist/index.js"]
    }
  }
}

When you start Claude Code in that directory, it will spawn your MCP server and make the get_weather tool available. Try asking “What’s the weather in Minneapolis?” and you’ll see it call your tool.

Claude Desktop

Open Settings, go to Developer, and add your server configuration:

{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/absolute/path/to/weather-mcp/dist/index.js"]
    }
  }
}

Restart Claude Desktop. Your weather tool should appear in the tools menu (the hammer icon).

Making it real

The mock weather function is a placeholder. To make this actually useful, swap in a real API call. Here’s what that looks like with the free OpenWeatherMap API:

async function fetchWeather(city: string, units: string) {
  const apiUnits = units === "celsius" ? "metric" : "imperial";
  const apiKey = process.env.OPENWEATHER_API_KEY;

  if (!apiKey) {
    throw new Error(
      "OPENWEATHER_API_KEY environment variable is not set. " +
        "Get a free key at https://openweathermap.org/api",
    );
  }

  const url =
    `https://api.openweathermap.org/data/2.5/weather` +
    `?q=${encodeURIComponent(city)}&units=${apiUnits}&appid=${apiKey}`;

  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Weather API returned ${response.status} for "${city}"`);
  }

  const data = await response.json();

  return {
    city: data.name,
    temperature: Math.round(data.main.temp),
    units,
    conditions: data.weather[0]?.description ?? "Unknown",
    humidity: data.main.humidity,
  };
}

Pass the API key through your MCP configuration:

{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/absolute/path/to/weather-mcp/dist/index.js"],
      "env": {
        "OPENWEATHER_API_KEY": "your-key-here"
      }
    }
  }
}

Notice the error handling. When something goes wrong, throw an error with a message that helps the model (and the user) understand what happened. “Weather API returned 404” tells the model the city probably doesn’t exist. “API key not set” tells the user they need to configure something. Good error messages are part of good skill design.

Why this matters

The value of MCP comes down to one thing: you build a tool once and it works with any compatible client. Your weather server works in Claude Code today. It works in Claude Desktop. If another AI application adds MCP support tomorrow, your server works there too, with zero changes.

This is the same thing that made LSP successful for code editors. Before LSP, every editor needed its own language integration. After LSP, you build a language server once and it works in VS Code, Neovim, Emacs, and anything else that speaks the protocol. MCP is trying to do the same thing for AI tool use.

There’s also a practical benefit for teams. If you build internal tools as MCP servers, different team members can use them with whichever AI client they prefer. The tool layer is decoupled from the chat layer.

Limitations to know about

I’d be doing you a disservice if I didn’t mention the rough spots.

MCP is still evolving. The spec has gone through breaking changes and will probably go through more. Pin your SDK versions and read the changelog when you upgrade.
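One low-effort way to do that pinning: set save-exact in your project's .npmrc so npm install records exact versions instead of ^ ranges.

```ini
# .npmrc — record exact versions on install instead of ^ ranges
save-exact=true
```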

Not all AI platforms support MCP. As of writing, it’s well-supported in the Claude ecosystem (Claude Desktop, Claude Code) and has growing support in other clients. But it’s not universal yet, and some implementations are more complete than others.

Stdio transport has some quirks. If your tool writes anything to stdout that isn’t a valid MCP message (like a stray console.log), the client will choke on it. Use console.error for debug logging. This catches people every single time.

Remote MCP servers (using HTTP+SSE transport) work but the authentication story is still maturing. For local tools this doesn’t matter, but if you’re thinking about exposing MCP servers over a network, plan to do some extra work around auth and security.

Where to go from here

The weather example is intentionally minimal. Real MCP servers might expose dozens of tools, manage persistent connections, or coordinate with other services. For thinking about how tools compose together, read skill composition. For the nuts and bolts of how models select and call tools, see tool use patterns. And for the full picture of what makes a well-designed skill, the anatomy of a skill article covers the structural patterns you’ll want to follow as your MCP servers grow beyond a single tool.

The MCP specification and SDK documentation live at modelcontextprotocol.io. The TypeScript SDK source is on GitHub if you want to read the implementation or see more examples.

For a clear definition of how MCP fits next to skills, plugins, and integrations as concepts, see What’s the difference between an AI skill, tool, plugin, and integration?.