AI Implementation

How to Build an MCP Server in Under an Hour (Step-by-Step)

Build a working MCP server in under an hour. Python and TypeScript examples, real code, common mistakes, and connecting to Claude or ChatGPT.

Chase Dillingham

Founder & CEO, TrainMyAgent


Stop reading about MCP. Build something.

You can have a working MCP server, connected to a real AI model, returning real data, in under an hour. Not a “hello world” toy. An actual server that exposes tools an AI agent can use in production.

This guide walks you through it. Python first (using FastMCP), then TypeScript. Real code you can copy, run, and extend.

Let’s go.

What You’re Building

An MCP server that:

  • Exposes tools an AI model can discover and call
  • Exposes resources an AI model can read for context
  • Connects to Claude Desktop, Cursor, ChatGPT, or any MCP-compatible client
  • Runs locally first, then can be deployed anywhere

We’ll build a simple customer data server. It exposes tools to search customers, get customer details, and create support tickets. Basic, but it demonstrates every core MCP concept.

Prerequisites

  • Python 3.10+ or Node.js 18+
  • An MCP-compatible client (Claude Desktop is the easiest starting point)
  • 45 minutes

That’s it.

Option A: Python with FastMCP

FastMCP is the high-level server API bundled with the official Python SDK (the mcp package). It’s the fastest path from zero to a working server.

Step 1: Install the SDK

pip install "mcp[cli]"

That installs the mcp package, which includes FastMCP, plus the CLI tools you’ll use to test the server later. One dependency. No framework bloat.

Step 2: Create Your Server

Create a file called customer_server.py:

from mcp.server.fastmcp import FastMCP

# Initialize the server
mcp = FastMCP("Customer Data Server")

# Sample data (replace with your actual data source)
CUSTOMERS = {
    "C001": {"name": "Acme Corp", "plan": "Enterprise", "mrr": 12500, "health": "green"},
    "C002": {"name": "TechStart Inc", "plan": "Growth", "mrr": 2500, "health": "yellow"},
    "C003": {"name": "MegaRetail", "plan": "Enterprise", "mrr": 45000, "health": "green"},
    "C004": {"name": "FastShip LLC", "plan": "Starter", "mrr": 500, "health": "red"},
}

TICKETS = []


@mcp.tool()
def search_customers(query: str) -> str:
    """Search customers by name or plan type. Returns matching customer records."""
    results = []
    query_lower = query.lower()
    for cid, customer in CUSTOMERS.items():
        if query_lower in customer["name"].lower() or query_lower in customer["plan"].lower():
            results.append(f"{cid}: {customer['name']} ({customer['plan']}) - ${customer['mrr']}/mo - Health: {customer['health']}")
    if not results:
        return "No customers found matching your query."
    return "\n".join(results)


@mcp.tool()
def get_customer(customer_id: str) -> str:
    """Get detailed information about a specific customer by their ID."""
    customer = CUSTOMERS.get(customer_id)
    if not customer:
        return f"Customer {customer_id} not found."
    return (
        f"Customer: {customer['name']}\n"
        f"Plan: {customer['plan']}\n"
        f"MRR: ${customer['mrr']}\n"
        f"Health Score: {customer['health']}"
    )


@mcp.tool()
def create_ticket(customer_id: str, subject: str, priority: str = "medium") -> str:
    """Create a support ticket for a customer. Priority: low, medium, high, critical."""
    if customer_id not in CUSTOMERS:
        return f"Customer {customer_id} not found. Cannot create ticket."
    ticket_id = f"TKT-{len(TICKETS) + 1001}"
    ticket = {
        "id": ticket_id,
        "customer_id": customer_id,
        "subject": subject,
        "priority": priority,
        "status": "open"
    }
    TICKETS.append(ticket)
    return f"Ticket {ticket_id} created for {CUSTOMERS[customer_id]['name']}: {subject} (Priority: {priority})"


@mcp.resource("customers://summary")
def customer_summary() -> str:
    """Summary of all customers for context."""
    total_mrr = sum(c["mrr"] for c in CUSTOMERS.values())
    by_health = {}
    for c in CUSTOMERS.values():
        by_health[c["health"]] = by_health.get(c["health"], 0) + 1
    return (
        f"Total Customers: {len(CUSTOMERS)}\n"
        f"Total MRR: ${total_mrr:,}\n"
        f"Health Distribution: {by_health}"
    )


if __name__ == "__main__":
    mcp.run()

That’s it. Under 70 lines. You have a fully functional MCP server with three tools and one resource.

Let’s break down what’s happening:

  • @mcp.tool() registers a function as a tool the AI can call. The docstring becomes the tool’s description — this is what the model reads to decide when to use it. Write good docstrings.
  • @mcp.resource() registers a data source the AI can read for context. Resources are identified by URIs (like customers://summary).
  • Type hints matter. FastMCP uses them to generate the tool’s parameter schema. query: str tells the model this tool takes a string parameter called “query.”
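To see how hints shape the schema, here’s a hypothetical tool (not part of the server above) whose signature gives the model a required string and an optional integer defaulting to 10. The decorator is commented out so the sketch runs standalone; inside your server file you’d uncomment it.

```python
# @mcp.tool()  # uncomment inside customer_server.py to register it
def top_customers(plan: str, limit: int = 10) -> str:
    """Return up to `limit` customers on the given plan, highest MRR first."""
    # Hypothetical records standing in for your real data source
    sample = {
        "C001": {"name": "Acme Corp", "plan": "Enterprise", "mrr": 12500},
        "C003": {"name": "MegaRetail", "plan": "Enterprise", "mrr": 45000},
    }
    matches = sorted(
        (c for c in sample.values() if c["plan"].lower() == plan.lower()),
        key=lambda c: c["mrr"],
        reverse=True,
    )[:limit]
    return "\n".join(f"{c['name']}: ${c['mrr']}/mo" for c in matches)
```

From `plan: str` and `limit: int = 10`, FastMCP derives that `plan` is a required string and `limit` an optional integer — no separate schema file to maintain.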

Step 3: Test It Locally

Run the server directly:

python customer_server.py

Or test with the MCP inspector:

mcp dev customer_server.py

The inspector gives you a web UI to call tools and view resources without connecting a full AI client. Use this for debugging.

Step 4: Connect to Claude Desktop

Edit your Claude Desktop configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

Add your server:

{
  "mcpServers": {
    "customer-data": {
      "command": "python",
      "args": ["/absolute/path/to/customer_server.py"]
    }
  }
}

Restart Claude Desktop. You’ll see the tools icon (hammer) in the chat interface. Click it — your three tools should be listed.

Now ask Claude: “Search for enterprise customers and create a high-priority ticket for any with red health scores.”

Watch it discover your tools, call search_customers, identify the at-risk customer, and call create_ticket. That’s tool calling via MCP in action.

Option B: TypeScript

If TypeScript is your stack, the official TypeScript SDK works just as well.

Step 1: Set Up the Project

mkdir customer-mcp-server && cd customer-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod

Step 2: Create the Server

Create src/index.ts:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "Customer Data Server",
  version: "1.0.0",
});

const CUSTOMERS: Record<string, {name: string; plan: string; mrr: number; health: string}> = {
  "C001": { name: "Acme Corp", plan: "Enterprise", mrr: 12500, health: "green" },
  "C002": { name: "TechStart Inc", plan: "Growth", mrr: 2500, health: "yellow" },
  "C003": { name: "MegaRetail", plan: "Enterprise", mrr: 45000, health: "green" },
  "C004": { name: "FastShip LLC", plan: "Starter", mrr: 500, health: "red" },
};

const tickets: Array<{id: string; customerId: string; subject: string; priority: string}> = [];

server.tool(
  "search_customers",
  "Search customers by name or plan type",
  { query: z.string().describe("Search query to match against customer name or plan") },
  async ({ query }) => {
    const results = Object.entries(CUSTOMERS)
      .filter(([_, c]) =>
        c.name.toLowerCase().includes(query.toLowerCase()) ||
        c.plan.toLowerCase().includes(query.toLowerCase())
      )
      .map(([id, c]) => `${id}: ${c.name} (${c.plan}) - $${c.mrr}/mo - Health: ${c.health}`);

    return {
      content: [{ type: "text", text: results.length ? results.join("\n") : "No customers found." }],
    };
  }
);

server.tool(
  "get_customer",
  "Get detailed info about a specific customer",
  { customer_id: z.string().describe("The customer ID (e.g., C001)") },
  async ({ customer_id }) => {
    const c = CUSTOMERS[customer_id];
    if (!c) return { content: [{ type: "text", text: `Customer ${customer_id} not found.` }] };
    return {
      content: [{ type: "text", text: `Customer: ${c.name}\nPlan: ${c.plan}\nMRR: $${c.mrr}\nHealth: ${c.health}` }],
    };
  }
);

server.tool(
  "create_ticket",
  "Create a support ticket for a customer",
  {
    customer_id: z.string().describe("Customer ID"),
    subject: z.string().describe("Ticket subject"),
    priority: z.enum(["low", "medium", "high", "critical"]).default("medium").describe("Ticket priority"),
  },
  async ({ customer_id, subject, priority }) => {
    if (!CUSTOMERS[customer_id]) {
      return { content: [{ type: "text", text: `Customer ${customer_id} not found.` }] };
    }
    const id = `TKT-${1001 + tickets.length}`;
    tickets.push({ id, customerId: customer_id, subject, priority });
    return {
      content: [{ type: "text", text: `Ticket ${id} created for ${CUSTOMERS[customer_id].name}: ${subject} (Priority: ${priority})` }],
    };
  }
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);

Step 3: Build and Connect

npm install -D typescript
npx tsc src/index.ts --outDir dist

Then add to your Claude Desktop config:

{
  "mcpServers": {
    "customer-data": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js"]
    }
  }
}

Same result as the Python version. Different language, identical protocol.

Connecting to Other Clients

MCP isn’t Claude-only. Here’s how to connect to other clients:

ChatGPT

OpenAI added MCP support in March 2025. Use the OpenAI Agents SDK to connect your MCP server as a tool source for OpenAI-powered agents.

Cursor / VS Code

Both support MCP natively. Add your server to the IDE’s MCP configuration and your coding assistant can use your tools during development.
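For Cursor, for example, the configuration file (~/.cursor/mcp.json at the time of writing — check your IDE’s docs, since paths change between releases) uses the same shape as the Claude Desktop config:

```json
{
  "mcpServers": {
    "customer-data": {
      "command": "python",
      "args": ["/absolute/path/to/customer_server.py"]
    }
  }
}
```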

Custom Agents

Use the MCP client SDK in your own application:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="python",
    args=["customer_server.py"]
)

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            result = await session.call_tool("search_customers", {"query": "enterprise"})

asyncio.run(main())

This is how you embed MCP in your own agentic workflows.

Common Mistakes (And How to Avoid Them)

We’ve built dozens of MCP servers for production deployments. Here’s where teams get tripped up:

1. Vague Tool Descriptions

The model reads your tool descriptions to decide when to use them. “Does stuff with customers” is useless. “Search customers by name or plan type; returns matching customer records with MRR and health scores” is actionable.

Rule: Write tool descriptions like you’re writing them for a new team member on their first day.

2. Missing Error Handling

Your MCP server will get bad inputs. Customers that don’t exist. Malformed IDs. Network failures to downstream APIs. Handle them gracefully and return useful error messages — the model will use those messages to recover or inform the user.
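A sketch of the pattern, using a hypothetical fetch_total_from_billing call (stubbed here to simulate an outage) — the point is that failures come back as readable strings, not exceptions:

```python
def fetch_total_from_billing(invoice_id: str) -> float:
    # Stub standing in for a real billing API call; simulates a network failure
    raise ConnectionError("billing API timeout")

def get_invoice_total(invoice_id: str) -> str:
    """Fetch an invoice total, returning a readable error instead of raising."""
    if not invoice_id.startswith("INV-"):
        return f"Invalid invoice ID '{invoice_id}'. Expected format: INV-1234."
    try:
        total = fetch_total_from_billing(invoice_id)
    except ConnectionError:
        return "Billing system unreachable. Try again in a few minutes."
    return f"Invoice {invoice_id} total: ${total:,.2f}"
```

When the model sees “Billing system unreachable,” it can tell the user or retry later — a raw traceback gives it nothing to work with.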

3. Giant Return Values

Don’t return 10MB of JSON from a tool call. LLMs have context limits. Return what’s needed, summarize what isn’t. Paginate if you have to.
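One way to cap output, sketched as a plain helper (names and page size are illustrative): return one page at a time and tell the model how to ask for the rest.

```python
MAX_RESULTS = 20  # illustrative page size; tune to your context budget

def format_results(rows: list[str], offset: int = 0) -> str:
    """Return one page of results, with a hint on how to fetch the rest."""
    page = rows[offset : offset + MAX_RESULTS]
    out = "\n".join(page)
    remaining = len(rows) - (offset + len(page))
    if remaining > 0:
        out += f"\n... {remaining} more results. Call again with offset={offset + MAX_RESULTS}."
    return out
```

Because the continuation hint is plain text, the model reads it like any other tool output and knows to call again with the next offset.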

4. No Input Validation

Just because an AI model is calling your tool doesn’t mean the inputs will be valid. Validate everything. Use Pydantic models in Python or Zod schemas in TypeScript (which you already saw above).

5. Ignoring the Resource Primitive

Most teams build tools and forget about resources. Resources give the model background context without requiring a tool call. Your customer summary, system status, configuration — expose these as resources so the model starts every conversation informed.

6. Hard-Coding Credentials

Your MCP server will need API keys, database credentials, service tokens. Use environment variables. Never hard-code secrets. MCP servers run as processes on your infrastructure — treat them like any other service.

import os

DB_URL = os.environ["DATABASE_URL"]
API_KEY = os.environ["SALESFORCE_API_KEY"]

Going Beyond the Basics

Once your first server is running, here’s the path to production:

Add Prompts

MCP servers can expose predefined prompt templates. If your team runs the same analysis repeatedly (“summarize at-risk customers” or “generate weekly MRR report”), define it as a prompt. Users select it from the client UI instead of typing instructions.

@mcp.prompt()
def at_risk_report() -> str:
    """Generate a report of all at-risk customers with recommended actions."""
    return (
        "Search for all customers with red or yellow health scores. "
        "For each one, provide: customer name, MRR, current health, "
        "and a recommended action to improve retention."
    )

Connect to Real Data Sources

Replace the sample dictionaries with actual database queries, API calls, or file system reads. The MCP layer stays the same — only the backend implementation changes.
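For example, here’s search_customers backed by SQLite instead of the in-memory dict (the customers.db path and table schema are assumptions — swap in your own):

```python
import sqlite3

DB_PATH = "customers.db"  # hypothetical database file

def search_customers_db(query: str) -> str:
    """Same tool contract as before, now backed by a SQL query."""
    conn = sqlite3.connect(DB_PATH)
    try:
        rows = conn.execute(
            "SELECT id, name, plan, mrr, health FROM customers "
            "WHERE name LIKE ? OR plan LIKE ?",
            (f"%{query}%", f"%{query}%"),  # parameterized: never interpolate user input
        ).fetchall()
    finally:
        conn.close()
    if not rows:
        return "No customers found matching your query."
    return "\n".join(
        f"{cid}: {name} ({plan}) - ${mrr}/mo - Health: {health}"
        for cid, name, plan, mrr, health in rows
    )
```

The decorator, docstring, and return format don’t change — the model can’t tell the difference.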

Add Authentication

For production, implement auth at the server level. Check tokens, validate permissions, log access. The 2026 MCP roadmap includes standardized auth, but for now, handle it in your server code. See our MCP security guide for details.

Deploy as a Service

For team-wide access, deploy your MCP server as a persistent service. Use Streamable HTTP transport instead of stdio so multiple clients can connect simultaneously.

What You Can Build in an Hour

Some real MCP servers our team has built in under 60 minutes:

  • Internal docs search: Wraps a RAG system over company documentation
  • Database query tool: Natural language to SQL against a Postgres database
  • Deployment status checker: Pulls build and deploy status from CI/CD
  • Ticket triager: Reads incoming support tickets and categorizes them
  • Invoice lookup: Searches and retrieves invoices from the billing system

Each one: under an hour to build, immediately useful, and composable with every other MCP server in the stack.

Stop planning your MCP strategy. Build your first server today. The protocol will click once you’ve written your first @mcp.tool().

Frequently Asked Questions

Do I need to know the MCP specification to build a server?

No. The Python SDK (FastMCP) and TypeScript SDK abstract the protocol details. You write Python functions or TypeScript handlers, and the SDK handles JSON-RPC, transport, and protocol negotiation. Read the spec later if you want to understand the internals.

Can one MCP server expose multiple tools?

Yes. A single server can expose as many tools, resources, and prompts as needed. Group related capabilities in one server (e.g., all customer operations in one server, all billing operations in another).

How do I debug tool calls?

Use mcp dev your_server.py to launch the MCP Inspector — a web UI that lets you call tools, view resources, and see raw protocol messages. For production, log tool invocations and responses in your server code.

Can MCP servers call other MCP servers?

Not directly in the current spec. An MCP client (like your agent) can connect to multiple servers and chain tool calls. For server-to-server communication, use direct API calls or implement an orchestration layer.

What’s the difference between stdio and SSE transport?

Stdio runs the server as a subprocess — great for local development and single-user desktop apps. SSE (Server-Sent Events) and the newer Streamable HTTP transport run the server as a network service — required for multi-user and remote deployments.

How do I handle long-running tool operations?

MCP supports progress notifications. Your server can send progress updates while a tool is executing. For very long operations, consider returning a task ID and providing a separate tool to check status.

Can I test MCP servers without Claude Desktop?

Yes. Use the MCP Inspector (mcp dev), write integration tests using the MCP client SDK, or use any MCP-compatible client including Cursor, VS Code, or custom applications.

What if my tool needs to access a private network resource?

MCP servers run in your infrastructure. If the server process can reach the resource (database, internal API, file system), the tool can access it. No data leaves your network unless you explicitly send it somewhere.


Three Ways to Work With TMA

Need an agent built? We deploy production AI agents in your infrastructure. Working pilot. Real data. Measurable ROI. → Schedule Demo

Want to co-build a product? We’re not a dev agency. We’re co-builders. Shared cost. Shared upside. → Partner with Us

Want to join the Guild? Ship pilots, earn bounties, share profit. Community + equity + path to exit. → Become an AI Architect

Need this implemented?

We design and deploy enterprise AI agents in your environment with measurable ROI and production guardrails.

About the Author

Chase Dillingham

Founder & CEO, TrainMyAgent

Chase Dillingham builds AI agent platforms that deliver measurable ROI. Former enterprise architect with 15+ years deploying production systems.