Building with the Model Context Protocol (MCP): A Practical Guide
What Is the Model Context Protocol?
The Model Context Protocol (MCP) is an open standard that lets AI assistants connect to external tools and data sources. Think of it as a universal adapter between AI models and the real world. Instead of building custom integrations for every AI assistant, you build one MCP server, and any MCP-compatible client can use it.
I use MCP in my Open Brain project to give Claude access to my personal knowledge base, and the experience has fundamentally changed how I interact with AI assistants. This guide covers what I have learned about building practical MCP servers.
Core Concepts
MCP has three main primitives:
- Tools: Functions that the AI can call to perform actions (search, create, update, delete)
- Resources: Data sources that the AI can read (files, database records, API responses)
- Prompts: Reusable prompt templates that help the AI use your tools effectively
When an AI assistant connects to your MCP server, it discovers what tools and resources are available, and can use them to answer questions or complete tasks.
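Under the hood, this discovery is a JSON-RPC exchange: the client calls the tools/list method and the server replies with a catalogue of tool names, descriptions, and input schemas. A simplified sketch of such a response (the tool shown is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "list_projects",
        "description": "List all projects, optionally filtered by status.",
        "inputSchema": {
          "type": "object",
          "properties": { "status": { "type": "string" } }
        }
      }
    ]
  }
}
```

The model never sees your Python source; it decides which tool to call purely from these names, descriptions, and schemas, which is why they are worth writing carefully.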
Building Your First MCP Server
Let me walk through building a simple but useful MCP server. We will create a server that provides access to a project database:
```python
from mcp.server.fastmcp import FastMCP
import json
import asyncpg

server = FastMCP("project-manager")

# Database connection pool, created lazily on first use
db_pool = None

async def get_db():
    global db_pool
    if db_pool is None:
        db_pool = await asyncpg.create_pool("postgresql://localhost/projects")
    return db_pool

@server.tool()
async def list_projects(status: str = "active") -> str:
    """List all projects, optionally filtered by status."""
    pool = await get_db()
    rows = await pool.fetch(
        "SELECT id, name, status, updated_at FROM projects "
        "WHERE status = $1 ORDER BY updated_at DESC",
        status,
    )
    projects = [{"id": r["id"], "name": r["name"], "status": r["status"],
                 "updated": str(r["updated_at"])} for r in rows]
    return json.dumps(projects, indent=2)

@server.tool()
async def get_project_details(project_id: int) -> str:
    """Get detailed information about a specific project."""
    pool = await get_db()
    row = await pool.fetchrow(
        "SELECT * FROM projects WHERE id = $1", project_id
    )
    if not row:
        return json.dumps({"error": "Project not found"})
    return json.dumps(dict(row), indent=2, default=str)

@server.tool()
async def search_projects(query: str) -> str:
    """Search projects by name or description."""
    pool = await get_db()
    rows = await pool.fetch(
        "SELECT id, name, status FROM projects "
        "WHERE name ILIKE $1 OR description ILIKE $1",
        f"%{query}%",
    )
    return json.dumps([dict(r) for r in rows], indent=2)
```
Running the Server
MCP servers can run over stdio (for local use with Claude Desktop) or over HTTP (for remote access). For local development, stdio is simplest:
```python
if __name__ == "__main__":
    # FastMCP's run() uses the stdio transport by default, so a local
    # client such as Claude Desktop can launch this script directly.
    server.run()
```
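To make the server available to Claude Desktop, register it in the client's claude_desktop_config.json under mcpServers; the client then launches the script itself and talks to it over stdio. A minimal entry might look like this (the script path is a placeholder):

```json
{
  "mcpServers": {
    "project-manager": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}
```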
Tool Design Best Practices
From building several MCP servers, I have learned what makes tools effective:
1. Clear, Descriptive Names
The AI reads the tool name and description to decide when to use it. Be explicit:
```python
# Bad: vague name
@server.tool()
async def get_data(id: int) -> str:
    """Gets data."""
    ...

# Good: clear name and description
@server.tool()
async def get_venue_reviews(venue_id: int, limit: int = 5) -> str:
    """Fetch the most recent customer reviews for a specific venue,
    ordered by date. Returns review text, rating, and reviewer name."""
    ...
```
2. Return Structured Data
Always return structured, parseable data. JSON is ideal. The AI can reason about structured data much more effectively than free-form text.
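To make the contrast concrete, here is the same result expressed both ways (the project rows are made up for illustration):

```python
import json

rows = [
    {"id": 1, "name": "Open Brain", "status": "active"},
    {"id": 2, "name": "Legacy ETL", "status": "archived"},
]

# Free-form text forces the model to parse prose to recover the fields:
as_text = "Open Brain is active; Legacy ETL is archived."

# Structured JSON gives it named fields it can filter, count, and quote exactly:
as_json = json.dumps(rows, indent=2)
```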
3. Handle Errors Gracefully
If a tool call fails, return a clear error message rather than crashing. The AI can then explain the error to the user or try a different approach.
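One way to apply this uniformly is a small wrapper that turns exceptions into structured error payloads. This is an illustrative pattern, not part of the MCP SDK, and it is written synchronously for brevity:

```python
import functools
import json

def safe_tool(fn):
    """Wrap a tool so failures become JSON error payloads
    instead of unhandled exceptions."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            # The AI can read this payload and explain or retry.
            return json.dumps({"error": type(exc).__name__,
                               "message": str(exc)})
    return wrapper

@safe_tool
def divide(a: float, b: float) -> str:
    return json.dumps({"result": a / b})
```

Calling `divide(1, 0)` now returns an `{"error": "ZeroDivisionError", ...}` payload rather than crashing the tool call.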
4. Keep Tools Focused
Each tool should do one thing well. A "search" tool and a "get details" tool are better than a single "do everything" tool, because the AI can compose them as needed.
Resources for Read-Only Data
Resources are for data that the AI should be able to read but not modify through the resource itself:
```python
@server.resource("project://config")
async def get_config() -> str:
    """Current project configuration and settings."""
    config = load_config()  # load_config() is defined elsewhere in the project
    return json.dumps(config, indent=2)

@server.resource("project://stats")
async def get_stats() -> str:
    """Project statistics and metrics."""
    pool = await get_db()
    stats = await pool.fetchrow(
        "SELECT COUNT(*) AS total, "
        "COUNT(*) FILTER (WHERE status = 'active') AS active "
        "FROM projects"
    )
    return json.dumps(dict(stats))
```
Real-World Integration: Open Brain
In my Open Brain project, the MCP server exposes three main tools:
- search_knowledge: Semantic search across my knowledge base using pgvector
- add_knowledge: Store new information with tags and metadata
- get_recent: Retrieve recently added knowledge, optionally filtered by topic
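A sketch of how a tool like search_knowledge can sit on top of pgvector. This is hypothetical, not Open Brain's actual code: the table name, columns, and the get_db and embed helpers are all assumptions; the `<=>` operator is pgvector's cosine-distance comparison:

```python
import json

# Hypothetical helpers -- in a real server these would wrap an asyncpg
# pool and an embedding model.
async def get_db(): ...
async def embed(text: str): ...

# pgvector's <=> operator orders rows by cosine distance to the query embedding
SEARCH_SQL = (
    "SELECT id, content, tags FROM knowledge "
    "ORDER BY embedding <=> $1 LIMIT $2"
)

async def search_knowledge(query: str, limit: int = 5) -> str:
    """Semantic search over the knowledge table (illustrative sketch)."""
    pool = await get_db()
    query_embedding = await embed(query)
    rows = await pool.fetch(SEARCH_SQL, query_embedding, limit)
    return json.dumps([dict(r) for r in rows], indent=2, default=str)
```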
When I use Claude with this MCP server connected, the conversation looks natural. I can say "What did I save about vector indexing strategies?" and Claude calls search_knowledge behind the scenes, retrieves relevant chunks, and synthesises an answer from my own notes.
Security Considerations
MCP servers can access databases, APIs, and file systems. Security matters:
- Read vs write: Be deliberate about which tools can modify data. Read-only tools are safer.
- Input validation: Always validate tool inputs. The AI might pass unexpected values.
- Authentication: For HTTP-based servers, implement proper authentication.
- Rate limiting: Prevent runaway tool calls by implementing rate limits.
The Future of MCP
MCP is still early, but the direction is clear: AI assistants that can interact with your actual tools and data are far more useful than chatbots that can only work with the text you paste in. Building MCP servers is an investment in making your infrastructure AI-accessible, and that investment will compound as AI assistants become more capable.
If you are an AI engineer, I strongly recommend starting to build MCP servers for your most-used tools and data sources. The initial effort is small, and the productivity gain from having AI assistants that can actually access your systems is substantial.