# MCP Python SDK
The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:
- Build MCP clients that can connect to any MCP server
- Create MCP servers that expose resources, prompts and tools
- Use standard transports like stdio, SSE, and Streamable HTTP
- Handle all MCP protocol messages and lifecycle events
We recommend using uv to manage your Python projects.
If you haven't created a uv-managed project yet, create one:
```bash
uv init mcp-server-demo
cd mcp-server-demo
```
Then add MCP to your project dependencies:
```bash
uv add "mcp[cli]"
```
Alternatively, for projects using pip for dependencies:
```bash
pip install "mcp[cli]"
```
To run the mcp command with uv:
```bash
uv run mcp
```
Let's create a simple MCP server that exposes a calculator tool and some data:
""" FastMCP quickstart example. cd to the `examples/snippets/clients` directory and run: uv run server fastmcp_quickstart stdio """ from mcp.server.fastmcp import FastMCP # Create an MCP server mcp = FastMCP("Demo") # Add an addition tool @mcp.tool() def add(a: int, b: int) -> int: """Add two numbers""" return a + b # Add a dynamic greeting resource @mcp.resource("greeting://{name}") def get_greeting(name: str) -> str: """Get a personalized greeting""" return f"Hello, {name}!" # Add a prompt @mcp.prompt() def greet_user(name: str, style: str = "friendly") -> str: """Generate a greeting prompt""" styles = { "friendly": "Please write a warm, friendly greeting", "formal": "Please write a formal, professional greeting", "casual": "Please write a casual, relaxed greeting", } return f"{styles.get(style, styles['friendly'])} for someone named {name}."
Full example: examples/snippets/servers/fastmcp_quickstart.py
You can install this server in Claude Desktop and interact with it right away by running:
```bash
uv run mcp install server.py
```
Alternatively, you can test it with the MCP Inspector:
```bash
uv run mcp dev server.py
```
The Model Context Protocol (MCP) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
- Expose data through Resources (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through Tools (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through Prompts (reusable templates for LLM interactions)
- And more!
The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
"""Example showing lifespan support for startup/shutdown with strong typing.""" from collections.abc import AsyncIterator from contextlib import asynccontextmanager from dataclasses import dataclass from mcp.server.fastmcp import Context, FastMCP from mcp.server.session import ServerSession # Mock database class for example class Database: """Mock database class for example.""" @classmethod async def connect(cls) -> "Database": """Connect to database.""" return cls() async def disconnect(self) -> None: """Disconnect from database.""" pass def query(self) -> str: """Execute a query.""" return "Query result" @dataclass class AppContext: """Application context with typed dependencies.""" db: Database @asynccontextmanager async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]: """Manage application lifecycle with type-safe context.""" # Initialize on startup db = await Database.connect() try: yield AppContext(db=db) finally: # Cleanup on shutdown await db.disconnect() # Pass lifespan to server mcp = FastMCP("My App", lifespan=app_lifespan) # Access type-safe lifespan context in tools @mcp.tool() def query_db(ctx: Context[ServerSession, AppContext]) -> str: """Tool that uses initialized resources.""" db = ctx.request_context.lifespan_context.db return db.query()
Full example: examples/snippets/servers/lifespan_example.py
Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP(name="Resource Example")


@mcp.resource("file://documents/{name}")
def read_document(name: str) -> str:
    """Read a document by name."""
    # This would normally read from disk
    return f"Content of {name}"


@mcp.resource("config://settings")
def get_settings() -> str:
    """Get application settings."""
    return """{
  "theme": "dark",
  "language": "en",
  "debug": false
}"""
```
Full example: examples/snippets/servers/basic_resource.py
Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP(name="Tool Example")


@mcp.tool()
def sum(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b


@mcp.tool()
def get_weather(city: str, unit: str = "celsius") -> str:
    """Get weather for a city."""
    # This would normally call a weather API
    return f"Weather in {city}: 22degrees{unit[0].upper()}"
```
Full example: examples/snippets/servers/basic_tool.py
Tools can optionally receive a Context object by including a parameter with the `Context` type annotation. This context is automatically injected by the FastMCP framework and provides access to MCP capabilities:
```python
from mcp.server.fastmcp import Context, FastMCP
from mcp.server.session import ServerSession

mcp = FastMCP(name="Progress Example")


@mcp.tool()
async def long_running_task(task_name: str, ctx: Context[ServerSession, None], steps: int = 5) -> str:
    """Execute a task with progress updates."""
    await ctx.info(f"Starting: {task_name}")

    for i in range(steps):
        progress = (i + 1) / steps
        await ctx.report_progress(
            progress=progress,
            total=1.0,
            message=f"Step {i + 1}/{steps}",
        )
        await ctx.debug(f"Completed step {i + 1}")

    return f"Task '{task_name}' completed"
```
Full example: examples/snippets/servers/tool_progress.py
Tools return structured results by default if their return type annotation is compatible; otherwise, they return unstructured results.
Structured output supports these return types:
- Pydantic models (BaseModel subclasses)
- TypedDicts
- Dataclasses and other classes with type hints
- `dict[str, T]` (where T is any JSON-serializable type)
- Primitive types (str, int, float, bool, bytes, None) - wrapped in `{"result": value}`
- Generic types (list, tuple, Union, Optional, etc.) - wrapped in `{"result": value}`
Classes without type hints cannot be serialized for structured output. Only classes with properly annotated attributes will be converted to Pydantic models for schema generation and validation.
Structured results are automatically validated against the output schema generated from the annotation. This ensures the tool returns well-typed, validated data that clients can easily process.
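To make this concrete, here is a hedged sketch of the kind of schema derived from a return annotation, using plain Pydantic directly (FastMCP's exact schema wrapping may differ; the `WeatherData` model is illustrative):

```python
from pydantic import BaseModel


class WeatherData(BaseModel):
    temperature: float
    condition: str


# A schema roughly like this is generated from the return annotation,
# and structured tool results are validated against it:
print(WeatherData.model_json_schema())
# {'properties': {'temperature': {'title': 'Temperature', 'type': 'number'},
#   'condition': {'title': 'Condition', 'type': 'string'}},
#  'required': ['temperature', 'condition'], 'title': 'WeatherData', 'type': 'object'}
```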
Note: Unstructured results are still returned alongside structured ones for backward compatibility with previous versions of the MCP specification, and they are quirks-compatible with previous versions of FastMCP in the current version of the SDK.
Note: In cases where a tool function's return type annotation causes the tool to be classified as structured and this is undesirable, the classification can be suppressed by passing `structured_output=False` to the `@tool` decorator.
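For illustration, a minimal sketch of suppressing the structured classification (the tool name and return value here are hypothetical):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Unstructured Example")


# dict[str, str] would normally be classified as structured output,
# but structured_output=False forces classic unstructured results.
@mcp.tool(structured_output=False)
def get_raw_data() -> dict[str, str]:
    """Return data without a generated output schema."""
    return {"key": "value"}
```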
"""Example showing structured output with tools.""" from typing import TypedDict from pydantic import BaseModel, Field from mcp.server.fastmcp import FastMCP mcp = FastMCP("Structured Output Example") # Using Pydantic models for rich structured data class WeatherData(BaseModel): """Weather information structure.""" temperature: float = Field(description="Temperature in Celsius") humidity: float = Field(description="Humidity percentage") condition: str wind_speed: float @mcp.tool() def get_weather(city: str) -> WeatherData: """Get weather for a city - returns structured data.""" # Simulated weather data return WeatherData( temperature=22.5, humidity=45.0, condition="sunny", wind_speed=5.2, ) # Using TypedDict for simpler structures class LocationInfo(TypedDict): latitude: float longitude: float name: str @mcp.tool() def get_location(address: str) -> LocationInfo: """Get location coordinates""" return LocationInfo(latitude=51.5074, longitude=-0.1278, name="London, UK") # Using dict[str, Any] for flexible schemas @mcp.tool() def get_statistics(data_type: str) -> dict[str, float]: """Get various statistics""" return {"mean": 42.5, "median": 40.0, "std_dev": 5.2} # Ordinary classes with type hints work for structured output class UserProfile: name: str age: int email: str | None = None def __init__(self, name: str, age: int, email: str | None = None): self.name = name self.age = age self.email = email @mcp.tool() def get_user(user_id: str) -> UserProfile: """Get user profile - returns structured data""" return UserProfile(name="Alice", age=30, email="alice@example.com") # Classes WITHOUT type hints cannot be used for structured output class UntypedConfig: def __init__(self, setting1, setting2): # type: ignore[reportMissingParameterType] self.setting1 = setting1 self.setting2 = setting2 @mcp.tool() def get_config() -> UntypedConfig: """This returns unstructured output - no schema generated""" return UntypedConfig("value1", "value2") # Lists and other types are wrapped automatically @mcp.tool() def list_cities() -> list[str]: """Get a list of cities""" return ["London", "Paris", "Tokyo"] # Returns: {"result": ["London", "Paris", "Tokyo"]} @mcp.tool() def get_temperature(city: str) -> float: """Get temperature as a simple float""" return 22.5 # Returns: {"result": 22.5}
Full example: examples/snippets/servers/structured_output.py
Prompts are reusable templates that help LLMs interact with your server effectively:
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts import base

mcp = FastMCP(name="Prompt Example")


@mcp.prompt(title="Code Review")
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"


@mcp.prompt(title="Debug Assistant")
def debug_error(error: str) -> list[base.Message]:
    return [
        base.UserMessage("I'm seeing this error:"),
        base.UserMessage(error),
        base.AssistantMessage("I'll help debug that. What have you tried so far?"),
    ]
```
Full example: examples/snippets/servers/basic_prompt.py
MCP servers can provide icons for UI display. Icons can be added to the server implementation, tools, resources, and prompts:
```python
from mcp.server.fastmcp import FastMCP, Icon

# Create an icon from a file path or URL
icon = Icon(src="icon.png", mimeType="image/png", sizes="64x64")

# Add icons to server
mcp = FastMCP("My Server", website_url="https://example.com", icons=[icon])


# Add icons to tools, resources, and prompts
@mcp.tool(icons=[icon])
def my_tool():
    """Tool with an icon."""
    return "result"


@mcp.resource("demo://resource", icons=[icon])
def my_resource():
    """Resource with an icon."""
    return "content"
```
Full example: examples/fastmcp/icons_demo.py
FastMCP provides an `Image` class that automatically handles image data:
"""Example showing image handling with FastMCP.""" from PIL import Image as PILImage from mcp.server.fastmcp import FastMCP, Image mcp = FastMCP("Image Example") @mcp.tool() def create_thumbnail(image_path: str) -> Image: """Create a thumbnail from an image""" img = PILImage.open(image_path) img.thumbnail((100, 100)) return Image(data=img.tobytes(), format="png")
Full example: examples/snippets/servers/images.py
The Context object is automatically injected into tool and resource functions that request it via type hints. It provides access to MCP capabilities like logging, progress reporting, resource reading, user interaction, and request metadata.
To use context in a tool or resource function, add a parameter with the `Context` type annotation:
```python
from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP(name="Context Example")


@mcp.tool()
async def my_tool(x: int, ctx: Context) -> str:
    """Tool that uses context capabilities."""
    # The context parameter can have any name as long as it's type-annotated
    return await process_with_context(x, ctx)
```
The Context object provides the following capabilities:
- `ctx.request_id` - Unique ID for the current request
- `ctx.client_id` - Client ID if available
- `ctx.fastmcp` - Access to the FastMCP server instance (see FastMCP Properties)
- `ctx.session` - Access to the underlying session for advanced communication (see Session Properties and Methods)
- `ctx.request_context` - Access to request-specific data and lifespan resources (see Request Context Properties)
- `await ctx.debug(message)` - Send debug log message
- `await ctx.info(message)` - Send info log message
- `await ctx.warning(message)` - Send warning log message
- `await ctx.error(message)` - Send error log message
- `await ctx.log(level, message, logger_name=None)` - Send log with custom level
- `await ctx.report_progress(progress, total=None, message=None)` - Report operation progress
- `await ctx.read_resource(uri)` - Read a resource by URI
- `await ctx.elicit(message, schema)` - Request additional information from user with validation
```python
from mcp.server.fastmcp import Context, FastMCP
from mcp.server.session import ServerSession

mcp = FastMCP(name="Progress Example")


@mcp.tool()
async def long_running_task(task_name: str, ctx: Context[ServerSession, None], steps: int = 5) -> str:
    """Execute a task with progress updates."""
    await ctx.info(f"Starting: {task_name}")

    for i in range(steps):
        progress = (i + 1) / steps
        await ctx.report_progress(
            progress=progress,
            total=1.0,
            message=f"Step {i + 1}/{steps}",
        )
        await ctx.debug(f"Completed step {i + 1}")

    return f"Task '{task_name}' completed"
```
Full example: examples/snippets/servers/tool_progress.py
MCP supports providing completion suggestions for prompt arguments and resource template parameters. With the context parameter, servers can provide completions based on previously resolved values:
Client usage:
""" cd to the `examples/snippets` directory and run: uv run completion-client """ import asyncio import os from mcp import ClientSession, StdioServerParameters from mcp.client.stdio import stdio_client from mcp.types import PromptReference, ResourceTemplateReference # Create server parameters for stdio connection server_params = StdioServerParameters( command="uv", # Using uv to run the server args=["run", "server", "completion", "stdio"], # Server with completion support env={"UV_INDEX": os.environ.get("UV_INDEX", "")}, ) async def run(): """Run the completion client example.""" async with stdio_client(server_params) as (read, write): async with ClientSession(read, write) as session: # Initialize the connection await session.initialize() # List available resource templates templates = await session.list_resource_templates() print("Available resource templates:") for template in templates.resourceTemplates: print(f" - {template.uriTemplate}") # List available prompts prompts = await session.list_prompts() print("\nAvailable prompts:") for prompt in prompts.prompts: print(f" - {prompt.name}") # Complete resource template arguments if templates.resourceTemplates: template = templates.resourceTemplates[0] print(f"\nCompleting arguments for resource template: {template.uriTemplate}") # Complete without context result = await session.complete( ref=ResourceTemplateReference(type="ref/resource", uri=template.uriTemplate), argument={"name": "owner", "value": "model"}, ) print(f"Completions for 'owner' starting with 'model': {result.completion.values}") # Complete with context - repo suggestions based on owner result = await session.complete( ref=ResourceTemplateReference(type="ref/resource", uri=template.uriTemplate), argument={"name": "repo", "value": ""}, context_arguments={"owner": "modelcontextprotocol"}, ) print(f"Completions for 'repo' with owner='modelcontextprotocol': {result.completion.values}") # Complete prompt arguments if prompts.prompts: prompt_name = prompts.prompts[0].name print(f"\nCompleting arguments for prompt: {prompt_name}") result = await session.complete( ref=PromptReference(type="ref/prompt", name=prompt_name), argument={"name": "style", "value": ""}, ) print(f"Completions for 'style' argument: {result.completion.values}") def main(): """Entry point for the completion client.""" asyncio.run(run()) if __name__ == "__main__": main()
Full example: examples/snippets/clients/completion_client.py
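On the server side, here is a hedged sketch of a completion handler. This assumes FastMCP's `completion` decorator and the `Completion` types from `mcp.types` (see the repository's completion server snippet for the canonical version); the owner/repo data is made up:

```python
from mcp.server.fastmcp import FastMCP
from mcp.types import (
    Completion,
    CompletionArgument,
    CompletionContext,
    PromptReference,
    ResourceTemplateReference,
)

mcp = FastMCP(name="Completion Example")

# Hypothetical data: repositories grouped by owner
SAMPLE_REPOS = {"modelcontextprotocol": ["python-sdk", "typescript-sdk", "specification"]}


@mcp.completion()
async def handle_completion(
    ref: PromptReference | ResourceTemplateReference,
    argument: CompletionArgument,
    context: CompletionContext | None,
) -> Completion | None:
    """Suggest repo names, filtered by a previously resolved 'owner' argument."""
    if isinstance(ref, ResourceTemplateReference) and argument.name == "repo":
        owner = (context.arguments or {}).get("owner") if context else None
        repos = SAMPLE_REPOS.get(owner or "", [])
        matches = [r for r in repos if r.startswith(argument.value)]
        return Completion(values=matches, hasMore=False)
    return None
```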
Servers can request additional information from users through elicitation. This example shows an elicitation during a tool call:
```python
from pydantic import BaseModel, Field

from mcp.server.fastmcp import Context, FastMCP
from mcp.server.session import ServerSession

mcp = FastMCP(name="Elicitation Example")


class BookingPreferences(BaseModel):
    """Schema for collecting user preferences."""

    checkAlternative: bool = Field(description="Would you like to check another date?")
    alternativeDate: str = Field(
        default="2024-12-26",
        description="Alternative date (YYYY-MM-DD)",
    )


@mcp.tool()
async def book_table(date: str, time: str, party_size: int, ctx: Context[ServerSession, None]) -> str:
    """Book a table with date availability check."""
    # Check if date is available
    if date == "2024-12-25":
        # Date unavailable - ask user for alternative
        result = await ctx.elicit(
            message=(f"No tables available for {party_size} on {date}. Would you like to try another date?"),
            schema=BookingPreferences,
        )

        if result.action == "accept" and result.data:
            if result.data.checkAlternative:
                return f"[SUCCESS] Booked for {result.data.alternativeDate}"
            return "[CANCELLED] No booking made"
        return "[CANCELLED] Booking cancelled"

    # Date available
    return f"[SUCCESS] Booked for {date} at {time}"
```
Full example: examples/snippets/servers/elicitation.py
Elicitation schemas support default values for all field types. Default values are automatically included in the JSON schema sent to clients, allowing them to pre-populate forms.
The `elicit()` method returns an `ElicitationResult` with:
- `action`: "accept", "decline", or "cancel"
- `data`: The validated response (only when accepted)
- `validation_error`: Any validation error message
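As a quick illustration of handling each outcome, here is a hedged sketch (the schema, tool, and messages are hypothetical):

```python
from pydantic import BaseModel, Field

from mcp.server.fastmcp import Context, FastMCP
from mcp.server.session import ServerSession

mcp = FastMCP(name="Elicitation Result Example")


class OrderConfirmation(BaseModel):
    """Hypothetical schema for confirming an order."""

    confirmed: bool = Field(default=True, description="Confirm the order?")


@mcp.tool()
async def place_order(item: str, ctx: Context[ServerSession, None]) -> str:
    """Place an order after confirming with the user."""
    result = await ctx.elicit(message=f"Confirm order for {item}?", schema=OrderConfirmation)

    if result.action == "accept" and result.data:
        # result.data is a validated OrderConfirmation instance
        return "Order placed" if result.data.confirmed else "Order not confirmed"
    if result.action == "decline":
        return "User declined to answer"
    return "User cancelled"  # action == "cancel"
```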
Tools can interact with LLMs through sampling (generating text):
```python
from mcp.server.fastmcp import Context, FastMCP
from mcp.server.session import ServerSession
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP(name="Sampling Example")


@mcp.tool()
async def generate_poem(topic: str, ctx: Context[ServerSession, None]) -> str:
    """Generate a poem using LLM sampling."""
    prompt = f"Write a short poem about {topic}"

    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=prompt),
            )
        ],
        max_tokens=100,
    )

    if result.content.type == "text":
        return result.content.text
    return str(result.content)
```
Full example: examples/snippets/servers/sampling.py
Tools can send logs and notifications through the context:
```python
from mcp.server.fastmcp import Context, FastMCP
from mcp.server.session import ServerSession

mcp = FastMCP(name="Notifications Example")


@mcp.tool()
async def process_data(data: str, ctx: Context[ServerSession, None]) -> str:
    """Process data with logging."""
    # Different log levels
    await ctx.debug(f"Debug: Processing '{data}'")
    await ctx.info("Info: Starting processing")
    await ctx.warning("Warning: This is experimental")
    await ctx.error("Error: (This is just a demo)")

    # Notify about resource changes
    await ctx.session.send_resource_list_changed()

    return f"Processed: {data}"
```
Full example: examples/snippets/servers/notifications.py
Servers that expose tools accessing protected resources can require authentication.
`mcp.server.auth` implements OAuth 2.1 resource server functionality, where MCP servers act as Resource Servers (RS) that validate tokens issued by separate Authorization Servers (AS). This follows the MCP authorization specification and implements RFC 9728 (Protected Resource Metadata) for AS discovery.
MCP servers can use authentication by providing an implementation of the `TokenVerifier` protocol:
""" Run from the repository root: uv run examples/snippets/servers/oauth_server.py """ from pydantic import AnyHttpUrl from mcp.server.auth.provider import AccessToken, TokenVerifier from mcp.server.auth.settings import AuthSettings from mcp.server.fastmcp import FastMCP class SimpleTokenVerifier(TokenVerifier): """Simple token verifier for demonstration.""" async def verify_token(self, token: str) -> AccessToken | None: pass # This is where you would implement actual token validation # Create FastMCP instance as a Resource Server mcp = FastMCP( "Weather Service", # Token verifier for authentication token_verifier=SimpleTokenVerifier(), # Auth settings for RFC 9728 Protected Resource Metadata auth=AuthSettings( issuer_url=AnyHttpUrl("https://auth.example.com"), # Authorization Server URL resource_server_url=AnyHttpUrl("http://localhost:3001"), # This server's URL required_scopes=["user"], ), ) @mcp.tool() async def get_weather(city: str = "London") -> dict[str, str]: """Get weather data for a city""" return { "city": city, "temperature": "22", "condition": "Partly cloudy", "humidity": "65%", } if __name__ == "__main__": mcp.run(transport="streamable-http")
Full example: examples/snippets/servers/oauth_server.py
For a complete example with separate Authorization Server and Resource Server implementations, see `examples/servers/simple-auth/`.
Architecture:
- Authorization Server (AS): Handles OAuth flows, user authentication, and token issuance
- Resource Server (RS): Your MCP server that validates tokens and serves protected resources
- Client: Discovers AS through RFC 9728, obtains tokens, and uses them with the MCP server
See TokenVerifier for more details on implementing token validation.
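As a rough illustration, here is a hedged sketch of a verifier that checks tokens against a static in-memory map. The token values and the `AccessToken` field choices are illustrative assumptions, not a production design:

```python
from mcp.server.auth.provider import AccessToken, TokenVerifier

# Hypothetical static token database, for demonstration only
VALID_TOKENS: dict[str, AccessToken] = {
    "demo-token": AccessToken(
        token="demo-token",
        client_id="demo-client",
        scopes=["user"],
        expires_at=None,  # never expires (demo only)
    ),
}


class StaticTokenVerifier(TokenVerifier):
    """Verify tokens by looking them up in a static map."""

    async def verify_token(self, token: str) -> AccessToken | None:
        # Return the matching AccessToken, or None to reject the request
        return VALID_TOKENS.get(token)
```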
The FastMCP server instance accessible via `ctx.fastmcp` provides access to server configuration and metadata:
- `ctx.fastmcp.name` - The server's name as defined during initialization
- `ctx.fastmcp.instructions` - Server instructions/description provided to clients
- `ctx.fastmcp.website_url` - Optional website URL for the server
- `ctx.fastmcp.icons` - Optional list of icons for UI display
- `ctx.fastmcp.settings` - Complete server configuration object containing:
  - `debug` - Debug mode flag
  - `log_level` - Current logging level
  - `host` and `port` - Server network configuration
  - `mount_path`, `sse_path`, `streamable_http_path` - Transport paths
  - `stateless_http` - Whether the server operates in stateless mode
  - And other configuration options
```python
@mcp.tool()
def server_info(ctx: Context) -> dict:
    """Get information about the current server."""
    return {
        "name": ctx.fastmcp.name,
        "instructions": ctx.fastmcp.instructions,
        "debug_mode": ctx.fastmcp.settings.debug,
        "log_level": ctx.fastmcp.settings.log_level,
        "host": ctx.fastmcp.settings.host,
        "port": ctx.fastmcp.settings.port,
    }
```
The session object accessible via `ctx.session` provides advanced control over client communication:
- `ctx.session.client_params` - Client initialization parameters and declared capabilities
- `await ctx.session.send_log_message(level, data, logger)` - Send log messages with full control
- `await ctx.session.create_message(messages, max_tokens)` - Request LLM sampling/completion
- `await ctx.session.send_progress_notification(token, progress, total, message)` - Direct progress updates
- `await ctx.session.send_resource_updated(uri)` - Notify clients that a specific resource changed
- `await ctx.session.send_resource_list_changed()` - Notify clients that the resource list changed
- `await ctx.session.send_tool_list_changed()` - Notify clients that the tool list changed
- `await ctx.session.send_prompt_list_changed()` - Notify clients that the prompt list changed
```python
@mcp.tool()
async def notify_data_update(resource_uri: str, ctx: Context) -> str:
    """Update data and notify clients of the change."""
    # Perform data update logic here

    # Notify clients that this specific resource changed
    await ctx.session.send_resource_updated(AnyUrl(resource_uri))

    # If this affects the overall resource list, notify about that too
    await ctx.session.send_resource_list_changed()

    return f"Updated {resource_uri} and notified clients"
```
The request context accessible via `ctx.request_context` contains request-specific information and resources:
- `ctx.request_context.lifespan_context` - Access to resources initialized during server startup
  - Database connections, configuration objects, shared services
  - Type-safe access to resources defined in your server's lifespan function
- `ctx.request_context.meta` - Request metadata from the client, including:
  - `progressToken` - Token for progress notifications
  - Other client-provided metadata
- `ctx.request_context.request` - The original MCP request object for advanced processing
- `ctx.request_context.request_id` - Unique identifier for this request
```python
# Example with typed lifespan context
@dataclass
class AppContext:
    db: Database
    config: AppConfig


@mcp.tool()
def query_with_config(query: str, ctx: Context) -> str:
    """Execute a query using shared database and configuration."""
    # Access typed lifespan context
    app_ctx: AppContext = ctx.request_context.lifespan_context

    # Use shared resources
    connection = app_ctx.db
    settings = app_ctx.config

    # Execute query with configuration
    result = connection.execute(query, timeout=settings.query_timeout)
    return str(result)
```
Full lifespan example: examples/snippets/servers/lifespan_example.py
The fastest way to test and debug your server is with the MCP Inspector:
```bash
uv run mcp dev server.py

# Add dependencies
uv run mcp dev server.py --with pandas --with numpy

# Mount local code
uv run mcp dev server.py --with-editable .
```
Once your server is ready, install it in Claude Desktop:
```bash
uv run mcp install server.py

# Custom name
uv run mcp install server.py --name "My Analytics Server"

# Environment variables
uv run mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
uv run mcp install server.py -f .env
```
For advanced scenarios like custom deployments:
"""Example showing direct execution of an MCP server. This is the simplest way to run an MCP server directly. cd to the `examples/snippets` directory and run: uv run direct-execution-server or python servers/direct_execution.py """ from mcp.server.fastmcp import FastMCP mcp = FastMCP("My App") @mcp.tool() def hello(name: str = "World") -> str: """Say hello to someone.""" return f"Hello, {name}!" def main(): """Entry point for the direct execution server.""" mcp.run() if __name__ == "__main__": main()
Full example: examples/snippets/servers/direct_execution.py
Run it with:
```bash
python servers/direct_execution.py
# or
uv run mcp run servers/direct_execution.py
```
Note that `uv run mcp run` or `uv run mcp dev` only supports servers using FastMCP, not the low-level server variant.
Note: Streamable HTTP transport is superseding SSE transport for production deployments.
""" Run from the repository root: uv run examples/snippets/servers/streamable_config.py """ from mcp.server.fastmcp import FastMCP # Stateful server (maintains session state) mcp = FastMCP("StatefulServer") # Other configuration options: # Stateless server (no session persistence) # mcp = FastMCP("StatelessServer", stateless_http=True) # Stateless server (no session persistence, no sse stream with supported client) # mcp = FastMCP("StatelessServer", stateless_http=True, json_response=True) # Add a simple tool to demonstrate the server @mcp.tool() def greet(name: str = "World") -> str: """Greet someone by name.""" return f"Hello, {name}!" # Run server with streamable_http transport if __name__ == "__main__": mcp.run(transport="streamable-http")
Full example: examples/snippets/servers/streamable_config.py
You can mount multiple FastMCP servers in a Starlette application:
""" Run from the repository root: uvicorn examples.snippets.servers.streamable_starlette_mount:app --reload """ import contextlib from starlette.applications import Starlette from starlette.routing import Mount from mcp.server.fastmcp import FastMCP # Create the Echo server echo_mcp = FastMCP(name="EchoServer", stateless_http=True) @echo_mcp.tool() def echo(message: str) -> str: """A simple echo tool""" return f"Echo: {message}" # Create the Math server math_mcp = FastMCP(name="MathServer", stateless_http=True) @math_mcp.tool() def add_two(n: int) -> int: """Tool to add two to the input""" return n + 2 # Create a combined lifespan to manage both session managers @contextlib.asynccontextmanager async def lifespan(app: Starlette): async with contextlib.AsyncExitStack() as stack: await stack.enter_async_context(echo_mcp.session_manager.run()) await stack.enter_async_context(math_mcp.session_manager.run()) yield # Create the Starlette app and mount the MCP servers app = Starlette( routes=[ Mount("/echo", echo_mcp.streamable_http_app()), Mount("/math", math_mcp.streamable_http_app()), ], lifespan=lifespan, ) # Note: Clients connect to http://localhost:8000/echo/mcp and http://localhost:8000/math/mcp # To mount at the root of each path (e.g., /echo instead of /echo/mcp): # echo_mcp.settings.streamable_http_path = "/" # math_mcp.settings.streamable_http_path = "/"
Full example: examples/snippets/servers/streamable_starlette_mount.py
For low-level server implementations with Streamable HTTP, see:
- Stateful server: `examples/servers/simple-streamablehttp/`
- Stateless server: `examples/servers/simple-streamablehttp-stateless/`
The streamable HTTP transport supports:
- Stateful and stateless operation modes
- Resumability with event stores
- JSON or SSE response formats
- Better scalability for multi-node deployments
If you'd like your server to be accessible by browser-based MCP clients, you'll need to configure CORS headers. The `Mcp-Session-Id` header must be exposed for browser clients to access it:
```python
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware

# Create your Starlette app first
starlette_app = Starlette(routes=[...])

# Then wrap it with CORS middleware
starlette_app = CORSMiddleware(
    starlette_app,
    allow_origins=["*"],  # Configure appropriately for production
    allow_methods=["GET", "POST", "DELETE"],  # MCP streamable HTTP methods
    expose_headers=["Mcp-Session-Id"],
)
```
This configuration is necessary because:
- The MCP streamable HTTP transport uses the `Mcp-Session-Id` header for session management
- Browsers restrict access to response headers unless explicitly exposed via CORS
- Without this configuration, browser-based clients won't be able to read the session ID from initialization responses
By default, SSE servers are mounted at `/sse` and Streamable HTTP servers are mounted at `/mcp`. You can customize these paths using the methods described below.
For more information on mounting applications in Starlette, see the Starlette documentation.
You can mount the StreamableHTTP server to an existing ASGI server using the `streamable_http_app` method. This allows you to integrate the StreamableHTTP server with other ASGI applications.
""" Basic example showing how to mount StreamableHTTP server in Starlette. Run from the repository root: uvicorn examples.snippets.servers.streamable_http_basic_mounting:app --reload """ from starlette.applications import Starlette from starlette.routing import Mount from mcp.server.fastmcp import FastMCP # Create MCP server mcp = FastMCP("My App") @mcp.tool() def hello() -> str: """A simple hello tool""" return "Hello from MCP!" # Mount the StreamableHTTP server to the existing ASGI server app = Starlette( routes=[ Mount("/", app=mcp.streamable_http_app()), ] )
Full example: examples/snippets/servers/streamable_http_basic_mounting.py
""" Example showing how to mount StreamableHTTP server using Host-based routing. Run from the repository root: uvicorn examples.snippets.servers.streamable_http_host_mounting:app --reload """ from starlette.applications import Starlette from starlette.routing import Host from mcp.server.fastmcp import FastMCP # Create MCP server mcp = FastMCP("MCP Host App") @mcp.tool() def domain_info() -> str: """Get domain-specific information""" return "This is served from mcp.acme.corp" # Mount using Host-based routing app = Starlette( routes=[ Host("mcp.acme.corp", app=mcp.streamable_http_app()), ] )
Full example: examples/snippets/servers/streamable_http_host_mounting.py
""" Example showing how to mount multiple StreamableHTTP servers with path configuration. Run from the repository root: uvicorn examples.snippets.servers.streamable_http_multiple_servers:app --reload """ from starlette.applications import Starlette from starlette.routing import Mount from mcp.server.fastmcp import FastMCP # Create multiple MCP servers api_mcp = FastMCP("API Server") chat_mcp = FastMCP("Chat Server") @api_mcp.tool() def api_status() -> str: """Get API status""" return "API is running" @chat_mcp.tool() def send_message(message: str) -> str: """Send a chat message""" return f"Message sent: {message}" # Configure servers to mount at the root of each path # This means endpoints will be at /api and /chat instead of /api/mcp and /chat/mcp api_mcp.settings.streamable_http_path = "/" chat_mcp.settings.streamable_http_path = "/" # Mount the servers app = Starlette( routes=[ Mount("/api", app=api_mcp.streamable_http_app()), Mount("/chat", app=chat_mcp.streamable_http_app()), ] )
Full example: examples/snippets/servers/streamable_http_multiple_servers.py
""" Example showing path configuration during FastMCP initialization. Run from the repository root: uvicorn examples.snippets.servers.streamable_http_path_config:app --reload """ from starlette.applications import Starlette from starlette.routing import Mount from mcp.server.fastmcp import FastMCP # Configure streamable_http_path during initialization # This server will mount at the root of wherever it's mounted mcp_at_root = FastMCP("My Server", streamable_http_path="/") @mcp_at_root.tool() def process_data(data: str) -> str: """Process some data""" return f"Processed: {data}" # Mount at /process - endpoints will be at /process instead of /process/mcp app = Starlette( routes=[ Mount("/process", app=mcp_at_root.streamable_http_app()), ] )
Full example: examples/snippets/servers/streamable_http_path_config.py
Note: SSE transport is being superseded by Streamable HTTP transport.
You can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications.
```python
from starlette.applications import Starlette
from starlette.routing import Host, Mount

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")

# Mount the SSE server to the existing ASGI server
app = Starlette(
    routes=[
        Mount('/', app=mcp.sse_app()),
    ]
)

# or dynamically mount as host
app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app()))
```
When mounting multiple MCP servers under different paths, you can configure the mount path in several ways:
```python
from starlette.applications import Starlette
from starlette.routing import Mount

from mcp.server.fastmcp import FastMCP

# Create multiple MCP servers
github_mcp = FastMCP("GitHub API")
browser_mcp = FastMCP("Browser")
curl_mcp = FastMCP("Curl")
search_mcp = FastMCP("Search")

# Method 1: Configure mount paths via settings (recommended for persistent configuration)
github_mcp.settings.mount_path = "/github"
browser_mcp.settings.mount_path = "/browser"

# Method 2: Pass mount path directly to sse_app (preferred for ad-hoc mounting)
# This approach doesn't modify the server's settings permanently

# Create Starlette app with multiple mounted servers
app = Starlette(
    routes=[
        # Using settings-based configuration
        Mount("/github", app=github_mcp.sse_app()),
        Mount("/browser", app=browser_mcp.sse_app()),
        # Using direct mount path parameter
        Mount("/curl", app=curl_mcp.sse_app("/curl")),
        Mount("/search", app=search_mcp.sse_app("/search")),
    ]
)

# Method 3: For direct execution, you can also pass the mount path to run()
if __name__ == "__main__":
    search_mcp.run(transport="sse", mount_path="/search")
```
For more information on mounting applications in Starlette, see the Starlette documentation.
For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:
""" Run from the repository root: uv run examples/snippets/servers/lowlevel/lifespan.py """ from collections.abc import AsyncIterator from contextlib import asynccontextmanager from typing import Any import mcp.server.stdio import mcp.types as types from mcp.server.lowlevel import NotificationOptions, Server from mcp.server.models import InitializationOptions # Mock database class for example class Database: """Mock database class for example.""" @classmethod async def connect(cls) -> "Database": """Connect to database.""" print("Database connected") return cls() async def disconnect(self) -> None: """Disconnect from database.""" print("Database disconnected") async def query(self, query_str: str) -> list[dict[str, str]]: """Execute a query.""" # Simulate database query return [{"id": "1", "name": "Example", "query": query_str}] @asynccontextmanager async def server_lifespan(_server: Server) -> AsyncIterator[dict[str, Any]]: """Manage server startup and shutdown lifecycle.""" # Initialize resources on startup db = await Database.connect() try: yield {"db": db} finally: # Clean up on shutdown await db.disconnect() # Pass lifespan to server server = Server("example-server", lifespan=server_lifespan) @server.list_tools() async def handle_list_tools() -> list[types.Tool]: """List available tools.""" return [ types.Tool( name="query_db", description="Query the database", inputSchema={ "type": "object", "properties": {"query": {"type": "string", "description": "SQL query to execute"}}, "required": ["query"], }, ) ] @server.call_tool() async def query_db(name: str, arguments: dict[str, Any]) -> list[types.TextContent]: """Handle database query tool call.""" if name != "query_db": raise ValueError(f"Unknown tool: {name}") # Access lifespan context ctx = server.request_context db = ctx.lifespan_context["db"] # Execute query results = await db.query(arguments["query"]) return [types.TextContent(type="text", text=f"Query results: {results}")] async def run(): """Run the server with lifespan management.""" async with mcp.server.stdio.stdio_server() as (read_stream, write_stream): await server.run( read_stream, write_stream, InitializationOptions( server_name="example-server", server_version="0.1.0", capabilities=server.get_capabilities( notification_options=NotificationOptions(), experimental_capabilities={}, ), ), ) if __name__ == "__main__": import asyncio asyncio.run(run())
Full example: examples/snippets/servers/lowlevel/lifespan.py
The lifespan API provides:
- A way to initialize resources when the server starts and clean them up when it stops
- Access to initialized resources through the request context in handlers
- Type-safe context passing between lifespan and request handlers
""" Run from the repository root: uv run examples/snippets/servers/lowlevel/basic.py """ import asyncio import mcp.server.stdio import mcp.types as types from mcp.server.lowlevel import NotificationOptions, Server from mcp.server.models import InitializationOptions # Create a server instance server = Server("example-server") @server.list_prompts() async def handle_list_prompts() -> list[types.Prompt]: """List available prompts.""" return [ types.Prompt( name="example-prompt", description="An example prompt template", arguments=[types.PromptArgument(name="arg1", description="Example argument", required=True)], ) ] @server.get_prompt() async def handle_get_prompt(name: str, arguments: dict[str, str] | None) -> types.GetPromptResult: """Get a specific prompt by name.""" if name != "example-prompt": raise ValueError(f"Unknown prompt: {name}") arg1_value = (arguments or {}).get("arg1", "default") return types.GetPromptResult( description="Example prompt", messages=[ types.PromptMessage( role="user", content=types.TextContent(type="text", text=f"Example prompt text with argument: {arg1_value}"), ) ], ) async def run(): """Run the basic low-level server.""" async with mcp.server.stdio.stdio_server() as (read_stream, write_stream): await server.run( read_stream, write_stream, InitializationOptions( server_name="example", server_version="0.1.0", capabilities=server.get_capabilities( notification_options=NotificationOptions(), experimental_capabilities={}, ), ), ) if __name__ == "__main__": asyncio.run(run())
Full example: examples/snippets/servers/lowlevel/basic.py
Caution: The `uv run mcp run` and `uv run mcp dev` tools don't support the low-level server.
The low-level server supports structured output for tools, allowing you to return both human-readable content and machine-readable structured data. Tools can define an `outputSchema` to validate their structured output:
""" Run from the repository root: uv run examples/snippets/servers/lowlevel/structured_output.py """ import asyncio from typing import Any import mcp.server.stdio import mcp.types as types from mcp.server.lowlevel import NotificationOptions, Server from mcp.server.models import InitializationOptions server = Server("example-server") @server.list_tools() async def list_tools() -> list[types.Tool]: """List available tools with structured output schemas.""" return [ types.Tool( name="get_weather", description="Get current weather for a city", inputSchema={ "type": "object", "properties": {"city": {"type": "string", "description": "City name"}}, "required": ["city"], }, outputSchema={ "type": "object", "properties": { "temperature": {"type": "number", "description": "Temperature in Celsius"}, "condition": {"type": "string", "description": "Weather condition"}, "humidity": {"type": "number", "description": "Humidity percentage"}, "city": {"type": "string", "description": "City name"}, }, "required": ["temperature", "condition", "humidity", "city"], }, ) ] @server.call_tool() async def call_tool(name: str, arguments: dict[str, Any]) -> dict[str, Any]: """Handle tool calls with structured output.""" if name == "get_weather": city = arguments["city"] # Simulated weather data - in production, call a weather API weather_data = { "temperature": 22.5, "condition": "partly cloudy", "humidity": 65, "city": city, # Include the requested city } # low-level server will validate structured output against the tool's # output schema, and additionally serialize it into a TextContent block # for backwards compatibility with pre-2025-06-18 clients. return weather_data else: raise ValueError(f"Unknown tool: {name}") async def run(): """Run the structured output server.""" async with mcp.server.stdio.stdio_server() as (read_stream, write_stream): await server.run( read_stream, write_stream, InitializationOptions( server_name="structured-output-example", server_version="0.1.0", capabilities=server.get_capabilities( notification_options=NotificationOptions(), experimental_capabilities={}, ), ), ) if __name__ == "__main__": asyncio.run(run())
Full example: examples/snippets/servers/lowlevel/structured_output.py
Tools can return data in three ways:
- Content only: Return a list of content blocks (default behavior before spec revision 2025-06-18)
- Structured data only: Return a dictionary that will be serialized to JSON (Introduced in spec revision 2025-06-18)
- Both: Return a tuple of (content, structured_data) - the preferred option for backwards compatibility
When an `outputSchema` is defined, the server automatically validates the structured output against the schema. This ensures type safety and helps catch errors early.
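For instance, here is a hedged sketch of the third style, returning both content blocks and structured data from a low-level handler (the tool name and weather data are illustrative):

```python
from typing import Any

import mcp.types as types
from mcp.server.lowlevel import Server

server = Server("both-styles-server")


@server.call_tool()
async def call_tool(name: str, arguments: dict[str, Any]) -> tuple[list[types.TextContent], dict[str, Any]]:
    """Return human-readable content and machine-readable structured data."""
    if name != "get_weather":
        raise ValueError(f"Unknown tool: {name}")

    weather = {"temperature": 22.5, "condition": "sunny"}
    content = [types.TextContent(type="text", text=f"Weather: {weather}")]

    # Clients that understand structured output use the dict;
    # older clients fall back to the content blocks.
    return content, weather
```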
For servers that need to handle large datasets, the low-level server provides paginated versions of list operations. This is an optional optimization - most servers won't need pagination unless they're dealing with hundreds or thousands of items.
""" Example of implementing pagination with MCP server decorators. """ from pydantic import AnyUrl import mcp.types as types from mcp.server.lowlevel import Server # Initialize the server server = Server("paginated-server") # Sample data to paginate ITEMS = [f"Item {i}" for i in range(1, 101)] # 100 items @server.list_resources() async def list_resources_paginated(request: types.ListResourcesRequest) -> types.ListResourcesResult: """List resources with pagination support.""" page_size = 10 # Extract cursor from request params cursor = request.params.cursor if request.params is not None else None # Parse cursor to get offset start = 0 if cursor is None else int(cursor) end = start + page_size # Get page of resources page_items = [ types.Resource(uri=AnyUrl(f"resource://items/{item}"), name=item, description=f"Description for {item}") for item in ITEMS[start:end] ] # Determine next cursor next_cursor = str(end) if end < len(ITEMS) else None return types.ListResourcesResult(resources=page_items, nextCursor=next_cursor)
Full example: examples/snippets/servers/pagination_example.py
""" Example of consuming paginated MCP endpoints from a client. """ import asyncio from mcp.client.session import ClientSession from mcp.client.stdio import StdioServerParameters, stdio_client from mcp.types import PaginatedRequestParams, Resource async def list_all_resources() -> None: """Fetch all resources using pagination.""" async with stdio_client(StdioServerParameters(command="uv", args=["run", "mcp-simple-pagination"])) as ( read, write, ): async with ClientSession(read, write) as session: await session.initialize() all_resources: list[Resource] = [] cursor = None while True: # Fetch a page of resources result = await session.list_resources(params=PaginatedRequestParams(cursor=cursor)) all_resources.extend(result.resources) print(f"Fetched {len(result.resources)} resources") # Check if there are more pages if result.nextCursor: cursor = result.nextCursor else: break print(f"Total resources: {len(all_resources)}") if __name__ == "__main__": asyncio.run(list_all_resources())
Full example: examples/snippets/clients/pagination_client.py
- Cursors are opaque strings - the server defines the format (numeric offsets, timestamps, etc.)
- Return `nextCursor=None` when there are no more pages
- Backward compatible - clients that don't support pagination will still work (they'll just get the first page)
- Flexible page sizes - Each endpoint can define its own page size based on data characteristics
See the simple-pagination example for a complete implementation.
The SDK provides a high-level client interface for connecting to MCP servers using various transports:
""" cd to the `examples/snippets/clients` directory and run: uv run client """ import asyncio import os from pydantic import AnyUrl from mcp import ClientSession, StdioServerParameters, types from mcp.client.stdio import stdio_client from mcp.shared.context import RequestContext # Create server parameters for stdio connection server_params = StdioServerParameters( command="uv", # Using uv to run the server args=["run", "server", "fastmcp_quickstart", "stdio"], # We're already in snippets dir env={"UV_INDEX": os.environ.get("UV_INDEX", "")}, ) # Optional: create a sampling callback async def handle_sampling_message( context: RequestContext[ClientSession, None], params: types.CreateMessageRequestParams ) -> types.CreateMessageResult: print(f"Sampling request: {params.messages}") return types.CreateMessageResult( role="assistant", content=types.TextContent( type="text", text="Hello, world! from model", ), model="gpt-3.5-turbo", stopReason="endTurn", ) async def run(): async with stdio_client(server_params) as (read, write): async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session: # Initialize the connection await session.initialize() # List available prompts prompts = await session.list_prompts() print(f"Available prompts: {[p.name for p in prompts.prompts]}") # Get a prompt (greet_user prompt from fastmcp_quickstart) if prompts.prompts: prompt = await session.get_prompt("greet_user", arguments={"name": "Alice", "style": "friendly"}) print(f"Prompt result: {prompt.messages[0].content}") # List available resources resources = await session.list_resources() print(f"Available resources: {[r.uri for r in resources.resources]}") # List available tools tools = await session.list_tools() print(f"Available tools: {[t.name for t in tools.tools]}") # Read a resource (greeting resource from fastmcp_quickstart) resource_content = await session.read_resource(AnyUrl("greeting://World")) content_block = resource_content.contents[0] if isinstance(content_block, types.TextContent): print(f"Resource content: {content_block.text}") # Call a tool (add tool from fastmcp_quickstart) result = await session.call_tool("add", arguments={"a": 5, "b": 3}) result_unstructured = result.content[0] if isinstance(result_unstructured, types.TextContent): print(f"Tool result: {result_unstructured.text}") result_structured = result.structuredContent print(f"Structured tool result: {result_structured}") def main(): """Entry point for the client script.""" asyncio.run(run()) if __name__ == "__main__": main()
Full example: examples/snippets/clients/stdio_client.py
Clients can also connect using Streamable HTTP transport:
""" Run from the repository root: uv run examples/snippets/clients/streamable_basic.py """ import asyncio from mcp import ClientSession from mcp.client.streamable_http import streamablehttp_client async def main(): # Connect to a streamable HTTP server async with streamablehttp_client("http://localhost:8000/mcp") as ( read_stream, write_stream, _, ): # Create a session using the client streams async with ClientSession(read_stream, write_stream) as session: # Initialize the connection await session.initialize() # List available tools tools = await session.list_tools() print(f"Available tools: {[tool.name for tool in tools.tools]}") if __name__ == "__main__": asyncio.run(main())
Full example: examples/snippets/clients/streamable_basic.py
When building MCP clients, the SDK provides utilities to help display human-readable names for tools, resources, and prompts:
""" cd to the `examples/snippets` directory and run: uv run display-utilities-client """ import asyncio import os from mcp import ClientSession, StdioServerParameters from mcp.client.stdio import stdio_client from mcp.shared.metadata_utils import get_display_name # Create server parameters for stdio connection server_params = StdioServerParameters( command="uv", # Using uv to run the server args=["run", "server", "fastmcp_quickstart", "stdio"], env={"UV_INDEX": os.environ.get("UV_INDEX", "")}, ) async def display_tools(session: ClientSession): """Display available tools with human-readable names""" tools_response = await session.list_tools() for tool in tools_response.tools: # get_display_name() returns the title if available, otherwise the name display_name = get_display_name(tool) print(f"Tool: {display_name}") if tool.description: print(f" {tool.description}") async def display_resources(session: ClientSession): """Display available resources with human-readable names""" resources_response = await session.list_resources() for resource in resources_response.resources: display_name = get_display_name(resource) print(f"Resource: {display_name} ({resource.uri})") templates_response = await session.list_resource_templates() for template in templates_response.resourceTemplates: display_name = get_display_name(template) print(f"Resource Template: {display_name}") async def run(): """Run the display utilities example.""" async with stdio_client(server_params) as (read, write): async with ClientSession(read, write) as session: # Initialize the connection await session.initialize() print("=== Available Tools ===") await display_tools(session) print("\n=== Available Resources ===") await display_resources(session) def main(): """Entry point for the display utilities client.""" asyncio.run(run()) if __name__ == "__main__": main()
Full example: examples/snippets/clients/display_utilities.py
The `get_display_name()` function implements the proper precedence rules for displaying names:
- For tools: `title` > `annotations.title` > `name`
- For other objects: `title` > `name`
This ensures your client UI shows the most user-friendly names that servers provide.
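As a small illustration, here is a hedged sketch of the precedence on a manually constructed tool definition (this assumes the optional `title` field on `types.Tool`; the tool fields are made up):

```python
from mcp.shared.metadata_utils import get_display_name
from mcp.types import Tool

tool = Tool(
    name="calculate_sum",  # programmatic identifier
    title="Calculate Sum",  # human-friendly title, preferred for display
    inputSchema={"type": "object", "properties": {}},
)

print(get_display_name(tool))  # -> "Calculate Sum" (falls back to name if no title)
```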
The SDK includes authorization support for connecting to protected MCP servers:
""" Before running, specify running MCP RS server URL. To spin up RS server locally, see examples/servers/simple-auth/README.md cd to the `examples/snippets` directory and run: uv run oauth-client """ import asyncio from urllib.parse import parse_qs, urlparse from pydantic import AnyUrl from mcp import ClientSession from mcp.client.auth import OAuthClientProvider, TokenStorage from mcp.client.streamable_http import streamablehttp_client from mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAuthToken class InMemoryTokenStorage(TokenStorage): """Demo In-memory token storage implementation.""" def __init__(self): self.tokens: OAuthToken | None = None self.client_info: OAuthClientInformationFull | None = None async def get_tokens(self) -> OAuthToken | None: """Get stored tokens.""" return self.tokens async def set_tokens(self, tokens: OAuthToken) -> None: """Store tokens.""" self.tokens = tokens async def get_client_info(self) -> OAuthClientInformationFull | None: """Get stored client information.""" return self.client_info async def set_client_info(self, client_info: OAuthClientInformationFull) -> None: """Store client information.""" self.client_info = client_info async def handle_redirect(auth_url: str) -> None: print(f"Visit: {auth_url}") async def handle_callback() -> tuple[str, str | None]: callback_url = input("Paste callback URL: ") params = parse_qs(urlparse(callback_url).query) return params["code"][0], params.get("state", [None])[0] async def main(): """Run the OAuth client example.""" oauth_auth = OAuthClientProvider( server_url="http://localhost:8001", client_metadata=OAuthClientMetadata( client_name="Example MCP Client", redirect_uris=[AnyUrl("http://localhost:3000/callback")], grant_types=["authorization_code", "refresh_token"], response_types=["code"], scope="user", ), storage=InMemoryTokenStorage(), redirect_handler=handle_redirect, callback_handler=handle_callback, ) async with streamablehttp_client("http://localhost:8001/mcp", auth=oauth_auth) as (read, write, _): async with ClientSession(read, write) as session: await session.initialize() tools = await session.list_tools() print(f"Available tools: {[tool.name for tool in tools.tools]}") resources = await session.list_resources() print(f"Available resources: {[r.uri for r in resources.resources]}") def run(): asyncio.run(main()) if __name__ == "__main__": run()
Full example: examples/snippets/clients/oauth_client.py
For a complete working example, see `examples/clients/simple-auth-client/`.
When calling tools through MCP, the `CallToolResult` object contains the tool's response in a structured format. Understanding how to parse this result is essential for properly handling tool outputs.
"""examples/snippets/clients/parsing_tool_results.py""" import asyncio from mcp import ClientSession, StdioServerParameters, types from mcp.client.stdio import stdio_client async def parse_tool_results(): """Demonstrates how to parse different types of content in CallToolResult.""" server_params = StdioServerParameters( command="python", args=["path/to/mcp_server.py"] ) async with stdio_client(server_params) as (read, write): async with ClientSession(read, write) as session: await session.initialize() # Example 1: Parsing text content result = await session.call_tool("get_data", {"format": "text"}) for content in result.content: if isinstance(content, types.TextContent): print(f"Text: {content.text}") # Example 2: Parsing structured content from JSON tools result = await session.call_tool("get_user", {"id": "123"}) if hasattr(result, "structuredContent") and result.structuredContent: # Access structured data directly user_data = result.structuredContent print(f"User: {user_data.get('name')}, Age: {user_data.get('age')}") # Example 3: Parsing embedded resources result = await session.call_tool("read_config", {}) for content in result.content: if isinstance(content, types.EmbeddedResource): resource = content.resource if isinstance(resource, types.TextResourceContents): print(f"Config from {resource.uri}: {resource.text}") elif isinstance(resource, types.BlobResourceContents): print(f"Binary data from {resource.uri}") # Example 4: Parsing image content result = await session.call_tool("generate_chart", {"data": [1, 2, 3]}) for content in result.content: if isinstance(content, types.ImageContent): print(f"Image ({content.mimeType}): {len(content.data)} bytes") # Example 5: Handling errors result = await session.call_tool("failing_tool", {}) if result.isError: print("Tool execution failed!") for content in result.content: if isinstance(content, types.TextContent): print(f"Error: {content.text}") async def main(): await parse_tool_results() if __name__ == "__main__": asyncio.run(main())
The MCP protocol defines three core primitives that servers can implement:
| Primitive | Control | Description | Example Use |
|---|---|---|---|
| Prompts | User-controlled | Interactive templates invoked by user choice | Slash commands, menu options |
| Resources | Application-controlled | Contextual data managed by the client application | File contents, API responses |
| Tools | Model-controlled | Functions exposed to the LLM to take actions | API calls, data updates |
MCP servers declare capabilities during initialization:
| Capability | Feature Flag | Description |
|---|---|---|
| prompts | listChanged | Prompt template management |
| resources | subscribe, listChanged | Resource exposure and updates |
| tools | listChanged | Tool discovery and execution |
| logging | - | Server logging configuration |
| completions | - | Argument completion suggestions |
- API Reference
- Model Context Protocol documentation
- Model Context Protocol specification
- Officially supported servers
We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the contributing guide to get started.
This project is licensed under the MIT License - see the LICENSE file for details.