mcp-agent is a simple, composable framework for building effective agents with the Model Context Protocol (MCP). mcp-agent's vision is that MCP is all you need to build agents, and that simple patterns are more robust than complex architectures for shipping high-quality agents. When you're ready to deploy, mcp-c lets you deploy any kind of MCP server to a managed cloud. You can even deploy agents as MCP servers!

Why teams pick mcp-agent

MCP-native

Fully implements the MCP spec, including auth, elicitation, sampling, and notifications.

Composable patterns

Map-reduce, router, deep research, evaluator — every pattern from Anthropic’s Building Effective Agents guide ships as a first-class workflow.
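To make the router pattern concrete, here is a minimal plain-Python sketch of the idea: a classifier picks one specialist handler per request. This is an illustration of the pattern only, not mcp-agent's API; in mcp-agent's workflow routers, an LLM performs the classification step instead of keyword matching.

```python
# Toy router: dispatch a request to the first handler whose keyword matches.
# In mcp-agent the classification is done by an LLM, not string matching.

def route(request: str, handlers: dict) -> str:
    """Send the request to the first matching specialist handler."""
    for keyword, handler in handlers.items():
        if keyword in request.lower():
            return handler(request)
    return "no handler matched"

# Hypothetical specialists for illustration.
handlers = {
    "weather": lambda r: f"weather-agent handled: {r}",
    "code": lambda r: f"coding-agent handled: {r}",
}

print(route("What is the weather in Paris?", handlers))
```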

Built for production

Durable execution with Temporal, OpenTelemetry observability, and cloud deployment via the CLI.
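Durable execution is selected through mcp-agent's project configuration rather than code changes. A hedged sketch of what that can look like; the exact keys and defaults may differ across versions, so treat this as illustrative:

```yaml
# mcp_agent.config.yaml (illustrative; check the docs for your version)
execution_engine: temporal   # instead of the default in-process asyncio engine
temporal:
  host: localhost:7233       # assumed local Temporal dev server
  namespace: default
  task_queue: mcp-agent
```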

Lightweight & Pythonic

Define an agent with a few lines of Python—mcp-agent handles the lifecycle, connections, and MCP server wiring for you.
```python
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="researcher")

async def main():
    async with app.run() as session:
        agent = Agent(
            name="researcher",
            instruction="Use available tools to gather concise answers.",
            server_names=["fetch", "filesystem"],
        )

        async with agent:
            llm = await agent.attach_llm(OpenAIAugmentedLLM)
            report = await llm.generate_str("Summarize the latest MCP news")
            print(report)

if __name__ == "__main__":
    asyncio.run(main())
```
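The `server_names` in the example refer to MCP servers declared in the project configuration file. A sketch of what that declaration can look like, assuming the community `fetch` and `filesystem` servers; the launch commands shown are illustrative and version-dependent:

```yaml
# mcp_agent.config.yaml (illustrative)
mcp:
  servers:
    fetch:
      command: uvx
      args: ["mcp-server-fetch"]
    filesystem:
      command: npx
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
```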

Next steps

Build with LLMs

The docs are also available in llms.txt format:
  • llms.txt - A sitemap listing all documentation pages
  • llms-full.txt - The entire documentation in one file (may exceed context windows)
  • docs MCP server - Directly connect the docs to an MCP-compatible AI coding assistant.