Harnessing MCP Servers with LangChain and LangGraph: A Comprehensive Guide

Welcome back! Today, we're diving deep into the fascinating world of MCP servers and how they integrate with LangChain and LangGraph. If you're excited about building advanced AI agents with LangGraph, this tutorial will walk you through the foundational concepts and a step-by-step implementation. We'll break the code into digestible sections, explain each part, and show you how to run it yourself.

Before we get started, check out the accompanying YouTube video for a visual walkthrough. Click below to watch and follow along:

Watch the YouTube Video

Now, let's jump in!

Introduction to MCP Servers

MCP (Model Context Protocol) servers provide a standardized way to expose external tools and data to language models. By combining them with LangChain (a framework for building applications with large language models) and LangGraph (an extension for creating agentic workflows), you can develop AI applications that handle tasks like calculations, data fetching, and more. This setup is ideal for scenarios where you need efficient, modular workflows—think chatbots, automated assistants, or data processing pipelines.

In this tutorial, we'll build a "Simple Agent" that uses local tools (like a math calculator) and remote tools via MCP servers. The agent will process queries, decide on actions, and clean up resources neatly. We'll use Python, asyncio for asynchronous operations, and libraries like LangChain and LangGraph.

Setting Up the Initial State and Calculator Tool

We start by defining the agent's state and a basic tool. The state is a simple TypedDict that holds a list of messages, which accumulates as the conversation progresses. We use LangGraph's add_messages reducer so that new messages are appended to the list rather than overwriting it.

Here's the code for the agent state:

```python
from typing import TypedDict, Annotated, List

from langgraph.graph import add_messages
from langchain_core.messages import BaseMessage


class AgentState(TypedDict):
    """State for the agent workflow."""

    # The list of messages accumulates over time
    messages: Annotated[List[BaseMessage], add_messages]
```
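If you're curious what add_messages actually does, you can call the reducer directly. Here's a tiny illustration (not part of the agent itself) showing that it merges new messages into the existing list instead of replacing it:

```python
from langgraph.graph import add_messages
from langchain_core.messages import HumanMessage, AIMessage

# add_messages merges the two lists rather than overwriting the old one
existing = [HumanMessage(content="What's 15% of 96?")]
update = [AIMessage(content='15% of 96 is 14.4.')]

merged = add_messages(existing, update)
print([m.content for m in merged])  # ["What's 15% of 96?", '15% of 96 is 14.4.']
```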

Next, we create a local tool for calculations. This uses Python's eval but with safeguards: we whitelist allowed characters to prevent code injection, and we catch exceptions for invalid inputs.

```python
from langchain_core.tools import tool


@tool
def calculate(expression: str) -> str:
    """Safely calculate a mathematical expression."""
    try:
        # Use a whitelist to prevent arbitrary code execution
        allowed_chars = set('0123456789+-*/.() ')
        if all(c in allowed_chars for c in expression):
            result = eval(expression)
            return f'{expression} = {result}'
        else:
            return 'Invalid expression'
    except Exception as e:
        return f'Calculation error: {e}'
```

This tool lets the agent evaluate arithmetic safely. For a natural-language request like "15% of 96", the LLM first translates it into a valid expression such as "96 * 0.15" before calling the tool, since characters like "%" are not in the whitelist.
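As a quick sanity check, you can also call the tool directly outside the agent. This little snippet is just for illustration:

```python
# Quick manual check of the calculate tool (outside the agent)
print(calculate.invoke({'expression': '15 * 96 / 100'}))    # 15 * 96 / 100 = 14.4
print(calculate.invoke({'expression': '__import__("os")'}))  # Invalid expression
```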

Creating the Simple Agent Class

The core of our setup is the SimpleAgent class. It manages tools, the LLM (using OpenAI's GPT-4o), the workflow graph, and MCP contexts. We initialize everything in the setup method, which loads local tools and attempts to connect to an MCP server for remote tools.

```python
import asyncio

from langchain_openai import ChatOpenAI
from langchain_mcp_adapters.tools import load_mcp_tools
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


class SimpleAgent:
    def __init__(self):
        self.tools = []
        self.llm = None
        self.workflow = None
        self.mcp_contexts = []  # Store contexts for later cleanup

    async def setup(self):
        """Setup tools and compile the agent workflow."""
        # Define local tools available to the agent
        local_tools = [calculate]

        # Attempt to add remote tools via MCP
        try:
            print('🔧 Loading MCP fetch tool...')

            # Configure the MCP server to run as a Python module subprocess
            server_params = StdioServerParameters(command='python', args=['-m', 'mcp_server_fetch'])

            # Establish a connection to the MCP server
            stdio_context = stdio_client(server_params)
            read, write = await stdio_context.__aenter__()
            self.mcp_contexts.append(stdio_context)

            # Create a client session for communication
            session_context = ClientSession(read, write)
            session = await session_context.__aenter__()
            self.mcp_contexts.append(session_context)

            # Initialize the session and load remote tools
            await session.initialize()
            mcp_tools = await load_mcp_tools(session)
            print(f'✅ MCP tools: {[t.name for t in mcp_tools]}')

            self.tools = local_tools + mcp_tools
        except Exception as e:
            # Fall back to using only local tools if MCP connection fails
            print(f'⚠️ MCP failed: {e}. Using local tools only.')
            self.tools = local_tools

        # Configure the LLM and grant it access to the available tools
        self.llm = ChatOpenAI(model='gpt-4o', temperature=0).bind_tools(self.tools)
```

If the MCP connection fails, we gracefully fall back to local tools. This class also tracks MCP contexts for cleanup later.
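Because the MCP server is just a subprocess described by StdioServerParameters, you can point the same setup code at any other stdio-based server. For example, here's a hypothetical swap to the reference filesystem server (assuming Node.js and that package are available on your machine):

```python
# Illustrative only: swap in a different stdio-based MCP server.
# Assumes Node.js is installed; '@modelcontextprotocol/server-filesystem'
# is the MCP project's reference filesystem server.
server_params = StdioServerParameters(
    command='npx',
    args=['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
)
```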

Building the Graph Structure

The graph is the heart of LangGraph—it's a state machine that defines nodes (like the agent and tools) and edges (conditional flows). We add nodes for the agent (which decides actions via the LLM) and tools (which execute them). Conditional edges check if more tools are needed or if we can end.

```python
from langgraph.graph import StateGraph, END
from langchain_core.messages import ToolMessage

# Inside SimpleAgent class: the graph is built at the end of setup()...

        # Define the agent's graph structure
        workflow = StateGraph(AgentState)
        workflow.add_node('agent', self.agent_node)
        workflow.add_node('tools', self.tools_node)
        workflow.set_entry_point('agent')

        # Define the control flow logic
        workflow.add_conditional_edges('agent', self.should_continue, {'continue': 'tools', 'end': END})
        workflow.add_edge('tools', 'agent')

        # Compile the graph into a runnable workflow
        self.workflow = workflow.compile()

    async def agent_node(self, state: AgentState) -> AgentState:
        """Invoke the LLM to decide the next action."""
        response = await self.llm.ainvoke(state['messages'])
        return {'messages': [response]}

    async def tools_node(self, state: AgentState) -> AgentState:
        """Execute the tools requested by the agent."""
        last_message = state['messages'][-1]
        tool_messages = []

        # Iterate through all tool calls made by the LLM
        for tool_call in last_message.tool_calls:
            tool_name = tool_call['name']
            tool_args = tool_call['args']
            print(f'🔧 {tool_name}({tool_args})')

            # Find the corresponding tool implementation
            tool = next((t for t in self.tools if t.name == tool_name), None)
            if tool:
                try:
                    # Invoke the tool and capture its result
                    result = await tool.ainvoke(tool_args)
                    tool_messages.append(ToolMessage(content=str(result), tool_call_id=tool_call['id']))
                    print(f'{str(result)[:100]}...' if len(str(result)) > 100 else f'{result}')
                except Exception as e:
                    # Handle any errors during tool execution
                    error_msg = f'Error: {e}'
                    tool_messages.append(ToolMessage(content=error_msg, tool_call_id=tool_call['id']))
                    print(f'{error_msg}')

        return {'messages': tool_messages}

    def should_continue(self, state: AgentState) -> str:
        """Determine whether to continue with another tool call or end."""
        last_message = state['messages'][-1]
        # Continue if the agent requested a tool, otherwise end
        return 'continue' if hasattr(last_message, 'tool_calls') and last_message.tool_calls else 'end'
```

This structure allows the agent to loop between thinking (agent node) and acting (tools node) until the query is resolved.
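If you want to double-check the wiring, recent LangGraph versions can render the compiled graph as a Mermaid diagram. This optional snippet would go right after self.workflow = workflow.compile():

```python
# Optional: print a Mermaid diagram of the compiled graph to verify the wiring
print(self.workflow.get_graph().draw_mermaid())
```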

Implementing the Ask Method and Cleanup

The ask method lets users query the agent easily, invoking the graph asynchronously. We also add a cleanup method to close MCP contexts in reverse order, ensuring no resource leaks.

```python
from langchain_core.messages import HumanMessage

# Inside SimpleAgent class...

    async def ask(self, question: str):
        """Present a question to the agent and get a final answer."""
        print(f'\n🤖 Question: {question}')

        # Invoke the compiled workflow with the user's message
        initial_state = {'messages': [HumanMessage(content=question)]}
        result = await self.workflow.ainvoke(initial_state)

        # Extract the final response from the agent
        answer = result['messages'][-1].content
        print(f'✨ Answer: {answer}\n')
        return answer

    async def cleanup(self):
        """Clean up any active MCP resources."""
        # Close contexts in reverse order of creation
        for context in reversed(self.mcp_contexts):
            try:
                await context.__aexit__(None, None, None)
            except Exception:
                pass  # Ignore errors during cleanup
```
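Manually calling __aenter__ and __aexit__ works fine, but if you'd rather let the standard library track the contexts, contextlib.AsyncExitStack does the same reverse-order teardown for you. Here's a minimal standalone sketch of that variant (assuming the same mcp_server_fetch module is installed); it's an alternative design, not what the tutorial code uses:

```python
import asyncio
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools


async def demo() -> None:
    """Same MCP connection as setup(), but AsyncExitStack owns the teardown."""
    server_params = StdioServerParameters(command='python', args=['-m', 'mcp_server_fetch'])

    async with AsyncExitStack() as stack:
        # Each context is registered on the stack and closed in reverse order on exit
        read, write = await stack.enter_async_context(stdio_client(server_params))
        session = await stack.enter_async_context(ClientSession(read, write))
        await session.initialize()

        tools = await load_mcp_tools(session)
        print([t.name for t in tools])
    # No manual cleanup() needed: the stack has already closed both contexts


if __name__ == '__main__':
    asyncio.run(demo())
```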

Running the Agent and Final Thoughts

To run the agent, we create an instance, set it up, ask questions, and clean up. The main function demonstrates this with example queries.

```python
from langchain_core.messages import HumanMessage, ToolMessage


async def main():
    """Initialize and run the agent."""
    agent = SimpleAgent()

    # Use a try/finally block to ensure resources are always cleaned up
    try:
        await agent.setup()

        # Ask the agent a series of questions
        await agent.ask("What's 15% of 96?")
        await agent.ask('fetch the website https://langchain-ai.github.io/langgraph/ and summarize it')
    except Exception as e:
        print(f'❌ Error: {e}')
    finally:
        await agent.cleanup()


if __name__ == '__main__':
    asyncio.run(main())
```

When you run this in your terminal, you'll see the agent process queries: it calculates math using the local tool and fetches/summarizes websites via MCP if available. It's versatile for tasks from simple arithmetic to web data retrieval.

Conclusion and Best Practices

Integrating MCP servers with LangGraph and LangChain is about more than just tech—it's about creating adaptable, efficient systems. Stick with simple local tools when they're enough, and reach for MCP when you need remote capabilities. Always handle errors gracefully, clean up resources, and test your async code.
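On that last point, a tiny async test might look like this (assuming pytest and the pytest-asyncio plugin are installed; simple_agent is a hypothetical module name for the script above). It targets the local calculate tool, so it needs no network access or API key:

```python
import pytest

from simple_agent import calculate  # hypothetical module name for the script above


@pytest.mark.asyncio
async def test_calculate_tool():
    # @tool-decorated functions expose an async interface via ainvoke
    assert await calculate.ainvoke({'expression': '2 + 2'}) == '2 + 2 = 4'
    assert await calculate.ainvoke({'expression': 'import os'}) == 'Invalid expression'
```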

We hope this tutorial empowers your AI projects! Feel free to reach out with questions, and don't forget to explore more with Just Code It. Happy coding!
