Middleware specifically designed for Anthropic’s Claude models. Learn more about middleware.
| Middleware | Description |
| --- | --- |
| Prompt caching | Reduce costs by caching repetitive prompt prefixes |
| Bash tool | Execute Claude’s native bash tool with local command execution |
| Text editor | Provide Claude’s text editor tool for file editing |
| Memory | Provide Claude’s memory tool for persistent agent memory |
| File search | Search tools for state-based file systems |

Middleware vs tools

langchain-anthropic provides two ways to use Claude’s native tools:
  • Middleware (this page): Production-ready implementations with built-in execution, state management, and security policies
  • Tools (via bind_tools): Low-level building blocks where you provide your own execution logic

When to use which

| Use case | Recommended | Why |
| --- | --- | --- |
| Production agents with bash | Middleware | Persistent sessions, Docker isolation, output redaction |
| State-based file editing | Middleware | Built-in LangGraph state persistence |
| Filesystem file editing | Middleware | Writes to disk with path validation |
| Custom execution logic | Tools | Full control over execution |
| Quick prototype | Tools | Simpler, bring your own callback |
| Non-agent use with bind_tools | Tools | Middleware requires create_agent |

Feature comparison

| Feature | Middleware | Tools |
| --- | --- | --- |
| Works with create_agent | ✓ | ✓ |
| Works with bind_tools | ✗ | ✓ |
| Built-in state management | ✓ | ✗ |
| Custom execute callback | ✗ | ✓ |
Using middleware (turnkey solution):
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import ClaudeBashToolMiddleware
from langchain.agents import create_agent
from langchain.agents.middleware import DockerExecutionPolicy

# Production-ready with Docker isolation, session management, etc.
agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    middleware=[
        ClaudeBashToolMiddleware(
            workspace_root="/workspace",
            execution_policy=DockerExecutionPolicy(image="python:3.11"),
            startup_commands=["pip install pandas"],
        ),
    ],
)
```
Using tools (bring your own execution):
```python
import subprocess

from anthropic.types.beta import BetaToolBash20250124Param
from langchain_anthropic import ChatAnthropic
from langchain.agents import create_agent
from langchain.tools import tool

tool_spec = BetaToolBash20250124Param(
    name="bash",
    type="bash_20250124",
    strict=True,
)

@tool(extras={"provider_tool_definition": tool_spec})
def bash(*, command: str, restart: bool = False, **kw):
    """Execute a bash command."""
    if restart:
        return "Bash session restarted"
    try:
        result = subprocess.run(
            command,
            shell=True,
            capture_output=True,
            text=True,
            timeout=30,
        )
        return result.stdout + result.stderr
    except Exception as e:
        return f"Error: {e}"


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[bash],
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "List files in this directory"}]}
)
print(result["messages"][-1].content)
```

Prompt caching

Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic’s servers. This middleware implements a conversational caching strategy that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls. Prompt caching is useful for the following:
  • Applications with long, static system prompts that don’t change between requests
  • Agents with many tool definitions that remain constant across invocations
  • Conversations where early message history is reused across multiple turns
  • High-volume deployments where reducing API costs and latency is critical
Learn more about Anthropic prompt caching strategies and limitations.
API reference: AnthropicPromptCachingMiddleware
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import AnthropicPromptCachingMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    system_prompt="<Your long system prompt here>",
    middleware=[AnthropicPromptCachingMiddleware(ttl="5m")],
)
```
  • type (string, default: "ephemeral"): Cache type. Only 'ephemeral' is currently supported.
  • ttl (string, default: "5m"): Time to live for cached content. Valid values: '5m' or '1h'.
  • min_messages_to_cache (number, default: 0): Minimum number of messages before caching starts.
  • unsupported_model_behavior (string, default: "warn"): Behavior when using non-Anthropic models. Options: 'ignore', 'warn', or 'raise'.
The middleware caches content up to and including the latest message in each request. On subsequent requests within the TTL window (5 minutes or 1 hour), previously seen content is retrieved from the cache rather than reprocessed, significantly reducing costs and latency.

How it works:
  1. First request: System prompt, tools, and the user message “Hi, my name is Bob” are sent to the API and cached
  2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message “What’s my name?” needs to be processed, plus the model’s response from the first request
  3. This pattern continues for each turn, with each request reusing the cached conversation history
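The breakpoint placement in the steps above can be sketched in plain Python using Anthropic’s documented `cache_control` block format. This is a minimal illustration of the strategy, not the middleware’s actual implementation:

```python
import copy

def apply_conversational_breakpoint(messages):
    """Mark the last content block of the newest message with an ephemeral
    cache_control breakpoint, so everything up to that point is cacheable.
    (A production implementation would also drop stale markers from earlier
    turns, since Anthropic allows at most 4 breakpoints per request.)"""
    marked = copy.deepcopy(messages)
    last = marked[-1]
    # Normalize string content into the block form that accepts cache_control
    if isinstance(last["content"], str):
        last["content"] = [{"type": "text", "text": last["content"]}]
    last["content"][-1]["cache_control"] = {"type": "ephemeral"}
    return marked

# Turn 1: the breakpoint covers the system prompt, tools, and the first message
turn1 = apply_conversational_breakpoint(
    [{"role": "user", "content": "Hi, my name is Bob"}]
)

# Turn 2: the breakpoint moves to the newest message, so the whole prior
# prefix (system prompt, tools, earlier messages) becomes a cache hit
turn2 = apply_conversational_breakpoint(
    turn1
    + [
        {"role": "assistant", "content": "Hi Bob!"},
        {"role": "user", "content": "What's my name?"},
    ]
)
print(turn2[-1]["content"][-1])
# {'type': 'text', 'text': "What's my name?", 'cache_control': {'type': 'ephemeral'}}
```

Because the marker always trails the newest message, each turn extends the cached prefix rather than invalidating it.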
Prompt caching reduces API costs by caching tokens, but does not provide conversation memory. To persist conversation history across invocations, use a checkpointer like MemorySaver.
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import AnthropicPromptCachingMiddleware
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


LONG_PROMPT = """
Please be a helpful assistant.

<Lots more context ...>
"""

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    system_prompt=LONG_PROMPT,
    middleware=[AnthropicPromptCachingMiddleware(ttl="5m")],
    checkpointer=MemorySaver(),  # Persists conversation history
)

# Use a thread_id to maintain conversation state
config: RunnableConfig = {"configurable": {"thread_id": "user-123"}}

# First invocation: Creates cache with system prompt, tools, and "Hi, my name is Bob"
agent.invoke({"messages": [HumanMessage("Hi, my name is Bob")]}, config=config)

# Second invocation: Reuses cached system prompt, tools, and previous messages
# The checkpointer maintains conversation history, so the agent remembers "Bob"
result = agent.invoke({"messages": [HumanMessage("What's my name?")]}, config=config)
print(result["messages"][-1].content)
```
Your name is Bob! You told me that when you introduced yourself at the start of our conversation. 

Bash tool

Execute Claude’s native bash_20250124 tool with local command execution. The bash tool middleware is useful for the following:
  • Using Claude’s built-in bash tool with local execution
  • Leveraging Claude’s optimized bash tool interface
  • Agents that need persistent shell sessions with Anthropic models
This middleware wraps ShellToolMiddleware and exposes it as Claude’s native bash tool.
API reference: ClaudeBashToolMiddleware
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import ClaudeBashToolMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        ClaudeBashToolMiddleware(
            workspace_root="/workspace",
        ),
    ],
)
```
ClaudeBashToolMiddleware accepts all parameters from ShellToolMiddleware, including:
  • workspace_root (str | Path | None): Base directory for the shell session.
  • startup_commands (tuple[str, ...] | list[str] | str | None): Commands to run when the session starts.
  • execution_policy (BaseExecutionPolicy | None): Execution policy (HostExecutionPolicy, DockerExecutionPolicy, or CodexSandboxExecutionPolicy).
  • redaction_rules (tuple[RedactionRule, ...] | list[RedactionRule] | None): Rules for sanitizing command output.
See Shell tool for full configuration details.
```python
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import ClaudeBashToolMiddleware
from langchain.agents import create_agent
from langchain.agents.middleware import DockerExecutionPolicy

# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="agent-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        ClaudeBashToolMiddleware(
            workspace_root=workspace,
            startup_commands=["echo 'Session initialized'"],
            execution_policy=DockerExecutionPolicy(
                image="python:3.11-slim",
            ),
        ),
    ],
)

# Claude can now use its native bash tool
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What version of Python is installed?"}]}
)
print(result["messages"][-1].content)
```
Python 3.11.14 is installed. 

Text editor

Provide Claude’s text editor tool (text_editor_20250728) for file creation and editing. The text editor middleware is useful for the following:
  • File-based agent workflows
  • Code editing and refactoring tasks
  • Multi-file project work
  • Agents that need persistent file storage
Available in two variants: State-based (files in LangGraph state) and Filesystem-based (files on disk).
API references: StateClaudeTextEditorMiddleware, FilesystemClaudeTextEditorMiddleware
State-based text editor
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeTextEditorMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[StateClaudeTextEditorMiddleware()],
)
```
Filesystem-based text editor
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeTextEditorMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        FilesystemClaudeTextEditorMiddleware(
            root_path="/workspace",
        ),
    ],
)
```
Claude’s text editor tool supports the following commands:
  • view - View file contents or list directory
  • create - Create a new file
  • str_replace - Replace string in file
  • insert - Insert text at line number
  • delete - Delete a file
  • rename - Rename/move a file
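As a sketch of the shape these commands take, the payloads below follow the field names in Anthropic’s published text editor schema for view, create, str_replace, and insert; the delete and rename payloads (including the new_path field) are assumptions inferred from the command list above:

```python
# Illustrative tool inputs for each text editor command.
# NOTE: the delete/rename fields (e.g. "new_path") are assumptions for illustration.
example_calls = [
    {"command": "view", "path": "/project"},           # list a directory
    {"command": "view", "path": "/project/hello.py"},  # view a file
    {"command": "create", "path": "/project/hello.py",
     "file_text": "print('hi')\n"},
    {"command": "str_replace", "path": "/project/hello.py",
     "old_str": "print('hi')", "new_str": "print('hello')"},
    {"command": "insert", "path": "/project/hello.py",
     "insert_line": 1, "new_str": "# trailing comment\n"},
    {"command": "delete", "path": "/project/hello.py"},
    {"command": "rename", "path": "/project/hello.py",
     "new_path": "/project/main.py"},
]

for call in example_calls:
    print(call["command"], sorted(k for k in call if k != "command"))
```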
StateClaudeTextEditorMiddleware (state-based)
  • allowed_path_prefixes (Sequence[str] | None): Optional list of allowed path prefixes. If specified, only paths starting with these prefixes are allowed.
FilesystemClaudeTextEditorMiddleware (filesystem-based)
  • root_path (str, required): Root directory for file operations.
  • allowed_prefixes (list[str] | None): Optional list of allowed virtual path prefixes (default: ["/"]).
  • max_file_size_mb (int, default: 10): Maximum file size in MB.
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeTextEditorMiddleware
from langchain.agents import create_agent
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        StateClaudeTextEditorMiddleware(
            allowed_path_prefixes=["/project"],
        ),
    ],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# Claude can now create and edit files (stored in LangGraph state)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Create a file at /project/hello.py with a simple hello world program"}]},
    config=config,
)
print(result["messages"][-1].content)
```
I've created a simple "Hello, World!" program at `/project/hello.py`. The program uses Python's `print()` function to display "Hello, World!" to the console when executed. 
```python
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeTextEditorMiddleware
from langchain.agents import create_agent


# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="editor-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        FilesystemClaudeTextEditorMiddleware(
            root_path=workspace,
            allowed_prefixes=["/src"],
            max_file_size_mb=10,
        ),
    ],
)

# Claude can now create and edit files (stored on disk)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Create a file at /src/hello.py with a simple hello world program"}]}
)
print(result["messages"][-1].content)
```
I've created a simple "Hello, World!" program at `/src/hello.py`. The program uses Python's `print()` function to display "Hello, World!" to the console when executed. 

Memory

Provide Claude’s memory tool (memory_20250818) for persistent agent memory across conversation turns. The memory middleware is useful for the following:
  • Long-running agent conversations
  • Maintaining context across interruptions
  • Task progress tracking
  • Persistent agent state management
Claude’s memory tool uses a /memories directory and automatically injects a system prompt encouraging the agent to check and update memory.
API reference: StateClaudeMemoryMiddleware, FilesystemClaudeMemoryMiddleware
State-based memory
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeMemoryMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[StateClaudeMemoryMiddleware()],
)
```
Filesystem-based memory
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeMemoryMiddleware
from langchain.agents import create_agent

agent_fs = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        FilesystemClaudeMemoryMiddleware(
            root_path="/workspace",
        ),
    ],
)
```
StateClaudeMemoryMiddleware (state-based)
  • allowed_path_prefixes (Sequence[str] | None): Optional list of allowed path prefixes. Defaults to ["/memories"].
  • system_prompt (str): System prompt to inject. Defaults to Anthropic’s recommended memory prompt that encourages the agent to check and update memory.
FilesystemClaudeMemoryMiddleware (filesystem-based)
  • root_path (str, required): Root directory for file operations.
  • allowed_prefixes (list[str] | None): Optional list of allowed virtual path prefixes. Defaults to ["/memories"].
  • max_file_size_mb (int, default: 10): Maximum file size in MB.
  • system_prompt (str): System prompt to inject.
The agent will automatically:
  1. Check /memories directory at start
  2. Record progress and thoughts during execution
  3. Update memory files as work progresses
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeMemoryMiddleware
from langchain.agents import create_agent
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[StateClaudeMemoryMiddleware()],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# Claude can now use memory to track progress (stored in LangGraph state)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that my favorite color is blue, then confirm what you stored."}]},
    config=config,
)
print(result["messages"][-1].content)
```
Perfect! I've stored your favorite color as **blue** in my memory system. The information is saved in my user preferences file where I can access it in future conversations. 
```python
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeMemoryMiddleware
from langchain.agents import create_agent


# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="memory-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        FilesystemClaudeMemoryMiddleware(
            root_path=workspace,
        ),
    ],
)

# Claude can now use memory to track progress (stored on disk)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that my favorite color is blue, then confirm what you stored."}]}
)
print(result["messages"][-1].content)
```
Perfect! I've stored your favorite color as **blue** in my memory system. The information is saved in my user preferences file where I can access it in future conversations. 

File search

Provide Glob and Grep search tools for files stored in LangGraph state. The file search middleware is useful for the following:
  • Searching through state-based virtual file systems
  • Complementing the text editor and memory tools
  • Finding files by name patterns
  • Searching file contents with regex
API reference: StateFileSearchMiddleware
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import (
    StateClaudeTextEditorMiddleware,
    StateFileSearchMiddleware,
)
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        StateClaudeTextEditorMiddleware(),
        StateFileSearchMiddleware(),  # Search text editor files
    ],
)
```
  • state_key (str, default: "text_editor_files"): State key containing files to search. Use "text_editor_files" for text editor files or "memory_files" for memory files.
The middleware adds Glob and Grep search tools that work with state-based files.
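Conceptually, searching a state-based file system is pattern matching over a dict of path → contents. The sketch below mirrors that idea with stdlib fnmatch and re; it is an illustration, not StateFileSearchMiddleware’s actual implementation:

```python
import fnmatch
import re

# A state-based virtual file system is just a mapping of path -> contents,
# like the "text_editor_files" state key populated by the text editor tool.
state_files = {
    "/project/main.py": "from utils.helpers import greet\n",
    "/project/utils/helpers.py": "def greet():\n    return 'hi'\n",
    "/project/README.md": "# Demo project\n",
}

def glob_search(files, pattern):
    """Return paths matching a glob pattern. Note: fnmatch's `*` also
    crosses `/` separators, unlike a shell glob."""
    return sorted(p for p in files if fnmatch.fnmatchcase(p, pattern))

def grep_search(files, pattern):
    """Return (path, line_number, line) tuples for lines matching a regex."""
    regex = re.compile(pattern)
    return [
        (path, i, line)
        for path, text in sorted(files.items())
        for i, line in enumerate(text.splitlines(), start=1)
        if regex.search(line)
    ]

print(glob_search(state_files, "*.py"))
# ['/project/main.py', '/project/utils/helpers.py']
print(grep_search(state_files, r"greet"))
# [('/project/main.py', 1, 'from utils.helpers import greet'),
#  ('/project/utils/helpers.py', 1, 'def greet():')]
```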
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import (
    StateClaudeTextEditorMiddleware,
    StateFileSearchMiddleware,
)
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        StateClaudeTextEditorMiddleware(),
        StateFileSearchMiddleware(state_key="text_editor_files"),
    ],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# First invocation: Create some files using the text editor tool
result = agent.invoke(
    {"messages": [HumanMessage("Create a Python project with main.py, utils/helpers.py, and tests/test_main.py")]},
    config=config,
)

# The agent creates files, which are stored in state
print("Files created:", list(result["text_editor_files"].keys()))

# Second invocation: Search the files we just created
# State is automatically persisted via the checkpointer
result = agent.invoke(
    {"messages": [HumanMessage("Find all Python files in the project")]},
    config=config,
)
print(result["messages"][-1].content)
```
Files created: ['/project/main.py', '/project/utils/helpers.py', '/project/utils/__init__.py', '/project/tests/test_main.py', '/project/tests/__init__.py', '/project/README.md'] 
```
I found 5 Python files in the project:

1. `/project/main.py` - Main application file
2. `/project/utils/__init__.py` - Utils package initialization
3. `/project/utils/helpers.py` - Helper utilities
4. `/project/tests/__init__.py` - Tests package initialization
5. `/project/tests/test_main.py` - Main test file

Would you like me to view the contents of any of these files?
```
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import (
    StateClaudeMemoryMiddleware,
    StateFileSearchMiddleware,
)
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        StateClaudeMemoryMiddleware(),
        StateFileSearchMiddleware(state_key="memory_files"),
    ],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# First invocation: Record some memories
result = agent.invoke(
    {"messages": [HumanMessage("Remember that the project deadline is March 15th and code review deadline is March 10th")]},
    config=config,
)

# The agent creates memory files, which are stored in state
print("Memory files created:", list(result["memory_files"].keys()))

# Second invocation: Search the memories we just recorded
# State is automatically persisted via the checkpointer
result = agent.invoke(
    {"messages": [HumanMessage("Search my memories for project deadlines")]},
    config=config,
)
print(result["messages"][-1].content)
```
Memory files created: ['/memories/project_info.md'] 
```
I found your project deadlines in my memory! Here's what I have recorded:

## Important Deadlines
- **Code Review Deadline:** March 10th
- **Project Deadline:** March 15th

## Notes
- Code review must be completed 5 days before final project deadline
- Need to ensure all code is ready for review by March 10th

Is there anything specific about these deadlines you'd like to know or update?
```
