Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.

Overview

LangChain’s create_agent runs on LangGraph’s runtime under the hood. LangGraph exposes a Runtime object with the following information:
  1. Context: static information like user id, db connections, or other dependencies for an agent invocation
  2. Store: a BaseStore instance used for long-term memory
  3. Stream writer: an object used for streaming information via the "custom" stream mode
You can access the runtime information within tools, prompt functions, and pre and post model hooks.

Access

When creating an agent with create_agent, you can specify a context_schema to define the structure of the context stored in the agent Runtime. When invoking the agent, pass the context argument with the relevant configuration for the run:
```python
from dataclasses import dataclass

from langchain.agents import create_agent

@dataclass
class Context:
    user_name: str

agent = create_agent(
    model="openai:gpt-5-nano",
    tools=[...],
    context_schema=Context
)

agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    context=Context(user_name="John Smith")
)
```

Inside tools

You can access the runtime information inside tools to:
  • Access the context
  • Read or write long-term memory
  • Write to the custom stream (e.g., tool progress updates)
Use the get_runtime function from langgraph.runtime to access the Runtime object inside a tool.
```python
from dataclasses import dataclass

from langchain_core.tools import tool
from langgraph.runtime import get_runtime

@dataclass
class Context:
    user_id: str

@tool
def fetch_user_email_preferences() -> str:
    """Fetch the user's email preferences from the store."""
    runtime = get_runtime(Context)
    user_id = runtime.context.user_id

    # Fall back to a default if no preferences are stored for this user.
    preferences: str = "The user prefers you to write a brief and polite email."
    if runtime.store:
        if memory := runtime.store.get(("users",), user_id):
            preferences = memory.value["preferences"]
    return preferences
```
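The store lookup above follows a simple fallback pattern: use the stored value if one exists, otherwise return a default. Stripped of the LangGraph plumbing, the pattern can be sketched with a plain dict standing in for a real `BaseStore` (`FakeStore` and `fetch_preferences` are illustrative names, not LangChain APIs):

```python
# Hypothetical stand-in for the store-fallback pattern; not LangChain code.
class FakeStore:
    def __init__(self, data):
        self._data = data  # maps (namespace, key) -> value dict

    def get(self, namespace, key):
        return self._data.get((namespace, key))

DEFAULT = "The user prefers you to write a brief and polite email."

def fetch_preferences(store, user_id):
    # Fall back to the default when the store is absent or has no entry.
    preferences = DEFAULT
    if store:
        if memory := store.get(("users",), user_id):
            preferences = memory["preferences"]
    return preferences

store = FakeStore({(("users",), "u1"): {"preferences": "Short bullet points only."}})
print(fetch_preferences(store, "u1"))  # stored value wins
print(fetch_preferences(None, "u1"))   # no store -> default
```

The guard on the store itself matters: an agent invoked without long-term memory configured should still behave sensibly, which is why the default is chosen first and only overridden on a hit.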

Inside prompt

Use the get_runtime function from langgraph.runtime to access the Runtime object inside a prompt function.
```python
from dataclasses import dataclass

from langchain_core.messages import AnyMessage
from langchain.agents import AgentState, create_agent
from langgraph.runtime import get_runtime

@dataclass
class Context:
    user_name: str

def my_prompt(state: AgentState) -> list[AnyMessage]:
    runtime = get_runtime(Context)
    system_msg = (
        "You are a helpful assistant. "
        f"Address the user as {runtime.context.user_name}."
    )
    return [{"role": "system", "content": system_msg}] + state["messages"]

agent = create_agent(
    model="openai:gpt-5-nano",
    tools=[...],
    prompt=my_prompt,
    context_schema=Context
)

agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    context=Context(user_name="John Smith")
)
```
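The prompt function's only job is to prepend a context-derived system message to the conversation before each model call. Stripped of the runtime plumbing, the transformation looks like this (a stand-alone sketch; `build_messages` is a hypothetical helper, not a LangChain API):

```python
# Hypothetical sketch of what a context-aware prompt function computes.
def build_messages(user_name, messages):
    # Prepend a system message derived from the run's context.
    system_msg = (
        "You are a helpful assistant. "
        f"Address the user as {user_name}."
    )
    return [{"role": "system", "content": system_msg}] + messages

msgs = build_messages("John Smith", [{"role": "user", "content": "What's my name?"}])
print(msgs[0]["role"])  # system message comes first
```

Because the context travels with the invocation rather than the agent, the same compiled agent can serve different users concurrently without rebuilding the prompt.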

Inside pre and post model hooks

To access the underlying graph runtime information in a pre or post model hook, you can:
  1. Use the get_runtime function from langgraph.runtime to access the Runtime object inside the hook
  2. Inject the Runtime directly via the hook signature
These options are purely preferential and not functionally different.
  • Using get_runtime

```python
from langgraph.runtime import get_runtime

def pre_model_hook(state: State) -> State:
    runtime = get_runtime(Context)
    ...
```

  • Injection

```python
from langgraph.runtime import Runtime

def pre_model_hook(state: State, runtime: Runtime[Context]) -> State:
    ...
```
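Conceptually, the stream writer mentioned in the overview is just a callback the runtime hands your code; anything passed to it surfaces to the caller under the "custom" stream mode. A minimal stand-in for that callback pattern (plain Python, not LangGraph APIs; `long_running_step` and the collecting list are hypothetical):

```python
# Hypothetical stand-in for the stream-writer callback pattern; not LangGraph code.
chunks = []
writer = chunks.append  # in LangGraph, the runtime supplies this callback

def long_running_step(writer):
    # Emit progress updates as work proceeds; the caller sees them immediately.
    writer({"status": "starting"})
    result = sum(range(10))  # pretend work
    writer({"status": "done"})
    return result

result = long_running_step(writer)
print(result, chunks)
```

The point of the pattern is that progress information reaches the consumer while the step is still running, rather than only in the final return value.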
