Commit c56b390

Merge branch 'main' into feature/session-memory
2 parents 116a8aa + db85a6d commit c56b390

File tree

17 files changed: +616 −27 lines changed

Makefile

Lines changed: 6 additions & 2 deletions
@@ -7,6 +7,10 @@ format:
 	uv run ruff format
 	uv run ruff check --fix
 
+.PHONY: format-check
+format-check:
+	uv run ruff format --check
+
 .PHONY: lint
 lint:
 	uv run ruff check
@@ -55,5 +59,5 @@ serve-docs:
 deploy-docs:
 	uv run mkdocs gh-deploy --force --verbose
 
-
-
+.PHONY: check
+check: format-check lint mypy tests

README.md

Lines changed: 6 additions & 0 deletions
@@ -279,10 +279,16 @@ make sync
 
 2. (After making changes) lint/test
 
+```
+make check  # run tests, linter, and typechecker
+```
+
+Or to run them individually:
 ```
 make tests  # run tests
 make mypy   # run typechecker
 make lint   # run linter
+make format-check  # run style checker
 ```
 
 ## Acknowledgements

docs/agents.md

Lines changed: 1 addition & 0 deletions
@@ -6,6 +6,7 @@ Agents are the core building block in your apps. An agent is a large language mo
 
 The most common properties of an agent you'll configure are:
 
+- `name`: A required string that identifies your agent.
 - `instructions`: also known as a developer message or system prompt.
 - `model`: which LLM to use, and optional `model_settings` to configure model tuning parameters like temperature, top_p, etc.
 - `tools`: Tools that the agent can use to achieve its tasks.

docs/mcp.md

Lines changed: 33 additions & 1 deletion
@@ -4,7 +4,7 @@ The [Model context protocol](https://modelcontextprotocol.io/introduction) (aka
 
 > MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
 
-The Agents SDK has support for MCP. This enables you to use a wide range of MCP servers to provide tools to your Agents.
+The Agents SDK has support for MCP. This enables you to use a wide range of MCP servers to provide tools and prompts to your Agents.
 
 ## MCP servers
 
@@ -135,6 +135,38 @@ The `ToolFilterContext` provides access to:
 - `agent`: The agent requesting the tools
 - `server_name`: The name of the MCP server
 
+## Prompts
+
+MCP servers can also provide prompts that can be used to dynamically generate agent instructions. This allows you to create reusable instruction templates that can be customized with parameters.
+
+### Using prompts
+
+MCP servers that support prompts provide two key methods:
+
+- `list_prompts()`: Lists all available prompts on the server
+- `get_prompt(name, arguments)`: Gets a specific prompt with optional parameters
+
+```python
+# List available prompts
+prompts_result = await server.list_prompts()
+for prompt in prompts_result.prompts:
+    print(f"Prompt: {prompt.name} - {prompt.description}")
+
+# Get a specific prompt with parameters
+prompt_result = await server.get_prompt(
+    "generate_code_review_instructions",
+    {"focus": "security vulnerabilities", "language": "python"}
+)
+instructions = prompt_result.messages[0].content.text
+
+# Use the prompt-generated instructions with an Agent
+agent = Agent(
+    name="Code Reviewer",
+    instructions=instructions,  # Instructions from MCP prompt
+    mcp_servers=[server]
+)
+```
+
 ## Caching
 
 Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp]. You should only do this if you're certain the tool list will not change.

examples/basic/agent_lifecycle_example.py

Lines changed: 2 additions & 4 deletions
@@ -101,12 +101,10 @@ async def main() -> None:
 ### (Start Agent) 1: Agent Start Agent started
 ### (Start Agent) 2: Agent Start Agent started tool random_number
 ### (Start Agent) 3: Agent Start Agent ended tool random_number with result 37
-### (Start Agent) 4: Agent Start Agent started
-### (Start Agent) 5: Agent Start Agent handed off to Multiply Agent
+### (Start Agent) 4: Agent Start Agent handed off to Multiply Agent
 ### (Multiply Agent) 1: Agent Multiply Agent started
 ### (Multiply Agent) 2: Agent Multiply Agent started tool multiply_by_two
 ### (Multiply Agent) 3: Agent Multiply Agent ended tool multiply_by_two with result 74
-### (Multiply Agent) 4: Agent Multiply Agent started
-### (Multiply Agent) 5: Agent Multiply Agent ended with output number=74
+### (Multiply Agent) 4: Agent Multiply Agent ended with output number=74
 Done!
 """

examples/basic/lifecycle_example.py

Lines changed: 6 additions & 8 deletions
@@ -105,14 +105,12 @@ async def main() -> None:
 Enter a max number: 250
 ### 1: Agent Start Agent started. Usage: 0 requests, 0 input tokens, 0 output tokens, 0 total tokens
 ### 2: Tool random_number started. Usage: 1 requests, 148 input tokens, 15 output tokens, 163 total tokens
-### 3: Tool random_number ended with result 101. Usage: 1 requests, 148 input tokens, 15 output tokens, 163 total tokens
-### 4: Agent Start Agent started. Usage: 1 requests, 148 input tokens, 15 output tokens, 163 total tokens
-### 5: Handoff from Start Agent to Multiply Agent. Usage: 2 requests, 323 input tokens, 30 output tokens, 353 total tokens
-### 6: Agent Multiply Agent started. Usage: 2 requests, 323 input tokens, 30 output tokens, 353 total tokens
-### 7: Tool multiply_by_two started. Usage: 3 requests, 504 input tokens, 46 output tokens, 550 total tokens
-### 8: Tool multiply_by_two ended with result 202. Usage: 3 requests, 504 input tokens, 46 output tokens, 550 total tokens
-### 9: Agent Multiply Agent started. Usage: 3 requests, 504 input tokens, 46 output tokens, 550 total tokens
-### 10: Agent Multiply Agent ended with output number=202. Usage: 4 requests, 714 input tokens, 63 output tokens, 777 total tokens
+### 3: Tool random_number ended with result 101. Usage: 1 requests, 148 input tokens, 15 output tokens, 163 total tokens
+### 4: Handoff from Start Agent to Multiply Agent. Usage: 2 requests, 323 input tokens, 30 output tokens, 353 total tokens
+### 5: Agent Multiply Agent started. Usage: 2 requests, 323 input tokens, 30 output tokens, 353 total tokens
+### 6: Tool multiply_by_two started. Usage: 3 requests, 504 input tokens, 46 output tokens, 550 total tokens
+### 7: Tool multiply_by_two ended with result 202. Usage: 3 requests, 504 input tokens, 46 output tokens, 550 total tokens
+### 8: Agent Multiply Agent ended with output number=202. Usage: 4 requests, 714 input tokens, 63 output tokens, 777 total tokens
 Done!
 
 """
examples/mcp/prompt_server/README.md

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
+# MCP Prompt Server Example
+
+This example uses a local MCP prompt server in [server.py](server.py).
+
+Run the example via:
+
+```
+uv run python examples/mcp/prompt_server/main.py
+```
+
+## Details
+
+The example uses the `MCPServerStreamableHttp` class from `agents.mcp`. The server runs in a sub-process at `http://localhost:8000/mcp` and provides user-controlled prompts that generate agent instructions.
+
+The server exposes prompts like `generate_code_review_instructions` that take parameters such as focus area and programming language. The agent calls these prompts to dynamically generate its system instructions based on user-provided parameters.
+
+## Workflow
+
+The example demonstrates two key functions:
+
+1. **`show_available_prompts`** - Lists all available prompts on the MCP server, showing users what prompts they can select from. This demonstrates the discovery aspect of MCP prompts.
+
+2. **`demo_code_review`** - Shows the complete user-controlled prompt workflow:
+   - Calls `generate_code_review_instructions` with specific parameters (focus: "security vulnerabilities", language: "python")
+   - Uses the generated instructions to create an Agent with specialized code review capabilities
+   - Runs the agent against vulnerable sample code (command injection via `os.system`)
+   - The agent analyzes the code and provides security-focused feedback using available tools
+
+This pattern allows users to dynamically configure agent behavior through MCP prompts rather than hardcoded instructions.
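The discover-then-fetch workflow described above can be sketched independently of the SDK. The prompt registry and the `list_prompts`/`get_prompt` helpers below are hypothetical stand-ins that mimic the MCP server's behavior; they are not the `agents.mcp` API.

```python
# Hypothetical in-process stand-in for an MCP prompt server: a registry of
# named prompt templates plus discovery and parameterized retrieval.
PROMPTS = {
    "generate_code_review_instructions": {
        "description": "Generate agent instructions for code review tasks",
        "template": "You are a senior {language} reviewer focused on {focus}.",
    },
}


def list_prompts():
    # Discovery step: what prompts can the user select from?
    return [(name, meta["description"]) for name, meta in PROMPTS.items()]


def get_prompt(name, arguments):
    # Retrieval step: render the selected prompt with user-supplied parameters.
    return PROMPTS[name]["template"].format(**arguments)


for name, desc in list_prompts():
    print(f"{name} - {desc}")

instructions = get_prompt(
    "generate_code_review_instructions",
    {"focus": "security vulnerabilities", "language": "python"},
)
print(instructions)
# → You are a senior python reviewer focused on security vulnerabilities.
```

In the real example, these rendered instructions would then be passed to an Agent's `instructions` parameter instead of a hardcoded string.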

examples/mcp/prompt_server/main.py

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
+import asyncio
+import os
+import shutil
+import subprocess
+import time
+from typing import Any
+
+from agents import Agent, Runner, gen_trace_id, trace
+from agents.mcp import MCPServer, MCPServerStreamableHttp
+from agents.model_settings import ModelSettings
+
+
+async def get_instructions_from_prompt(mcp_server: MCPServer, prompt_name: str, **kwargs) -> str:
+    """Get agent instructions by calling MCP prompt endpoint (user-controlled)"""
+    print(f"Getting instructions from prompt: {prompt_name}")
+
+    try:
+        prompt_result = await mcp_server.get_prompt(prompt_name, kwargs)
+        content = prompt_result.messages[0].content
+        if hasattr(content, 'text'):
+            instructions = content.text
+        else:
+            instructions = str(content)
+        print("Generated instructions")
+        return instructions
+    except Exception as e:
+        print(f"Failed to get instructions: {e}")
+        return f"You are a helpful assistant. Error: {e}"
+
+
+async def demo_code_review(mcp_server: MCPServer):
+    """Demo: Code review with user-selected prompt"""
+    print("=== CODE REVIEW DEMO ===")
+
+    # User explicitly selects prompt and parameters
+    instructions = await get_instructions_from_prompt(
+        mcp_server,
+        "generate_code_review_instructions",
+        focus="security vulnerabilities",
+        language="python",
+    )
+
+    agent = Agent(
+        name="Code Reviewer Agent",
+        instructions=instructions,  # Instructions from MCP prompt
+        model_settings=ModelSettings(tool_choice="auto"),
+    )
+
+    message = """Please review this code:
+
+def process_user_input(user_input):
+    command = f"echo {user_input}"
+    os.system(command)
+    return "Command executed"
+
+"""
+
+    print(f"Running: {message[:60]}...")
+    result = await Runner.run(starting_agent=agent, input=message)
+    print(result.final_output)
+    print("\n" + "=" * 50 + "\n")
+
+
+async def show_available_prompts(mcp_server: MCPServer):
+    """Show available prompts for user selection"""
+    print("=== AVAILABLE PROMPTS ===")
+
+    prompts_result = await mcp_server.list_prompts()
+    print("User can select from these prompts:")
+    for i, prompt in enumerate(prompts_result.prompts, 1):
+        print(f"  {i}. {prompt.name} - {prompt.description}")
+    print()
+
+
+async def main():
+    async with MCPServerStreamableHttp(
+        name="Simple Prompt Server",
+        params={"url": "http://localhost:8000/mcp"},
+    ) as server:
+        trace_id = gen_trace_id()
+        with trace(workflow_name="Simple Prompt Demo", trace_id=trace_id):
+            print(f"Trace: https://platform.openai.com/traces/trace?trace_id={trace_id}\n")
+
+            await show_available_prompts(server)
+            await demo_code_review(server)
+
+
+if __name__ == "__main__":
+    if not shutil.which("uv"):
+        raise RuntimeError("uv is not installed")
+
+    process: subprocess.Popen[Any] | None = None
+    try:
+        this_dir = os.path.dirname(os.path.abspath(__file__))
+        server_file = os.path.join(this_dir, "server.py")
+
+        print("Starting Simple Prompt Server...")
+        process = subprocess.Popen(["uv", "run", server_file])
+        time.sleep(3)
+        print("Server started\n")
+    except Exception as e:
+        print(f"Error starting server: {e}")
+        exit(1)
+
+    try:
+        asyncio.run(main())
+    finally:
+        if process:
+            process.terminate()
+            print("Server terminated.")
examples/mcp/prompt_server/server.py

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+from mcp.server.fastmcp import FastMCP
+
+# Create server
+mcp = FastMCP("Prompt Server")
+
+
+# Instruction-generating prompts (user-controlled)
+@mcp.prompt()
+def generate_code_review_instructions(
+    focus: str = "general code quality", language: str = "python"
+) -> str:
+    """Generate agent instructions for code review tasks"""
+    print(f"[debug-server] generate_code_review_instructions({focus}, {language})")
+
+    return f"""You are a senior {language} code review specialist. Your role is to provide comprehensive code analysis with focus on {focus}.
+
+INSTRUCTIONS:
+- Analyze code for quality, security, performance, and best practices
+- Provide specific, actionable feedback with examples
+- Identify potential bugs, vulnerabilities, and optimization opportunities
+- Suggest improvements with code examples when applicable
+- Be constructive and educational in your feedback
+- Focus particularly on {focus} aspects
+
+RESPONSE FORMAT:
+1. Overall Assessment
+2. Specific Issues Found
+3. Security Considerations
+4. Performance Notes
+5. Recommended Improvements
+6. Best Practices Suggestions
+
+Use the available tools to check current time if you need timestamps for your analysis."""
+
+
+if __name__ == "__main__":
+    mcp.run(transport="streamable-http")

src/agents/function_schema.py

Lines changed: 2 additions & 1 deletion
@@ -337,7 +337,8 @@ def function_schema(
     # 5. Return as a FuncSchema dataclass
     return FuncSchema(
         name=func_name,
-        description=description_override or doc_info.description if doc_info else None,
+        # Ensure description_override takes precedence even if docstring info is disabled.
+        description=description_override or (doc_info.description if doc_info else None),
         params_pydantic_model=dynamic_model,
         params_json_schema=json_schema,
        signature=sig,
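The added parentheses matter because Python's conditional expression binds more loosely than `or`: the unparenthesized form groups as `(a or b) if cond else None`, discarding the override whenever `doc_info` is absent. A minimal standalone sketch of the two groupings (the `DocInfo` class and `describe_*` helpers below are hypothetical illustrations, not the SDK's actual code):

```python
class DocInfo:
    """Stand-in for a parsed-docstring object (hypothetical)."""

    def __init__(self, description):
        self.description = description


def describe_buggy(override, doc_info):
    # Parses as: (override or doc_info.description) if doc_info else None
    return override or doc_info.description if doc_info else None


def describe_fixed(override, doc_info):
    # Override always wins; the docstring description is only a fallback.
    return override or (doc_info.description if doc_info else None)


print(describe_buggy("custom", None))  # None — the override is silently dropped
print(describe_fixed("custom", None))  # custom
print(describe_fixed(None, DocInfo("from docstring")))  # from docstring
```

Both versions agree when `doc_info` is present; they diverge only when an override is supplied without docstring info, which is exactly the case the commit fixes.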
