# SGR Integration & Examples
Simple Python examples for using the OpenAI client with the SGR Agent Core system.
## Installation

```bash
pip install openai
```

## Basic Research Query

Simple research query without clarifications:
```python
from openai import OpenAI

# Initialize client
client = OpenAI(
    base_url="http://localhost:8010/v1",
    api_key="dummy",  # Not required for local server
)

# Make research request
response = client.chat.completions.create(
    model="sgr-agent",
    messages=[{"role": "user", "content": "Research BMW X6 2025 prices in Russia"}],
    stream=True,
    temperature=0.4,
)

# Print streaming response
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

## Handling Clarifications

Handle agent clarification requests and continue the conversation:
```python
import json

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8010/v1", api_key="dummy")

# Step 1: Initial research request
print("Starting research...")
response = client.chat.completions.create(
    model="sgr-agent",
    messages=[{"role": "user", "content": "Research AI market trends"}],
    stream=True,
    temperature=0,
)

agent_id = None
clarification_questions = []

# Process streaming response
for chunk in response:
    # Extract agent ID from model field
    if chunk.model and chunk.model.startswith("sgr_agent_"):
        agent_id = chunk.model
        print(f"\nAgent ID: {agent_id}")

    # Check for clarification requests
    if chunk.choices[0].delta.tool_calls:
        for tool_call in chunk.choices[0].delta.tool_calls:
            if tool_call.function and tool_call.function.name == "clarification":
                args = json.loads(tool_call.function.arguments)
                clarification_questions = args.get("questions", [])

    # Print content
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

# Step 2: Handle clarification if needed
if clarification_questions and agent_id:
    print("\n\nClarification needed:")
    for i, question in enumerate(clarification_questions, 1):
        print(f"{i}. {question}")

    # Provide clarification
    clarification = "Focus on LLM market trends for 2024-2025, global perspective"
    print(f"\nProviding clarification: {clarification}")

    # Continue with agent ID
    response = client.chat.completions.create(
        model=agent_id,  # Use agent ID as model
        messages=[{"role": "user", "content": clarification}],
        stream=True,
        temperature=0,
    )

    # Print final response
    for chunk in response:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")

print("\n\nResearch completed!")
```

## Notes

- Replace `localhost:8010` with your server URL.
- The `api_key` can be any string for a local server.
- The agent ID is returned in the `model` field during streaming.
- Clarification questions are sent via `tool_calls` with the function name `clarification`.
- Use the agent ID as the model name to continue the conversation (see the helper sketch below).
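For reuse across scripts, the streaming pattern above can be condensed into a small helper. This is a minimal sketch, not part of the SGR Agent Core distribution: the `start_research` function name and its return shape are our own, and it assumes the same local server URL and `sgr-agent` model used in the examples above.

```python
import json

from openai import OpenAI


def start_research(client, query):
    """Stream a research request; return (agent_id, clarification_questions).

    Hypothetical helper (not part of SGR Agent Core). Prints content
    chunks as they arrive, following the examples above.
    """
    agent_id = None
    questions = []
    stream = client.chat.completions.create(
        model="sgr-agent",
        messages=[{"role": "user", "content": query}],
        stream=True,
        temperature=0,
    )
    for chunk in stream:
        # The agent ID arrives in the model field of streamed chunks
        if chunk.model and chunk.model.startswith("sgr_agent_"):
            agent_id = chunk.model
        # Clarification questions arrive as a "clarification" tool call
        for tool_call in chunk.choices[0].delta.tool_calls or []:
            if tool_call.function and tool_call.function.name == "clarification":
                questions = json.loads(tool_call.function.arguments).get("questions", [])
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")
    return agent_id, questions


client = OpenAI(base_url="http://localhost:8010/v1", api_key="dummy")
agent_id, questions = start_research(client, "Research AI market trends")
print(f"\nAgent: {agent_id}, questions: {questions}")
```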
## cURL Examples

The system provides a fully OpenAI-compatible API with advanced agent interruption and clarification capabilities.
```bash
curl -X POST "http://localhost:8010/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sgr_agent",
    "messages": [{"role": "user", "content": "Research BMW X6 2025 prices in Russia"}],
    "stream": true,
    "max_tokens": 1500,
    "temperature": 0.4
  }'
```

When the agent needs clarification, it returns a unique agent ID in the streaming response's `model` field. You can then continue the conversation using this agent ID.
```bash
curl -X POST "http://localhost:8010/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sgr_agent",
    "messages": [{"role": "user", "content": "Research AI market trends"}],
    "stream": true,
    "max_tokens": 1500,
    "temperature": 0
  }'
```

The streaming response includes the agent ID in the `model` field:
{ "model": "sgr_agent_b84d5a01-c394-4499-97be-dad6a5d2cb86", "choices": [{ "delta": { "tool_calls": [{ "function": { "name": "clarification", "arguments": "{\"questions\":[\"Which specific AI market segment are you interested in (LLM, computer vision, robotics)?\", \"What time period should I focus on (2024, next 5 years)?\", \"Are you looking for global trends or specific geographic regions?\", \"Do you need technical analysis or business/investment perspective?\"]}" } }] } }] }curl -X POST "http://localhost:8010/v1/chat/completions" \ -H "Content-Type: application/json" \ -d '{ "model": "sgr_agent_b84d5a01-c394-4499-97be-dad6a5d2cb86", "messages": [{"role": "user", "content": "Focus on LLM market trends for 2024-2025, global perspective, business analysis"}], "stream": true, "max_tokens": 1500, "temperature": 0 }'# Get all active agents curl http://localhost:8010/agents # Get specific agent state curl http://localhost:8010/agents/{agent_id}/state # Direct clarification endpoint curl -X POST "http://localhost:8010/agents/{agent_id}/provide_clarification" \ -H "Content-Type: application/json" \ -d '{ "messages": [{"role": "user", "content": "Focus on luxury models only"}], "stream": true }'2025 // vamplab