OpenInference Autogen-Agentchat Instrumentation

OpenTelemetry instrumentation for Autogen AgentChat, enabling tracing of agent interactions and conversations.

The traces emitted by this instrumentation are fully OpenTelemetry compatible and can be sent to an OpenTelemetry collector for viewing, such as Arize Phoenix.

Installation

pip install openinference-instrumentation-autogen-agentchat 

Quickstart

In this example we will instrument a simple Autogen AgentChat application and observe the traces via arize-phoenix.

Install required packages.

pip install openinference-instrumentation-autogen-agentchat autogen-agentchat arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp 

Start the Phoenix server so that it is ready to collect traces. The Phoenix server runs entirely on your machine and does not send data over the internet.

phoenix serve 

Here's a simple example using a single assistant agent:

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

# Set up the tracer provider
endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

# Instrument AutogenAgentChat
AutogenAgentChatInstrumentor().instrument(tracer_provider=tracer_provider)


async def main():
    model_client = OpenAIChatCompletionClient(
        model="gpt-3.5-turbo",
    )

    def get_weather(city: str) -> str:
        """Get the weather for a given city."""
        return f"The weather in {city} is 73 degrees and Sunny."

    # Create an assistant agent with tools
    agent = AssistantAgent(
        name="weather_agent",
        model_client=model_client,
        tools=[get_weather],
        system_message="You are a helpful assistant that can check the weather.",
        reflect_on_tool_use=True,
        model_client_stream=True,
    )

    result = await agent.run(task="What is the weather in New York?")
    await model_client.close()
    print(result)


if __name__ == "__main__":
    asyncio.run(main())

For a more complex example using multiple agents in a team:

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

# Set up the tracer provider
endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

# Instrument AutogenAgentChat
AutogenAgentChatInstrumentor().instrument(tracer_provider=tracer_provider)


async def main():
    model_client = OpenAIChatCompletionClient(
        model="gpt-4",
    )

    # Create two agents: a primary and a critic
    primary_agent = AssistantAgent(
        "primary",
        model_client=model_client,
        system_message="You are a helpful AI assistant.",
    )
    critic_agent = AssistantAgent(
        "critic",
        model_client=model_client,
        system_message=(
            "Provide constructive feedback. "
            "Respond with 'APPROVE' when your feedback is addressed."
        ),
    )

    # Termination condition: stop when the critic says "APPROVE"
    text_termination = TextMentionTermination("APPROVE")

    # Create a team with both agents
    team = RoundRobinGroupChat(
        [primary_agent, critic_agent], termination_condition=text_termination
    )

    # Run the team on a task
    result = await team.run(task="Write a short poem about the fall season.")
    await model_client.close()
    print(result)


if __name__ == "__main__":
    asyncio.run(main())

Since we are using OpenAI, we must set the OPENAI_API_KEY environment variable to authenticate with the OpenAI API.

export OPENAI_API_KEY=[your_key_here] 
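Alternatively, if you are working in a notebook or cannot export shell variables, you can set the key from Python before creating the model client. The key value below is a placeholder for illustration:

```python
import os

# Placeholder value; substitute your real OpenAI API key.
os.environ["OPENAI_API_KEY"] = "sk-your-key-here"

# The variable is now visible to libraries that read it, such as the OpenAI client.
print(os.environ["OPENAI_API_KEY"])
```

Set the variable before the model client is constructed, since the client reads it at creation time.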

Now run the Python file and observe the traces in Phoenix.

python your_file.py 
