Quickstart: Tracing

Learn how to trace your LLM application in Arize AX

Tracing lets you capture, inspect, and debug every step of your LLM or agent workflow — from user prompts to model responses and tool calls. With Arize AX tracing, you can visualize execution flow, surface bottlenecks, and understand model behavior in real time.

Choose your path to get started: you can follow along step by step, or dive straight into the examples.

In this quickstart, you’ll learn how to:

  1. Install the tracing packages

  2. Get your API keys & connect to Arize AX

  3. Add tracing to your application

  4. Run your application and start viewing traces

By the end, you’ll have tracing fully integrated into your application — ready to explore spans, latency breakdowns, and context propagation in the Arize AX dashboard.

Step by Step

1. Install our tracing packages

Run the commands below to install our open source tracing packages, which work on top of OpenTelemetry. This example uses openai, and we support many other LLM providers (see the full list).

Using pip:

pip install arize-otel openai openinference-instrumentation-openai opentelemetry-exporter-otlp

Using conda:

conda install -c conda-forge openai openinference-instrumentation-openai opentelemetry-exporter-otlp

2. Get your API keys

Go to your space settings in the left navigation, and create an API key there.

Where to find your API Keys

3. Add our tracing code

Arize AX acts as an OpenTelemetry collector, which means you can configure your own tracer and span processor. For more OTel configurability, see how to set your tracer for auto-instrumentors.
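If you prefer to wire OpenTelemetry up yourself rather than using the arize-otel helper shown below, a minimal sketch might look like the following. The endpoint, header names, and resource attribute key here are assumptions for illustration only; check the Arize AX documentation for the exact values for your space.

# Manual OTel setup (sketch only -- endpoint, headers, and attribute key are assumptions)
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Exporter pointed at Arize AX (verify the endpoint and header names in the docs)
exporter = OTLPSpanExporter(
    endpoint="otlp.arize.com",
    headers={"space_id": "your-space-id", "api_key": "your-api-key"},
)

# Tracer provider with a project identifier as a resource attribute (attribute key assumed)
tracer_provider = TracerProvider(
    resource=Resource.create({"model_id": "your-project-name"}),
)
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(tracer_provider)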

The package we are using is arize-otel, which is a lightweight convenience package to set up OpenTelemetry and send traces to Arize AX.

The examples below are in Python.

Coding in JavaScript instead of Python? See our detailed guides on auto-instrumentation and manual instrumentation for JavaScript examples.

The following code snippet showcases how to automatically instrument your OpenAI application.

# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",        # in app space settings page
    api_key="your-api-key",          # in app space settings page
    project_name="your-project-name" # name this to whatever you would like
)

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.openai import OpenAIInstrumentor

# Finish automatic instrumentation
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

Set OpenAI Key:

import os
from getpass import getpass

os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key")

To test, let's send a chat request to OpenAI:

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku."}],
    max_tokens=20,
)
print(response.choices[0].message.content)

Now start asking questions to your LLM app and watch the traces being collected by Arize.
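For instance, a quick loop like the sketch below (the prompts and model name are just placeholders) will generate a handful of traces you can inspect in the next step.

import openai

client = openai.OpenAI()

# A few example prompts; each call produces a new trace in Arize AX
questions = [
    "What is OpenTelemetry?",
    "Explain retrieval-augmented generation in one sentence.",
    "Give me three tips for reducing LLM latency.",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any supported chat model works
        messages=[{"role": "user", "content": question}],
    )
    print(response.choices[0].message.content)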

4. Run your LLM application

Once you've executed a sufficient number of queries (or chats) to your application, you can view the details on the LLM Tracing page.

A detailed view of a trace of a RAG application using LlamaIndex

To continue with this guide, go to the trace evaluations guide to add evaluation labels to your traces!

Next steps

Dive deeper into the following topics to keep improving your LLM application!
