
OpenInference LiteLLM Instrumentation

LiteLLM allows developers to call all LLM APIs using the OpenAI format. LiteLLM Proxy is a proxy server for calling 100+ LLMs in the OpenAI format. Both are supported by this auto-instrumentation.
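
For example, the same completion() call shape works across providers. A minimal sketch (the Anthropic model name below is illustrative and assumes ANTHROPIC_API_KEY is set):

import litellm

# The same OpenAI-style call shape, pointed at a different provider.
# The model string is illustrative; any model LiteLLM supports works here.
response = litellm.completion(
    model="anthropic/claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)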

This package implements OpenInference tracing for the following LiteLLM functions:

  • completion()
  • acompletion()
  • completion_with_retries()
  • embedding()
  • aembedding()
  • image_generation()
  • aimage_generation()

These traces are fully OpenTelemetry compatible and can be sent to an OpenTelemetry collector for viewing, such as Arize Phoenix.
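
For instance, pointing the span exporter at a generic OpenTelemetry Collector instead of Phoenix only changes the endpoint. A sketch, assuming a Collector listening on 4318 (the default OTLP/HTTP port):

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# 4318 is the default OTLP/HTTP port of an OpenTelemetry Collector.
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)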

Installation

pip install openinference-instrumentation-litellm 

Quickstart

In a notebook environment (Jupyter, Colab, etc.), install openinference-instrumentation-litellm if you haven't already, as well as arize-phoenix and litellm.

pip install openinference-instrumentation-litellm arize-phoenix litellm 

First, import the dependencies required to auto-instrument LiteLLM and set up Phoenix as a collector for OpenInference traces.

import litellm
import phoenix as px
from openinference.instrumentation.litellm import LiteLLMInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

Next, we'll start a Phoenix server and set it as the collector.

session = px.launch_app()
endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
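
Depending on your arize-phoenix version, the phoenix.otel convenience helper can replace the manual tracer provider setup above. A sketch, assuming phoenix.otel.register is available in your installed version:

from phoenix.otel import register

# Builds a TracerProvider wired to the local Phoenix endpoint by default.
tracer_provider = register()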

Set up any API keys needed in your API calls. For example:

import os

os.environ["OPENAI_API_KEY"] = "PASTE_YOUR_API_KEY_HERE"
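
If you'd rather not paste the key into the notebook source, you can prompt for it instead. A sketch using the standard-library getpass module:

import getpass
import os

# Prompt for the key so it never appears in the notebook source.
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")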

Instrumenting LiteLLM is simple:

LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider) 

Now all calls to LiteLLM functions are instrumented and can be viewed in the Phoenix UI.

completion_response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"content": "What's the capital of China?", "role": "user"}],
)
print(completion_response)

acompletion_response = await litellm.acompletion(
    model="gpt-3.5-turbo",
    messages=[
        {"content": "Hello, I want to bake a cake", "role": "user"},
        {"content": "Hello, I can pull up some recipes for cakes.", "role": "assistant"},
        {"content": "No actually I want to make a pie", "role": "user"},
    ],
    temperature=0.7,
    max_tokens=20,
)
print(acompletion_response)

embedding_response = litellm.embedding(model="text-embedding-ada-002", input=["good morning!"])
print(embedding_response)

image_gen_response = litellm.image_generation(model="dall-e-2", prompt="cute baby otter")
print(image_gen_response)
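
Note that the bare await in the acompletion example only works at the top level of a notebook; in a plain Python script, run it through an event loop. A minimal sketch, which also exercises completion_with_retries() from the list above:

import asyncio

import litellm

async def main():
    # In a script, async LiteLLM calls need an event loop.
    response = await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"content": "Hello!", "role": "user"}],
    )
    print(response)

asyncio.run(main())

# completion_with_retries() retries transient failures and is
# instrumented just like completion().
retry_response = litellm.completion_with_retries(
    model="gpt-3.5-turbo",
    messages=[{"content": "What's the capital of China?", "role": "user"}],
)
print(retry_response)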

You can also uninstrument the functions as follows:

LiteLLMInstrumentor().uninstrument(tracer_provider=tracer_provider) 

Now any LiteLLM function calls you make will not send traces to Phoenix until you instrument again.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

openinference_instrumentation_litellm-0.1.27.tar.gz (33.8 kB)


Built Distribution

openinference_instrumentation_litellm-0.1.27-py3-none-any.whl

If you're not sure about the file name format, learn more about wheel file names.

File details

Details for the file openinference_instrumentation_litellm-0.1.27.tar.gz.

File metadata

File hashes

Hashes for openinference_instrumentation_litellm-0.1.27.tar.gz
Algorithm Hash digest
SHA256 60140cb8f1adf1bce7b9c33a2ad51e90a8bb7c5316bafd4c33a647721c107abc
MD5 53373557ab2e8e9444b7e36c070cea2d
BLAKE2b-256 dddddd82e518a501eb277b126836619eb16d6a8005db1491e2c9ead0323db46b

See more details on using hashes here.

File details

Details for the file openinference_instrumentation_litellm-0.1.27-py3-none-any.whl.

File metadata

File hashes

Hashes for openinference_instrumentation_litellm-0.1.27-py3-none-any.whl
Algorithm Hash digest
SHA256 f87a1b1d7142bc5ebbf13fbc511231aceba34296f8f8c93d3c5c327678fc0b70
MD5 b24632a605196daa9ebee35076829310
BLAKE2b-256 c13f7e8e2c202674c65dbb35c112e31dfda301d41025330c89e33dc6c7e2234d

See more details on using hashes here.
