Observability for Semantic Kernel (Python) with Opik

Semantic Kernel is an open-source SDK from Microsoft that lets you combine LLMs with conventional programming languages such as C#, Python, and Java. It helps developers build sophisticated AI applications by integrating AI services, data sources, and custom logic, accelerating the delivery of enterprise-grade AI solutions.

Learn more about Semantic Kernel in the official documentation (https://learn.microsoft.com/en-us/semantic-kernel/).

Semantic Kernel Integration

Getting started

To use the Semantic Kernel integration with Opik, you will need to have Semantic Kernel and the required OpenTelemetry packages installed:

$ pip install semantic-kernel opentelemetry-exporter-otlp-proto-http

Environment configuration

Configure your environment variables based on your Opik deployment:

If you are using Opik Cloud, you will need to set the following environment variables:

$ export OTEL_EXPORTER_OTLP_ENDPOINT=https://www.comet.com/opik/api/v1/private/otel
$ export OTEL_EXPORTER_OTLP_HEADERS='Authorization=<your-api-key>,Comet-Workspace=default'

To log the traces to a specific project, you can add the projectName parameter to the OTEL_EXPORTER_OTLP_HEADERS environment variable:

$ export OTEL_EXPORTER_OTLP_HEADERS='Authorization=<your-api-key>,Comet-Workspace=default,projectName=<your-project-name>'

You can also update the Comet-Workspace parameter to a different value if you would like to log the data to a different workspace.
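If you prefer to keep this configuration in code rather than in the shell, the same variables can be set with os.environ before the OTLP exporter is created. A minimal sketch; the API key, workspace, and project name are placeholders:

import os

# Equivalent to the shell exports above; replace the placeholder values.
# Must run before OTLPSpanExporter() is constructed, since the exporter
# reads these variables at creation time.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://www.comet.com/opik/api/v1/private/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    "Authorization=<your-api-key>,Comet-Workspace=default,projectName=<your-project-name>"
)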

Using Opik with Semantic Kernel

Important: By default, Semantic Kernel does not emit spans for AI connectors because they contain experimental gen_ai attributes. You must set one of these environment variables to enable telemetry:

  • SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE=true - Includes sensitive data (prompts and completions)
  • SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS=true - Non-sensitive data only (model names, operation names, token usage)

Without one of these variables set, no AI connector spans will be emitted.

For more details, see Microsoft’s Semantic Kernel Environment Variables documentation.
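If you would rather not set these flags in code (as the example below does), you can export one of them in the shell alongside the OTLP configuration above, for example:

$ export SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE=true

Use SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS=true instead for the non-sensitive variant.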

Semantic Kernel has built-in OpenTelemetry support. Enable telemetry and configure the OTLP exporter:

import asyncio
import os

# REQUIRED: Enable Semantic Kernel diagnostics.
# Set these before importing semantic_kernel so the setting is picked up.
# Option 1: Include sensitive data (prompts and completions)
os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE"] = (
    "true"
)

# Option 2: Hide sensitive data (prompts and completions)
# os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS"] = "true"

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.semconv.resource import ResourceAttributes
from opentelemetry.trace import set_tracer_provider
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.function_choice_behavior import (
    FunctionChoiceBehavior,
)
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.connectors.ai.prompt_execution_settings import (
    PromptExecutionSettings,
)
from semantic_kernel.functions.kernel_arguments import KernelArguments
from semantic_kernel.functions.kernel_function_decorator import kernel_function


class BookingPlugin:
    @kernel_function(
        name="find_available_rooms",
        description="Find available conference rooms for today.",
    )
    def find_available_rooms(self) -> list[str]:
        return ["Room 101", "Room 201", "Room 301"]

    @kernel_function(
        name="book_room",
        description="Book a conference room.",
    )
    def book_room(self, room: str) -> str:
        return f"Room {room} booked."


def set_up_tracing():
    # Create a resource to identify this service in the telemetry backend
    resource = Resource.create({ResourceAttributes.SERVICE_NAME: "semantic-kernel-app"})

    # The exporter picks up OTEL_EXPORTER_OTLP_ENDPOINT and
    # OTEL_EXPORTER_OTLP_HEADERS from the environment configured above.
    exporter = OTLPSpanExporter()

    # Initialize a tracer provider for the application. This is a factory for creating tracers.
    tracer_provider = TracerProvider(resource=resource)
    # Span processors are initialized with an exporter, which is responsible
    # for sending the telemetry data to a particular backend.
    tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
    # Set the global default tracer provider
    set_tracer_provider(tracer_provider)


# This must be done before any other telemetry calls
set_up_tracing()


async def main():
    # Create a kernel, add an AI service, and register the plugin
    kernel = Kernel()
    kernel.add_service(OpenAIChatCompletion(ai_model_id="gpt-4.1"))
    kernel.add_plugin(BookingPlugin(), "BookingPlugin")

    # Let the model decide when to call the BookingPlugin functions
    answer = await kernel.invoke_prompt(
        "Reserve a conference room for me today.",
        arguments=KernelArguments(
            settings=PromptExecutionSettings(
                function_choice_behavior=FunctionChoiceBehavior.Auto(),
            ),
        ),
    )
    print(answer)


if __name__ == "__main__":
    asyncio.run(main())
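Note that BatchSpanProcessor exports spans on a background thread, so a short-lived script can exit before the last batch is sent. If traces appear to be missing in Opik, flushing the registered provider before the process exits should help; a minimal sketch, assuming the SDK TracerProvider set up above:

from opentelemetry import trace

# Flush any spans still buffered by the BatchSpanProcessor before exiting.
# trace.get_tracer_provider() returns the provider registered in set_up_tracing().
trace.get_tracer_provider().force_flush()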

Choosing between the environment variables:

  • Use SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE=true if you want complete visibility into your LLM interactions, including the actual prompts and responses. This is useful for debugging and development.

  • Use SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS=true for production environments where you want to avoid logging sensitive data while still capturing important metrics like token usage, model names, and operation performance.

Further improvements

If you have any questions or suggestions for improving the Semantic Kernel integration, please open an issue on our GitHub repository.