Django OpenTelemetry Instrumentation Guide
OpenTelemetry enables comprehensive monitoring of Django applications by automatically collecting telemetry data including request and response times, database query performance, and custom metrics. By integrating OpenTelemetry, you can capture distributed traces and metrics, then export this data to various observability backends for analysis and visualization.
What is Django?
Django is an open-source, high-level web framework written in Python. It follows the Model-View-Template (MVT) architectural pattern and provides a robust set of tools and features that simplify and accelerate web application development.
Django is designed to promote clean, maintainable code and follows the "Don't Repeat Yourself" (DRY) principle, emphasizing code reusability and reducing redundancy.
What is OpenTelemetry?
OpenTelemetry is an open-source observability framework that provides a standardized way to collect, process, and export telemetry data from applications and infrastructure. It combines metrics, logs, and distributed traces into a unified toolkit that helps developers understand how their systems are performing.
The framework solves vendor lock-in by providing consistent instrumentation across different technology stacks. Teams can instrument their applications once with OpenTelemetry and send that data to any compatible observability platform, making it easier to monitor complex, distributed systems without being tied to specific monitoring vendors.
Installation
To instrument a Django application with OpenTelemetry, install the required packages:
```shell
pip install opentelemetry-api opentelemetry-sdk opentelemetry-instrumentation-django
```
Additional Database-Specific Instrumentation
Depending on your database backend, you may also need to install additional instrumentation packages:
```shell
# For PostgreSQL
pip install opentelemetry-instrumentation-psycopg2

# For MySQL
pip install opentelemetry-instrumentation-dbapi

# For SQLite
pip install opentelemetry-instrumentation-sqlite3
```
Exporter Installation
To export telemetry data to observability backends, install an appropriate exporter. (The console exporter used for development and testing ships with opentelemetry-sdk and needs no extra package.)

```shell
# For OTLP over gRPC (recommended)
pip install opentelemetry-exporter-otlp

# For OTLP over HTTP/protobuf
pip install opentelemetry-exporter-otlp-proto-http
```
OpenTelemetry SDK Configuration
Initialize OpenTelemetry in your Django application by adding the following configuration to your application's startup code. This should be placed in your `manage.py`, `wsgi.py`, or a dedicated configuration module:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
    ConsoleSpanExporter,
)
from opentelemetry.sdk.resources import SERVICE_NAME, Resource

# Create resource with service information
resource = Resource(attributes={
    SERVICE_NAME: "your-django-app"
})

# Create and configure the tracer provider
provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(ConsoleSpanExporter())
provider.add_span_processor(processor)

# Set the global default tracer provider
trace.set_tracer_provider(provider)

# Create a tracer from the global tracer provider
tracer = trace.get_tracer("my.tracer.name")
```
Basic Usage
Manual Instrumentation
Django instrumentation relies on the `DJANGO_SETTINGS_MODULE` environment variable to locate your settings file. Since Django defines this variable in the `manage.py` file, you should instrument your Django application from that file:

```python
# manage.py
import os
import sys

from opentelemetry.instrumentation.django import DjangoInstrumentor


def main():
    # Ensure DJANGO_SETTINGS_MODULE is set before instrumenting
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

    # Initialize Django instrumentation
    DjangoInstrumentor().instrument()

    # Your existing Django management code here
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()
```
Auto-Instrumentation
For a simpler setup without code modifications, you can use OpenTelemetry's auto-instrumentation:
```shell
# First, install the auto-instrumentation packages
pip install opentelemetry-distro

# Bootstrap to install all relevant instrumentation packages
opentelemetry-bootstrap -a install

# Run your Django application with auto-instrumentation
opentelemetry-instrument python manage.py runserver --noreload
```

Note: Use the `--noreload` flag to prevent Django from running the initialization twice.
Creating Custom Spans
OpenTelemetry automatically traces incoming HTTP requests to your Django application. You can also create custom spans to trace specific parts of your code:
```python
from django.http import HttpResponse
from opentelemetry import trace

tracer = trace.get_tracer(__name__)


def my_view(request):
    # Create a custom span for specific business logic
    with tracer.start_as_current_span("custom-business-logic"):
        # Your application code here
        result = perform_complex_operation()

        # Add attributes to the span for better observability
        span = trace.get_current_span()
        span.set_attribute("operation.result", str(result))
        span.set_attribute(
            "user.id",
            request.user.id if request.user.is_authenticated else "anonymous",
        )

    return HttpResponse("Hello, World!")
```
Deployment Configuration
uWSGI
When using Django with uWSGI, initialize OpenTelemetry using the post-fork hook to ensure proper instrumentation in worker processes. Add this to your application startup file:
```python
from uwsgidecorators import postfork
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.instrumentation.django import DjangoInstrumentor


@postfork
def init_tracing():
    # Initialize OpenTelemetry
    provider = TracerProvider()
    processor = BatchSpanProcessor(ConsoleSpanExporter())
    provider.add_span_processor(processor)
    trace.set_tracer_provider(provider)

    # Instrument Django
    DjangoInstrumentor().instrument()
```
Gunicorn
For Django with Gunicorn, use the `post_fork` hook in your `gunicorn.conf.py`:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.instrumentation.django import DjangoInstrumentor


def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

    # Initialize OpenTelemetry
    provider = TracerProvider()
    processor = BatchSpanProcessor(ConsoleSpanExporter())
    provider.add_span_processor(processor)
    trace.set_tracer_provider(provider)

    # Instrument Django
    DjangoInstrumentor().instrument()
```
Database Instrumentation
PostgreSQL
Configure Django to use PostgreSQL by updating your `settings.py`:

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydatabase',
        'USER': 'mydatabaseuser',
        'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}
```
The PostgreSQL backend uses the psycopg2 library internally. Install and configure the instrumentation:
```shell
pip install opentelemetry-instrumentation-psycopg2
```
Then instrument the library in your initialization code:
```python
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor

Psycopg2Instrumentor().instrument()
```
For Gunicorn or uWSGI deployments, add this instrumentation to your post-fork hooks.
MySQL
Configure Django to use MySQL by updating your `settings.py`:

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydatabase',
        'USER': 'mydatabaseuser',
        'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}
```
The MySQL backend uses the mysqlclient library, which implements the Python Database API. Install the DB API instrumentation:
```shell
pip install opentelemetry-instrumentation-dbapi
```
Then instrument the mysqlclient library:
```python
import MySQLdb
from opentelemetry.instrumentation.dbapi import trace_integration

trace_integration(MySQLdb, "connect", "mysql")
```
For Gunicorn or uWSGI deployments, add this instrumentation to your post-fork hooks.
SQLite
Configure Django to use SQLite by updating your `settings.py`:

```python
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": BASE_DIR / "db.sqlite3",
    }
}
```
The SQLite backend uses Python's built-in sqlite3 library. Install the SQLite instrumentation:
```shell
pip install opentelemetry-instrumentation-sqlite3
```
Then instrument the sqlite3 library:
```python
from opentelemetry.instrumentation.sqlite3 import SQLite3Instrumentor

SQLite3Instrumentor().instrument()
```
For Gunicorn or uWSGI deployments, add this instrumentation to your post-fork hooks.
Advanced Configuration
Environment Variables
OpenTelemetry supports configuration through environment variables. Common ones include:
```shell
# Service identification (service.version is set via OTEL_RESOURCE_ATTRIBUTES)
export OTEL_SERVICE_NAME="my-django-app"

# Exporter configuration
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"

# Resource attributes
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-django-app,service.version=1.0.0"

# Disable Django instrumentation if needed
export OTEL_PYTHON_DJANGO_INSTRUMENT=false
```
SQL Commentor
You can enable SQL commentor to enrich database queries with contextual information:
```python
from opentelemetry.instrumentation.django import DjangoInstrumentor

DjangoInstrumentor().instrument(is_sql_commentor_enabled=True)
```
This will append contextual tags to your SQL queries, transforming `SELECT * FROM users` into `SELECT * FROM users /*controller='users',action='index'*/`.
Sampling Configuration
To control the volume of telemetry data, configure sampling in your tracer provider:
```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

# Sample 10% of traces
sampler = TraceIdRatioBased(0.1)
provider = TracerProvider(sampler=sampler)
```
Custom Resource Attributes
Add custom resource attributes to provide additional context:
```python
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

resource = Resource.create({
    "service.name": "my-django-app",
    "service.version": "1.0.0",
    "deployment.environment": "production",
    "service.instance.id": "instance-123",
})
provider = TracerProvider(resource=resource)
```
Error Handling and Exception Tracking
OpenTelemetry automatically captures unhandled exceptions, but you can also manually record exceptions:
```python
from django.http import JsonResponse
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode


def my_view(request):
    span = trace.get_current_span()
    try:
        # Your business logic here
        result = risky_operation()
        return JsonResponse({"result": result})
    except Exception as e:
        # Record the exception and set error status
        span.record_exception(e)
        span.set_status(Status(StatusCode.ERROR, str(e)))
        return JsonResponse({"error": "Something went wrong"}, status=500)
```
Performance Considerations
OpenTelemetry is designed to be efficient and lightweight, but it does introduce some overhead due to instrumentation and telemetry collection. The performance impact depends on several factors:
- Volume of telemetry data: More spans and attributes increase overhead
- Exporter configuration: Batch exporters are more efficient than real-time exporters
- Sampling rate: Lower sampling rates reduce overhead
- System resources: Available CPU and memory affect performance
To optimize performance:
- Configure appropriate sampling rates to balance observability needs with performance
- Use batch exporters instead of synchronous exporters when possible
- Monitor resource usage and adjust configuration as needed
- Profile your application to identify any performance bottlenecks introduced by instrumentation
- Use environment variables to disable unnecessary instrumentations
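The first two optimizations can be combined in a single tracer-provider setup. A minimal sketch, assuming a 10% sampling rate and a console exporter as placeholders for your real configuration:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Sample 10% of new traces, but always follow the parent's sampling
# decision so distributed traces stay complete.
sampler = ParentBased(TraceIdRatioBased(0.1))

provider = TracerProvider(
    sampler=sampler,
    resource=Resource.create({"service.name": "my-django-app"}),
)
# Batch spans before export instead of sending them one at a time
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```

`ParentBased` is what keeps a sampled trace intact across services: downstream spans inherit the upstream decision rather than re-rolling the ratio.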
Production Deployment Best Practices
Security Considerations
- Use secure endpoints (HTTPS) for exporting telemetry data
- Avoid logging sensitive information in span attributes
- Configure proper authentication for telemetry backends
- Review and sanitize custom attributes before adding them to spans
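One way to act on the last two points is to pass attributes through a small filter before attaching them to a span. A minimal sketch; the marker list and helper name are illustrative, not part of any OpenTelemetry API:

```python
# Keys containing these markers are treated as sensitive (illustrative list)
SENSITIVE_MARKERS = ("password", "token", "secret", "authorization", "cookie")


def sanitize_attributes(attributes):
    """Redact values whose keys look sensitive before attaching them to a span."""
    cleaned = {}
    for key, value in attributes.items():
        if any(marker in key.lower() for marker in SENSITIVE_MARKERS):
            cleaned[key] = "[REDACTED]"
        else:
            cleaned[key] = value
    return cleaned


# Usage: span.set_attributes(sanitize_attributes({...}))
print(sanitize_attributes({"user.id": 42, "http.request.header.authorization": "Bearer abc"}))
# {'user.id': 42, 'http.request.header.authorization': '[REDACTED]'}
```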
Resource Management
- Set appropriate memory limits for batch processors
- Configure timeouts for exporters to prevent blocking
- Monitor the telemetry pipeline's resource consumption
- Use compression when exporting large volumes of data
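For the limits and timeouts above, `BatchSpanProcessor` exposes tuning parameters directly. A sketch using the SDK's default values (adjust to your traffic; the console exporter is a stand-in for your real exporter):

```python
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

processor = BatchSpanProcessor(
    ConsoleSpanExporter(),
    max_queue_size=2048,          # spans buffered in memory before dropping
    schedule_delay_millis=5000,   # how often the batch is flushed
    max_export_batch_size=512,    # spans sent per export request
    export_timeout_millis=30000,  # give up on an export that blocks longer
)
```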
Monitoring the Monitor
- Set up health checks for your telemetry pipeline
- Monitor exporter success/failure rates
- Track the volume of telemetry data being generated
- Alert on telemetry pipeline failures
What is Uptrace?
Uptrace is a high-performance, open-source Application Performance Monitoring (APM) platform built for modern distributed systems. Whether you're debugging microservices, optimizing database queries, or ensuring SLA compliance, Uptrace provides unified observability without vendor lock-in.
Key Features
Uptrace offers all-in-one observability with a single interface for traces, metrics, and logs, eliminating the need for context switching between multiple tools. Built on OpenTelemetry standards for maximum compatibility, it provides:
- Blazing Fast Performance: Processes 10,000+ spans per second on a single core with advanced compression reducing 1KB spans to ~40 bytes
- Cost-Effective at Scale: AGPL open-source license with 95% storage savings vs. traditional solutions and no per-seat pricing or data ingestion limits
- Developer-First Experience: SQL-like query language for traces, PromQL-compatible metrics queries, and modern Vue.js UI with intuitive workflows
Architecture
Uptrace leverages ClickHouse for real-time analysis with sub-second queries on billions of spans and exceptional 10:1 compression ratios, while PostgreSQL handles metadata with ACID compliance and rich data type support.
Deployment Options
- Cloud (Fastest): Uptrace Cloud requires no installation, maintenance, or scaling. Sign up for a free account and get 1TB of storage and 50,000 timeseries
- Self-Hosted: Deploy using Docker Compose with full control over your data
uptrace-python OpenTelemetry Wrapper
uptrace-python is a thin wrapper over OpenTelemetry Python that configures OpenTelemetry SDK to export data to Uptrace. It does not add any new functionality and is provided only for your convenience.
Installation
pip install uptrace
Configuration for Django
Add the following configuration to your Django application's startup code (typically in `manage.py` or `wsgi.py`):

```python
import uptrace
from opentelemetry import trace

# Configure OpenTelemetry with Uptrace
uptrace.configure_opentelemetry(
    # Set DSN or use UPTRACE_DSN environment variable
    dsn="<your-uptrace-dsn>",
    service_name="my-django-app",
    service_version="1.0.0",
    deployment_environment="production",
)

# Get a tracer
tracer = trace.get_tracer("my_django_app", "1.0.0")
```
Configuration Options
The following configuration options are supported:
| Option | Description |
|---|---|
| `dsn` | A data source name (DSN) used to connect to uptrace.dev |
| `service_name` | `service.name` resource attribute |
| `service_version` | `service.version` resource attribute |
| `deployment_environment` | `deployment.environment` resource attribute |
| `resource_attributes` | Any other resource attributes |
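For example, the `resource_attributes` option accepts a plain mapping of additional attributes alongside the dedicated options (DSN and attribute values below are placeholders):

```python
import uptrace

uptrace.configure_opentelemetry(
    dsn="<your-uptrace-dsn>",
    service_name="my-django-app",
    service_version="1.0.0",
    deployment_environment="production",
    # Any extra resource attributes not covered by the options above
    resource_attributes={"service.instance.id": "instance-123"},
)
```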
Environment Variables
You can also use environment variables to configure the client:
```shell
# Data source name
export UPTRACE_DSN="https://<token>@uptrace.dev/<project_id>"

# Resource attributes
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-django-app,service.version=1.0.0"

# Service name (takes precedence over OTEL_RESOURCE_ATTRIBUTES)
export OTEL_SERVICE_NAME="my-django-app"

# Propagators
export OTEL_PROPAGATORS="tracecontext,baggage"
```
Django Integration Example
Here's a complete example for integrating uptrace-python with Django:
```python
# manage.py
import os
import sys

import uptrace
from opentelemetry.instrumentation.django import DjangoInstrumentor


def main():
    # Set Django settings module
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

    # Configure Uptrace OpenTelemetry
    uptrace.configure_opentelemetry(
        # DSN from environment variable or directly
        # dsn="https://<token>@uptrace.dev/<project_id>",
        service_name="my-django-app",
        service_version="1.0.0",
        deployment_environment=os.environ.get("ENVIRONMENT", "development"),
    )

    # Instrument Django
    DjangoInstrumentor().instrument()

    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()
```
Gunicorn Integration
With Gunicorn, use the `post_fork` hook in your `gunicorn.conf.py`:

```python
import uptrace
from opentelemetry.instrumentation.django import DjangoInstrumentor


def post_fork(server, worker):
    uptrace.configure_opentelemetry(
        service_name="my-django-app",
        service_version="1.0.0",
        deployment_environment="production",
    )
    DjangoInstrumentor().instrument()
```
uWSGI Integration
With uWSGI, use the postfork decorator:
```python
from uwsgidecorators import postfork
import uptrace
from opentelemetry.instrumentation.django import DjangoInstrumentor


@postfork
def init_tracing():
    uptrace.configure_opentelemetry(
        service_name="my-django-app",
        service_version="1.0.0",
        deployment_environment="production",
    )
    DjangoInstrumentor().instrument()
```
Viewing Traces
uptrace-python provides a convenient way to get trace URLs:
```python
from django.http import JsonResponse
from opentelemetry import trace
import uptrace


def my_view(request):
    span = trace.get_current_span()

    # Your business logic here

    # Get the trace URL for debugging
    trace_url = uptrace.trace_url(span)
    print(f"View trace at: {trace_url}")

    return JsonResponse({"status": "success"})
```
Performance Recommendations
To maximize performance and efficiency when using Uptrace, consider the following recommendations:
- Use BatchSpanProcessor: Essential for exporting multiple spans in a single request
- Enable gzip compression: Essential to compress data before sending and reduce traffic costs
- Prefer delta metrics temporality: Recommended because such metrics are smaller and Uptrace converts cumulative metrics to delta anyway
- Use Protobuf over JSON: Recommended as a generally more efficient encoding
- Prefer HTTP over gRPC: Recommended as a more mature transport with better compatibility
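If you wire the exporter yourself rather than using uptrace-python, the same recommendations translate into exporter configuration. A sketch using the HTTP/protobuf OTLP exporter with gzip compression and a batch processor; the endpoint and DSN header follow Uptrace's documented OTLP setup but should be treated as placeholders for your own backend:

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http import Compression
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

exporter = OTLPSpanExporter(
    endpoint="https://otlp.uptrace.dev/v1/traces",
    headers={"uptrace-dsn": "<your-uptrace-dsn>"},
    compression=Compression.Gzip,  # gzip-compress each export request
)

provider = TracerProvider()
# Batch spans so many are sent per request instead of one at a time
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```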
Troubleshooting SSL Errors
If you are getting SSL errors, try using different root certificates as a workaround:
```shell
export GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=/etc/ssl/certs/ca-certificates.crt
```
Troubleshooting
Common Issues
- DJANGO_SETTINGS_MODULE not found: Ensure the environment variable is set before calling `DjangoInstrumentor().instrument()`
- Missing database instrumentation: Install the appropriate database-specific instrumentation packages
- Worker process issues: Use post-fork hooks for uWSGI and Gunicorn deployments
- High performance overhead: Adjust sampling rates and exporter configuration
- Spans not appearing: Check that your exporter is configured correctly and the backend is reachable
- Auto-reload conflicts: Use the `--noreload` flag when running Django with `runserver`
Debug Mode
To enable debug logging for OpenTelemetry:
```python
import logging

# Enable debug logging for OpenTelemetry
logging.getLogger("opentelemetry").setLevel(logging.DEBUG)
logging.basicConfig(level=logging.DEBUG)
```
Testing Instrumentation
You can test your instrumentation locally using the console exporter:
```python
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Use console exporter for testing
processor = BatchSpanProcessor(ConsoleSpanExporter())
```
Best Practices
- Initialize instrumentation early: Set up OpenTelemetry before your Django application starts handling requests
- Use meaningful span names: Create descriptive span names that help identify operations
- Add relevant attributes: Include context-specific attributes to make traces more useful
- Monitor performance impact: Regularly assess the overhead introduced by instrumentation
- Configure appropriate sampling: Balance observability needs with performance requirements
- Use semantic conventions: Follow OpenTelemetry semantic conventions for consistency
- Implement proper error handling: Capture and record exceptions with appropriate context
- Regular maintenance: Keep OpenTelemetry libraries updated and review configurations periodically
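To illustrate the semantic-conventions point: attribute keys are plain strings with documented names, so following the published names keeps traces queryable across services. A hypothetical helper (the function is illustrative; the key names follow the OpenTelemetry HTTP semantic conventions):

```python
def http_request_attributes(method, route, status_code):
    """Build span attributes using OpenTelemetry HTTP semantic convention names."""
    return {
        "http.request.method": method,          # e.g. "GET"
        "http.route": route,                    # the matched route, not the raw URL
        "http.response.status_code": status_code,
    }


print(http_request_attributes("GET", "/users/{id}", 200))
# {'http.request.method': 'GET', 'http.route': '/users/{id}', 'http.response.status_code': 200}
```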
What's Next?
By integrating OpenTelemetry with Django, you gain valuable insights into your application's performance, behavior, and dependencies. You can monitor and troubleshoot issues, optimize performance, and ensure the reliability of your Django applications.
The telemetry data collected can help you:
- Identify performance bottlenecks
- Track request flows across services
- Monitor database query performance
- Understand user behavior patterns
- Detect and diagnose errors
- Optimize resource utilization
- Improve overall application reliability