Yuvraj Singh Jadon

Posted on • Originally published at signoz.io

Monitor NextJS with OpenTelemetry - Complete Tutorial

Vercel gives you some observability out of the box for your NextJS application: function logs, perf insights, basic metrics. But as your app grows, the cracks start showing.

Where Vercel Falls Short

  • No End-to-End Traces: You get function-level timings, but not the full request lifecycle across middleware, DB calls, and third-party APIs.
  • Locked In: Logs and metrics stay tied to Vercel. Want to move infra or test locally? Too bad.
  • Zero Custom Instrumentation: Can’t trace auth flow timing, cache performance, or business-critical logic.
  • No Service Correlation: Can’t trace calls across microservices, queue workers, or even a database hop.
  • Weak Debug Context: No slow query breakdowns. No cache hit/miss metrics. No idea why stuff is slow.

At this point, you’ll likely explore Jaeger or Datadog. But:

  • Jaeger: Great for tracing, useless for metrics and logs.
  • Datadog: Full-featured—but don’t blink, or your bill will triple.

This guide walks through how to instrument a Next.js app using OpenTelemetry, explains Jaeger’s limits, and shows how SigNoz gives you a complete observability pipeline—without breaking the bank.

Why Even Bother Instrumenting Next.js?

You might think: “My app’s just pages and API routes. Why trace anything?”

But Next.js isn’t simple. It just looks simple.

The Hidden Complexity

Next.js runs on multiple layers:

  • Multiple environments: Node? Edge? Browser? Each with its own quirks.
  • Hybrid rendering: SSR, SSG, ISR—each with different bottlenecks.
  • Middleware: Runs on every request.
  • API routes: Mini backends.
  • Client/server boundaries: Blurred, dynamic, often hard to trace.

Without instrumentation, you're flying blind.

Real Scenarios Where Tracing Saves Hours

The Mysterious Slow Page Load

User says: “The dashboard is slow.”

Without tracing: Stare at server logs. Hope for a clue.

With tracing: 50ms render + 3s wait on a third-party API. Case closed.

The Intermittent 500

API crashes randomly.

Without tracing: Try to reproduce locally. Maybe cry.

With tracing: DB timeout after 12 retries—problem found.

The Deployment Regression

Site feels slower after latest deploy.

Without tracing: Play “spot the difference” in DevTools.

With tracing: New query in Server Component adds 200ms—roll it back or optimize.

Bottom line: Instrumentation turns debugging from guesswork into precision diagnosis.

Setup a Real App

Let’s start with something non-trivial. We'll use the official with-supabase example - it’s got real user auth, DB reads/writes, and third-party API usage. Exactly the kind of app where observability shines.

Create the App

```bash
npx create-next-app@latest nextjs-observability-demo --example with-supabase
cd nextjs-observability-demo
```

Add Environment Config

```bash
cp .env.example .env.local
```

Update .env.local:

```
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
```

Test it Locally

```bash
npm run dev
```

Output should look like:

```
▲ Next.js 15.3.3 (Turbopack)
- Local: http://localhost:3000

✓ Ready in 855ms
GET /auth/sign-up 200 in 491ms
GET /protected 200 in 665ms
```

Open http://localhost:3000 and test signup, login, and protected routes.

NextJS + Supabase Template Project

What This App Gives Us

  • Auth system: Signup, login, password reset, session flow
  • Mixed rendering: SSR, SSG, API routes, Client Components
  • Real DB operations: Auth, user management
  • External APIs: Supabase
  • Middleware: You can add some; we’ll instrument it later

This isn’t “hello world.” It mimics production-grade complexity—ideal for tracing, metrics, and debugging in the real world.

Note: This guide uses different ports (3000, 3001) across sections. Make sure the port in each command matches the one your app is actually running on.

Setting Up OpenTelemetry in Next.js

Time to wire up OpenTelemetry into your app. We'll follow the official Next.js guide and use @vercel/otel, which is the preferred path for most setups.

It supports both Node and Edge runtimes, comes with sane defaults, and is maintained by the Next.js team, so there’s no need to reinvent the wheel unless you really want to.

1. Install @vercel/otel

```bash
npm install @vercel/otel
```

2. Create instrumentation.ts

Create a file named instrumentation.ts at the root of your project (or the src folder if you are using it):

```ts
import { registerOTel } from '@vercel/otel'

export function register() {
  registerOTel({ serviceName: 'nextjs-observability-demo' })
}
```

This hook auto-registers tracing across your entire app - including API routes, page rendering, and fetch calls.

The serviceName shows up in your tracing backend (Jaeger, SigNoz, etc.) and helps you separate services in a distributed system.
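
The auto-instrumentation only covers what the framework can see. For business-specific spans (auth flow timing, cache hits, report generation), you can drop down to the @opentelemetry/api package that @vercel/otel builds on. Here’s a minimal sketch for a hypothetical route handler; the route path and the buildReport helper are made up for illustration, and you may need to add @opentelemetry/api to your dependencies:

```ts
// app/api/report/route.ts - hypothetical route handler with a custom span
import { trace, SpanStatusCode } from '@opentelemetry/api'

const tracer = trace.getTracer('nextjs-observability-demo')

export async function GET() {
  return tracer.startActiveSpan('generate-report', async (span) => {
    try {
      span.setAttribute('report.type', 'daily') // custom business attribute
      const data = await buildReport() // stand-in for your real logic
      return Response.json(data)
    } catch (err) {
      span.recordException(err as Error)
      span.setStatus({ code: SpanStatusCode.ERROR })
      throw err
    } finally {
      span.end() // always end the span, even when the handler throws
    }
  })
}

// placeholder for whatever work you actually want to time
async function buildReport() {
  return { ok: true }
}
```

Because startActiveSpan runs inside the current request context, these custom spans nest under the framework’s request span and show up inside the same trace in Jaeger or SigNoz.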

Update next.config.mjs to include instrumentationHook

This step is only needed when using NextJS 14 and below:

```js
// next.config.mjs - this config flag is deprecated in newer Next.js versions
const nextConfig = {
  experimental: {
    instrumentationHook: true, // 🔴 only include when using Next.js 14 or below
  },
}
```

Why Use @vercel/otel?

Because it just works—with minimal boilerplate.

Benefits:

  • Auto-detects Node.js vs Edge environments
  • Pre-configured with smart defaults
  • Maintained by the Next.js team
  • Simple to install, easy to maintain

What If You Want Full Control?

Then go manual:

```ts
// For advanced users
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    await import('./instrumentation.node.ts')
  }
}
```

This gives you full access to the OpenTelemetry Node SDK, but you’re responsible for configuring everything (exporters, propagators, batching, and so on) by creating your own instrumentation.node.ts file, as explained in the official documentation.
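
For reference, a bare-bones instrumentation.node.ts tends to look something like the sketch below. Treat it as an outline rather than a drop-in file: the package names are the standard OpenTelemetry JS SDK ones, but exact exports (such as the semantic-convention constant for the service name) shift between SDK versions.

```ts
// instrumentation.node.ts - minimal manual setup (sketch; exports vary by SDK version)
import { NodeSDK } from '@opentelemetry/sdk-node'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node'
import { Resource } from '@opentelemetry/resources'
import { SEMRESATTRS_SERVICE_NAME } from '@opentelemetry/semantic-conventions'

const sdk = new NodeSDK({
  resource: new Resource({
    [SEMRESATTRS_SERVICE_NAME]: 'nextjs-observability-demo',
  }),
  // SimpleSpanProcessor exports each span immediately; fine for local debugging,
  // swap in a BatchSpanProcessor for production workloads
  spanProcessor: new SimpleSpanProcessor(
    new OTLPTraceExporter() // reads the OTEL_EXPORTER_OTLP_* env vars
  ),
})

sdk.start()
```

The upside is full control over exporters, processors, and propagators; the downside is that keeping up with SDK releases is now your job.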

3. Run It and Verify

Start your dev server:

```bash
npm run dev
```

You should see something like:

```
✓ Compiled instrumentation Node.js in 157ms
✓ Compiled instrumentation Edge in 107ms
✓ Ready in 1235ms
```

Trigger a trace:

```bash
curl http://localhost:3001/protected
```

What Gets Instrumented Automatically?

Next.js comes with solid OpenTelemetry support out of the box. Once you enable instrumentation, it automatically generates spans for key operations—no manual code required.

According to the official docs, here’s what you get by default:

Span Conventions

All spans follow OpenTelemetry’s semantic conventions and include custom attributes under the next namespace:

| Attribute | Meaning |
| --- | --- |
| next.span_type | Internal operation type |
| next.span_name | Duplicates the span name |
| next.route | Matched route (e.g. /[id]/edit) |
| next.rsc | Whether it's a React Server Component |
| next.page | Internal identifier for special files |

Default Span Types

  1. HTTP Requests

Type: BaseServer.handleRequest

What you get: method, route, status, total duration

  2. Route Rendering

Type: AppRender.getBodyResult

Tells you: how long server-side rendering took

  3. API Route Execution

Type: AppRouteRouteHandlers.runHandler

Covers: custom handlers in app/api/

  4. Fetch Requests

Type: AppRender.fetch

Covers: any fetch() used during rendering

Tip: disable with NEXT_OTEL_FETCH_DISABLED=1

  5. Metadata Generation

Type: ResolveMetadata.generateMetadata

Tracks: SEO-related dynamic metadata costs

  6. Component Loading

Types:

  • clientComponentLoading
  • findPageComponents
  • getLayoutOrPageModule

Insight: Which modules are loaded, how long it takes

  7. Server Response Start

Type: NextNodeServer.startResponse

Why it matters: measures TTFB (Time to First Byte)

  8. Pages Router Support (legacy)

Still using getServerSideProps or getStaticProps? Next.js instruments those too.

Why This Matters

Without writing a single line of tracing code, you now get:

  • Request and route-level performance
  • Server rendering + metadata overhead
  • External API call timings
  • Component/module load times
  • TTFB and response latency

It’s a solid start—but you still need somewhere better than Jaeger to actually work with these traces at scale.

Debugging with Env Variables

Need to see more detail?

```bash
export NEXT_OTEL_VERBOSE=1
```

You’ll get verbose span logs in the terminal—useful when debugging instrumentation issues.

Next: These traces are only local for now. Let’s plug in a collector and pipe them to something visual - starting with Jaeger.

Running a Collector Locally and Testing Traces

You’ve got instrumentation. Now it’s time to capture those traces with an OpenTelemetry Collector and ship them to something visual like Jaeger.

We’ll use Vercel’s dev setup which comes pre-bundled with:

  • OpenTelemetry Collector
  • Jaeger
  • Zipkin
  • Prometheus
  • Pre-wired Docker Compose config

1. Clone and Start the Collector Stack

```bash
git clone https://github.com/vercel/opentelemetry-collector-dev-setup.git
cd opentelemetry-collector-dev-setup
```

2. Update the Collector Config (Important)

The default config may be outdated. Replace otel-collector-config.yaml with the following updated version:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    const_labels:
      label1: value1
  debug:
    verbosity: basic
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto
  otlp/jaeger:
    endpoint: jaeger-all-in-one:14250
    tls:
      insecure: true

processors:
  batch:

extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679

service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, zipkin, otlp/jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, prometheus]
```

This updated config:

  • Replaces deprecated exporters
  • Adds support for the latest collector version
  • Sends traces to both Jaeger and Zipkin

3. Start the Collector Stack

```bash
export OTELCOL_IMG=otel/opentelemetry-collector-contrib:latest
export OTELCOL_ARGS=""
docker compose up -d
```

4. Confirm It’s Running

```bash
docker compose ps
```

You should see the collector, Jaeger, Zipkin, and Prometheus containers up and running.

Configure Next.js to Export Traces

Add this to your .env.local:

```
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4318/v1/traces
OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=http/protobuf
OTEL_LOG_LEVEL=debug
NEXT_OTEL_VERBOSE=1
```

Then restart:

```bash
npm run dev
```

You should see logs like:

```
✓ Compiled instrumentation Node.js in 157ms
✓ Compiled instrumentation Edge in 107ms
@vercel/otel/otlp: onSuccess 200 OK
```

Test It

Send a few requests:

```bash
curl http://localhost:3000
curl http://localhost:3000/protected
curl http://localhost:3000/auth/login
```

Check collector logs:

```bash
docker compose logs otel-collector --tail=10
```

Look for:

```
info Traces {"resource spans": 2, "spans": 23}
```

View Traces in Jaeger

  1. Go to http://localhost:16686
  2. Select nextjs-observability-demo in the dropdown
  3. Click Find Traces
  4. Click any trace to view its full request flow

Example NextJS trace in Jaeger

Troubleshooting

| Problem | Likely Cause | Fix |
| --- | --- | --- |
| Collector keeps restarting | Bad YAML config | Run docker compose logs otel-collector and fix otel-collector-config.yaml |
| onError in Next.js console | Collector unreachable | Use 0.0.0.0:4318, not localhost:4318, in the collector config |
| No traces in Jaeger | Missing env vars | Check .env.local for correct OTEL_EXPORTER_OTLP_... values |
| Docker doesn’t start | Docker Desktop not running | open -a Docker (macOS) or start it manually |

Now that traces are flowing and visualized, we’ll look at how to interpret them—what each span tells you, and how to drill into performance issues in your Next.js stack using real trace data.

Useful Jaeger Features

  • Trace comparison — spot regressions
  • Dependency map — visualize service call flow
  • Direct links — share traces with teammates
  • Export support — for deeper analysis

But Here's the Catch

Jaeger is great—until it isn’t.

| ✅ What It Does Well | ❌ What It Lacks |
| --- | --- |
| Visualize request flow | No dashboards |
| Drill into trace timings | No metrics or alerts |
| Debug route/API latency | No log correlation |
| Understand call dependencies | No business KPI visibility |

We need more than just a trace viewer. In production, you want:

  • Metrics + alerting
  • Logs + trace correlation
  • Dashboards for teams
  • Advanced filters and KPIs

Next up: how to plug SigNoz into your setup to get full-stack observability—without leaving OpenTelemetry.

Let’s go.

Sending Data to SigNoz: Production-Ready Observability

Jaeger is great for local debugging—but when you move to production, you need more: metrics, logs, alerting, dashboards. That’s where SigNoz comes in.

Why SigNoz Over Jaeger?

| Capability | Jaeger | SigNoz |
| --- | --- | --- |
| Distributed tracing | ✅ | ✅ |
| Metrics dashboard | ❌ | ✅ |
| Log aggregation + correlation | ❌ | ✅ |
| Real-time alerting | ❌ | ✅ |
| Long-term storage | ❌ | ✅ |
| Custom dashboards / KPIs | ❌ | ✅ |

You get everything Jaeger does, plus actual production monitoring.

Updating the Collector to Send Data to SigNoz

Let's update the existing otel-collector setup to export traces and metrics to SigNoz Cloud - while keeping Jaeger for local use.

Step 1: Add SigNoz Config

Update your .env file in opentelemetry-collector-dev-setup:

```
SIGNOZ_ENDPOINT=ingest.us.signoz.cloud:443   # change 'us' if needed
SIGNOZ_INGESTION_KEY=your-signoz-key-here
```

Note: Create an account on SigNoz Cloud and get the ingestion key and region from the settings page.

Step 2: Modify otel-collector-config.yaml

Add SigNoz to the list of exporters:

```yaml
exporters:
  otlp/signoz:
    endpoint: "${SIGNOZ_ENDPOINT}"
    headers:
      signoz-access-token: "${SIGNOZ_INGESTION_KEY}"
    tls:
      insecure: false

service:
  pipelines:
    traces:
      exporters: [debug, zipkin, otlp/jaeger, otlp/signoz]
    metrics:
      exporters: [debug, prometheus, otlp/signoz]
```

Step 3: Update docker-compose.yaml

Pass SigNoz credentials as environment vars:

```yaml
# under the otel-collector service, so the config can expand these variables
environment:
  - SIGNOZ_ENDPOINT=${SIGNOZ_ENDPOINT}
  - SIGNOZ_INGESTION_KEY=${SIGNOZ_INGESTION_KEY}
```

Step 4: Restart Collector

```bash
docker compose down
docker compose up -d
```

Check logs to verify export:

```bash
docker compose logs otel-collector | grep signoz
```

You should see:

```
Successfully exported trace data to SigNoz
```

Generate Traffic to Test

```bash
curl http://localhost:3001/
curl http://localhost:3001/protected
curl http://localhost:3001/auth/login
curl http://localhost:3001/nonexistent
```

View in SigNoz

  1. Head to your SigNoz dashboard
  2. Check Services – you should see nextjs-observability-demo
  3. Go to Traces and select one to visualize it

SigNoz trace details page, with support for over a million spans

Alternatively: Direct Export to SigNoz

You can send traces directly from the app (skip collector), but it’s not ideal for production:

```ts
import { registerOTel, OTLPHttpJsonTraceExporter } from '@vercel/otel'

export function register() {
  registerOTel({
    serviceName: 'nextjs-observability-demo',
    traceExporter: new OTLPHttpJsonTraceExporter({
      url: 'https://ingest.us.signoz.cloud/v1/traces',
      headers: {
        'signoz-access-token': process.env.SIGNOZ_INGESTION_KEY || '',
      },
    }),
  })
}
```

Why Stick with the Collector?

  • Works with multiple backends (SigNoz + Jaeger)
  • Better reliability and batching
  • Local debugging + remote monitoring

Troubleshooting Common Issues

| Problem | Fix |
| --- | --- |
| ❌ No data in SigNoz | Check collector logs for trace export failures |
| ❌ Wrong region | Update SIGNOZ_ENDPOINT to match your region |
| ❌ No environment variables | Verify .env is loaded correctly |
| ❌ Missing ingestion key | Copy it from SigNoz Cloud settings |

Exploring the Out-of-the-Box APM in SigNoz

Now that your Next.js app is streaming traces to SigNoz, you’ll notice something right away: this isn’t just a Jaeger clone with better UI.

SigNoz ships with full-blown APM (Application Performance Monitoring) capabilities out of the box - without you having to write a single custom metric.

That means no YAML spelunking, no manual dashboard setups—just useful insights, right there, from the moment traces start flowing.

SigNoz out of the box APM

What Makes SigNoz APM Actually Useful?

Most APMs either overload you with graphs or bury useful data behind paywalls. SigNoz hits the sweet spot: high-fidelity insights derived directly from OpenTelemetry traces, backed by a high-performance columnar database for blazing-fast queries and dashboards that update in near real-time.

You get:

  • Latency breakdowns (P50, P90, P99)
  • Request rate, error rate, Apdex
  • Database and external API performance
  • Auto-discovered endpoints

No extra config. No magic wrappers. Just signal.

From Spikes to Spans in 3 Clicks

Let’s say you see a latency spike around 2:45 PM. Instead of guessing, you:

  1. Click the spike on the chart
  2. SigNoz filters down to relevant traces
  3. You spot a rogue DB query eating 2.1 seconds

No guesswork. No slow log tailing. Just trace → span → fix.

"spans": [ { "name": "GET /auth/login", "duration": "2.3s" }, { "name": "supabase_auth", "duration": "150ms" }, { "name": "db.query.getUser", "duration": "2.1s" } // 💥 ] 
Enter fullscreen mode Exit fullscreen mode

External API + DB Visibility, No Extra Setup

If your app calls Supabase, Stripe, or any third-party API, you’ll see how long those calls take, how often they fail, and where they sit in your request timeline.

Same goes for your database queries: SigNoz shows frequency, duration, and highlights outliers as slow queries—without needing a separate DB monitoring solution.
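
If you want even richer context on a specific call, say tagging which user a slow profile lookup belongs to, you can still wrap it in a custom span on top of the automatic ones. A sketch under some assumptions: the profiles table, the attribute names, and the @/lib/supabase/server import path follow the template's conventions but aren't guaranteed to match your copy.

```ts
// lib/traced-queries.ts - illustrative wrapper around a Supabase query
import { trace, SpanStatusCode } from '@opentelemetry/api'
import { createClient } from '@/lib/supabase/server' // path assumed from the template

const tracer = trace.getTracer('nextjs-observability-demo')

export async function getUserProfile(userId: string) {
  return tracer.startActiveSpan('db.query.getUserProfile', async (span) => {
    // attributes make the span easy to filter on later
    span.setAttribute('db.system', 'postgresql')
    span.setAttribute('db.operation', 'select')
    span.setAttribute('app.user_id', userId)
    try {
      const supabase = await createClient()
      const { data, error } = await supabase
        .from('profiles') // hypothetical table name
        .select('*')
        .eq('id', userId)
        .single()
      if (error) throw error
      return data
    } catch (err) {
      span.recordException(err as Error)
      span.setStatus({ code: SpanStatusCode.ERROR })
      throw err
    } finally {
      span.end()
    }
  })
}
```

The extra attributes then show up as filterable fields in SigNoz's trace explorer, so you can slice slow queries by user, table, or operation.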

Next: Visualizing + Building Dashboards in SigNoz

In the upcoming articles, we'll explore how to actually use SigNoz to:

  • Find slow endpoints
  • Track API performance
  • Alert on spikes or failures
  • Create dashboards for your team

Let’s observe!
