vineetsarpal

Real-Time Slack Alerts for Microservices with Prometheus: A Practical Guide

In a microservices architecture, knowing when things go wrong is half the battle!
And you don’t want your users to be the first ones to cut you some Slack.

Wouldn’t it be better if your services could instantly alert you before things spiral out of control?

In this practical guide, we’ll wire up Prometheus and Alertmanager to ping your Slack channel the moment something fishy happens in production.


🚨 Why Monitor Microservices?

Microservices architecture brings flexibility and scalability, but it also introduces complexity. Unlike monolithic apps, you're now managing multiple independently deployed services, each with potential failure points.

Proactive monitoring and alerting ensures that you catch issues early, before they impact users.

🏗️ Architecture Overview

We’ll set up the following components:

  1. Microservices with embedded metrics
  2. Prometheus to scrape and store metrics
  3. Alertmanager to evaluate and route alerts
  4. Slack to receive notifications
```
[Microservices] → [Prometheus] → [Alertmanager] → [Slack]
      ↓                ↓               ↓              ↓
   /metrics         Storage          Rules      Notifications
```

🧰 Project Setup

Feel free to clone the full project from GitHub and follow along:
👉 https://github.com/vineetsarpal/prometheus-slack-alerts
or use this guide to integrate the setup into your own project.
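
If you go the clone route, something like this gets you to the project root:

```bash
git clone https://github.com/vineetsarpal/prometheus-slack-alerts.git
cd prometheus-slack-alerts
```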

⚙️ Setting Up the Microservices

We’ll include three microservices: one each in Python (FastAPI), JavaScript (Node.js), and Go.

🟡 FastAPI service with Automatic Instrumentation

For Python-based services, the prometheus-fastapi-instrumentator library provides automatic metrics collection:

```python
from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()

# Automatic Prometheus instrumentation
Instrumentator().instrument(app).expose(app)

@app.get("/")
async def root():
    return {"message": "Hello from FastAPI service"}
```

This automatically exposes metrics at the /metrics endpoint.
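
If you'd rather run the service directly while developing (outside the Docker setup later in this guide), a minimal sketch, assuming the code above lives in main.py:

```bash
# Install the dependencies
pip install fastapi uvicorn prometheus-fastapi-instrumentator

# Serve on port 8000 to match the Prometheus scrape config below
uvicorn main:app --host 0.0.0.0 --port 8000
```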

🟢 Node.js service with a Custom Metrics Endpoint

For Node.js services, use the prom-client library to collect default metrics and expose them from your own /metrics route:

```javascript
const express = require('express')
const client = require('prom-client')

const app = express()
const PORT = process.env.PORT || 8001

// Enable default system metrics
client.collectDefaultMetrics()

// Expose metrics endpoint
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType)
  res.end(await client.register.metrics())
})

app.get('/', (req, res) => {
  res.json({ message: 'Hello from Node.js service!' })
})

app.listen(PORT, () => {
  console.log(`Node.js server running on port ${PORT}`)
})
```
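
To try it standalone, a similar sketch, assuming the file is saved as server.js:

```bash
# Install the dependencies
npm install express prom-client

# Serve on port 8001 to match the Prometheus scrape config below
PORT=8001 node server.js
```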

🔵 Go service with Prometheus Client

Use the official Prometheus Go client:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, `{"message": "Hello from Go service!"}`)
}

func main() {
	http.HandleFunc("/", helloHandler)
	http.Handle("/metrics", promhttp.Handler())

	fmt.Println("Go service listening on port 8002")
	http.ListenAndServe(":8002", nil)
}
```

This exposes default Go process/runtime metrics at /metrics.
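
And a standalone sketch for the Go service, assuming the file is main.go in a fresh module:

```bash
# Initialise a module and pull the Prometheus client library
go mod init go-service
go get github.com/prometheus/client_golang/prometheus/promhttp

# Serve on port 8002 to match the Prometheus scrape config below
go run main.go
```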

📡 Prometheus Configuration

Configure Prometheus to scrape metrics from your services:

```yaml
# prometheus.yml
global:
  scrape_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - 'alertmanager:9093'

rule_files:
  - 'alert-rules.yml'

scrape_configs:
  - job_name: 'fastapi-service'
    static_configs:
      - targets: ['fastapi-service:8000']

  - job_name: 'nodejs-service'
    static_configs:
      - targets: ['nodejs-service:8001']

  - job_name: 'go-service'
    static_configs:
      - targets: ['go-service:8002']
```

We are scraping all services every 15 seconds.
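
One subtlety worth noting: scrape_interval only controls how often metrics are pulled; alert rules are evaluated on a separate evaluation_interval, which defaults to 1 minute. If you want rules checked as often as targets are scraped, you can optionally add it to the global block (a sketch):

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s   # optional: evaluate alert rules every 15s (Prometheus default is 1m)
```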

📏 Defining Alert Rules

```yaml
# alert-rules.yml
groups:
  - name: service_alerts
    rules:
      - alert: ServiceDown
        expr: up == 0
        for: 10s
        labels:
          severity: critical
        annotations:
          summary: "Service {{ $labels.instance }} is down"
          description: "{{ $labels.instance }} has been down for more than 10 seconds."
```

This rule fires when any service becomes unreachable for more than 10 seconds.
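
You can grow this file with more rules over time. As a hedged example, here's a sketch of a CPU alert that slots under the same rules: list. It builds on process_cpu_seconds_total, which the Python, Node.js, and Go client libraries all expose by default on Linux (the 80% threshold is illustrative):

```yaml
      - alert: HighCpuUsage
        expr: rate(process_cpu_seconds_total[5m]) > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"
          description: "{{ $labels.instance }} has used more than 80% of one CPU core for 5 minutes."
```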

🔔 Configuring Slack Alerts

🪝 Setting Up Slack Webhook

To get Slack alerts flowing, you’ll first need to create a Slack App and hook it up to a channel of your choice. Here’s how:

  1. Create a Slack channel where you’d like to receive the alerts (e.g. #prometheus-alerts)
  2. Head over to 👉 https://api.slack.com/apps
  3. Click “Create New App” → Choose “From scratch”
  4. Give your app a name (like "Prometheus Alerts"), select your workspace, and hit Create App
  5. In the left sidebar, go to Incoming Webhooks → Toggle Activate Incoming Webhooks
  6. Scroll down and click “Add New Webhook to Workspace” → Select the desired channel → Click Allow
  7. Slack will generate a Webhook URL
  8. Copy the URL and paste it into the file config/secrets/slack-webhook-url.txt
  9. This file will be mounted as a secret inside the alertmanager container at runtime

🧭 Alertmanager Config

```yaml
# alertmanager.yml
global:
  resolve_timeout: 5m

route:
  # Group alerts with the same alertname into a single notification
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'slack-notifications'

receivers:
  - name: 'slack-notifications'
    slack_configs:
      # Webhook URL is read from the mounted secret file
      - api_url_file: /etc/secrets/slack-webhook-url.txt
        # Also notify when an alert resolves
        send_resolved: true
        title: '{{ if eq .Status "firing" }}🚨 {{ .CommonLabels.severity | toUpper }} {{ else if eq .Status "resolved" }}✅ RESOLVED{{ end }} - {{ .CommonLabels.alertname }} - {{ .CommonLabels.instance }}'
        title_link: 'http://localhost:9090/alerts'
        text: |
          *Description:* {{ .CommonAnnotations.description }}
        color: '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}'
        footer: 'Prometheus Alertmanager'
```

🐳 Docker Compose Setup

Tie everything together with Docker Compose:

```yaml
# docker-compose.yml
services:
  # FastAPI Service
  fastapi-service:
    build: ./fastapi-service
    container_name: fastapi-service
    ports:
      - "8000:8000"
    networks:
      - monitoring-network

  # Node.js Service
  nodejs-service:
    build: ./nodejs-service
    container_name: nodejs-service
    ports:
      - "8001:8001"
    networks:
      - monitoring-network

  # Go Service
  go-service:
    build: ./go-service
    container_name: go-service
    ports:
      - "8002:8002"
    networks:
      - monitoring-network

  # Alert manager
  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    ports:
      - "9093:9093"
    volumes:
      - ./config/alertmanager.yml:/etc/alertmanager/alertmanager.yml
      - ./config/secrets/slack-webhook-url.txt:/etc/secrets/slack-webhook-url.txt:ro
    command:
      - '--config.file=/etc/alertmanager/alertmanager.yml'
    networks:
      - monitoring-network

  # Prometheus
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./config/alert-rules.yml:/etc/prometheus/alert-rules.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    networks:
      - monitoring-network
    depends_on:
      - alertmanager

networks:
  monitoring-network:
    driver: bridge
```

The individual Dockerfiles for each service are included in the GitHub repo.

🧪 Testing Your Setup

Let the metrics do the talking! It's time to verify that everything is wired up correctly and Prometheus is doing its job.

1. Start the stack

Make sure you're in the project's root directory, then run:

```bash
docker compose up -d
```

2. Confirm Metrics

```bash
# Check if services expose metrics
curl http://localhost:8000/metrics
curl http://localhost:8001/metrics
curl http://localhost:8002/metrics
```

You should see something like this:

*(Screenshot: sample /metrics output from curl)*

Then visit http://localhost:9090/targets in your browser to confirm that all the services are up and being scraped by Prometheus:

*(Screenshot: Prometheus targets page)*

3. Simulate an Alert

Stop a service to trigger the ServiceDown alert:

```bash
docker compose stop fastapi-service
```

Wait 10+ seconds (plus a little time for rule evaluation and Alertmanager's group_wait), then check Slack for the alert. Voilà:

*(Screenshot: Slack notification for the firing critical alert)*

Restart the service to see the resolved notification:

```bash
docker compose start fastapi-service
```

Wait a few seconds, and check Slack again for the resolved notification:

*(Screenshot: Slack notification for the resolved alert)*

🧯 Troubleshooting

Alerts Not Firing

  1. Check http://localhost:9090/rules
  2. Manually evaluate rule expressions in Prometheus UI
  3. Check Prometheus and Alertmanager logs

Slack Notifications Not Received

  1. Verify the webhook URL is correct and accessible
  2. Check Alertmanager logs for delivery errors
  3. Test the webhook manually with curl, as shown below
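
A minimal manual test, assuming your webhook URL is still in config/secrets/slack-webhook-url.txt:

```bash
# Post a test message directly to the Slack webhook
curl -X POST \
  -H 'Content-type: application/json' \
  --data '{"text": "Test message from the Prometheus alerting setup"}' \
  "$(cat config/secrets/slack-webhook-url.txt)"
```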

✅ Conclusion

Setting up Slack alerts for microservices with Prometheus provides the real-time visibility you need to keep services reliable and your team responsive.

The keys to success:

  1. Start simple with basic availability monitoring
  2. Iterate gradually by adding more sophisticated alerts
  3. Focus on actionable alerts that require human intervention
  4. Test thoroughly to ensure reliability when you need it most

This setup forms a solid foundation you can extend with custom metrics, multiple services, and advanced alerting rules as your system evolves.
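
As one example of that, here's a sketch of adding a custom business metric to the FastAPI service with prometheus_client (the orders_processed_total counter and the /orders route are hypothetical):

```python
from fastapi import FastAPI
from prometheus_client import Counter
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()
Instrumentator().instrument(app).expose(app)

# Hypothetical business metric: counts processed orders, labelled by status.
# It registers in prometheus_client's default registry, so it should appear
# on the same /metrics endpoint the instrumentator exposes.
ORDERS_PROCESSED = Counter(
    "orders_processed_total",
    "Total number of orders processed",
    ["status"],
)

@app.post("/orders")
async def create_order():
    # ... real order-handling logic would go here ...
    ORDERS_PROCESSED.labels(status="success").inc()
    return {"message": "Order processed"}
```

From there you can alert on it the same way as before, e.g. with a rule on rate(orders_processed_total{status="error"}[5m]) once you also record failures.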

Remember — great monitoring isn't about tracking everything, but about surfacing the right signals at the right time to keep things running smoothly.
