## Introduction
Zero‑downtime deployments are a non‑negotiable expectation for modern services. As a DevOps lead, you’ve probably seen traffic spikes, user complaints, and frantic roll‑backs when a new release brings down the app. This checklist walks you through a practical, Docker‑centric workflow that uses Nginx as a reverse proxy in front of your app containers to achieve seamless updates without interrupting users.
## ✅ The Checklist

| # | Step | Why it matters |
|---|------|----------------|
| 1 | Version your Docker images | Guarantees you can roll back instantly. |
| 2 | Use multi‑stage builds | Keeps images small and fast to pull. |
| 3 | Run Nginx in front of the app containers | Allows traffic routing without touching the app. |
| 4 | Implement a blue‑green deployment strategy | Swaps traffic at the proxy level, not the containers. |
| 5 | Health‑check your services | Prevents bad releases from ever seeing traffic. |
| 6 | Leverage Docker Compose or Swarm for orchestration | Simplifies service definition and scaling. |
| 7 | Log and monitor the switch | Gives you confidence and quick rollback if needed. |
Below we expand each item with concrete commands and configuration snippets.
## 1. Version Your Docker Images
Tag your builds with both a semantic version and a short git SHA. This makes a roll‑back a `docker pull` away.

```shell
# Build and tag
DOCKER_REPO=myorg/api
VERSION=1.4.2
SHA=$(git rev-parse --short HEAD)

docker build -t $DOCKER_REPO:$VERSION-$SHA .
docker push $DOCKER_REPO:$VERSION-$SHA
```
When you need to revert, just pull the previous tag and redeploy.
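For instance, a tiny helper can resolve the tag you want to roll back to before pulling it. This is only a sketch: the `previous_tag` function is hypothetical, and in practice you would source the tag list from your registry or `git tag` rather than a hard-coded string.

```shell
#!/bin/sh
# Hypothetical helper: given a newest-first list of tags, pick the previous one.
previous_tag() {
  printf '%s\n' "$1" | sed -n '2p'
}

# Tags here reuse the VERSION-SHA format from the build step above.
TAGS="1.4.3-ef34gh
1.4.2-ab12cd"
PREV=$(previous_tag "$TAGS")
echo "Rolling back to myorg/api:$PREV"
# docker pull "myorg/api:$PREV"   # then redeploy (requires a live daemon/registry)
```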
## 2. Multi‑Stage Dockerfile
A lean image reduces pull time, which is crucial for zero‑downtime swaps.
```dockerfile
# ---- Build stage ----
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# ---- Runtime stage ----
FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --production
CMD ["node", "dist/index.js"]
```
The final image contains only the compiled code and production dependencies.
## 3. Nginx as a Reverse Proxy
Running Nginx in its own container lets you change upstream servers without touching the app containers.
```nginx
# /etc/nginx/conf.d/upstream.conf
upstream api_backend {
    # Placeholder – will be replaced at deploy time
    server api_blue:3000;
    server api_green:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://api_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The `api_backend` upstream can point to either the blue or green set of containers.
## 4. Blue‑Green Deployment Workflow
1. Deploy the new version as `api_green` while `api_blue` continues serving traffic.
2. Run health checks against `api_green` (e.g., `/health`).
3. Update the Nginx upstream to point only to `api_green`.
4. Reload Nginx (`docker exec nginx nginx -s reload`).
5. Monitor for errors. If anything goes wrong, revert the upstream to `api_blue`.
### Docker Compose Example
```yaml
version: "3.8"

services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - api_blue
      - api_green

  api_blue:
    image: myorg/api:1.4.2-ab12cd
    environment:
      - NODE_ENV=production
    deploy:
      replicas: 2

  api_green:
    image: myorg/api:1.4.3-ef34gh
    environment:
      - NODE_ENV=production
    deploy:
      replicas: 2
```
When the green stack passes its health checks, you simply edit `upstream.conf` to replace `api_blue` with `api_green` and reload Nginx.
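That edit-and-reload step can be scripted. The sketch below assumes the `upstream.conf` layout shown earlier; the `switch_upstream` helper is hypothetical, not part of any tool:

```shell
#!/bin/sh
# Repoint the upstream at the target colour, then reload Nginx.
# Usage: switch_upstream CONF_FILE TARGET   (TARGET is api_blue or api_green)
switch_upstream() {
  conf="$1"; target="$2"
  # Comment out every server line, then re-enable only the target.
  sed -i.bak \
    -e 's/^\( *\)server api_/\1# server api_/' \
    -e "s/^\( *\)# server ${target}:/\1server ${target}:/" \
    "$conf"
}

# switch_upstream ./nginx/conf.d/upstream.conf api_green
# docker exec nginx nginx -s reload   # requires the running nginx container
```

Keeping both `server` lines in the file (one commented out) makes reverting a re-run of the same function with the other colour.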
## 5. Health Checks
Docker Swarm and Compose support `healthcheck` directives. Define one that hits an endpoint returning `200` only when the app is ready.
```yaml
api_green:
  image: myorg/api:1.4.3-ef34gh
  healthcheck:
    # Note: curl must exist in the image; node:alpine does not ship it by default.
    test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
    interval: 10s
    timeout: 5s
    retries: 3
```
Only after the container reports `healthy` should you promote it in Nginx.
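A small polling helper can gate that promotion. The `docker inspect` health-status template below is real, but the service name, retry count, and delay are assumptions for illustration:

```shell
#!/bin/sh
# Retry CMD up to N times with DELAY seconds between attempts;
# succeed as soon as CMD does, fail if all attempts fail.
# Usage: wait_until N DELAY CMD...
wait_until() {
  n="$1"; delay="$2"; shift 2
  i=0
  while [ "$i" -lt "$n" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Gate the switch on Docker's reported health (requires a running container):
# wait_until 30 2 sh -c \
#   '[ "$(docker inspect -f "{{.State.Health.Status}}" api_green)" = healthy ]'
```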
## 6. Orchestrate with Docker Compose / Swarm
Using a single `docker-compose.yml` keeps the whole stack version‑controlled. For production you can spin it up with:

```shell
docker stack deploy -c docker-compose.yml prod
```
Swarm will handle rolling updates, but you still retain manual control over the proxy switch, giving you the safety net of a true blue‑green rollout.
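If you also want Swarm's rolling updates as a second safety layer for the app services, the pace is tunable per service. A sketch with illustrative values:

```yaml
api_green:
  image: myorg/api:1.4.3-ef34gh
  deploy:
    replicas: 2
    update_config:
      parallelism: 1       # replace one replica at a time
      delay: 10s           # pause between batches
      order: start-first   # start the new task before stopping the old one
      failure_action: rollback
    restart_policy:
      condition: on-failure
```

With `order: start-first`, capacity never dips during the update, which complements the proxy-level blue‑green switch rather than replacing it.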
## 7. Logging & Observability
Tie Nginx access logs and container stdout into a centralized system (e.g., Loki, Datadog, or CloudWatch). A quick `docker logs` command is handy for debugging, but for production you want structured logs.

```shell
# Example: tail both Nginx and green API logs in one pane
docker-compose logs -f nginx api_green
```
Set up alerts on error‑rate spikes after each switch. If the error rate exceeds a threshold within the first five minutes, automatically roll back the upstream.
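As a sketch of such a gate: the snippet below assumes Nginx's default "combined" access-log format, where the status code is the ninth whitespace-separated field, and the 5% threshold is purely illustrative.

```shell
#!/bin/sh
# Print the integer percentage of 5xx responses in an access log read on stdin.
error_rate() {
  awk '{ total++; if ($9 ~ /^5/) errors++ }
       END { if (total == 0) print 0; else printf "%d\n", (errors * 100) / total }'
}

# Example gate: if more than 5% of the last 1000 requests failed, roll back.
# rate=$(docker exec nginx tail -n 1000 /var/log/nginx/access.log | error_rate)
# [ "$rate" -gt 5 ] && echo "rolling back"   # then repoint the upstream to api_blue
```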
## Bonus: Zero‑Downtime Database Migrations
While Docker handles the app layer, schema changes can still cause hiccups. Use a migration tool that runs out‑of‑band (e.g., Flyway or Prisma) before the green containers start serving traffic. This way the database is already in the expected state when the switch happens.
## Conclusion
Achieving zero‑downtime deployments with Docker and Nginx boils down to disciplined image versioning, a clean reverse‑proxy, and a controlled blue‑green switch. Follow the checklist, automate health checks, and keep observability tight, and you’ll rarely see a user‑visible outage.
If you need help shipping this, the team at RamerLabs can help.