Building Reproducible n8n Environments with CLI-Based Configuration Management
When you're building applications with n8n as a core component—not just using it as a standalone automation tool—you need a way to provision n8n instances with pre-configured credentials, workflows, and integrated services. This article shows you a pattern for creating fully reproducible n8n environments using the n8n CLI and environment variable substitution.
The Real Problem: n8n as an Application Component
Most n8n tutorials focus on getting started quickly. But what if you're building an application where n8n is one piece of a larger system? You need:
- Reproducible environments - Same setup across dev, staging, production
- Pre-configured credentials - Database connections ready to use
- Integrated services - PostgreSQL for data storage, Redis for agent memory
- Zero manual setup - No clicking through UIs to configure connections
- Version controlled configuration - Infrastructure as code
This isn't about convenience—it's about treating n8n as a first-class component in your application stack.
The Solution: n8n CLI + Environment Variables
n8n includes powerful CLI commands that most people don't know about:
```bash
# Export all credentials to JSON
n8n export:credentials --all --output=creds.json

# Import credentials from JSON
n8n import:credentials --input=creds.json

# Export workflows
n8n export:workflow --all --output=workflows.json

# Import workflows
n8n import:workflow --input=workflows.json
```
These commands are the foundation of reproducible n8n deployments. But there's a problem: exported credentials contain hardcoded values. If you export a PostgreSQL credential, it has a specific password baked in.
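One detail worth noting: by default, export:credentials writes the data field encrypted with the instance's encryption key. To get plaintext values you can turn into a template, export with the --decrypted flag, and treat the resulting file as a secret:

```bash
# Export credentials with plaintext values so they can be templated
n8n export:credentials --all --decrypted --output=creds.json
```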
The envsubst Trick
Here's the key insight: we can use envsubst to transform n8n credential exports into templates with environment variable placeholders.
Step 1: Export a credential manually (one time)
```bash
n8n export:credentials --all --decrypted --output=creds.json
```
Step 2: Replace hardcoded values with environment variables
Transform this:
{ "id": "postgres_local", "name": "Local PostgreSQL Database", "data": { "host": "postgres", "password": "some_hardcoded_password", "user": "n8n_user" }, "type": "postgres" } Into this template:
{ "id": "${N8N_POSTGRES_CREDENTIAL_ID}", "name": "Local PostgreSQL Database", "data": { "host": "${POSTGRES_HOST}", "password": "${POSTGRES_PASSWORD}", "user": "${POSTGRES_USER}", "database": "${POSTGRES_DB}", "port": ${POSTGRES_PORT} }, "type": "postgres" } Step 3: Use envsubst to substitute at runtime
```bash
envsubst < creds.json.template > creds.json
n8n import:credentials --input=creds.json
```
Now your credentials are environment-driven. The same template works in dev, staging, and production—just with different environment variables.
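One refinement worth knowing: if a variable in the template is unset, envsubst silently substitutes an empty string, and any other $ sequences in the file get substituted too. GNU gettext's envsubst accepts an explicit list of variables to limit what it touches. A minimal sketch:

```bash
# Only substitute the variables we explicitly list; anything else containing
# "$" in the template is left untouched. Unset variables would still become
# empty strings, so fail fast if a required one is missing.
: "${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is not set}"
envsubst '${POSTGRES_HOST} ${POSTGRES_PORT} ${POSTGRES_DB} ${POSTGRES_USER} ${POSTGRES_PASSWORD} ${N8N_POSTGRES_CREDENTIAL_ID}' \
  < creds.json.template > creds.json
```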
Why This Matters: Building Applications with n8n
This pattern unlocks powerful use cases:
1. AI Agents with Redis Memory
n8n's AI Agent node has a "Simple Memory" option that stores conversation history in n8n's database. For production AI applications, you want Redis instead:
- Faster access times
- Better scalability
- Shared memory across n8n instances
- TTL-based conversation expiry
With our pattern, you can provision n8n with Redis credentials already configured. Your workflows can immediately use Redis memory nodes without manual setup.
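For example, once the stack is up you can sanity-check the provisioned Redis and TTL-based expiry directly. The key name below is only an illustration; real keys depend on how the memory node is configured:

```bash
# Write a fake conversation entry with a 24-hour expiry, then confirm the TTL.
# Runs redis-cli inside the provisioned container; assumes .env has been sourced.
docker-compose exec redis redis-cli -a "$REDIS_PASSWORD" \
  SET chat:demo-session '{"history":[]}' EX 86400
docker-compose exec redis redis-cli -a "$REDIS_PASSWORD" \
  TTL chat:demo-session
```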
2. Multi-Tenant SaaS Applications
Spin up isolated n8n instances per customer, each with credentials for their dedicated database:
```bash
export POSTGRES_PASSWORD="${CUSTOMER_ID}_db_password"
export REDIS_PASSWORD="${CUSTOMER_ID}_redis_password"
./provision-n8n.sh
```
3. Development Environment Parity
Every developer gets the same n8n setup:
```bash
git clone your-app
./init-credentials.sh   # Generate dev credentials
./start.sh              # Everything works
```
The Demo: Dockerized n8n with PostgreSQL and Redis
Our example repository demonstrates this pattern with a complete Docker Compose setup.
The Architecture
```
┌─────────────────────────────────────────┐
│       Docker Compose Environment        │
│                                         │
│  ┌──────────┐   ┌──────────┐  ┌──────┐  │
│  │   n8n    │ → │ Postgres │  │Redis │  │
│  │          │   │          │  │      │  │
│  │ - Creds  │   │  (auto   │  │(auto │  │
│  │   auto-  │   │  config) │  │ cfg) │  │
│  │  imported│   │          │  │      │  │
│  └──────────┘   └──────────┘  └──────┘  │
│       ↑                                 │
│       │ envsubst at startup             │
│       │                                 │
│  ┌─────────────────────────────┐        │
│  │ .env (generated secrets)    │        │
│  │ POSTGRES_PASSWORD=<random>  │        │
│  │ REDIS_PASSWORD=<random>     │        │
│  └─────────────────────────────┘        │
└─────────────────────────────────────────┘
```
How It Works
1. Generate Environment Variables
```bash
./init-credentials.sh
```
This script creates a .env file with generated secrets:
```bash
POSTGRES_PASSWORD=$(openssl rand -base64 48 | tr -d "=+/" | cut -c1-32)
REDIS_PASSWORD=$(openssl rand -base64 48 | tr -d "=+/" | cut -c1-32)
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
```
2. Start with Exported Variables
Here's the critical part. Docker Compose can interpolate ${VARIABLE} placeholders in docker-compose.yml, but the values have to be available in its environment when it runs. The start script guarantees that by exporting everything from .env:
```bash
#!/bin/bash
# start.sh

# This is the key - export all variables
set -a
source .env
set +a

docker-compose up -d --build
```
The Gotcha: Compose only auto-loads a .env file that sits in the project directory, and even then the values are used for compose-file interpolation only; they are not exported to your shell or to anything else start.sh runs. If a variable isn't resolved, ${POSTGRES_PASSWORD} comes through as an empty string and services will fail to authenticate.
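If the env file lives elsewhere (or under a different name) and you only need it for Compose interpolation, Compose can also be pointed at it directly. This is an alternative, not what the demo's start.sh does:

```bash
# Alternative: tell Compose which env file to use for ${VAR} interpolation.
# Note this does NOT export the variables to the rest of your script.
docker-compose --env-file .env up -d --build
```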
3. Docker Compose Provisions Services
```yaml
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}  # Resolved from exported env
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}  # Also resolved

  n8n:
    build: ./n8n
    environment:
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}  # Same password
      # ... other config
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
```
4. n8n Container Auto-Imports Credentials
On first startup, our custom entrypoint:
```bash
# n8n/entrypoint.sh
envsubst < /data/workflow_creds/decrypt_creds.json.template > /data/workflow_creds/decrypt_creds.json
n8n import:credentials --input=/data/workflow_creds/decrypt_creds.json
```
Now n8n has credentials for PostgreSQL and Redis, using the same passwords Docker Compose used to provision those services.
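The entrypoint logic is easy to extend. Here is a minimal sketch of making the import idempotent; the marker file and the final exec are assumptions for illustration, not necessarily what the demo repository ships:

```bash
# Only import on first start, and don't leave rendered plaintext credentials
# on disk afterwards.
if [ ! -f /data/.credentials_imported ]; then
  envsubst < /data/workflow_creds/decrypt_creds.json.template \
    > /data/workflow_creds/decrypt_creds.json
  n8n import:credentials --input=/data/workflow_creds/decrypt_creds.json
  rm -f /data/workflow_creds/decrypt_creds.json
  touch /data/.credentials_imported
fi

exec n8n start   # hand off to the regular n8n process
```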
The Credential Template
Here's what the template looks like:
[ { "id": "${N8N_POSTGRES_CREDENTIAL_ID}", "name": "Local PostgreSQL Database", "data": { "host": "${POSTGRES_HOST}", "database": "${POSTGRES_DB}", "user": "${POSTGRES_USER}", "password": "${POSTGRES_PASSWORD}", "port": ${POSTGRES_PORT}, "ssl": "disable" }, "type": "postgres" }, { "id": "${N8N_REDIS_CREDENTIAL_ID}", "name": "Local Redis Cache", "data": { "password": "${REDIS_PASSWORD}", "host": "${REDIS_HOST}", "port": ${REDIS_PORT}, "database": 0 }, "type": "redis" } ] This template was originally created by:
- Manually creating credentials in n8n
- Exporting them with `n8n export:credentials`
- Replacing values with `${VARIABLE}` placeholders
Now it's reusable across all environments.
Practical Use Case: Redis for AI Agent Memory
This setup shines when building AI agents with n8n. Instead of this:
```
AI Agent Node → Simple Memory (in n8n database)
```
You can do this:
```
AI Agent Node → Redis Chat Memory Node → Redis (provisioned)
```
Benefits:
- Persistence across restarts - Conversation history survives n8n restarts
- Shared state - Multiple n8n instances share conversation history
- Performance - Redis is optimized for this access pattern
- Scalability - Redis can handle millions of conversation threads
And the Redis credential is already configured—no manual setup required.
Extending the Pattern
Add MongoDB
1. Add to .env.example:
```
MONGO_PASSWORD=__GENERATED__
```
2. Update credential template:
{ "id": "mongo_local", "name": "Local MongoDB", "data": { "host": "${MONGO_HOST}", "password": "${MONGO_PASSWORD}", "user": "${MONGO_USER}", "database": "${MONGO_DB}" }, "type": "mongoDb" } 3. Add to docker-compose.yml:
```yaml
mongo:
  image: mongo:7
  environment:
    MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}   # the image needs user and password together
    MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
```
Import Workflows Too
```bash
# In entrypoint.sh, after importing credentials:
n8n import:workflow --input=/data/workflows/default-workflows.json
```
This lets you ship n8n with pre-built workflows for your application.
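To produce default-workflows.json in the first place, export from an instance where the workflows already exist; the output path is just an example:

```bash
# One-time export from a configured instance; commit the result to your repo.
n8n export:workflow --all --output=default-workflows.json
```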
The Repository
The complete implementation is available at:
github.com/alexretana/n8n-docker-demo
Use it as:
- A starting point for your own n8n-based applications
- A reference for the envsubst + CLI pattern
- A foundation to build on
Quick Start
```bash
git clone https://github.com/alexretana/n8n-docker-demo
cd n8n-docker-demo
./init-credentials.sh   # Generate secrets
./start.sh              # Start everything

# Access n8n at http://localhost:5678
# PostgreSQL and Redis credentials already configured
```
Building Applications Around n8n
This pattern enables you to build applications where n8n is a component, not the entire application. Examples:
- SaaS platforms - n8n handles workflow orchestration, your app handles user management
- AI applications - n8n orchestrates AI agents, Redis stores conversation history
- Data pipelines - n8n coordinates ETL, PostgreSQL stores results
- Internal tools - n8n automates business processes, your app provides the UI
The key is treating n8n configuration as code. With the CLI + envsubst pattern, your n8n setup becomes reproducible, version-controlled, and automatable.
Invitation
The repository is open source and free to use. But more than that—I'd love to see what you build.
If you're creating an application around n8n:
- Fork the repo and adapt it to your needs
- Share what you're building - open an issue or discussion
- Contribute improvements - PRs welcome
- Ask questions - I'm happy to help
The pattern shown here is just a starting point. The real value comes from what you build on top of it.
Key Takeaways
- n8n CLI is powerful - Use the `export` and `import` commands for configuration management
- envsubst bridges the gap - Transform exported configs into reusable templates
- Export variables in start scripts - Docker Compose needs them exported, not just in .env
- Treat n8n as a component - Build applications around it, not just use it standalone
- Configuration as code - Make your n8n setup reproducible and version-controlled
Happy building!
Tags: #n8n #automation #docker #devops #postgresql #redis #ai #reproducibility