Want to automate tasks with AI locally, without relying on the cloud, paying for API calls, or risking data leakage?
This guide shows you how to run n8n, a powerful open-source workflow automation tool, alongside Ollama, a fast runtime for local LLMs (Llama 3, Mistral, and others), all through Docker on your own machine.
Yes, you can build a fully local AI automation setup at zero cost.
Let's dive in.
📁 1. Create a folder structure
```bash
mkdir n8n-ollama
cd n8n-ollama
touch docker-compose.yml
```
🧾 2. Create docker-compose.yml
Create a file named docker-compose.yml with the following contents:
```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    container_name: ollama
    networks:
      - n8n-network
    volumes:
      - ollama_data:/root/.ollama

  n8n:
    image: n8nio/n8n
    container_name: n8n
    ports:
      - "5678:5678"
    networks:
      - n8n-network
    environment:
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_EDITOR_BASE_URL=http://localhost:5678
      - WEBHOOK_URL=http://localhost:5678
      - NODE_FUNCTION_ALLOW_EXTERNAL=*
    volumes:
      - n8n_data:/home/node/.n8n

networks:
  n8n-network:

volumes:
  ollama_data:
  n8n_data:
```
This puts both containers in the same Docker network, so `n8n` can reach `ollama` using the hostname `ollama`.
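That hostname matters: when you configure the Ollama credentials in n8n later, the base URL should be http://ollama:11434, not localhost. Once the containers are running (next step), you can sanity-check the connection from inside the n8n container. This is an optional check and assumes wget is available in the n8n image:

```bash
# Optional: verify that the n8n container can resolve and reach the ollama service
# (assumes wget is available inside the n8n image)
docker exec -it n8n wget -qO- http://ollama:11434
# Expected output: Ollama is running
```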
▶️ 3. Start both services
```bash
docker-compose up -d
```
You should see output like this:
```
[+] Running 2/2
 ✔ Container ollama  Started   0.6s
 ✔ Container n8n     Started
```
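If either container fails to start, the logs usually tell you why (optional check):

```bash
# Tail the logs of either service to troubleshoot startup issues
docker-compose logs -f ollama
docker-compose logs -f n8n
```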
✅ Verify the containers

```bash
# verify containers
docker ps
```

```
CONTAINER ID   IMAGE           COMMAND                  CREATED      STATUS         PORTS                                           NAMES
0d99d7a06ff9   n8nio/n8n       "tini -- /docker-ent…"   3 days ago   Up 2 minutes   0.0.0.0:5678->5678/tcp, :::5678->5678/tcp       n8n
c5eabfa39b70   ollama/ollama   "/bin/ollama serve"      3 days ago   Up 2 minutes   0.0.0.0:11434->11434/tcp, :::11434->11434/tcp   ollama
```
You should see both the `ollama` and `n8n` containers running.
- n8n - http://localhost:5678
- Ollama - http://localhost:11434
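You can also hit the Ollama endpoint from your host to confirm it responds:

```bash
# Ollama answers its root endpoint with a simple status message
curl http://localhost:11434
# Expected output: Ollama is running
```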
🎉 Great! Both services are up and reachable. The only thing missing before n8n can use Ollama is a model to run:
⛓️ Pull the correct model inside the Ollama container
Open a terminal inside the Ollama container:
```bash
docker exec -it ollama bash
```
You're now inside the container.
Pull a valid model (e.g., llama3):

```bash
ollama pull llama3
# ---
ollama pull llama3.2
# ---
ollama pull deepseek-r1:1.5b
```
⭐ This downloads the official llama3 model (plus any others you chose). You can confirm what's installed with `ollama list`:

```
root@c5eabfa39b70:/# ollama list
NAME                ID              SIZE      MODIFIED
deepseek-r1:1.5b    e0979632db5a    1.1 GB    3 days ago
llama3.2:latest     a80c4f17acd5    2.0 GB    3 days ago
llama3:latest       365c0bd3c000    4.7 GB    3 days ago
```
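Before leaving the container, you can give the model a quick prompt straight from the CLI (an optional sanity check; the first run takes a moment while the model loads into memory):

```bash
# Run a one-off prompt against the pulled model and print its reply
ollama run llama3 "Reply with the single word: ready"
```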
Exit the container:
```bash
exit
```
In n8n, set the model name. When configuring the Ollama node (with the Ollama credentials' base URL pointing at http://ollama:11434, as noted earlier), use:

```
llama3
```

✌🏻 This matches the model you just pulled.
You can also test the API directly from your host machine:

```bash
curl http://localhost:11434/api/generate \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "prompt": "5+5 ?",
    "stream": false
  }'
```
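The reply is a JSON object whose response field holds the model's answer. If you happen to have jq installed on your host (an assumption, it is not part of this setup), you can extract just that field:

```bash
# Pull only the model's answer out of the JSON reply (requires jq on the host)
curl -s http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "prompt": "5+5 ?", "stream": false}' | jq -r '.response'
```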
To list the installed models without entering the container, run:

```bash
docker exec -it ollama ollama list
```
🛑 Stop the containers:

```bash
docker-compose down
```
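Your data survives this: the named volumes (ollama_data, n8n_data) keep the downloaded models and your n8n workflows, so the next docker-compose up -d picks up where you left off. To wipe everything instead:

```bash
# Removes the containers AND the named volumes (downloaded models, n8n data)
docker-compose down -v
```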
You Did It! Now Build AI Agents Locally—Fast & Free.
With Ollama + n8n, you can:
- Run AI like Llama offline (no APIs, no costs)
- Automate content, support, or data tasks in minutes
- Own your AI workflow (no limits, no middlemen)
Your turn—launch a workflow and see the magic happen. 🚀