Meta Description: Learn how to build a professional Home Lab with Proxmox and Docker. Comprehensive guide with practical examples, security best practices, and real-world projects for DevOps engineers. Includes CI/CD pipeline, monitoring, and production-ready configurations.
Tags: Proxmox, Docker, Home Lab, DevOps, virtualization, containerization, CI/CD, Jenkins, Gitea, monitoring, Grafana, Prometheus, infrastructure as code, self-hosted, system administration, IT infrastructure, professional development
Creating your own Home Lab has become essential for every DevOps engineer. The combination of Proxmox and Docker offers a professional solution with enterprise capabilities. Today we'll explore how to build a system that rivals cloud platforms.
Why the Proxmox + Docker Combination is Revolutionary
Proxmox Virtual Environment (PVE) is an enterprise-grade open-source hypervisor. Docker provides containerization with minimal resources. Together they create a powerful ecosystem for home experiments and professional development.
This architecture allows you to simulate complex production environments. You can test new technologies without risk to your main work. At the same time, you develop practical skills with tools used in large companies.
Advantages Over Competitive Solutions
VMware vSphere costs thousands of dollars annually. Proxmox is free with optional premium support. Docker Desktop has licensing restrictions for business use. The Linux-based approach eliminates these problems.
Proxmox offers advanced features like live migration, high availability, and backup replication. Docker provides fast deployment and efficient resource utilization. The combination merges virtualization stability with container flexibility.
Hardware Requirements and Planning
Minimum and Recommended Specifications
Minimum to get started:
- CPU: Intel i5 or AMD Ryzen 5 (4+ cores)
- RAM: 16GB DDR4 (8GB for Proxmox, 8GB for VMs)
- Storage: 500GB SSD for system + 1TB HDD for data
- Network: Gigabit Ethernet
Recommended configuration:
- CPU: Intel i7/i9 or AMD Ryzen 7/9 (8+ cores)
- RAM: 32-64GB DDR4/DDR5
- Storage: 1TB NVMe SSD + 4TB HDD/SSD array
- Network: 10Gb Ethernet or dual 1Gb bonds
Hardware Selection for Optimal Performance
A CPU with high single-thread performance is critical for Proxmox. AMD Ryzen processors offer an excellent price/performance ratio, while Intel processors have better virtualization support in some cases.
RAM is the most important resource for virtualization. Plan for a minimum of 8GB for the Proxmox host plus 4-8GB for each virtual machine. ECC memory isn't mandatory, but it increases stability for 24/7 operation.
Storage architecture determines overall system performance. An NVMe SSD for VM disks provides excellent responsiveness. Mechanical disks are suitable for backup storage and less critical data.
Step-by-Step Proxmox Installation
Installation Environment Preparation
Download the latest Proxmox VE ISO from the official website. Create a bootable USB with a tool like Rufus (Windows) or dd (Linux). Ensure your UEFI/BIOS settings allow booting from USB.
Enable the CPU virtualization features (Intel VT-x/AMD-V). Enable IOMMU if you plan to use GPU passthrough. Set the boot order to start from the USB device first.
Installation Process and Initial Settings
Boot from the USB and follow the graphical installer. Select an entire disk for Proxmox or create a custom partitioning scheme. Configure the network parameters: IP address, gateway, and DNS servers.
Set a root password and enter an email address for notifications. After installation the system restarts automatically; remove the USB device before it boots.
Post-Installation Configuration
Access the web interface at https://[IP-address]:8006. Log in as root with the password you chose during installation. First, update the system packages to pick up security patches.
Configure the repositories for updates. The enterprise repo requires a subscription; the no-subscription repo is free and fine for testing and home labs. Add community repositories for additional packages.
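A minimal sketch of switching to the no-subscription repository; the file paths and the "bookworm" suite assume Proxmox VE 8 on Debian 12, so adjust them to your version:

```bash
# Disable the enterprise repo (requires a subscription) - assumes PVE 8 on Debian 12 "bookworm"
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the free no-subscription repo
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

apt update
```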
```bash
# System update
apt update && apt upgrade -y

# Install useful tools
apt install -y curl wget git vim htop
```
Creating Virtual Machines for Docker
Template-Based Approach for Efficiency
Create a base Ubuntu Server template for all Docker machines. This speeds up deployment and guarantees consistency. The template contains pre-configured SSH, network settings, and basic packages.
Template cloning is significantly faster than full installation. You can create specialized templates for different roles - web servers, databases, monitoring systems.
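One possible way to build such a template from an Ubuntu cloud image with Proxmox's qm tool; the VM ID 9000, the "local-lvm" storage name, and the image release are assumptions:

```bash
# Download an Ubuntu Server cloud image (example release; adjust as needed)
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

# Create an empty VM and import the image as its disk (ID 9000 and "local-lvm" are assumptions)
qm create 9000 --name ubuntu-docker-template --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0 --agent enabled=1

# Convert to a template; new VMs are then created with "qm clone 9000 <new-id>"
qm template 9000
```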
Optimal VM Configuration for Containers
Configure the VM with machine type "q35" for best performance. Use a SCSI controller with VirtIO drivers. Enable the QEMU guest agent for better integration. The recommended settings listed below can also be applied from the CLI with qm, as sketched after the list.
Recommended settings:
- CPU: 2-4 vCPU with CPU type "host"
- RAM: 4-8GB depending on workload
- Disk: 40GB thin provisioned with VirtIO interface
- Network: VirtIO network adapter with dedicated bridge
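A sketch of applying those settings from the Proxmox shell; the VM ID 101 and the bridge name vmbr1 are assumptions:

```bash
# Apply the recommended settings to an existing VM (ID 101 and bridge vmbr1 are assumptions)
qm set 101 --machine q35 --cpu host --cores 4 --memory 8192
qm set 101 --scsihw virtio-scsi-pci --agent enabled=1
qm set 101 --net0 virtio,bridge=vmbr1
```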
Network Topology and Security
Create a separate Linux bridge for Docker traffic. This enables microsegmentation and traffic control. Configure VLAN tagging to isolate different environments from each other.
Set up firewall rules at the Proxmox level. Block unnecessary ports toward the external network. Allow only the required communication between the VMs and the host.
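A minimal sketch of a dedicated, VLAN-aware bridge in /etc/network/interfaces on the Proxmox host; the bridge name, address, and VLAN range are assumptions for this lab:

```
# /etc/network/interfaces (excerpt) - vmbr1 is an assumed name for the Docker bridge
auto vmbr1
iface vmbr1 inet static
    address 10.0.20.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
# Apply the change without a reboot: ifreload -a
```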
Docker Installation and Configuration
Automatic Installation Script Approach
Docker provides a convenience script for quick installation. The script automatically configures the repositories and installs the latest stable version. It is suitable for development environments.
```bash
# Download and execute installation script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add user to docker group
sudo usermod -aG docker $USER

# Script automatically installs Docker Compose as plugin
# Check installation
docker compose version
```
Manual Installation for Production Environments
For critical systems I recommend manual installation. This approach gives you greater control over the process: you can pin a specific version and apply custom settings.
```bash
# Update package index
sudo apt update

# Install prerequisites
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release

# Add Docker GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```
Docker Daemon Optimization
Configure Docker daemon for optimal performance in VM environment. Set up logging driver, storage driver, and resource limits. These optimizations improve performance and stability.
{ "log-driver": "journald", "storage-driver": "overlay2", "default-ulimits": { "nofile": { "Name": "nofile", "Hard": 64000, "Soft": 64000 } }, "live-restore": true, "userland-proxy": false }
Docker Compose and Portainer for Management
Docker Compose V2 is installed automatically as a plugin by the installation script. The new version uses the docker compose command (without a dash). This improves integration with the Docker CLI and offers better performance.
Portainer - Graphical Interface for Docker
Portainer turns the Docker command line into an intuitive web interface. It makes it easy to manage containers, networks, volumes, and images, and it suits both beginners and experienced users.
```yaml
version: '3.8'

services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    security_opt:
      - no-new-privileges:true

volumes:
  portainer_data:
```
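Assuming the file above is saved as docker-compose.yml, starting Portainer is a one-liner; the URL depends on your VM's address:

```bash
# Start Portainer in the background and check its status
docker compose up -d
docker compose ps

# Then open http://<vm-ip>:9000 and create the initial admin user
```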
Real Projects for Your Home Lab
Complete DevOps Stack
```yaml
version: '3.8'

services:
  # Reverse Proxy
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/etc/traefik/traefik.yml
      - ./dynamic:/etc/traefik/dynamic
      - traefik-ssl:/ssl
    networks:
      - proxy

  # Code Repository
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    restart: unless-stopped
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=gitea-db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=gitea_password
    volumes:
      - gitea-data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3001:3000"
      - "222:22"
    depends_on:
      - gitea-db
    networks:
      - gitea
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.gitea.rule=Host(`git.homelab.local`)"

  gitea-db:
    image: postgres:15
    container_name: gitea-db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=gitea_password
      - POSTGRES_DB=gitea
    volumes:
      - gitea-db:/var/lib/postgresql/data
    networks:
      - gitea

  # CI/CD Pipeline
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    restart: unless-stopped
    privileged: true
    user: root
    ports:
      - "8081:8080"
      - "50000:50000"
    volumes:
      - jenkins-data:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/local/bin/docker:/usr/local/bin/docker
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jenkins.rule=Host(`jenkins.homelab.local`)"

networks:
  proxy:
    external: true
  gitea:
    internal: true

volumes:
  traefik-ssl:
  gitea-data:
  gitea-db:
  jenkins-data:
```
Media Server Stack
```yaml
version: '3.8'

services:
  # Media Server
  plex:
    image: lscr.io/linuxserver/plex:latest
    container_name: plex
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - VERSION=docker
    volumes:
      - plex-config:/config
      - /media/movies:/movies
      - /media/tvshows:/tv
    ports:
      - "32400:32400"
    networks:
      - media

  # Download Manager
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - WEBUI_PORT=8082
    volumes:
      - qbit-config:/config
      - /media/downloads:/downloads
    ports:
      - "8082:8082"
    networks:
      - media

  # Automation
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - sonarr-config:/config
      - /media/tvshows:/tv
      - /media/downloads:/downloads
    ports:
      - "8989:8989"
    networks:
      - media

networks:
  media:
    external: true

volumes:
  plex-config:
  qbit-config:
  sonarr-config:
```
Development Environment
```yaml
version: '3.8'

services:
  # Database
  postgres:
    image: postgres:15
    container_name: dev-postgres
    restart: unless-stopped
    environment:
      - POSTGRES_USER=developer
      - POSTGRES_PASSWORD=dev_password
      - POSTGRES_DB=development
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - development

  # Redis Cache
  redis:
    image: redis:7-alpine
    container_name: dev-redis
    restart: unless-stopped
    command: redis-server --requirepass redis_password
    ports:
      - "6379:6379"
    networks:
      - development

  # Node.js Environment
  nodejs:
    image: node:18-alpine
    container_name: dev-nodejs
    restart: unless-stopped
    working_dir: /app
    volumes:
      - ./projects/nodejs:/app
      - node-modules:/app/node_modules
    ports:
      - "3000:3000"
    networks:
      - development
    command: sh -c "npm install && npm run dev"

  # Python Environment
  python:
    image: python:3.11
    container_name: dev-python
    restart: unless-stopped
    working_dir: /app
    volumes:
      - ./projects/python:/app
      - python-packages:/usr/local/lib/python3.11/site-packages
    ports:
      - "8000:8000"
    networks:
      - development
    command: sh -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"

networks:
  development:
    external: true

volumes:
  postgres-data:
  node-modules:
  python-packages:
```
Docker Compose Best Practices
Structure projects with separate directories for each service. Use .env files for sensitive data. Create dedicated networks for different applications.
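A minimal sketch of the .env pattern; the variable names and values here are made up for illustration only:

```bash
# .env (never commit real values - add this file to .gitignore)
POSTGRES_PASSWORD=change_me
GITEA_DB_PASSWORD=change_me

# docker-compose.yml then references them as ${POSTGRES_PASSWORD} and ${GITEA_DB_PASSWORD}
```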
Version your Docker Compose files in a Git repository. This lets you track changes and roll back easily when problems occur. Document each service with comments in the YAML files.
Monitoring and Logs
Prometheus and Grafana Stack
Prometheus is the industry standard for metrics collection. Grafana offers beautiful dashboards for visualization. Together they provide comprehensive monitoring of the entire system.
```yaml
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin

volumes:
  prometheus_data:
  grafana_data:
```
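The compose file above mounts ./prometheus.yml; a minimal sketch of that file, where the node-exporter target is an assumption about your setup:

```yaml
# prometheus.yml - minimal scrape configuration (targets are assumptions)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
```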
ELK Stack for Centralized Logging
Elasticsearch, Logstash, and Kibana form a powerful platform for log analysis. Elasticsearch stores and indexes the logs, Logstash processes and transforms the data, and Kibana provides visualization and search capabilities.
This setup allows you to search through logs of all containers from one interface. You can create custom dashboards for different metrics. Advanced alerting capabilities notify you of problems in real-time.
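A minimal single-node sketch of Elasticsearch and Kibana in Compose; the 8.13.0 version tag is an assumption, and security is disabled only to keep the lab example short:

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false   # lab use only; enable security for anything exposed
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
    volumes:
      - es-data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"

  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

volumes:
  es-data:
```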
Backup Strategies
Proxmox Integrated Backup
Proxmox includes a sophisticated backup system. It supports full backups with different retention policies, and incremental backups when paired with Proxmox Backup Server. Backups can go to local storage, NFS shares, or a remote Proxmox Backup Server instance.
Set up automatic backup jobs for all critical VMs. Schedule them for off-peak hours to minimize performance impact. Test restore procedures regularly to verify data integrity.
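Recurring jobs are usually configured in the web UI, but the same backup can be triggered ad hoc with vzdump; the VM ID and storage name below are assumptions:

```bash
# Snapshot-mode backup of VM 101 to the "backup-nfs" storage, compressed with zstd
vzdump 101 --storage backup-nfs --mode snapshot --compress zstd

# Schedules and retention for recurring jobs are managed under Datacenter -> Backup
```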
Docker Volume Backup Strategies
Docker volumes contain the containers' persistent data. The backup strategy depends on the data type and your RTO/RPO requirements. Database volumes require application-consistent backups.
```bash
# Backup Docker volume
docker run --rm -v volume_name:/data -v $(pwd):/backup ubuntu tar czf /backup/volume_backup.tar.gz -C /data .

# Restore Docker volume
docker run --rm -v volume_name:/data -v $(pwd):/backup ubuntu tar xzf /backup/volume_backup.tar.gz -C /data
```
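For databases, a dump through the database engine is safer than copying raw volume files; a sketch assuming the gitea-db container and credentials from the stack above:

```bash
# Application-consistent PostgreSQL backup (container, user, and database names are assumptions)
docker exec gitea-db pg_dump -U gitea gitea > gitea_backup.sql

# Restore into the database
docker exec -i gitea-db psql -U gitea gitea < gitea_backup.sql
```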
Security Hardening
System Level Security
Configure fail2ban to protect against brute-force attacks. Set up the UFW firewall with restrictive rules. Enable automatic security updates for critical vulnerabilities.
Create separate users for different services. Avoid root access where possible. Configure SSH key authentication and disable password login.
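A sketch of these basics on each Docker VM; adjust the allowed ports to your own services, and note the caveat about Docker in the comments:

```bash
# Firewall: deny inbound by default, allow SSH and the ports you actually use
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw enable
# Note: ports published by Docker bypass UFW by default (Docker manages its own iptables rules)

# SSH hardening: disable password logins after installing your key
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh

# Brute-force protection and unattended security updates
sudo apt install -y fail2ban unattended-upgrades
```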
Container Security
Use official Docker images from trusted publishers. Scan images for known vulnerabilities with tools like Trivy. Configure container runtime security with AppArmor or SELinux profiles.
Limit container capabilities to the necessary minimum. Use read-only filesystems where possible. Configure resource limits so a single container cannot exhaust the host (a denial-of-service scenario).
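A sketch of what those restrictions look like for a single container, plus an image scan; the image choice and capability set are examples, and Trivy must be installed separately:

```bash
# Scan an image for known vulnerabilities (requires Trivy to be installed)
trivy image nginx:1.25-alpine

# Run with a minimal capability set, read-only filesystem, and resource limits
docker run -d --name hardened-nginx \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --read-only --tmpfs /tmp --tmpfs /var/cache/nginx --tmpfs /var/run \
  --memory 256m --cpus 0.5 \
  --security-opt no-new-privileges \
  nginx:1.25-alpine
```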
Practical Examples and Use Cases
Development Environment Setup
Create isolated development environments for different projects. Each project can have its own VM with specific PHP, Node.js, or Python versions. This eliminates dependency conflicts between projects.
Docker containers allow rapid switching between different tool versions. You can test application compatibility with different database versions. Development and production environments become identical.
CI/CD Pipeline Integration
Integrate the Home Lab into your CI/CD processes. GitLab Runner or Jenkins can use Docker containers for build and test jobs. This significantly accelerates the development cycle.
Create staging environments that mirror the production setup. Automated testing can validate deployments before a production release. Code quality gates help maintain high standards.
Microservices Testing
Simulate complex microservices architectures with Docker Compose. Test service discovery, load balancing, and failure scenarios. Tools such as tc/netem can inject latency and packet loss into the network.
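A sketch of injecting latency and loss with tc/netem; it assumes the target container (here a hypothetical api-service) runs with the NET_ADMIN capability and has the iproute2 package installed:

```bash
# Add 100 ms delay and 1% packet loss to the container's interface
docker exec api-service tc qdisc add dev eth0 root netem delay 100ms loss 1%

# Remove the impairment when the experiment is done
docker exec api-service tc qdisc del dev eth0 root
```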
Service mesh technologies like Istio can be tested in controlled environment. Observability tools provide detailed insights for service interactions. Chaos engineering experiments improve system resilience.
Performance Tuning and Optimizations
VM Resource Allocation
Proper resource allocation is critical for performance. Over-provisioning leads to resource contention. Under-provisioning limits application capacity.
Monitor CPU, memory, and disk utilization regularly. Adjust resource limits based on real usage patterns. Use memory ballooning for dynamic resource adjustment.
Network Optimization
Configure multiple network bridges for traffic separation. Use VXLAN or GRE tunnels for overlay networking. QoS policies can prioritize critical traffic.
Network bonding improves throughput and provides redundancy. LACP configuration requires managed switch support. Load balancing algorithms should match traffic patterns.
Bonus Tips for Professionals
Advanced Proxmox Techniques
GPU Passthrough for ML/AI Projects
Configure GPU passthrough for TensorFlow or PyTorch containers. This provides native GPU performance for machine learning workloads. Nvidia Tesla or consumer GPUs work excellently for this setup.
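The host-side preparation typically looks like this on an Intel system (use amd_iommu=on for AMD); the exact GRUB line and PCI IDs depend on your own hardware, so treat this as a sketch:

```bash
# /etc/default/grub - enable IOMMU by extending the kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub

# Load the VFIO modules needed to hand the GPU to a VM
echo -e "vfio\nvfio_iommu_type1\nvfio_pci" >> /etc/modules
update-initramfs -u

# After a reboot, find the GPU's PCI IDs and add it to the VM (Hardware -> PCI Device)
lspci -nn | grep -i nvidia
```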
ZFS Storage Optimization
Use ZFS for enterprise-grade data protection. ARC cache significantly improves read performance. L2ARC on SSD can accelerate workloads with random access patterns.
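A sketch of creating a mirrored pool and enabling compression; the pool name and device paths are assumptions, so replace them with your own disks:

```bash
# Create a mirrored pool with 4K sector alignment (replace the device paths with your own)
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc

# Enable lightweight compression; lz4 is usually a net performance win
zfs set compression=lz4 tank

# Check pool health and ARC statistics
zpool status
arc_summary | head -n 20
```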
Cluster Setup for High Availability
Configure a 3-node Proxmox cluster for a production-like environment. Ceph provides distributed shared storage across the nodes. Live migration then allows zero-downtime maintenance.
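Cluster creation itself is only a couple of commands; the cluster name and node IP below are assumptions, and Ceph is set up separately via the web UI or pveceph:

```bash
# On the first node: create the cluster
pvecm create homelab-cluster

# On each additional node: join using the first node's IP
pvecm add 192.168.1.10

# Verify quorum and membership
pvecm status
```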
Docker Production Secrets
Multi-stage Builds for Smaller Images
```dockerfile
# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```
Health Checks for Robust Deployments
```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 60s
```
Resource Limits for Stability
```yaml
deploy:
  resources:
    limits:
      memory: 1G
      cpus: '0.5'
    reservations:
      memory: 512M
      cpus: '0.25'
```
Network Architecture Patterns
Ingress with SSL Termination
Traefik or Nginx automatically manage SSL certificates with Let's Encrypt. Wildcard certificates simplify multi-service setups. HTTP to HTTPS redirect improves security posture.
Service Discovery and Load Balancing
Consul or etcd provide a service registry. HAProxy or Nginx can load balance between container instances. Health check integration ensures traffic only reaches healthy services.
Network Segmentation for Security
Create separate networks for frontend, backend, and database tiers. Bridge networks isolate traffic between different application stacks. Overlay networks allow multi-host communication.
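A sketch of the tiered layout with Docker networks; the --internal flag keeps a network unreachable from outside the host, and my-api-image is a hypothetical image name:

```bash
# Frontend network is reachable; backend and database tiers are internal only
docker network create frontend
docker network create --internal backend
docker network create --internal database

# Attach a service only to the tiers it needs, e.g.:
docker run -d --name api --network backend my-api-image
docker network connect frontend api
```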
Infrastructure as Code Approaches
Terraform for Proxmox Automation
resource "proxmox_vm_qemu" "docker-node" { count = 3 name = "docker-node-${count.index + 1}" target_node = "proxmox" memory = 4096 cores = 2 disk { size = "40G" type = "virtio" storage = "local-lvm" } network { model = "virtio" bridge = "vmbr0" } }
Ansible for Configuration Management
```yaml
- name: Install Docker on all nodes
  hosts: docker_nodes
  become: true
  tasks:
    - name: Install Docker
      shell: curl -fsSL https://get.docker.com | sh

    - name: Add user to docker group
      user:
        name: "{{ ansible_user }}"
        groups: docker
        append: yes

    - name: Start Docker service
      systemd:
        name: docker
        enabled: true
        state: started
```
Performance Debugging Tips
Container Resource Monitoring
```bash
# Real-time resource usage
docker stats

# Detailed container inspection
docker inspect <container_id>

# Process tree inside container
docker exec <container_id> ps aux
```
Network Troubleshooting
```bash
# Check container networking
docker network ls
docker network inspect bridge

# Test connectivity between containers
docker exec -it container1 ping container2
docker exec -it container1 nslookup container2
```
Storage Performance Analysis
```bash
# I/O statistics
iostat -x 1

# Disk usage by container
docker system df
docker image ls --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
```
Conclusion
The combination of Proxmox and Docker creates a professional-grade Home Lab environment. It offers enterprise functionality without the enterprise cost, and the hardware investment pays for itself through practical learning experience.
This setup prepares you for real-world DevOps challenges. Practical experience with these tools is invaluable for career development, and a Home Lab environment allows experimentation without production risk.
Successfully building this architecture demonstrates technical skill and a commitment to continuous learning. It also serves as a proof of concept for new technologies and architectural patterns.
Start with a basic configuration and gradually add complexity. Document every step for future reference. Share your experience with the DevOps community for mutual benefit.
Your Home Lab is an investment in your professional future. It provides a sandbox for innovation and a platform for skill development. The possibilities are limited only by your curiosity and creativity.
Originally written in Bulgarian and translated to English for the dev.to community.
Read the original Bulgarian version: Домашно DevOps решение: Proxmox и Docker за корпоративен Home Lab