
Houssam Bourkane

Kubernetes at Home: Set Up Your Own Cluster Using Vagrant & Ansible

TL;DR: Learn how to build a lightweight Kubernetes cluster on your local machine using Vagrant, Ansible, and VMware Fusion. Perfect for ARM-based Mac users looking to experiment with Kubernetes in a reproducible environment.


Why This Guide?

Setting up a Kubernetes cluster manually can be complex and time-consuming. But with the power of Vagrant for managing virtual machines and Ansible for automating setup tasks, you can spin up a local cluster with minimal effort and maximum reproducibility. This tutorial walks you through building a two-node cluster on macOS with ARM chips (e.g., M1/M2), but it's adaptable to other setups.


Table of Contents

  • Prerequisites
  • Project Structure
  • Step 1: Configure Vagrant
  • Step 2: Configure Ansible
  • Step 3: Ansible Playbook for Kubernetes Setup
  • Step 4: Test and Deploy the Cluster
  • What's Next?


Prerequisites

Before we start, ensure you have the following installed:

  • Vagrant
  • VMware Fusion
  • Ansible
  • kubectl (used later to talk to the cluster from your host)

For macOS ARM users, refer to this setup guide first: Run Vagrant VMs on an M1/M2/M3 Apple Silicon Chip
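A quick way to confirm the toolchain is in place (exact version numbers will differ on your machine):

vagrant --version          # e.g. Vagrant 2.x
ansible --version          # e.g. ansible [core 2.x]
kubectl version --client   # client-only check, no cluster needed yet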


Project Structure

Create a project directory and replicate the following structure:

.
├── ansible
│   ├── ansible.cfg
│   ├── inventory.ini
│   └── k8s-cluster-setup.yml
└── Vagrantfile
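If you prefer to script the scaffolding, a couple of shell commands recreate it (file names exactly as above):

mkdir -p ansible
touch Vagrantfile ansible/ansible.cfg ansible/inventory.ini ansible/k8s-cluster-setup.yml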

Step 1: Configure Vagrant

Install the VMware Desktop plugin:

vagrant plugin install vagrant-vmware-desktop 

Next, pick two unused IP addresses on your LAN to assign statically to the VMs. To see which addresses are already taken, scan your local network:

nmap -sn 192.168.1.0/24 

Replace 192.168.1.0/24 with your LAN's network range.
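To double-check that a candidate address is really unused, you can also ping it directly; no reply usually means it is safe to reserve (the .101/.102 addresses below are just the examples this guide uses):

ping -c 2 192.168.1.101   # no response suggests the address is free
ping -c 2 192.168.1.102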

Update your Vagrantfile with something like this:

Vagrant.configure("2") do |config|
  config.vm.define "kubmaster" do |kub|
    kub.vm.box = "spox/ubuntu-arm"
    kub.vm.box_version = "1.0.0"
    kub.vm.hostname = 'kubmaster'
    kub.vm.provision "docker"
    kub.vm.network "public_network", ip: "192.168.1.101", bridge: "en0: Wifi"
    kub.vm.provider "vmware_desktop" do |v|
      v.allowlist_verified = true
      v.gui = false
      v.vmx["memsize"] = "4096"
      v.vmx["numvcpus"] = "2"
    end
  end

  config.vm.define "kubnode1" do |kubnode|
    kubnode.vm.box = "spox/ubuntu-arm"
    kubnode.vm.box_version = "1.0.0"
    kubnode.vm.hostname = 'kubnode1'
    kubnode.vm.provision "docker"
    kubnode.vm.network "public_network", ip: "192.168.1.102", bridge: "en0: Wifi"
    kubnode.vm.provider "vmware_desktop" do |v|
      v.allowlist_verified = true
      v.gui = false
      v.vmx["memsize"] = "4096"
      v.vmx["numvcpus"] = "2"
    end
  end
end

Replace the IPs with ones that match your LAN.
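Before booting anything, it's worth letting Vagrant sanity-check the file:

vagrant validate   # catches syntax errors in the Vagrantfile
vagrant status     # should list kubmaster and kubnode1 as "not created"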

Now bring up the VMs:

vagrant up 
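Once both machines are up, a quick SSH round trip confirms they booted with the expected hostnames:

vagrant ssh kubmaster -c "hostname"   # should print: kubmaster
vagrant ssh kubnode1 -c "hostname"    # should print: kubnode1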

Step 2: Configure Ansible

ansible/inventory.ini

[master]
kubmaster ansible_host=192.168.1.101 ansible_ssh_private_key_file=.vagrant/machines/kubmaster/vmware_desktop/private_key

[workers]
kubnode1 ansible_host=192.168.1.102 ansible_ssh_private_key_file=.vagrant/machines/kubnode1/vmware_desktop/private_key

[all:vars]
ansible_user=vagrant

Make sure to replace the IP addresses with the ones you set in the Vagrantfile.
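It's also worth asking Ansible to parse the inventory back to you, which catches typos in group names and key paths early (run it from the project root, the same place you'll run the playbook from):

ansible-inventory -i ansible/inventory.ini --list
# kubmaster should appear under "master" and kubnode1 under "workers"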

ansible/ansible.cfg

[defaults]
inventory = inventory.ini
host_key_checking = False

Step 3: Ansible Playbook for Kubernetes Setup

ansible/k8s-cluster-setup.yml

This playbook performs the following:

  • Prepares all nodes: disables swap, installs required packages, configures kernel modules, adds K8s repositories, installs kubeadm, kubelet, and kubectl.
  • Initializes the master node and stores the cluster join command.
  • Sets up Flannel CNI for networking.
  • Joins worker nodes using the generated join command.
---
- name: Prepare Kubernetes Nodes
  hosts: all
  become: yes
  tasks:
    - name: Disable swap (runtime)
      command: swapoff -a
      when: ansible_swaptotal_mb > 0

    - name: Comment out swap line in /etc/fstab
      lineinfile:
        path: /etc/fstab
        regexp: '^\s*([^#]\S*\s+\S+\s+swap\s+\S+)\s*$'
        line: '# \1'
        backrefs: yes

    - name: Apply mount changes
      command: mount -a

    - name: Stop AppArmor
      systemd:
        name: apparmor
        state: stopped
        enabled: no

    - name: Restart containerd
      systemd:
        name: containerd
        state: restarted

    - name: Configure sysctl for Kubernetes
      copy:
        dest: /etc/sysctl.d/kubernetes.conf
        content: |
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1

    - name: Apply sysctl settings
      command: sysctl --system

    - name: Install transport and curl
      apt:
        name:
          - apt-transport-https
          - curl
        update_cache: yes
        state: present

    - name: Add Kubernetes APT key
      shell: |
        curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
      args:
        creates: /etc/apt/keyrings/kubernetes-apt-keyring.gpg

    - name: Add Kubernetes APT repository
      apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /"
        filename: "kubernetes"
        state: present

    - name: Install Kubernetes components
      apt:
        name:
          - kubelet
          - kubeadm
          - kubectl
          - kubernetes-cni
        update_cache: yes
        state: present

    - name: Enable kubelet service
      systemd:
        name: kubelet
        enabled: yes

- name: Initialize Kubernetes Master
  hosts: master
  become: yes
  vars:
    pod_cidr: "10.244.0.0/16"
  tasks:
    - name: Remove default containerd config
      file:
        path: /etc/containerd/config.toml
        state: absent

    - name: Restart containerd
      systemd:
        name: containerd
        state: restarted
        enabled: yes

    - name: Wait for containerd socket to be available
      wait_for:
        path: /run/containerd/containerd.sock
        state: present
        timeout: 20

    - name: Initialize Kubernetes control plane
      command: kubeadm init --apiserver-advertise-address={{ ansible_host }} --node-name {{ inventory_hostname }} --pod-network-cidr={{ pod_cidr }}
      register: kubeadm_output
      args:
        creates: /etc/kubernetes/admin.conf

    - name: Extract join command
      shell: |
        kubeadm token create --print-join-command
      register: join_command
      changed_when: false

    - name: Set join command fact
      set_fact:
        kube_join_command: "{{ join_command.stdout }}"

    - name: Create .kube directory for vagrant user
      become_user: vagrant
      file:
        path: /home/vagrant/.kube
        state: directory
        mode: 0755

    - name: Copy Kubernetes admin config to vagrant user
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/vagrant/.kube/config
        remote_src: yes
        owner: vagrant
        group: vagrant
        mode: 0644

- name: Configure networking
  hosts: all
  become: yes
  tasks:
    - name: Ensure br_netfilter loads at boot
      copy:
        dest: /etc/modules-load.d/k8s.conf
        content: |
          br_netfilter

    - name: Load br_netfilter kernel module now
      command: modprobe br_netfilter

    - name: Configure sysctl for Kubernetes networking
      copy:
        dest: /etc/sysctl.d/k8s.conf
        content: |
          net.bridge.bridge-nf-call-iptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1

    - name: Apply sysctl settings
      command: sysctl --system

- name: Configure flannel
  hosts: master
  become: yes
  tasks:
    - name: Apply Flannel CNI plugin
      become_user: vagrant
      command: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
      environment:
        KUBECONFIG: /home/vagrant/.kube/config

- name: Join worker nodes to cluster
  hosts: workers
  become: yes
  vars:
    kube_join_command: "{{ hostvars['kubmaster']['kube_join_command'] }}"
  tasks:
    - name: Remove default containerd config
      file:
        path: /etc/containerd/config.toml
        state: absent

    - name: Restart containerd
      systemd:
        name: containerd
        state: restarted
        enabled: yes

    - name: Wait until the Kubernetes API server is reachable
      wait_for:
        host: "{{ hostvars['kubmaster']['ansible_host'] }}"
        port: 6443
        delay: 10
        timeout: 120
        state: started

    - name: Join the node to Kubernetes cluster
      command: "{{ kube_join_command }}"
      args:
        creates: /etc/kubernetes/kubelet.conf
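Before the full run, a syntax pass over the playbook is cheap insurance:

ansible-playbook -i ansible/inventory.ini ansible/k8s-cluster-setup.yml --syntax-check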

Step 4: Test and Deploy the Cluster

Test Ansible SSH connectivity:

ansible all -m ping -i ansible/inventory.ini 
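An ad-hoc command is also a handy way to confirm the guests really are ARM builds (on Apple Silicon hosts, uname -m should report aarch64):

ansible all -i ansible/inventory.ini -m command -a "uname -m"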

Run the full cluster setup:

ansible-playbook -i ansible/inventory.ini ansible/k8s-cluster-setup.yml 

Retrieve the kubeconfig file locally:

vagrant ssh kubmaster -c "sudo cat /etc/kubernetes/admin.conf" > ~/kubeconfig-vagrant.yaml 
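The kubeconfig should point at the master's LAN IP, which is what makes it usable from your host machine; a quick grep confirms it (expect the address you set in the Vagrantfile):

grep server ~/kubeconfig-vagrant.yaml
# server: https://192.168.1.101:6443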

Test your cluster:

KUBECONFIG=~/kubeconfig-vagrant.yaml kubectl get nodes 

You should see both the master and the worker node reporting Ready status (it can take a minute or two after the playbook finishes).
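If either node stays NotReady, the usual suspect is networking; listing all pods shows whether Flannel and CoreDNS came up (pod names will vary):

KUBECONFIG=~/kubeconfig-vagrant.yaml kubectl get pods -A
# look for Running pods in the kube-flannel and kube-system namespaces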


What's Next?

You now have a fully functioning local Kubernetes cluster on ARM-based hardware, automated and reproducible end to end. From here you can:

  • Experiment with Helm charts
  • Try GitOps with ArgoCD
  • Deploy sample apps (a minimal example below)
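As a first "deploy sample apps" exercise, here is a minimal sketch that runs a throwaway nginx Deployment and exposes it on a NodePort (the hello-nginx name is arbitrary):

export KUBECONFIG=~/kubeconfig-vagrant.yaml
kubectl create deployment hello-nginx --image=nginx
kubectl expose deployment hello-nginx --port=80 --type=NodePort
kubectl get svc hello-nginx   # note the NodePort, then browse to http://192.168.1.102:<port>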

Stay tuned for the next part of this series!



Top comments (2)

Kozmer Vip

Thanks for this tutorial!! I have been struggling doing it manually for so long; can't wait for the upcoming sections.

GeekGuy

I mean, I've always wanted to automate my K8s cluster deployment, and when I looked it up on Google I found this. How lucky am I???