Interested in Cluster API for automating Kubernetes cluster management but hesitant because you think you need a cloud environment? Good news! You can actually try Cluster API right on your local machine with just Docker. In this guide, we'll walk through setting up a testing environment using CAPD (Cluster API Provider Docker).
Prerequisites
You'll need the following tools installed:
- Docker
- kubectl
- kind
- clusterctl
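If you want to confirm everything is on your PATH before starting, a quick version check like the one below is enough; exact versions don't matter much as long as they're reasonably recent.

```sh
# Optional sanity check: confirm each prerequisite is installed
docker --version
kubectl version --client
kind version
clusterctl version
```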
Overview
Here's what we'll cover:
- Creating a management cluster using kind
- Initializing Cluster API (CAPD)
- Creating a workload cluster
- Setting up CNI
Let's dive in!
1. Creating the Management Cluster
First, we'll create a management cluster using kind that will serve as the foundation for CAPD.
Create a configuration file that enables access to the host's Docker socket:
```yaml
# kind-cluster-with-extramounts.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
```
Now, create the kind cluster using this configuration:
```sh
kind create cluster --config kind-cluster-with-extramounts.yaml
```
You should see output similar to this:
```
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.31.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
```
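Before moving on, you can optionally confirm the management cluster is reachable with the usual kubectl checks against the kind-kind context:

```sh
# Optional: confirm the kind management cluster is up and reachable
kubectl cluster-info --context kind-kind
kubectl get nodes --context kind-kind
```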
2. Initializing Cluster API
Next, let's install Cluster API on our management cluster. Setting CLUSTER_TOPOLOGY=true enables the ClusterClass feature, which the development flavor we'll use in the next step depends on:
```sh
export CLUSTER_TOPOLOGY=true && clusterctl init --infrastructure docker
```
This command installs all necessary components. If successful, you'll see:
```
Fetching providers
Installing cert-manager version="v1.16.0"
Waiting for cert-manager to be available...
Installing provider="cluster-api" version="v1.8.5" targetNamespace="capi-system"
Installing provider="bootstrap-kubeadm" version="v1.8.5" targetNamespace="capi-kubeadm-bootstrap-system"
Installing provider="control-plane-kubeadm" version="v1.8.5" targetNamespace="capi-kubeadm-control-plane-system"
Installing provider="infrastructure-docker" version="v1.8.5" targetNamespace="capd-system"

Your management cluster has been initialized successfully!
```
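If you'd like to see the controllers that clusterctl just installed, you can list the pods in the provider namespaces shown in the output above:

```sh
# Optional: verify the Cluster API controllers are running
kubectl get pods -n capi-system
kubectl get pods -n capi-kubeadm-bootstrap-system
kubectl get pods -n capi-kubeadm-control-plane-system
kubectl get pods -n capd-system
```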
3. Creating a Workload Cluster
Now let's create a Kubernetes cluster for testing. We'll name it "muscat":
```sh
clusterctl generate cluster muscat \
  --flavor development \
  --kubernetes-version v1.31.0 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > muscat.yaml

kubectl apply -f muscat.yaml
```
Check the cluster status:
```sh
kubectl get cluster
```
```
NAME     CLUSTERCLASS   PHASE         AGE     VERSION
muscat   quick-start    Provisioned   5m47s   v1.31.0
```
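For a more detailed look at how the machines are coming up, clusterctl can describe the whole cluster topology, and you can watch the underlying Cluster API resources directly on the management cluster:

```sh
# Optional: inspect the workload cluster's machines in more detail
clusterctl describe cluster muscat
kubectl get machines
kubectl get kubeadmcontrolplane
```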
4. Setting Up CNI
Finally, let's install Calico as the CNI plugin so the workload cluster's nodes can become Ready:
```sh
# First, get the kubeconfig
clusterctl get kubeconfig muscat > kubeconfig.muscat.yaml

# Install Calico
kubectl --kubeconfig=./kubeconfig.muscat.yaml apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
```
After a few moments, all nodes should be in the Ready state:
```sh
kubectl --kubeconfig=./kubeconfig.muscat.yaml get nodes
NAME                            STATUS   ROLES           AGE     VERSION
muscat-md-0-9mhvz-4xxcd-42nh8   Ready    <none>          4m20s   v1.31.0
muscat-md-0-9mhvz-4xxcd-8hghp   Ready    <none>          4m25s   v1.31.0
muscat-md-0-9mhvz-4xxcd-mxg7k   Ready    <none>          4m20s   v1.31.0
muscat-r65sn-592c8              Ready    control-plane   3m35s   v1.31.0
muscat-r65sn-xrzfl              Ready    control-plane   4m41s   v1.31.0
muscat-worker-08sx08            Ready    <none>          4m17s   v1.31.0
muscat-worker-u6l39f            Ready    <none>          4m17s   v1.31.0
muscat-worker-ydhg40            Ready    <none>          4m17s   v1.31.0
```
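If any nodes stay NotReady for a while, it's usually worth checking that the Calico pods themselves came up cleanly in the workload cluster:

```sh
# Optional: check that the Calico node pods are running in the workload cluster
kubectl --kubeconfig=./kubeconfig.muscat.yaml get pods -n kube-system -l k8s-app=calico-node
```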
Conclusion
You now have a fully functional Cluster API testing environment on your local machine! Here's what we've accomplished:
- A management cluster (kind) running Cluster API
- A workload cluster with three control plane nodes and three worker nodes
- All this running locally with just Docker - no cloud provider needed!
You're now ready to start experimenting with various Cluster API features in this local environment. Happy clustering!
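When you're done experimenting, cleanup is just as local: deleting the Cluster object tears down the workload cluster's containers, and deleting the kind cluster removes the management cluster (the names below assume you followed this guide).

```sh
# Optional cleanup: remove the workload cluster, then the management cluster
kubectl delete cluster muscat
kind delete cluster
```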