Saravanan Gnanaguru for Kubernetes Community Days Chennai


Practicing Kubernetes Control Plane environment in Killercoda Interactive Terminal

Killercoda Interactive Terminal

Killercoda offers free, Ubuntu-based interactive environments with various tools pre-installed, so beginners can try things hands-on. It also has a Kubernetes playground that provides control plane server access for one hour, in which we can practice with the control plane components.
Hands-on access to a control plane (or kubeadm practice) is often tied to paid training platforms, so Killercoda comes in handy as a free way to satisfy that need.

The Killercoda environment is similar to the Killer Shell Kubernetes certification exam environment, but without the test scenarios.

To get started with Killercoda, sign up for an account using any of the sign-in methods listed on the screen.


What is available in Killercoda Playground

  • There is a variety of options available in Killercoda: plain Ubuntu OS, Kubernetes control planes in several versions, and other Kubernetes-related environments


Choose a Kubernetes environment

Let us choose the Kubernetes v1.26 environment and inspect the internal components of the Kubernetes control plane.

  • `kubectl get nodes` and `kubectl get namespaces`


This cluster has 2 nodes and 5 namespaces.
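Those counts can also be checked non-interactively by piping the `kubectl` output through `wc`. A minimal sketch: it runs here against a captured sample of the node listing, since the live command (`kubectl get nodes --no-headers | wc -l`) assumes a reachable cluster like the playground's.

```shell
# Captured `kubectl get nodes` output from the playground session.
# Against the live cluster you would pipe the real command instead:
#   kubectl get nodes --no-headers | wc -l
sample_nodes='controlplane   Ready   control-plane   2d20h   v1.26.1
node01         Ready   <none>          2d20h   v1.26.1'

# One line per node, so counting lines counts nodes.
node_count=$(printf '%s\n' "$sample_nodes" | wc -l)
echo "nodes: $node_count"
```

The same `--no-headers | wc -l` pipeline works for namespaces, pods, or any other resource list.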

  • `kubectl get pods -A -o wide`

```
controlplane $ k get pods -A -o wide
NAMESPACE            NAME                                       READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
kube-system          calico-kube-controllers-5f94594857-kjbzt   1/1     Running   4          2d20h   192.168.0.3   controlplane   <none>           <none>
kube-system          canal-5zh75                                2/2     Running   0          32m     172.30.1.2    controlplane   <none>           <none>
kube-system          canal-9wbgc                                2/2     Running   0          32m     172.30.2.2    node01         <none>           <none>
kube-system          coredns-68dc769db8-4nd5b                   1/1     Running   0          2d19h   192.168.0.7   controlplane   <none>           <none>
kube-system          coredns-68dc769db8-fmx25                   1/1     Running   0          2d19h   192.168.1.2   node01         <none>           <none>
kube-system          etcd-controlplane                          1/1     Running   0          2d20h   172.30.1.2    controlplane   <none>           <none>
kube-system          kube-apiserver-controlplane                1/1     Running   2          2d20h   172.30.1.2    controlplane   <none>           <none>
kube-system          kube-controller-manager-controlplane       1/1     Running   2          2d20h   172.30.1.2    controlplane   <none>           <none>
kube-system          kube-proxy-7zc4f                           1/1     Running   0          2d20h   172.30.1.2    controlplane   <none>           <none>
kube-system          kube-proxy-glxxb                           1/1     Running   0          2d19h   172.30.2.2    node01         <none>           <none>
kube-system          kube-scheduler-controlplane                1/1     Running   2          2d20h   172.30.1.2    controlplane   <none>           <none>
local-path-storage   local-path-provisioner-8bc8875b-lspfz      1/1     Running   0          2d20h   192.168.0.6   controlplane   <none>           <none>
```

We can see the list contains the core control plane components (etcd, kube-apiserver, kube-controller-manager and kube-scheduler) alongside kube-proxy, CoreDNS and the CNI pods.


Contents of the directory /etc/kubernetes

We can find all the important files of the Kubernetes control plane, including static pod manifests, configurations, certificates and key files, inside the /etc/kubernetes directory.

```
controlplane $ pwd
/etc/kubernetes
controlplane $ tree --dirsfirst
.
|-- manifests
|   |-- etcd.yaml
|   |-- kube-apiserver.yaml
|   |-- kube-controller-manager.yaml
|   `-- kube-scheduler.yaml
|-- pki
|   |-- etcd
|   |   |-- ca.crt
|   |   |-- ca.key
|   |   |-- healthcheck-client.crt
|   |   |-- healthcheck-client.key
|   |   |-- peer.crt
|   |   |-- peer.key
|   |   |-- server.crt
|   |   `-- server.key
|   |-- apiserver-etcd-client.crt
|   |-- apiserver-etcd-client.key
|   |-- apiserver-kubelet-client.crt
|   |-- apiserver-kubelet-client.key
|   |-- apiserver.crt
|   |-- apiserver.key
|   |-- ca.crt
|   |-- ca.key
|   |-- front-proxy-ca.crt
|   |-- front-proxy-ca.key
|   |-- front-proxy-client.crt
|   |-- front-proxy-client.key
|   |-- sa.key
|   `-- sa.pub
|-- admin.conf
|-- controller-manager.conf
|-- kubelet.conf
`-- scheduler.conf

3 directories, 30 files
```
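Each of the top-level *.conf files is a kubeconfig for one client of the API server (admin, kubelet, controller manager, scheduler), and its `server:` field shows which endpoint that client talks to. A sketch of the check: it runs against a trimmed inline stand-in, because the real files also embed base64-encoded client certificates; on the playground, `grep 'server:' /etc/kubernetes/*.conf` does the same job directly.

```shell
# Trimmed stand-in for /etc/kubernetes/admin.conf (the real file also
# carries base64-encoded client certs and keys, omitted here).
cat > /tmp/admin.conf <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://172.30.1.2:6443
  name: kubernetes
EOF

# Which API server endpoint does this kubeconfig point at?
grep 'server:' /tmp/admin.conf
```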

Control Plane Component Manifest files

  • Let us look at the contents of manifests/kube-apiserver.yaml
  • Notice that the manifest holds the configuration values for etcd, TLS and the other components
```
controlplane $ cat manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 172.30.1.2:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.30.1.2
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.k8s.io/kube-apiserver:v1.26.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 172.30.1.2
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 172.30.1.2
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 50m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 172.30.1.2
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
```
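Since the manifest is long, grep is handy for pulling out the flags worth remembering. A minimal sketch, run here against a trimmed inline copy so it is self-contained; on the playground you would point it at /etc/kubernetes/manifests/kube-apiserver.yaml instead.

```shell
# Trimmed stand-in for the kube-apiserver static pod manifest, keeping
# only a few of the command-line flags from the full file.
cat > /tmp/kube-apiserver.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379
    - --secure-port=6443
    - --service-cluster-ip-range=10.96.0.0/12
EOF

# Where the API server finds etcd, which port it serves on, and the
# service CIDR it allocates ClusterIPs from.
grep -E 'etcd-servers|secure-port|service-cluster-ip-range' /tmp/kube-apiserver.yaml
```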
  • Similarly, let us inspect the contents of manifests/etcd.yaml
  • Notice that the manifest holds the etcd key and cert file paths and other configuration values
```
controlplane $ cat manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://172.30.1.2:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://172.30.1.2:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --experimental-initial-corrupt-check=true
    - --experimental-watch-progress-notify-interval=5s
    - --initial-advertise-peer-urls=https://172.30.1.2:2380
    - --initial-cluster=controlplane=https://172.30.1.2:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://172.30.1.2:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://172.30.1.2:2380
    - --name=controlplane
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: registry.k8s.io/etcd:3.5.6-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health?exclude=NOSPACE&serializable=true
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: etcd
    resources:
      requests:
        cpu: 25m
        memory: 100Mi
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /health?serializable=false
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}
```

Taking an etcd backup snapshot on the control plane

Now, let us take an etcd backup using the command template from the Kubernetes documentation:

```
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<trusted-ca-file> \
  --cert=<cert-file> \
  --key=<key-file> \
  snapshot save <backup-file-location>
```

To find those values, we can grep for the pki paths in manifests/etcd.yaml:

```
controlplane $ grep pki manifests/etcd.yaml
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
```

Replace the cert-file, key-file and trusted-ca-file placeholders in the etcdctl snapshot save command:

```
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/snapshot-pre-boot.db
```
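That substitution can also be scripted, which avoids typos when copying cert paths by hand. A hedged sketch using sed, run here against a trimmed inline copy of the grepped flags so it is self-contained; on the playground you would read /etc/kubernetes/manifests/etcd.yaml directly.

```shell
# Trimmed stand-in for the flags we grepped out of etcd.yaml
# (peer-* lines dropped to keep the sed patterns unambiguous).
cat > /tmp/etcd-flags.txt <<'EOF'
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
EOF

# Extract each path by stripping everything up to and including the '='.
cert=$(sed -n 's/.*--cert-file=//p' /tmp/etcd-flags.txt)
key=$(sed -n 's/.*--key-file=//p' /tmp/etcd-flags.txt)
ca=$(sed -n 's/.*--trusted-ca-file=//p' /tmp/etcd-flags.txt)

# Assemble the final command (echoed here rather than executed).
echo "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=$ca --cert=$cert --key=$key \
  snapshot save /tmp/snapshot-pre-boot.db"
```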

Finally, we will run the snapshot save command:

```
controlplane $ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
> --cacert=/etc/kubernetes/pki/etcd/ca.crt \
> --cert=/etc/kubernetes/pki/etcd/server.crt \
> --key=/etc/kubernetes/pki/etcd/server.key \
> snapshot save /tmp/snapshot-pre-boot.db
{"level":"info","ts":1684855944.7321026,"caller":"snapshot/v3_snapshot.go:68","msg":"created temporary db file","path":"/tmp/snapshot-pre-boot.db.part"}
{"level":"info","ts":1684855944.7623043,"logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1684855944.7625878,"caller":"snapshot/v3_snapshot.go:76","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":1684855946.5906115,"logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":1684855948.752495,"caller":"snapshot/v3_snapshot.go:91","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"6.1 MB","took":"4 seconds ago"}
{"level":"info","ts":1684855948.7526174,"caller":"snapshot/v3_snapshot.go:100","msg":"saved","path":"/tmp/snapshot-pre-boot.db"}
Snapshot saved at /tmp/snapshot-pre-boot.db
```
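Before relying on the backup, it is worth confirming the file actually landed and is non-empty; `etcdctl snapshot status` can additionally report its hash, revision and key count. A minimal sketch of the check; it creates a stand-in file so it is runnable outside the playground, and leaves the etcdctl status call as a comment since that needs the real snapshot.

```shell
snap=/tmp/snapshot-pre-boot.db

# Stand-in file for this illustration; on the real control plane the
# `etcdctl snapshot save` above already created it.
printf 'etcd-snapshot-bytes' > "$snap"

# A snapshot of zero bytes would be useless for a restore.
if [ -s "$snap" ]; then
  echo "snapshot present: $(wc -c < "$snap") bytes"
else
  echo "snapshot missing or empty" >&2
fi

# On the playground, inspect the real snapshot further with:
#   ETCDCTL_API=3 etcdctl snapshot status /tmp/snapshot-pre-boot.db -w table
```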

Conclusion

The idea of this blog is to introduce the Killercoda environment to Kubernetes newcomers, so they can explore the control plane components as part of their learning.
I believe that seeing things and getting your hands dirty at the same time helps anyone grasp concepts faster.
When it comes to learning Kubernetes, the more we practice, the more confident we become.

