Persistent storage for a Raspberry Pi k8s cluster

If you want to see how this cluster was set up, check my last post.

As I said in that post, I installed Kubernetes with the k3s tool.

It's cool to deploy stateless applications, but when you need something more complex, you'll need a persistent volume.

K3s comes with the local-path-provisioner, which creates local storage on each node of your cluster.
What is the problem? If we run a deployment with multiple replicas that need to store data, each node ends up saving its own copy. No storage is shared between the nodes, and we don't want that behavior.
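
You can actually see this default provisioner as a StorageClass; a quick way to check (the exact name can vary with your k3s version, in recent releases it shows up as local-path):

kubectl get storageclass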

I tried to mount GlusterFS directly with the k8s drivers, but I couldn't make it work; it seems to be a k3s incompatibility issue. So in this tutorial I am going to try another way.

Cluster info

| host      | IP            |
|-----------|---------------|
| master-01 | 192.168.1.100 |
| node-01   | 192.168.1.101 |
| node-02   | 192.168.1.102 |
| node-03   | 192.168.1.103 |
| node-04   | 192.168.1.104 |
| node-05   | 192.168.1.105 |

Ansible hosts.ini

[master]
192.168.1.100

[node]
192.168.1.101
192.168.1.102
192.168.1.103
192.168.1.104
192.168.1.105

[k3s_cluster:children]
master
node
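
A quick sanity check that Ansible can reach every host before going further (assuming SSH access to the nodes is already set up):

ansible all -i hosts.ini -m ping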

Install GlusterFS

INSTALL DEPENDENCIES ON ALL NODES:

ansible all -i hosts.ini -a "sudo modprobe fuse" -b 
ansible all -i hosts.ini -a "sudo apt-get install -y xfsprogs glusterfs-server" -b 
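
If you want to confirm the package landed on every node, something like this should do:

ansible all -i hosts.ini -a "gluster --version" -b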

Add the glusterd service to the startup services:

ansible all -i hosts.ini -a "sudo systemctl enable glusterd" -b 
ansible all -i hosts.ini -a "sudo systemctl start glusterd" -b 
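
To double-check that the daemon is actually running everywhere:

ansible all -i hosts.ini -a "systemctl is-active glusterd" -b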

ONLY ON THE MASTER NODE:

Probe all the slave nodes

sudo gluster peer probe 192.168.1.101 
sudo gluster peer probe 192.168.1.102 
sudo gluster peer probe 192.168.1.103 
sudo gluster peer probe 192.168.1.104 
sudo gluster peer probe 192.168.1.105 
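
If you prefer not to type each probe by hand, a small bash loop does the same thing:

for ip in 192.168.1.{101..105}; do sudo gluster peer probe "$ip"; done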

Check the connection status

sudo gluster peer status 

Create the brick folder on the master node

sudo mkdir -p /mnt/glusterfs/myvolume/brick1/ 

Create the volume

sudo gluster volume create brick1 192.168.1.100:/mnt/glusterfs/myvolume/brick1/ force 
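
Note that this is a single-brick volume, so all the data lives on the master's disk with no redundancy. GlusterFS also supports replicated volumes; a rough sketch of what that would look like instead, assuming the same brick folder has been created on the two extra nodes:

sudo gluster volume create brick1 replica 3 192.168.1.100:/mnt/glusterfs/myvolume/brick1/ 192.168.1.101:/mnt/glusterfs/myvolume/brick1/ 192.168.1.102:/mnt/glusterfs/myvolume/brick1/ force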

Start the volume

sudo gluster volume start brick1 

 Check the volume status

sudo gluster volume status 
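
You can also inspect the volume configuration itself:

sudo gluster volume info brick1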

Final step: mount the volume

Create the mount folder on all nodes (it will be created on the master as well)

ansible all -i hosts.ini -a "mkdir -p /mnt/general-volume" -b 
  • Mount the volume on all nodes
ansible all -i hosts.ini -a "mount -t glusterfs 192.168.1.100:/brick1 /mnt/general-volume" -b 
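
Quick check that the volume is really mounted on every node:

ansible all -i hosts.ini -a "df -h /mnt/general-volume" -b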

Mount on reboot

  • /root/mount-volumes.sh: create this file on all slave nodes
#!/bin/bash

function check_glusterd_running() {
    # Explanation of why the ps command is used this way:
    # https://stackoverflow.com/questions/9117507/linux-unix-command-to-determine-if-process-is-running
    if ! ps cax | grep -w '[g]lusterd' > /dev/null 2>&1
    then
        echo "ERROR: Glusterd is not running"
        exit 1
    fi
}

# Wait until glusterd is up before trying to mount
while [[ ! -z $(check_glusterd_running) ]]; do sleep 1s; done
echo "Glusterd is running"

# =====> start volume mounts <======
mount -t glusterfs 192.168.1.100:/brick1 /mnt/general-volume
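
Since we already have Ansible around, the script can be pushed to the slave nodes and made executable in one shot (just a sketch, assuming the script sits next to hosts.ini on your machine):

ansible node -i hosts.ini -m copy -a "src=mount-volumes.sh dest=/root/mount-volumes.sh mode=0755" -b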

After creating this file on all slave nodes:

  • Create crontab line
sudo crontab -e

@reboot bash /root/mount-volumes.sh
# ^^^^^^^^^^^^ Add this line
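
A common alternative to the cron script is an /etc/fstab entry with the _netdev option, so the mount only happens once the network is up; something like this line on each node (shown just as an option, the cron approach above works too):

192.168.1.100:/brick1 /mnt/general-volume glusterfs defaults,_netdev 0 0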

k8s Manifest example

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  volumes:
    - name: general-volume
      hostPath:
        path: "/mnt/general-volume/test"
  containers:
    - image: ubuntu
      command:
        - "sleep"
        - "604800"
      imagePullPolicy: IfNotPresent
      name: ubuntu
      volumeMounts:
        - mountPath: "/app/data"
          name: general-volume
          readOnly: false
  restartPolicy: Always
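
Save the manifest as, say, ubuntu-pod.yaml (the filename is just an example) and apply it:

kubectl apply -f ubuntu-pod.yaml
kubectl get pod ubuntu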

Create a file in the container

kubectl exec -it ubuntu -- sh -c "mkdir -p /app/data && touch /app/data/hello-world"

 SSH to any node and check if the new file is there

ssh ubuntu@192.168.1.102 
ubuntu@node-02:~$ ls /mnt/general-volume/test 

You are ready to go!
