Managing containers that look like virtual machines with Bootloose (the successor to Footloose)

In several previous articles, I described the use of Footloose, a binary written in Go that creates containers that look like virtual machines.

These containers run systemd as PID 1 along with an SSH daemon you can use to log into them. These "machines" behave like VMs; you can even run dockerd inside them.
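Under the hood this is plain Docker. As a minimal hand-rolled sketch (an illustration, not the exact options bootloose uses), such a machine is just a privileged container booting systemd, here with the image that appears later in this walkthrough:

# Hypothetical equivalent of one "container machine": a privileged
# container whose PID 1 is systemd, with its SSH port published.
docker run -d --name vm-like --privileged \
  -p 2222:22 \
  quay.io/k0sproject/bootloose-ubuntu22.04:latest \
  /sbin/init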

GitHub - weaveworks/footloose: Container Machines - Containers that look like Virtual Machines

Footloose can be used for a variety of tasks: anywhere you would like virtual machines but want fast boot times, or need many of them.

Now the team behind k0s has picked up the torch with Bootloose:

GitHub - k0sproject/bootloose: Manage containers that look like virtual machines

It works on the same principle…

To illustrate, let's launch an Ubuntu 22.04 LTS ARM64 instance in Hetzner Cloud:
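The provisioning step itself isn't shown here; with the hcloud CLI it could look like the sketch below (the server name, type, and SSH key are illustrative assumptions):

# Hypothetical provisioning step with the hcloud CLI.
# cax31 is one of Hetzner's ARM64 (Ampere) server types with 16 GB of RAM.
hcloud server create \
  --name ubuntu-16gb-hel1-1 \
  --type cax31 \
  --image ubuntu-22.04 \
  --location hel1 \
  --ssh-key my-key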

in which I start by installing the Docker engine:

root@ubuntu-16gb-hel1-1:~# curl -fsSL https://get.docker.com | sh -
# Executing docker install script, commit: e5543d473431b782227f8908005543bb4389b8de
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sh -c install -m 0755 -d /etc/apt/keyrings
+ sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg
+ sh -c chmod a+r /etc/apt/keyrings/docker.gpg
+ sh -c echo "deb [arch=arm64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin >/dev/null
+ sh -c docker version
Client: Docker Engine - Community
 Version:           26.0.0
 API version:       1.45
 Go version:        go1.21.8
 Git commit:        2ae903e
 Built:             Wed Mar 20 15:18:14 2024
 OS/Arch:           linux/arm64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          26.0.0
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.8
  Git commit:       8b79278
  Built:            Wed Mar 20 15:18:14 2024
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.6.28
  GitCommit:        ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

================================================================================

To run Docker as a non-privileged user, consider setting up the
Docker daemon in rootless mode for your user:

    dockerd-rootless-setuptool.sh install

Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.

To run the Docker daemon as a fully privileged service, but granting non-root
users access, refer to https://docs.docker.com/go/daemon-access/

WARNING: Access to the remote API on a privileged Docker daemon is equivalent
         to root access on the host. Refer to the 'Docker daemon attack surface'
         documentation for details: https://docs.docker.com/go/attack-surface/

================================================================================

root@ubuntu-16gb-hel1-1:~# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Then I fetch the Bootloose binary from GitHub:

root@ubuntu-16gb-hel1-1:~# wget -c https://github.com/k0sproject/bootloose/releases/download/v0.7.3/bootloose-linux-arm64
HTTP request sent, awaiting response... 200 OK
Length: 6291456 (6.0M) [application/octet-stream]
Saving to: ‘bootloose-linux-arm64’

bootloose-linux-arm64 100%[=====================================================================================================>]   6.00M  21.8MB/s    in 0.3s

(21.8 MB/s) - ‘bootloose-linux-arm64’ saved [6291456/6291456]

root@ubuntu-16gb-hel1-1:~# chmod +x bootloose-linux-arm64
root@ubuntu-16gb-hel1-1:~# mv bootloose-linux-arm64 /usr/local/bin/bootloose
root@ubuntu-16gb-hel1-1:~# bootloose
bootloose - Container Machines

Usage:
  bootloose [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  config      Manage cluster configuration
  create      Create a cluster
  delete      Delete a cluster
  help        Help about any command
  show        Show all running machines or a single machine with a given hostname.
  ssh         SSH into a machine
  start       Start cluster machines
  stop        Stop cluster machines
  version     Print bootloose version

Flags:
  -c, --config string   Cluster configuration file (default "bootloose.yaml")
  -h, --help            help for bootloose

Use "bootloose [command] --help" for more information about a command.

Releases · k0sproject/bootloose

And I create the YAML configuration file to spin up my instances with this image:


root@ubuntu-16gb-hel1-1:~# bootloose config create --replicas 3 --image quay.io/k0sproject/bootloose-ubuntu22.04:latest
root@ubuntu-16gb-hel1-1:~# cat bootloose.yaml
cluster:
  name: cluster
  privateKey: ~/.ssh/id_rsa
machines:
- count: 3
  spec:
    image: quay.io/k0sproject/bootloose-ubuntu22.04:latest
    name: node%d
    portMappings:
    - containerPort: 22
    privileged: true
    volumes:
    - type: volume
      destination: /var/lib/k0s
root@ubuntu-16gb-hel1-1:~# bootloose create
INFO[0000] Docker Image: quay.io/k0sproject/bootloose-ubuntu22.04:latest present locally
INFO[0000] Creating machine: cluster-node0 ...
INFO[0000] Creating machine: cluster-node1 ...
INFO[0000] Creating machine: cluster-node2 ...
root@ubuntu-16gb-hel1-1:~# docker ps -a
CONTAINER ID   IMAGE                                             COMMAND        CREATED              STATUS              PORTS                                     NAMES
f452da977dd6   quay.io/k0sproject/bootloose-ubuntu22.04:latest   "/sbin/init"   About a minute ago   Up About a minute   0.0.0.0:32770->22/tcp, :::32770->22/tcp   cluster-node2
790a6594d4a3   quay.io/k0sproject/bootloose-ubuntu22.04:latest   "/sbin/init"   About a minute ago   Up About a minute   0.0.0.0:32769->22/tcp, :::32769->22/tcp   cluster-node1
c674c7d3551f   quay.io/k0sproject/bootloose-ubuntu22.04:latest   "/sbin/init"   About a minute ago   Up About a minute   0.0.0.0:32768->22/tcp, :::32768->22/tcp   cluster-node0
root@ubuntu-16gb-hel1-1:~# docker inspect $(docker ps -aq) | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.4",
                    "IPAddress": "172.17.0.4",
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.3",
                    "IPAddress": "172.17.0.3",
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAddress": "172.17.0.2",
root@ubuntu-16gb-hel1-1:~# for i in {2..4}; do ssh root@172.17.0.$i 'hostname'; done
Warning: Permanently added '172.17.0.2' (ED25519) to the list of known hosts.
node0
Warning: Permanently added '172.17.0.3' (ED25519) to the list of known hosts.
node1
Warning: Permanently added '172.17.0.4' (ED25519) to the list of known hosts.
node2
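Instead of raw ssh, you can also go through Bootloose's own ssh subcommand listed in the help above; assuming the user@machine syntax inherited from Footloose (and the bootloose.yaml in the current directory), that would be:

# Open a shell on node0 through bootloose's ssh wrapper.
bootloose ssh root@node0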

The instances are therefore reachable over SSH, and I'll use k0sctl to build a Kubernetes cluster on them with k0s:

GitHub - k0sproject/k0sctl: A bootstrapping and management tool for k0s clusters.
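The download of the k0sctl binary happens off-screen; assuming the v0.17.5 release reported by the banner further below, it would be along these lines:

# Assumed download step; the version is inferred from the k0sctl banner
# shown later, and the asset name matches the file handled just below.
wget -c https://github.com/k0sproject/k0sctl/releases/download/v0.17.5/k0sctl-linux-arm64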

root@ubuntu-16gb-hel1-1:~# chmod +x k0sctl-linux-arm64
root@ubuntu-16gb-hel1-1:~# mv k0sctl-linux-arm64 /usr/local/bin/k0sctl
root@ubuntu-16gb-hel1-1:~# k0sctl
NAME:
   k0sctl - k0s cluster management tool

USAGE:
   k0sctl [global options] command [command options]

COMMANDS:
   version     Output k0sctl version
   apply       Apply a k0sctl configuration
   kubeconfig  Output the admin kubeconfig of the cluster
   init        Create a configuration template
   reset       Remove traces of k0s from all of the hosts
   backup      Take backup of existing clusters state
   config      Configuration related sub-commands
   completion
   help, h     Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug, -d  Enable debug logging (default: false) [$DEBUG]
   --trace      Enable trace logging (default: false) [$TRACE]
   --no-redact  Do not hide sensitive information in the output (default: false)
   --help, -h   show help

Initialization of the cluster in Bootloose with this YAML configuration file:

root@ubuntu-16gb-hel1-1:~# k0sctl init --k0s > k0sctl.yaml
root@ubuntu-16gb-hel1-1:~# cat k0sctl.yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 172.17.0.2
      user: root
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: controller
  - ssh:
      address: 172.17.0.3
      user: root
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  - ssh:
      address: 172.17.0.4
      user: root
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  k0s:
    version: null
    versionChannel: stable
    dynamicConfig: false
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
        installConfig:
          users:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          kubeProxy:
            disabled: false
            mode: iptables
          kuberouter:
            autoMTU: true
            mtu: 0
            peerRouterASNs: ""
            peerRouterIPs: ""
          podCIDR: 10.244.0.0/16
          provider: kuberouter
          serviceCIDR: 10.96.0.0/12
        podSecurityPolicy:
          defaultPolicy: 00-k0s-privileged
        storage:
          type: etcd
        telemetry:
          enabled: false

Launching the creation of the k0s cluster with this configuration file:

root@ubuntu-16gb-hel1-1:~# k0sctl apply --config k0sctl.yaml

⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███          ███    ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███          ███    ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███          ███    ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████    ███    ██████████

k0sctl v0.17.5 Copyright 2023, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
INFO ==> Running phase: Set k0s version
INFO Looking up latest stable k0s version
INFO Using k0s version v1.29.2+k0s.0
INFO ==> Running phase: Connect to hosts
INFO [ssh] 172.17.0.4:22: connected
INFO [ssh] 172.17.0.3:22: connected
INFO [ssh] 172.17.0.2:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [ssh] 172.17.0.4:22: is running Ubuntu 22.04.3 LTS
INFO [ssh] 172.17.0.2:22: is running Ubuntu 22.04.3 LTS
INFO [ssh] 172.17.0.3:22: is running Ubuntu 22.04.3 LTS
INFO ==> Running phase: Acquire exclusive host lock
INFO ==> Running phase: Prepare hosts
INFO [ssh] 172.17.0.2:22: is a container, applying a fix
INFO [ssh] 172.17.0.3:22: is a container, applying a fix
INFO [ssh] 172.17.0.4:22: is a container, applying a fix
INFO ==> Running phase: Gather host facts
INFO [ssh] 172.17.0.4:22: using node2 as hostname
INFO [ssh] 172.17.0.3:22: using node1 as hostname
INFO [ssh] 172.17.0.2:22: using node0 as hostname
INFO [ssh] 172.17.0.4:22: discovered eth0 as private interface
INFO [ssh] 172.17.0.3:22: discovered eth0 as private interface
INFO [ssh] 172.17.0.2:22: discovered eth0 as private interface
INFO ==> Running phase: Validate hosts
INFO ==> Running phase: Validate facts
INFO ==> Running phase: Download k0s on hosts
INFO [ssh] 172.17.0.4:22: downloading k0s v1.29.2+k0s.0
INFO [ssh] 172.17.0.3:22: downloading k0s v1.29.2+k0s.0
INFO [ssh] 172.17.0.2:22: downloading k0s v1.29.2+k0s.0
INFO ==> Running phase: Install k0s binaries on hosts
INFO [ssh] 172.17.0.2:22: validating configuration
INFO ==> Running phase: Configure k0s
INFO [ssh] 172.17.0.2:22: installing new configuration
INFO ==> Running phase: Initialize the k0s cluster
INFO [ssh] 172.17.0.2:22: installing k0s controller
INFO [ssh] 172.17.0.2:22: waiting for the k0s service to start
INFO [ssh] 172.17.0.2:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install workers
INFO [ssh] 172.17.0.3:22: validating api connection to https://172.17.0.2:6443
INFO [ssh] 172.17.0.4:22: validating api connection to https://172.17.0.2:6443
INFO [ssh] 172.17.0.2:22: generating a join token for worker 1
INFO [ssh] 172.17.0.2:22: generating a join token for worker 2
INFO [ssh] 172.17.0.4:22: writing join token
INFO [ssh] 172.17.0.3:22: writing join token
INFO [ssh] 172.17.0.3:22: installing k0s worker
INFO [ssh] 172.17.0.4:22: installing k0s worker
INFO [ssh] 172.17.0.4:22: starting service
INFO [ssh] 172.17.0.3:22: starting service
INFO [ssh] 172.17.0.4:22: waiting for node to become ready
INFO [ssh] 172.17.0.3:22: waiting for node to become ready
INFO ==> Running phase: Release exclusive host lock
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 48s
INFO k0s cluster version v1.29.2+k0s.0 is now installed
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using:
INFO      k0sctl kubeconfig

I can then retrieve the kubeconfig file in order to use the kubectl client locally:

root@ubuntu-16gb-hel1-1:~# mkdir .kube
root@ubuntu-16gb-hel1-1:~# k0sctl kubeconfig > .kube/config
root@ubuntu-16gb-hel1-1:~# cat .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBRENDQWVpZ0F3SUJBZ0lVSnhLUU9xRlVzZ0NnY0FiTWNDdEFSTXlmNDFVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0dERVdNQlFHQTFVRUF4TU5hM1ZpWlhKdVpYUmxjeTFqWVRBZUZ3MHlOREF6TXpFeE5qQXhNREJhRncwegpOREF6TWpreE5qQXhNREJhTUJneEZqQVVCZ05WQkFNVERXdDFZbVZ5Ym1WMFpYTXRZMkV3Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURXM3V4cVZtMUdMWWNJWlozMDIrajcwTXQ3Y1Y1Zkw3ZmYKcDNwaURhUVk1d1RGMHVmMEVWcDZqQXV5TVpNaGtWTzlSSmo2TGYyb0xNcVV1V0ZqVzFKdVZUVHY5U0JDc1FhVApudVpxVktyWnJ0Nkt5bm1zaVBuWTlMZW5wWklkdzA4NUZnOHdWdGFlclVmOUlaaUMrSEZPL1grdjlIOUFySlJJCjVNby80dzBaZEJjdXVUcEREVzZpcTNNMjF2b1pCR3Jwbk43TkcvWUpLcGNwM2xlbVV5d1JHL0dEMisyTlRML08KdEx5cmF4TDVWVkVYbENEbTFlTXJZOXBzSjNQcDJySmE0RmxyMGRVVFJRUHcycjhBUUxyTFhJUmFBYXpEV3lBLwpKR1NKZG81YUdrL3NxM2l3clRJb3c0UmlwRGRQWmV2UkdZSlRLaWZWOStRYk14bkVuUEZSQWdNQkFBR2pRakJBCk1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJScFl0V2MKMERrSlRaTDFxR0tkK284N1ZndTFBakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBbWtGVTBYam14ZWRPMDFtegozWU9QS3hBeTg2N1NpMlM0WTRMcDNqL3NCcW9TWVlkYldzS3c2Wmx1aTdra2VYUEMvM0pJRTd4YjNoL0p6WWxvCnRZMEliOEpnUnZvVk1sV0ZHNitxMnNnZEo3cmx6blg1QXBQZkRqUXY5RUJHR3VXM21IY1UxbXVCUStrUG9JK2cKaU5OZzFITjRoYllYMVJOMitOK3pKcVNVaWh1WFZDbVhFa2YwQ05WU3VyTEZ6cGNKWWJpNWowVVcwYmRhVWFpTwpJRDJOd3VZcUhDNS96RDM0RDB6T1RVMzRtUTVsZDRUYUx4ZmlwRzZrWDVVSEJtT0U5d3lIbVZuY2VzelJ0alZTCnlFblNJV0g1cmM5Sk84RVBXLzVPRTJsb2E3a240U05iQU5NbUNFaVZtZE9IdU5NSHFGaUQxR2oxOWI5MzVpSnkKUnpGS2hBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.17.0.2:6443
  name: k0s-cluster
contexts:
- context:
    cluster: k0s-cluster
    user: admin
  name: k0s-cluster
current-context: k0s-cluster
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURXVENDQWtHZ0F3SUJBZ0lVY3lhN3dQUGVIR0c1eUM3WWdUV1FCZnlNM2lJd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0dERVdNQlFHQTFVRUF4TU5hM1ZpWlhKdVpYUmxjeTFqWVRBZUZ3MHlOREF6TXpFeE5qQXhNREJhRncweQpOVEF6TXpFeE5qQXhNREJhTURReEZ6QVZCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJrd0Z3WURWUVFECkV4QnJkV0psY201bGRHVnpMV0ZrYldsdU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0MKQVFFQTFrbjJONVorWEt0MW50N3A1QSt2YWQ5OU9DQ25DQ1FKN1JDeXUzTlRaZXlPWDlwK2lRNzBqaC9mWTZjagovNElFMDVDQlJNZzU1cGtVSWszc0k5anJqNFhBcmRaSnQ1QTFUbDZFY0tuWEQ2NzJyWFV4bVlsRG9FK3p6SitkClc1VnVkc0JmbXVqRnluTGJyT2xDYzZEMDlOWmo5b24zRVlGK0dhcXFBTG5wOU9mNENkaFdLL3hYcWVabFBPaFMKZ2RLdERYVzRXYjlEVEZpbytXQ2RObmR6NWd5SUF5OTM3akRzUG9ESEl2OXdEMkVkd2JSVzNMUERjRjhFaFpKWQpUcUV1Z2F0bzB6ek1vaVZzdkptNVZSU0ZjS2h6S1NkZW1obnFyaTBLWXFFTG9Oa2QwcnFkbFF2cnhCSUlvd1E1Cnl4TWhnOHhRb3pDSUtVZXY5SVhoVTlLT2VRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWUQKVlIwbEJCWXdGQVlJS3dZQkJRVUhBd0VHQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwTwpCQllFRk5SbjV4ek1qVXZSeFVtRXdmbzBLVm45cFgzME1COEdBMVVkSXdRWU1CYUFGR2xpMVp6UU9RbE5rdldvCllwMzZqenRXQzdVQ01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjRqU29XNmpnZmRkMlFGM0FRNWxJckVSMFkKQVJSV1RtNTdtTkZPM041OE1PZXR1UDAzOWxvSnIrbDY4aXVQcEUyd2VoL1hTNGxrM3B6dSszSS9IVDNlNW1PcwpiU0pGMXZWbHlaZnZTd1BveEMxcVBPakR3R28xQW1xUUpYRzhjc1E1WWVPR3lVN3BjdjY0KzRrL2xXQmovSEliCmQ4ZnVkMzh4cUo5SUV5SXdBU0lVenhVbllPZ1ZyOWlzRGtxTlk4czBtc3loQXpFYkNQdDQ2NFhiU0pqWG40aSsKOTFVK2FFZG1BMHN5Mi9jc2h6TFZCYzlOanJER1Yya0ZTL2hQNU9YZ0ZycEpEcnNnRVVOcjRudzNRaEl0RkJvdAp4Q2VXYkxTdXFPWTJjcnVvU1hWdmMrdFkyV0xWZ0RQVDRkZ3E3UGhhZm5oTUJkNmxyME55WFNuVzdOWFQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBMWtuMk41WitYS3QxbnQ3cDVBK3ZhZDk5T0NDbkNDUUo3UkN5dTNOVFpleU9YOXArCmlRNzBqaC9mWTZjai80SUUwNUNCUk1nNTVwa1VJazNzSTlqcmo0WEFyZFpKdDVBMVRsNkVjS25YRDY3MnJYVXgKbVlsRG9FK3p6SitkVzVWdWRzQmZtdWpGeW5MYnJPbENjNkQwOU5aajlvbjNFWUYrR2FxcUFMbnA5T2Y0Q2RoVwpLL3hYcWVabFBPaFNnZEt0RFhXNFdiOURURmlvK1dDZE5uZHo1Z3lJQXk5MzdqRHNQb0RISXY5d0QyRWR3YlJXCjNMUERjRjhFaFpKWVRxRXVnYXRvMHp6TW9pVnN2Sm01VlJTRmNLaHpLU2RlbWhucXJpMEtZcUVMb05rZDBycWQKbFF2cnhCSUlvd1E1eXhNaGc4eFFvekNJS1VldjlJWGhVOUtPZVFJREFRQUJBb0lCQUFhR3dpVDNSR24ySHVMegp6eFBQRm55Vy9lMVRzVUtpTmxzdUF3T0tnNk83REtzR3NJdmtGTGF2YWRKVEtObURVRHBSVUY2VDZvK0hZZ0daCmRmT3hpNXNYYThMZm4rY2pVVHhOektMUnlXY0U0U1p2UjA5eHlzbDdJL0s3ZWNOc1RhejROdkUwM2JGSXhrQUIKNnJBeTJzTUtOSWt4c29DcC9Qa3pKWEpZTnpQcVBtL05Kb3hja1h5KzVXRUxiY25Od2lxUUZzU09zT21ldEZ3YQpYcTZrSkVYRzIwVGpvQjg0SlFsNldNN0M5TFp0TWUrcnpsRGdsZFl1ZU92M1FWSE9FU0d2T0VuTFh3TEVSdkg3CmpPbUs4L040dU53L0NpTXJlUjh0WWR2dVBQNVA2Z1V5OUdmdXc2bkM4SzlTOE9CSVZmOWpxRjlsdG9wZndhNVMKRmdCd1BnRUNnWUVBNEI4T0hVZmp1ajR1amNyU01RYm0xQzRhN0QxckJ3YmxuY2RuNFlSYndpKzhOdmhEcGt5SQowMjBod1czRWpUTGJTSTB5UGJvNnQ4MWJEZk1xcjdudm1CblBZTDN2T1lmY3hDRklMcWJwSHd3M1h3aTBsM1BuCmhKVkZyaGJZckthSlVNcDZXVW9zTGdpVDE3MWx2ZnRRZFd5WGpWaGhLQ25SVTJSTExIOEg5NUVDZ1lFQTlNVGgKQWMyaTVSTnRzWDFFOTBFQjZUK0Rxekd3WkQ5WG1YVFQ4M3hrSzVENWV5UzIyUFpGa0NVU3pTSUZtVkpHSzh1ZwpuL2tQQTVZbWMzQUNiWEptVHlDQStSTkxhMU5TTDhGNXN4RzJ3Ris4OXNLVDBhT3VnNE5Ld1RHYkd6QVFKSjN0CitndmcwYXhJQW04b2RrU1RMbTg3K2N0UDVWK2QyNGNHVmh4VHhHa0NnWUVBaE5uancvZVpSZzBXQzNidW9hRTEKc3hDaFpPZ0RTV2NOTlRtK21pK2JOTUNYRVA2Wkd6ckM0SkVRTVZpZjZoTDdhVVpKUWMzaWdKRjZLQXE4Z3UzMAoySFIwT1NSZGFmemZJR09hSmcxS290emE5YnB6VWxPaUtUVlMySjh5VVNWbXdEMUZ5U2Z1aUZzTlNCVTgraUMrCjBOeE4rYnNwM0dUdGNFRkRUbHorbkFFQ2dZQXBOTS9RZHViNmU2Zlczb2p5dXgzd1A0SVNHdjVnRWczVVJzZTcKMFBBb0tYTG1tVXF6QWRxNkpwT2d0eDZNTGo2ajl0Ym10NDRnZzNHYnMzcWxKRUkzQmZqUWRjQVhwR0pNcWR5cwpHY3BUWG9xNFhBOTRsbjYxb3krOWtIVlZRV1VtVlBRUVNWbWkwc2NZcWMvOUFSUnFGODNZQXJORG9USTVGK1VvCk1BS21LUUtCZ1FDZ0tSOEpYMkpOakR6QjVWd3lwdmo3SDdFNFV4Q1FBVXdZZ1JVZ0VCUThZZElPUitSWTFuREEKUWRLd1NOaWxodWxrTjFERFVCZmZqUHhJWndTNExJZEkyN0lqTHlqMFUvbDFrMkludU94OFdHck1hUnp6WXRGcgpTTjhQV2x6eERZTHNHaVpyR2MzVXk0Znd6Ty9lV2xFUFpod0NRVk5lTFpHcFRCYVpvbUIxQkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
root@ubuntu-16gb-hel1-1:~# curl -LO https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubectl && chmod +x kubectl && mv kubectl /usr/local/bin/
root@ubuntu-16gb-hel1-1:~# chmod 400 .kube/config
root@ubuntu-16gb-hel1-1:~# kubectl cluster-info
Kubernetes control plane is running at https://172.17.0.2:6443
CoreDNS is running at https://172.17.0.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The cluster is now up and running with two worker nodes (the controller, node0, isn't listed: by default a k0s controller runs isolated, without a kubelet, so it never registers as a node):

root@ubuntu-16gb-hel1-1:~# kubectl get nodes -o wide
NAME    STATUS   ROLES    AGE     VERSION       INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
node1   Ready    <none>   4m58s   v1.29.2+k0s   172.17.0.3    <none>        Ubuntu 22.04.3 LTS   5.15.0-100-generic   containerd://1.7.13
node2   Ready    <none>   4m58s   v1.29.2+k0s   172.17.0.4    <none>        Ubuntu 22.04.3 LTS   5.15.0-100-generic   containerd://1.7.13
root@ubuntu-16gb-hel1-1:~# kubectl get po,svc -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-555d98c87b-7dxxh          1/1     Running   0          5m
kube-system   pod/coredns-555d98c87b-9ngzm          1/1     Running   0          5m
kube-system   pod/konnectivity-agent-4fb6m          1/1     Running   0          5m5s
kube-system   pod/konnectivity-agent-kj99g          1/1     Running   0          5m5s
kube-system   pod/kube-proxy-6pbnw                  1/1     Running   0          5m5s
kube-system   pod/kube-proxy-xg4mh                  1/1     Running   0          5m5s
kube-system   pod/kube-router-tvzsb                 1/1     Running   0          5m5s
kube-system   pod/kube-router-wt97k                 1/1     Running   0          5m5s
kube-system   pod/metrics-server-7556957bb7-qtgd2   1/1     Running   0          5m6s

NAMESPACE     NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP                  5m23s
kube-system   service/kube-dns         ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   5m14s
kube-system   service/metrics-server   ClusterIP   10.100.127.105   <none>        443/TCP                  5m10s

I'll use the Docker image from the Kasmweb project to get an online desktop inside this cluster:

For that, I first convert the recipe given on Docker Hub into a YAML file for docker-compose via Composerize:

Composerize
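For reference, the Docker Hub one-liner fed into Composerize was presumably something like the following (an assumption, reconstructed from the generated docker-compose.yml shown further down rather than copied from Docker Hub):

# Reconstructed docker run command (derived from the compose file below).
docker run -it --shm-size=512m \
  -p 6901:6901 \
  -e VNC_PW=password \
  kasmweb/ubuntu-jammy-dind:1.14.0-rolling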

Then I convert it into Kubernetes YAML manifests via Kompose.
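Fetching the kompose binary isn't shown in the capture; assuming the v1.32.0 release recorded in the kompose.version annotation of the generated manifests, it could be done like this:

# Assumed download of kompose; its release assets follow the
# kompose-linux-<arch> naming convention.
curl -L https://github.com/kubernetes/kompose/releases/download/v1.32.0/kompose-linux-arm64 -o kompose
chmod +x kompose

With the binary in place, here are the compose file and the conversion: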

root@ubuntu-16gb-hel1-1:~# cat docker-compose.yml
name: ubuntu
services:
  ubuntu-jammy-dind:
    stdin_open: true
    tty: true
    shm_size: 512m
    ports:
      - 6901:6901
    environment:
      - VNC_PW=password
    image: kasmweb/ubuntu-jammy-dind:1.14.0-rolling
root@ubuntu-16gb-hel1-1:~# ./kompose convert
INFO Kubernetes file "ubuntu-jammy-dind-service.yaml" created
INFO Kubernetes file "ubuntu-jammy-dind-deployment.yaml" created
root@ubuntu-16gb-hel1-1:~# cat ubuntu-jammy-dind-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.32.0 (765fde254)
  labels:
    io.kompose.service: ubuntu-jammy-dind
  name: ubuntu-jammy-dind
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: ubuntu-jammy-dind
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.32.0 (765fde254)
      labels:
        io.kompose.network/ubuntu-default: "true"
        io.kompose.service: ubuntu-jammy-dind
    spec:
      containers:
        - env:
            - name: VNC_PW
              value: password
          image: kasmweb/ubuntu-jammy-dind:1.14.0-rolling
          name: ubuntu-jammy-dind
          ports:
            - containerPort: 6901
              hostPort: 6901
              protocol: TCP
          stdin: true
          tty: true
      restartPolicy: Always
root@ubuntu-16gb-hel1-1:~# cat ubuntu-jammy-dind-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.32.0 (765fde254)
  labels:
    io.kompose.service: ubuntu-jammy-dind
  name: ubuntu-jammy-dind
spec:
  ports:
    - name: "6901"
      port: 6901
      targetPort: 6901
  selector:
    io.kompose.service: ubuntu-jammy-dind

Deploying them into the k0s cluster…

root@ubuntu-16gb-hel1-1:~# kubectl apply -f ubuntu-jammy-dind-deployment.yaml
deployment.apps/ubuntu-jammy-dind created
root@ubuntu-16gb-hel1-1:~# kubectl apply -f ubuntu-jammy-dind-service.yaml
service/ubuntu-jammy-dind created
root@ubuntu-16gb-hel1-1:~# kubectl get po,svc -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       pod/ubuntu-jammy-dind-5d84c9847f-vhftx   1/1     Running   0          3m37s
kube-system   pod/coredns-555d98c87b-7dxxh             1/1     Running   0          24m
kube-system   pod/coredns-555d98c87b-9ngzm             1/1     Running   0          24m
kube-system   pod/konnectivity-agent-4fb6m             1/1     Running   0          24m
kube-system   pod/konnectivity-agent-kj99g             1/1     Running   0          24m
kube-system   pod/kube-proxy-6pbnw                     1/1     Running   0          24m
kube-system   pod/kube-proxy-xg4mh                     1/1     Running   0          24m
kube-system   pod/kube-router-tvzsb                    1/1     Running   0          24m
kube-system   pod/kube-router-wt97k                    1/1     Running   0          24m
kube-system   pod/metrics-server-7556957bb7-qtgd2      1/1     Running   0          24m

NAMESPACE     NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes          ClusterIP   10.96.0.1        <none>        443/TCP                  24m
default       service/ubuntu-jammy-dind   ClusterIP   10.97.43.198     <none>        6901/TCP                 3m30s
kube-system   service/kube-dns            ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   24m
kube-system   service/metrics-server      ClusterIP   10.100.127.105   <none>        443/TCP                  24m

I expose the service via kubectl to reach the desktop through noVNC with the default credentials:

root@ubuntu-16gb-hel1-1:~# kubectl port-forward svc/ubuntu-jammy-dind 6901:6901 --address='0.0.0.0'
Forwarding from 0.0.0.0:6901 -> 6901
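The desktop should then be reachable in a browser at https://&lt;server-ip&gt;:6901 (Kasm workspace images serve HTTPS on that port), logging in with the image's default user, typically kasm_user, and the password set through VNC_PW above.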

As in the previous article, let's grab Ollama for a test…

curl -fsSL https://ollama.com/install.sh | sh 

with Llama2:
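Pulling the model and opening an interactive prompt is a single command:

# Downloads llama2 on first run, then starts an interactive chat session.
ollama run llama2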

More usage examples can be found in the GitHub repository, notably with Ansible or Ignite…

To be continued!
