
Karim

Originally published at deep75.Medium

Installing Canonical Kubernetes in LXD with Juju …

Canonical Kubernetes, which we explored in the previous article,

k8s-snap (Canonical Kubernetes) for a simple, fast deployment of a k8s cluster …

can also be installed in LXD. This is an excellent way, for example, to test Canonical Kubernetes as a cluster without needing several physical hosts.

LXD: five easy pieces | Ubuntu

I start from a simple Ubuntu 22.04 LTS instance on DigitalOcean:

curl -X POST -H 'Content-Type: application/json' \
     -H 'Authorization: Bearer '$TOKEN'' \
     -d '{"name":"lxd", "size":"s-4vcpu-16gb-amd", "region":"fra1", "image":"ubuntu-22-04-x64", "vpc_uuid":"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}' \
     "https://api.digitalocean.com/v2/droplets"
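
Once the droplet is up, its public IPv4 address can be read back from the same API. A minimal sketch, assuming the droplet was created with the name lxd as above and that jq is available locally:

# List droplets and print the public IPv4 of the one named "lxd"
# (sketch only: relies on $TOKEN holding a valid DigitalOcean API token and on jq being installed)
curl -s -X GET -H 'Content-Type: application/json' \
     -H 'Authorization: Bearer '$TOKEN'' \
     "https://api.digitalocean.com/v2/droplets" \
  | jq -r '.droplets[] | select(.name == "lxd") | .networks.v4[] | select(.type == "public") | .ip_address'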

where I install and initialize LXD as well as the Juju orchestrator:

root@lxd:~# snap remove lxd --purge && snap install lxd --candidate
lxd removed
lxd (candidate) 5.21.0-eb655b9 from Canonical✓ installed

root@lxd:~# snap list
Name    Version         Rev    Tracking          Publisher   Notes
core20  20230207        1828   latest/stable     canonical✓  base
core22  20240111        1122   latest/stable     canonical✓  base
lxd     5.21.0-eb655b9  27957  latest/candidate  canonical✓  -
snapd   2.58.2          18357  latest/stable     canonical✓  snapd

root@lxd:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, powerflex, zfs, btrfs, ceph) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

The simplest way to deploy and maintain applications in the cloud

root@lxd:~# snap install juju
juju (3/stable) 3.4.0 from Canonical✓ installed

Editing the default LXD profile so that it can host Canonical Kubernetes, using this template …

Install Canonical Kubernetes in LXD

name: default
description: Default LXD profile
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,iptable_raw,netlink_diag,nf_nat,overlay,br_netfilter,xt_socket
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
devices:
  aadisable2:
    path: /dev/kmsg
    source: /dev/kmsg
    type: unix-char
  dev-loop-control:
    major: "10"
    minor: "237"
    path: /dev/loop-control
    type: unix-char
  dev-loop0:
    major: "7"
    minor: "0"
    path: /dev/loop0
    type: unix-block
  dev-loop1:
    major: "7"
    minor: "1"
    path: /dev/loop1
    type: unix-block
  dev-loop2:
    major: "7"
    minor: "2"
    path: /dev/loop2
    type: unix-block
  dev-loop3:
    major: "7"
    minor: "3"
    path: /dev/loop3
    type: unix-block
  dev-loop4:
    major: "7"
    minor: "4"
    path: /dev/loop4
    type: unix-block
  dev-loop5:
    major: "7"
    minor: "5"
    path: /dev/loop5
    type: unix-block
  dev-loop6:
    major: "7"
    minor: "6"
    path: /dev/loop6
    type: unix-block
  dev-loop7:
    major: "7"
    minor: "7"
    path: /dev/loop7
    type: unix-block
  dev-loop8:
    major: "7"
    minor: "8"
    path: /dev/loop8
    type: unix-block
  dev-loop9:
    major: "7"
    minor: "9"
    path: /dev/loop9
    type: unix-block
  dev-loop10:
    major: "7"
    minor: "10"
    path: /dev/loop10
    type: unix-block
  dev-loop11:
    major: "7"
    minor: "11"
    path: /dev/loop11
    type: unix-block
  dev-loop12:
    major: "7"
    minor: "12"
    path: /dev/loop12
    type: unix-block
  dev-loop13:
    major: "7"
    minor: "13"
    path: /dev/loop13
    type: unix-block
  dev-loop14:
    major: "7"
    minor: "14"
    path: /dev/loop14
    type: unix-block
  dev-loop15:
    major: "7"
    minor: "15"
    path: /dev/loop15
    type: unix-block
  dev-loop16:
    major: "7"
    minor: "16"
    path: /dev/loop16
    type: unix-block
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
used_by: []

Applying this default profile in LXD:

root@lxd:~# lxc profile edit default < lxd-profile-default.yaml

root@lxd:~# lxc profile show default
name: default
description: Default LXD profile
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,iptable_raw,netlink_diag,nf_nat,overlay,br_netfilter,xt_socket
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
devices:
  aadisable2:
    path: /dev/kmsg
    source: /dev/kmsg
    type: unix-char
  dev-loop-control:
    major: "10"
    minor: "237"
    path: /dev/loop-control
    type: unix-char
  dev-loop0:
    major: "7"
    minor: "0"
    path: /dev/loop0
    type: unix-block
  dev-loop1:
    major: "7"
    minor: "1"
    path: /dev/loop1
    type: unix-block
  dev-loop2:
    major: "7"
    minor: "2"
    path: /dev/loop2
    type: unix-block
  dev-loop3:
    major: "7"
    minor: "3"
    path: /dev/loop3
    type: unix-block
  dev-loop4:
    major: "7"
    minor: "4"
    path: /dev/loop4
    type: unix-block
  dev-loop5:
    major: "7"
    minor: "5"
    path: /dev/loop5
    type: unix-block
  dev-loop6:
    major: "7"
    minor: "6"
    path: /dev/loop6
    type: unix-block
  dev-loop7:
    major: "7"
    minor: "7"
    path: /dev/loop7
    type: unix-block
  dev-loop8:
    major: "7"
    minor: "8"
    path: /dev/loop8
    type: unix-block
  dev-loop9:
    major: "7"
    minor: "9"
    path: /dev/loop9
    type: unix-block
  dev-loop10:
    major: "7"
    minor: "10"
    path: /dev/loop10
    type: unix-block
  dev-loop11:
    major: "7"
    minor: "11"
    path: /dev/loop11
    type: unix-block
  dev-loop12:
    major: "7"
    minor: "12"
    path: /dev/loop12
    type: unix-block
  dev-loop13:
    major: "7"
    minor: "13"
    path: /dev/loop13
    type: unix-block
  dev-loop14:
    major: "7"
    minor: "14"
    path: /dev/loop14
    type: unix-block
  dev-loop15:
    major: "7"
    minor: "15"
    path: /dev/loop15
    type: unix-block
  dev-loop16:
    major: "7"
    minor: "16"
    path: /dev/loop16
    type: unix-block
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
used_by: []

Followed by launching a local Juju controller in LXD:

root@lxd:~# mkdir -p ~/.local/share/juju

root@lxd:~# juju bootstrap localhost lxd
Creating Juju controller "lxd" on localhost/localhost
Looking for packaged Juju agent version 3.4.0 for amd64
Located Juju agent version 3.4.0-ubuntu-amd64 at https://streams.canonical.com/juju/tools/agent/3.4.0/juju-3.4.0-linux-amd64.tgz
To configure your system to better support LXD containers, please see: https://documentation.ubuntu.com/lxd/en/latest/explanation/performance_tuning/
Launching controller instance(s) on localhost/localhost...
 - juju-b6d031-0 (arch=amd64)
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 10.11.106.167:22
Connected to 10.11.106.167
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.11.106.167 to verify accessibility...

Bootstrap complete, controller "lxd" is now available
Controller machines are in the "controller" model

Now you can run
        juju add-model <model-name>
to create a new model to deploy workloads.

root@lxd:~# juju controllers
Use --refresh option with this command to see the latest information.

Controller  Model  User   Access     Cloud/Region         Models  Nodes  HA    Version
lxd*        -      admin  superuser  localhost/localhost       1      1  none  3.4.0

root@lxd:~# lxc ls
+---------------+---------+----------------------+------+-----------+-----------+
|     NAME      |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+---------------+---------+----------------------+------+-----------+-----------+
| juju-b6d031-0 | RUNNING | 10.11.106.167 (eth0) |      | CONTAINER | 0         |
+---------------+---------+----------------------+------+-----------+-----------+

Viewing the charms for Canonical Kubernetes before installation:

root@lxd:~# juju info k8s
name: k8s
publisher: Canonical Kubernetes
summary: A machine charm for K8s
description: "A machine charm which operates a complete Kubernetes cluster.\n\nThis charm installs and operates a Kubernetes cluster via the k8s snap. It exposes\nrelations to co-operate with other kubernetes components such as optional CNIs, \noptional cloud-providers, optional schedulers, external backing stores, and external\ncertificate storage.\n\nThis charm provides the following running components:\n* kube-apiserver\n* kube-scheduler\n* kube-controller-manager\n* kube-proxy\n* kubelet\n* containerd\n\nThis charm can optionally disable the following components:\n* A Kubernetes Backing Store\n* A Kubernetes CNI\n"
store-url: https://charmhub.io/k8s
charm-id: 5GG4ydqob6yLrcZ3rrscqL0gVJvK53Ij
supports: ubuntu@20.04, ubuntu@22.04
subordinate: false
relations:
  provides:
    cos-agent: cos_agent
    cos-worker-tokens: cos-tokens
    k8s-cluster: k8s-cluster
  requires: {}
channels: |
  latest/stable:    –
  latest/candidate: –
  latest/beta:      –
  latest/edge:      26  2024-03-29  (26)  3MB  amd64  ubuntu@20.04, ubuntu@22.04

root@lxd:~# juju info k8s-worker
name: k8s-worker
publisher: Canonical Kubernetes
summary: A machine charm for a K8s Worker
description: |
  A machine charm which operates a Kubernetes worker.

  This charm installs and operates a Kubernetes worker via the k8s snap. It exposes
  relations to co-operate with other kubernetes components

  This charm provides the following running components:
  * kube-proxy
  * kubelet
  * containerd
store-url: https://charmhub.io/k8s-worker
charm-id: ZL6PvT8yleZOMih0oKRYPgQI4FntLB48
supports: ubuntu@20.04, ubuntu@22.04
subordinate: false
relations:
  provides:
    cos-agent: cos_agent
  requires:
    cluster: k8s-cluster
    cos-tokens: cos-tokens
channels: |
  latest/stable:    –
  latest/candidate: –
  latest/beta:      –
  latest/edge:      27  2024-03-29  (27)  6MB  amd64  ubuntu@20.04, ubuntu@22.04

These charms can be deployed with Juju, starting here with the controller node of the future Kubernetes cluster …

Install Canonical Kubernetes from a charm

root@lxd:~# juju add-model k8s
Added 'k8s' model on localhost/localhost with credential 'localhost' for user 'admin'

root@lxd:~# juju deploy k8s --channel=latest/edge
Deployed "k8s" from charm-hub charm "k8s", revision 26 in channel latest/edge on ubuntu@22.04/stable

root@lxd:~# juju status
Model  Controller  Cloud/Region         Version  SLA          Timestamp
k8s    lxd         localhost/localhost  3.4.0    unsupported  12:37:00Z

App  Version  Status   Scale  Charm  Channel      Rev  Exposed  Message
k8s           waiting      1  k8s    latest/edge   26  no       Cluster not yet ready

Unit    Workload  Agent  Machine  Public address  Ports  Message
k8s/0*  waiting   idle   0        10.11.106.116          Cluster not yet ready

Machine  State    Address        Inst id        Base          AZ  Message
0        started  10.11.106.116  juju-85fd7d-0  ubuntu@22.04      Running

Deploying two worker nodes:

root@lxd:~# juju deploy k8s-worker --channel=latest/edge -n 2
Deployed "k8s-worker" from charm-hub charm "k8s-worker", revision 27 in channel latest/edge on ubuntu@22.04/stable

root@lxd:~# juju integrate k8s k8s-worker:cluster
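
The two applications take a few minutes to settle, and their progress can be followed without re-typing juju status by hand. A small sketch using the standard watch utility (Juju 3.x also ships a juju wait-for command, worth checking in its documentation):

# Re-run "juju status" every 5 seconds until k8s and k8s-worker both report "active"
watch -n 5 juju status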

The Kubernetes cluster is then up and running:

root@lxd:~# juju status
Model  Controller  Cloud/Region         Version  SLA          Timestamp
k8s    lxd         localhost/localhost  3.4.0    unsupported  12:39:53Z

App         Version  Status  Scale  Charm       Channel      Rev  Exposed  Message
k8s         1.29.3   active      1  k8s         latest/edge   26  no       Ready
k8s-worker  1.29.3   active      2  k8s-worker  latest/edge   27  no       Ready

Unit           Workload  Agent  Machine  Public address  Ports     Message
k8s-worker/0*  active    idle   1        10.11.106.20              Ready
k8s-worker/1   active    idle   2        10.11.106.178             Ready
k8s/0*         active    idle   0        10.11.106.116   6443/tcp  Ready

Machine  State    Address        Inst id        Base          AZ  Message
0        started  10.11.106.116  juju-85fd7d-0  ubuntu@22.04      Running
1        started  10.11.106.20   juju-85fd7d-1  ubuntu@22.04      Running
2        started  10.11.106.178  juju-85fd7d-2  ubuntu@22.04      Running

Connecting to the cluster's controller node to check its status:

root@lxd:~# juju ssh k8s/0

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@juju-85fd7d-0:~$ sudo k8s status
status: ready
high-availability: no
datastore:
  voter-nodes:
    - 10.11.106.116:6400
  standby-nodes: none
  spare-nodes: none
network:
  enabled: true
dns:
  enabled: true
  cluster-domain: cluster.local
  service-ip: 10.152.183.28
  upstream-nameservers:
    - /etc/resolv.conf
metrics-server:
  enabled: true

ubuntu@juju-85fd7d-0:~$ sudo k8s kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/coredns:udp-53/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

ubuntu@juju-85fd7d-0:~$ sudo k8s kubectl get nodes -o wide
NAME            STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
juju-85fd7d-0   Ready    control-plane,worker   7m10s   v1.29.3   10.11.106.116   <none>        Ubuntu 22.04.4 LTS   5.15.0-67-generic   containerd://1.6.28
juju-85fd7d-1   Ready    worker                 3m23s   v1.29.3   10.11.106.20    <none>        Ubuntu 22.04.4 LTS   5.15.0-67-generic   containerd://1.6.28
juju-85fd7d-2   Ready    worker                 3m10s   v1.29.3   10.11.106.178   <none>        Ubuntu 22.04.4 LTS   5.15.0-67-generic   containerd://1.6.28

ubuntu@juju-85fd7d-0:~$ sudo k8s kubectl get po,svc -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   pod/cilium-b5hkt                       1/1     Running   0          7m13s
kube-system   pod/cilium-gghfz                       1/1     Running   0          3m17s
kube-system   pod/cilium-kzwpz                       1/1     Running   0          3m30s
kube-system   pod/cilium-operator-597b55b886-jhqkq   1/1     Running   0          7m13s
kube-system   pod/coredns-66579b5b88-x9dv2           1/1     Running   0          7m13s
kube-system   pod/metrics-server-57db9dfb7b-lnllv    1/1     Running   0          7m13s

NAMESPACE     NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes       ClusterIP   10.152.183.1     <none>        443/TCP         7m20s
kube-system   service/coredns          ClusterIP   10.152.183.28    <none>        53/UDP,53/TCP   7m21s
kube-system   service/hubble-peer      ClusterIP   10.152.183.184   <none>        443/TCP         7m21s
kube-system   service/metrics-server   ClusterIP   10.152.183.60    <none>        443/TCP         7m20s

I can retrieve the kubeconfig file with Juju:

root@lxd:~# juju ssh k8s/0 'sudo k8s config'
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHekNDQWdPZ0F3SUJBZ0lRQ2RCNysvUWI1Ums4MzdmRGJhWlp2ekFOQmdrcWhraUc5dzBCQVFzRkFEQVkKTVJZd0ZBWURWUVFERXcxcmRXSmxjbTVsZEdWekxXTmhNQjRYRFRJME1ETXpNVEV5TXpVeE1Gb1hEVFEwTURNegpNVEV5TXpVeE1Gb3dHREVXTUJRR0ExVUVBeE1OYTNWaVpYSnVaWFJsY3kxallUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNK1VRNllEVS9vaEVrWjhGcHlnQ0Y3N1JHaWdEMUdEZUR5ekdCeTIKZjhleVhZMGRUWlFJcmw0amtIL29SS2tiRnB4QnZVSVlzWFV5RlpkOUxKM3VFMWhPQTNDNXYzM0VOUVhrdU5pNQpELy9kRHNyZ0pHWitzTnJKSTIzSWZOV1VXanNLc1hzejFzZXN5cHRYb2dYbytkTkxxRitVTjFnUjVBOTZ6VVB0ClZOZFJLKzUyaTFxcWcxeUFhczcybVhwVkNQZUxxSi9adXhDSFhTcE8rcWt0Q2pSeUVTKzJmVFVTaUhtRlQ5clMKVG9vZndTZ0RkWHJlMmsvN3NFSjg5aGk2NFpQeVQyZDFPYzcxbzEyQUlyZ0VBYlorZnJ0NnFCSVhENkkxSEJzKwphZlh1UmVmTyswUEFVYzgxN25QaGhTWW1OcGJrdk5vdFF4WlRZRGRONEhEbEJUVUNBd0VBQWFOaE1GOHdEZ1lEClZSMFBBUUgvQkFRREFnS0VNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01DQmdnckJnRUZCUWNEQVRBUEJnTlYKSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnFobUlNNzlRVnFYMks1YXVuVUdBMENYOTBkVEFOQmdrcQpoa2lHOXcwQkFRc0ZBQU9DQVFFQWNnaUJad0R3MTlVVlFaQVpHRCs5THFSOFBycXJyMlRqVFh0Q1RkbzFGWWZICnlla2IzdmV1YjJUblNOT2RUeEhjWmtlR2d3QXFYV216Z1MrSHl5TFZSTlY2WTNJK0ZKQmJpVkJJOTV0TlRmaG4KMEZvSEVzWUNSY3Q5L1JVQkVOTTFsZDY1bWFVNFc2TWFvdE5MOVpSczNGTDJ3SmRKTjdCVEJIamxYZzhrckkrOQpOdlcvTWZCS0FvcWRleXNBMHJHY3RlczVWZm1YUDRMa1hxNXhQNmV1MWhmSU1ObXM1UTk4QTdsck9TZG5BNzJYClFNKzBBc1ovckhZQ0hnRUFqNjdIK0drTFhQeEFzelZ4UTZpRHRJMmhyWVVKYmNFSTg1REF0RFNhc2xGS2JHLzMKNU53b1U0VTZjVXMxcHZiTjFsVFZvTGVFTHFoYU9EMkdiUnRYb0h4UnFBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.11.106.116:6443
  name: k8s
contexts:
- context:
    cluster: k8s
    user: k8s-user
  name: k8s
current-context: k8s
kind: Config
preferences: {}
users:
- name: k8s-user
  user:
    token: token::00669e4efab3674984256068943beee135d06d1a
Connection to 10.11.106.116 closed.

To use the kubectl client locally:

root@lxd:~# juju expose k8s

root@lxd:~# snap install kubectl --classic
snap "kubectl" is already installed, see 'snap help refresh'

root@lxd:~# mkdir .kube

root@lxd:~# juju ssh k8s/0 'sudo k8s config' > .kube/config
Connection to 10.11.106.116 closed.

root@lxd:~# chmod 400 .kube/config

root@lxd:~# kubectl cluster-info
Kubernetes control plane is running at https://10.11.106.116:6443
CoreDNS is running at https://10.11.106.116:6443/api/v1/namespaces/kube-system/services/coredns:udp-53/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

root@lxd:~# kubectl get nodes -o wide
NAME            STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
juju-85fd7d-0   Ready    control-plane,worker   11m     v1.29.3   10.11.106.116   <none>        Ubuntu 22.04.4 LTS   5.15.0-67-generic   containerd://1.6.28
juju-85fd7d-1   Ready    worker                 7m47s   v1.29.3   10.11.106.20    <none>        Ubuntu 22.04.4 LTS   5.15.0-67-generic   containerd://1.6.28
juju-85fd7d-2   Ready    worker                 7m34s   v1.29.3   10.11.106.178   <none>        Ubuntu 22.04.4 LTS   5.15.0-67-generic   containerd://1.6.28

root@lxd:~# lxc ls
+---------------+---------+--------------------------+------+-----------+-----------+
|     NAME      |  STATE  |           IPV4           | IPV6 |   TYPE    | SNAPSHOTS |
+---------------+---------+--------------------------+------+-----------+-----------+
| juju-85fd7d-0 | RUNNING | 10.11.106.116 (eth0)     |      | CONTAINER | 0         |
|               |         | 10.1.0.231 (cilium_host) |      |           |           |
+---------------+---------+--------------------------+------+-----------+-----------+
| juju-85fd7d-1 | RUNNING | 10.11.106.20 (eth0)      |      | CONTAINER | 0         |
|               |         | 10.1.1.16 (cilium_host)  |      |           |           |
+---------------+---------+--------------------------+------+-----------+-----------+
| juju-85fd7d-2 | RUNNING | 10.11.106.178 (eth0)     |      | CONTAINER | 0         |
|               |         | 10.1.2.20 (cilium_host)  |      |           |           |
+---------------+---------+--------------------------+------+-----------+-----------+
| juju-b6d031-0 | RUNNING | 10.11.106.167 (eth0)     |      | CONTAINER | 0         |
+---------------+---------+--------------------------+------+-----------+-----------+
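
As a quick smoke test of the freshly configured kubectl (a throwaway example, not part of the original walkthrough), a small nginx deployment can be scheduled on the LXD-hosted workers and removed right away:

# Throwaway smoke test: schedule a pod, check where it lands, then clean up
kubectl create deployment nginx-test --image=nginx
kubectl rollout status deployment/nginx-test
kubectl get pods -o wide -l app=nginx-test
kubectl delete deployment nginx-test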

As in the previous article, I can take advantage of Ollama and open LLMs via Helm:

root@lxd:~# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Downloading https://get.helm.sh/helm-v3.14.3-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm

root@lxd:~# helm ls
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

root@lxd:~# kubectl create namespace ollama
namespace/ollama created

root@lxd:~# helm install ollama ollama-helm/ollama --namespace ollama
NAME: ollama
LAST DEPLOYED:
NAMESPACE: ollama
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace ollama -l "app.kubernetes.io/name=ollama,app.kubernetes.io/instance=ollama" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace ollama $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace ollama port-forward $POD_NAME 8080:$CONTAINER_PORT

root@lxd:~# helm ls --namespace ollama
NAME    NAMESPACE  REVISION  UPDATED                                  STATUS    CHART          APP VERSION
ollama  ollama     1         2024-03-31 13:11:43.199405708 +0000 UTC  deployed  ollama-0.21.1  0.1.30

root@lxd:~# kubectl get po,svc -n ollama
NAME                          READY   STATUS    RESTARTS   AGE
pod/ollama-75c7b79658-rnf7h   1/1     Running   0          94s

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/ollama   ClusterIP   10.152.183.78   <none>        11434/TCP   94s
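
Note that the helm install above references the ollama-helm/ollama chart, so that chart repository has to be registered beforehand. A hedged sketch, assuming the community chart published by the otwld/ollama-helm project (repository URL to verify against that project's documentation):

# Register the community Ollama chart repository before running "helm install"
# (assumed URL: the GitHub Pages endpoint of the otwld/ollama-helm project)
helm repo add ollama-helm https://otwld.github.io/ollama-helm/
helm repo update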

And I can expose Ollama's RESTful API locally:

root@lxd:~# kubectl port-forward service/ollama -n ollama 11434:11434 --address='0.0.0.0'
Forwarding from 0.0.0.0:11434 -> 11434

root@lxd:~# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1399/sshd: /usr/sbi
tcp        0      0 10.11.106.1:53          0.0.0.0:*               LISTEN      9929/dnsmasq
tcp        0      0 0.0.0.0:11434           0.0.0.0:*               LISTEN      52634/kubectl
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      623/systemd-resolve
tcp6       0      0 :::8443                 :::*                    LISTEN      9784/lxd
tcp6       0      0 :::22                   :::*                    LISTEN      1399/sshd: /usr/sbi
udp        0      0 10.11.106.1:53          0.0.0.0:*                           9929/dnsmasq
udp        0      0 127.0.0.53:53           0.0.0.0:*                           623/systemd-resolve
udp        0      0 0.0.0.0:67              0.0.0.0:*                           9929/dnsmasq
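
With the port forwarded, the API answers on localhost. A quick check against Ollama's documented REST endpoints (the generate call only makes sense once a model has been pulled, as shown further below):

# List the models already present on the server
curl -s http://localhost:11434/api/tags

# Request a one-shot completion from codellama (pulled in a later step)
curl -s http://localhost:11434/api/generate \
     -d '{"model": "codellama", "prompt": "Write a bash loop that prints 1 to 5", "stream": false}'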

Fetching the Ollama client from GitHub:

Releases · ollama/ollama

root@lxd:~# wget -c https://github.com/ollama/ollama/releases/download/v0.1.30/ollama-linux-amd64 && chmod +x ollama-linux-amd64 && mv ollama-linux-amd64 /usr/local/bin/ollama
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 297108760 (283M) [application/octet-stream]
Saving to: ‘ollama-linux-amd64’

ollama-linux-amd64    100%[==================================================>] 283.34M   351MB/s    in 0.8s

(351 MB/s) - ‘ollama-linux-amd64’ saved [297108760/297108760]
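
The CLI talks to http://127.0.0.1:11434 by default, which is exactly where the service was just port-forwarded, so no extra configuration is needed here; the OLLAMA_HOST variable only comes into play if the API were exposed on another address or port. A small sketch:

# Optional: point the Ollama CLI at a non-default endpoint, then list available models
export OLLAMA_HOST=http://127.0.0.1:11434
ollama list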

To run, for example, Code Llama, a family of code-oriented LLMs based on Llama 2:

root@lxd:~# ollama run codellama
pulling manifest
pulling 3a43f93b78ec... 100% ▕████████████████████████████████████████████████▏ 3.8 GB
pulling 8c17c2ebb0ea... 100% ▕████████████████████████████████████████████████▏ 7.0 KB
pulling 590d74a5569b... 100% ▕████████████████████████████████████████████████▏ 4.8 KB
pulling 2e0493f67d0c... 100% ▕████████████████████████████████████████████████▏   59 B
pulling 7f6a57943a88... 100% ▕████████████████████████████████████████████████▏  120 B
pulling 316526ac7323... 100% ▕████████████████████████████████████████████████▏  529 B
verifying sha256 digest
writing manifest
removing any unused layers
success

With these small examples, which still need some polishing …

>>> How deploy manifest with Juju inside Kubernetes cluster ?

To deploy a manifest file with Juju inside a Kubernetes cluster, you can follow these steps:

1. Install Juju on your local machine and connect it to the Kubernetes cluster that you want to deploy the manifest file to.

2. Create a Juju model for the Kubernetes cluster by running the following command:

   juju add-model <cluster-name>

   Replace `<cluster-name>` with the name of your Kubernetes cluster.

3. Set the deployment environment to the Kubernetes cluster that you want to deploy the manifest file to by running the following command:

   juju set-environment <cluster-name>

4. Create a new Juju application using the `juju add` command, passing in the name of the application and the path to the manifest file as arguments. For example:

   juju add my-application manifests/my-application.yaml

   Replace `<my-application>` with the name of your application and `<manifests/my-application.yaml>` with the path to your manifest file.

5. Deploy the application by running the following command:

   juju deploy my-application

   This will deploy the application using the configuration specified in the manifest file.

6. Once the deployment is complete, you can manage the application using standard Juju commands, such as `juju status` to view the status of the application and `juju logs` to view the logs for the application.

7. To remove the application from the Kubernetes cluster, you can use the `juju destroy` command:

   juju destroy my-application

   This will destroy the application and all its resources in the Kubernetes cluster.

Note that this is a basic example of how to deploy a manifest file with Juju inside a Kubernetes cluster. There are many other options and features available in Juju, so you can explore those as well.

>>> How deploy a Pod with Rust inside Kubernetes ?

To deploy a Pod with Rust inside a Kubernetes cluster, you can follow these steps:

1. Install the `cargo` package on your local machine by running the following command:

   sudo apt-get install cargo

2. Create a new Rust project using the `cargo new` command, passing in the name of your project as an argument. For example:

   cargo new my-rust-project

3. Change into the directory of your new project by running the following command:

   cd my-rust-project

4. Add the `kubernetes` crate to your projects dependencies by adding the following line to your `Cargo.toml` file:

   [dependencies]
   kubernetes = "0.18"

5. Create a new Rust file for your Kubernetes deployment, such as `deployment.rs`, and add the following code to it:

   use kubernetes::api::Api;
   use kubernetes::config::Configuration;
   use kubernetes::apis::CoreApi;

   fn main() {
       let config = Configuration::new("https://<your-kubernetes-cluster>");
       let client = Api::all(config);
       let pod = client.create_namespaced_pod(
           "default",
           &Api::Pod("my-rust-deployment"),
           None,
       );
   }

   Replace `<your-kubernetes-cluster>` with the URL of your Kubernetes cluster.

6. Compile your Rust code by running the following command:

   cargo build --release

7. Run your Rust code by running the following command:

   ./target/release/deployment

   This will deploy a new Pod with Rust inside the `default` namespace of your Kubernetes cluster. You can view the status of your Pod using the `kubectl get pods` command.

Note that this is a basic example of how to deploy a Pod with Rust inside a Kubernetes cluster. There are many other options and features available in Rust, so you can explore those as well

>>> Send a message (/? for help)

Canonical Kubernetes can also be installed inside an LXD container using the profile described above, as explained in this detailed documentation …

Install Canonical Kubernetes in LXD

Not to mention that Ollama and a user-friendly graphical interface such as Open WebUI could be installed via Snap or Helm:
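
For example, a hedged sketch with Helm, assuming the chart repository at https://helm.openwebui.com/ and the ollama.enabled / ollamaUrls values exposed by that chart (both the URL and the value names are assumptions to verify against the Open WebUI documentation):

# Sketch only: install Open WebUI next to Ollama and point it at the in-cluster service
# (assumed chart repository URL and value names; check the Open WebUI docs before use)
helm repo add open-webui https://helm.openwebui.com/
helm repo update
helm install open-webui open-webui/open-webui \
     --namespace open-webui --create-namespace \
     --set ollama.enabled=false \
     --set "ollamaUrls={http://ollama.ollama.svc.cluster.local:11434}"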

To be continued!
