
Karim

Posted on • Originally published at deep75.Medium

Another way to run OpenStack on top of Kubernetes, with Atmosphere …

Several projects exist for deploying OpenStack on top of a Kubernetes cluster, starting with OpenStack Helm:

But so does the open source Atmosphere project from the cloud provider Vexxhost, which provides a wide range of infrastructure features, including virtual machines, Kubernetes, bare metal, block and object storage, load balancers as a service, and more …

Atmosphere deployment tool | VEXXHOST

Atmosphere runs OpenStack on top of Kubernetes. This setup enables simple and easy deployments, upgrades and health checks.

VEXXHOST Launches Atmosphere, a New Open Source, OpenStack Deployment Tool

GitHub - vexxhost/atmosphere: Simple & easy private cloud platform featuring VMs, Kubernetes & bare-metal

I will simply follow the modus operandi proposed by Vexxhost for a bare-metal server:

For that, I start from a PhoenixNAP server running Ubuntu 22.04 LTS:

I update it and install the packages needed to kick off the Atmosphere deployment:

```
ubuntu@atmosphere:~$ sudo apt-get update && sudo apt-get install git python3-pip -y && sudo pip install poetry
ubuntu@atmosphere:~$ git clone https://github.com/vexxhost/atmosphere.git
Cloning into 'atmosphere'...
remote: Enumerating objects: 25111, done.
remote: Counting objects: 100% (825/825), done.
remote: Compressing objects: 100% (398/398), done.
remote: Total 25111 (delta 377), reused 704 (delta 306), pack-reused 24286
Receiving objects: 100% (25111/25111), 10.78 MiB | 50.17 MiB/s, done.
Resolving deltas: 100% (14299/14299), done.
ubuntu@atmosphere:~$ cd atmosphere/
ubuntu@atmosphere:~/atmosphere$ ls
CHANGELOG.md  Jenkinsfile  build   doc         galaxy.yml  hack      meta        playbooks    pyproject.toml               test-requirements.txt  tox.ini
Dockerfile    README.md    charts  flake.lock  go.mod      images    mkdocs.yml  plugins      release-please-config.json  tests                  zuul.d
Earthfile     atmosphere   cmd     flake.nix   go.sum      internal  molecule    poetry.lock  roles                        tools
ubuntu@atmosphere:~/atmosphere$ sudo poetry install --with dev
Creating virtualenv atmosphere-NEvTTHEY-py3.10 in /root/.cache/pypoetry/virtualenvs
Installing dependencies from lock file

Package operations: 89 installs, 0 updates, 0 removals

  - Installing attrs (23.2.0)
  - Installing pycparser (2.22)
  - Installing rpds-py (0.18.0)
  - Installing cffi (1.16.0)
  - Installing markupsafe (2.1.5)
  - Installing mdurl (0.1.2)
  - Installing referencing (0.35.0)
  - Installing cryptography (42.0.5)
  - Installing jinja2 (3.1.3)
  - Installing jsonschema-specifications (2023.12.1)
  - Installing markdown-it-py (3.0.0)
  - Installing packaging (24.0)
  - Installing pbr (6.0.0)
  - Installing pygments (2.17.2)
  - Installing pyyaml (6.0.1)
  - Installing resolvelib (1.0.1)
  - Installing wrapt (1.16.0)
  - Installing ansible-core (2.16.6)
  - Installing bracex (2.4)
  - Installing certifi (2024.2.2)
  - Installing charset-normalizer (3.3.2)
  - Installing click (8.1.7)
  - Installing debtcollector (3.0.0)
  - Installing idna (3.7)
  - Installing iso8601 (2.1.0)
  - Installing jsonschema (4.21.1)
  - Installing netaddr (0.8.0)
  - Installing netifaces (0.11.0)
  - Installing oslo-i18n (6.3.0)
  - Installing pyparsing (3.1.2)
  - Installing rich (13.7.1)
  - Installing subprocess-tee (0.4.1)
  - Installing tzdata (2024.1)
  - Installing urllib3 (2.2.1)
  - Installing ansible-compat (4.1.11)
  - Installing click-help-colors (0.9.4)
  - Installing decorator (5.1.1)
  - Installing distro (1.9.0)
  - Installing enrich (1.2.7)
  - Installing exceptiongroup (1.2.1)
  - Installing iniconfig (2.0.0)
  - Installing jsonpointer (2.4)
  - Installing mccabe (0.7.0)
  - Installing msgpack (1.0.8)
  - Installing os-service-types (1.7.0)
  - Installing oslo-utils (7.1.0)
  - Installing pluggy (1.5.0)
  - Installing pycodestyle (2.9.1)
  - Installing pyflakes (2.5.0)
  - Installing requests (2.31.0)
  - Installing rfc3986 (2.0.0)
  - Installing six (1.16.0)
  - Installing stevedore (5.2.0)
  - Installing tomli (2.0.1)
  - Installing typing-extensions (4.11.0)
  - Installing wcmatch (8.5.1)
  - Installing appdirs (1.4.4)
  - Installing coverage (7.5.0)
  - Installing docker (7.0.0)
  - Installing dogpile-cache (1.3.2)
  - Installing execnet (2.1.1)
  - Installing flake8 (5.0.4)
  - Installing isort (5.13.2)
  - Installing jmespath (1.0.1)
  - Installing jsonpatch (1.33)
  - Installing keystoneauth1 (5.6.0)
  - Installing molecule (6.0.3)
  - Installing munch (4.0.0)
  - Installing oslo-config (9.4.0)
  - Installing oslo-context (5.5.0)
  - Installing oslo-serialization (5.4.0)
  - Installing py (1.11.0)
  - Installing pyinotify (0.9.6)
  - Installing pytest (7.4.4)
  - Installing python-dateutil (2.9.0.post0)
  - Installing regex (2024.4.28)
  - Installing requestsexceptions (1.4.0)
  - Installing selinux (0.3.0)
  - Installing docker-image-py (0.1.12)
  - Installing flake8-isort (4.2.0)
  - Installing molecule-plugins (23.5.3)
  - Installing openstacksdk (0.62.0)
  - Installing oslo-log (5.5.1)
  - Installing pytest-cov (3.0.0)
  - Installing pytest-forked (1.6.0)
  - Installing pytest-mock (3.14.0)
  - Installing pytest-xdist (3.6.1)
  - Installing rjsonnet (0.5.4)
  - Installing ruyaml (0.91.0)

Installing the current project: atmosphere (1.10.4.post186.dev0+779cb921)
```

Very quickly, I launch the deployment, which here takes a little under an hour:

```
ubuntu@atmosphere:~/atmosphere$ sudo poetry run molecule converge -s aio
INFO     aio scenario test matrix: dependency, create, prepare, converge
INFO     Performing prerun with role_name_check=0...
INFO     Running aio > dependency
WARNING  Skipping, missing the requirements file.
WARNING  Skipping, missing the requirements file.
INFO     Running aio > create

PLAY [Wait for user to read warning] *******************************************

TASK [Gathering Facts] *********************************************************
Monday 03 June 2024  12:02:39 +0000 (0:00:00.020)       0:00:00.020 ***********
ok: [localhost]

. . . . .

PLAY [Configure networking] ****************************************************

TASK [Gathering Facts] *********************************************************
Monday 03 June 2024  12:49:26 +0000 (0:00:00.418)       0:45:34.621 ***********
ok: [instance]

TASK [Add IP address to "br-ex"] ***********************************************
Monday 03 June 2024  12:49:28 +0000 (0:00:02.159)       0:45:36.781 ***********
ok: [instance]

TASK [Set "br-ex" interface to "up"] *******************************************
Monday 03 June 2024  12:49:28 +0000 (0:00:00.153)       0:45:36.934 ***********
ok: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=669  changed=267  unreachable=0  failed=0  skipped=246  rescued=0  ignored=1

Monday 03 June 2024  12:49:28 +0000 (0:00:00.304)       0:45:37.239 ***********
===============================================================================
vexxhost.atmosphere.percona_xtradb_cluster : Apply Percona XtraDB cluster - 203.51s
vexxhost.atmosphere.cinder : Deploy Helm chart ------------------------ 157.84s
vexxhost.atmosphere.keycloak : Deploy Helm chart ---------------------- 156.65s
vexxhost.atmosphere.heat : Deploy Helm chart -------------------------- 120.67s
vexxhost.atmosphere.manila : Deploy Helm chart ------------------------ 104.84s
vexxhost.atmosphere.nova : Deploy Helm chart -------------------------- 100.67s
vexxhost.ceph.osd : Install OSDs --------------------------------------- 89.67s
vexxhost.atmosphere.glance : Deploy Helm chart ------------------------- 88.95s
vexxhost.atmosphere.magnum : Deploy Helm chart ------------------------- 88.06s
vexxhost.atmosphere.octavia : Deploy Helm chart ------------------------ 83.51s
vexxhost.atmosphere.keystone : Deploy Helm chart ----------------------- 80.67s
vexxhost.atmosphere.neutron : Deploy Helm chart ------------------------ 74.13s
vexxhost.atmosphere.barbican : Deploy Helm chart ----------------------- 67.48s
vexxhost.ceph.mon : Run Bootstrap coomand ------------------------------ 62.21s
vexxhost.atmosphere.placement : Deploy Helm chart ---------------------- 58.82s
vexxhost.kubernetes.cluster_api : Set node selector for Cluster API components -- 57.15s
vexxhost.atmosphere.glance_image : Download image ---------------------- 53.93s
vexxhost.atmosphere.glance_image : Check if image exists --------------- 50.38s
vexxhost.atmosphere.neutron : Create networks -------------------------- 36.62s
vexxhost.atmosphere.rabbitmq : Deploy cluster -------------------------- 35.65s
```
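While the converge runs, progress can be followed from a second shell on the node, since Atmosphere stands everything up as Kubernetes pods. A minimal sketch (the `openstack` namespace matches what the deployment uses here):

```bash
# One-shot overview of everything Atmosphere deploys (OpenStack, Ceph, monitoring, ...)
kubectl get pods -A

# Watch the OpenStack control-plane pods appear as each Helm chart is applied
kubectl get pods -n openstack -w
```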

The cluster is deployed, and I can check the endpoints available locally on this physical server:

```
root@instance:~# apt install python3-openstackclient -y
root@instance:~# source openrc
root@instance:~# openstack endpoint list
+----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------------------+
| ID                               | Region    | Service Name | Service Type    | Enabled | Interface | URL                                                                          |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------------------+
| 0584539654844f7f956088e43836ed4c | RegionOne | swift        | object-store    | True    | public    | https://object-store.131-153-200-197.nip.io/swift/v1/%(tenant_id)s |
| 069df7eb37584d91a22a95e7d36493d3 | RegionOne | manila       | share           | True    | internal  | http://manila-api.openstack.svc.cluster.local:8786/v1 |
| 08b5e9b96ddf4ce79f270f582dc6bf7b | RegionOne | manilav2     | sharev2         | True    | public    | https://share.131-153-200-197.nip.io/v2 |
| 0b95720b8c0e4de692acf18b769ce8be | RegionOne | heat-cfn     | cloudformation  | True    | admin     | http://heat-cfn.openstack.svc.cluster.local:8000/v1 |
| 120958efeb924a0a98bf03ce807c0ace | RegionOne | cinderv3     | volumev3        | True    | internal  | http://cinder-api.openstack.svc.cluster.local:8776/v3/%(tenant_id)s |
| 1427e413681e4518ac03d768bb44cb60 | RegionOne | placement    | placement       | True    | admin     | http://placement-api.openstack.svc.cluster.local:8778/ |
| 1611e222d7b8447596aa44c0965aec1a | RegionOne | cinderv3     | volumev3        | True    | public    | https://volume.131-153-200-197.nip.io/v3/%(tenant_id)s |
| 19047158ec1446b5b746682bc8d9dd93 | RegionOne | heat-cfn     | cloudformation  | True    | internal  | http://heat-cfn.openstack.svc.cluster.local:8000/v1 |
| 1e04bacc8aae44b3818de40e386cf68e | RegionOne | barbican     | key-manager     | True    | admin     | http://barbican-api.openstack.svc.cluster.local:9311/ |
| 27d7b0d81a104b54a3dcb8632297707f | RegionOne | keystone     | identity        | True    | admin     | http://keystone-api.openstack.svc.cluster.local:5000/ |
| 2c4218a3da784f98890bcb6ac6cb20ae | RegionOne | heat         | orchestration   | True    | admin     | http://heat-api.openstack.svc.cluster.local:8004/v1/%(project_id)s |
| 2f19518bd66c421eb6a5e25f7c93c96e | RegionOne | manila       | share           | True    | public    | http://manila.openstack.svc.cluster.local/v1 |
| 3578d37453854cfbafdc18f2be4f0a62 | RegionOne | glance       | image           | True    | public    | https://image.131-153-200-197.nip.io/ |
| 3a2e106afdbf43689676c85718319f88 | RegionOne | glance       | image           | True    | admin     | http://glance-api.openstack.svc.cluster.local:9292/ |
| 3e3240df5df847bc84c204e0b18783f1 | RegionOne | glance       | image           | True    | internal  | http://glance-api.openstack.svc.cluster.local:9292/ |
| 3f51c888203c433ea31d2ca67cf3e359 | RegionOne | manilav2     | sharev2         | True    | admin     | http://manila-api.openstack.svc.cluster.local:8786/v2 |
| 44f08551b541410d8bb28b3a148dca0f | RegionOne | heat         | orchestration   | True    | internal  | http://heat-api.openstack.svc.cluster.local:8004/v1/%(project_id)s |
| 6501cca34cd5401d8561522f062ec126 | RegionOne | magnum       | container-infra | True    | internal  | http://magnum-api.openstack.svc.cluster.local:9511/v1 |
| 65c4f687017d4ff4b774a61cd670de52 | RegionOne | barbican     | key-manager     | True    | public    | https://key-manager.131-153-200-197.nip.io/ |
| 695342e32bf84ae48cc0c873227ce1ce | RegionOne | heat         | orchestration   | True    | public    | https://orchestration.131-153-200-197.nip.io/v1/%(project_id)s |
| 71af00dd937e49c28827dadbe6d55bbe | RegionOne | swift        | object-store    | True    | internal  | http://rook-ceph-rgw-ceph.openstack.svc.cluster.local/swift/v1/%(tenant_id)s |
| 7ada3efd05b64af1ae726d495fe61a6e | RegionOne | cinderv3     | volumev3        | True    | admin     | http://cinder-api.openstack.svc.cluster.local:8776/v3/%(tenant_id)s |
| 87e74af69cbb4577b4dd29d106aed5fa | RegionOne | magnum       | container-infra | True    | public    | https://container-infra.131-153-200-197.nip.io/v1 |
| 8dae31f3eed946eea30025e56fbfd83a | RegionOne | nova         | compute         | True    | admin     | http://nova-api.openstack.svc.cluster.local:8774/v2.1 |
| 8e562226dccc4df0af3c8cdfdec95084 | RegionOne | barbican     | key-manager     | True    | internal  | http://barbican-api.openstack.svc.cluster.local:9311/ |
| 8f629f7e748d4c3087bf17aaa0ce2e47 | RegionOne | manila       | share           | True    | admin     | http://manila-api.openstack.svc.cluster.local:8786/v1 |
| 91149164a0584145b6344ae1d525457b | RegionOne | keystone     | identity        | True    | public    | https://identity.131-153-200-197.nip.io/ |
| a936e3ca42a54b6e97672f20e17c095b | RegionOne | neutron      | network         | True    | admin     | http://neutron-server.openstack.svc.cluster.local:9696/ |
| b987ba9af66c4a14ba42d1fd8c1b2285 | RegionOne | placement    | placement       | True    | public    | https://placement.131-153-200-197.nip.io/ |
| b9e2e43e90d649d5b6089c7474ba30cd | RegionOne | octavia      | load-balancer   | True    | internal  | http://octavia-api.openstack.svc.cluster.local:9876/ |
| bf2839c099f946e39d21a0b6bcad0b87 | RegionOne | neutron      | network         | True    | public    | https://network.131-153-200-197.nip.io/ |
| bf959da766554e91972fe1da9cff6d8e | RegionOne | heat-cfn     | cloudformation  | True    | public    | https://cloudformation.131-153-200-197.nip.io/v1 |
| d027a62b34ca4eaabc508c04ad8f94e7 | RegionOne | magnum       | container-infra | True    | admin     | http://magnum-api.openstack.svc.cluster.local:9511/v1 |
| d3e857109be14bfabf82e962068b54af | RegionOne | nova         | compute         | True    | public    | https://compute.131-153-200-197.nip.io/v2.1 |
| d6985c01aca54ec49e0ec1c728fc271b | RegionOne | nova         | compute         | True    | internal  | http://nova-api.openstack.svc.cluster.local:8774/v2.1 |
| d74feae56d034bd7ad421e4588f0731e | RegionOne | octavia      | load-balancer   | True    | admin     | http://octavia-api.openstack.svc.cluster.local:9876/ |
| e7bcb22c547e41489cc7d5732bca4a84 | RegionOne | manilav2     | sharev2         | True    | internal  | http://manila-api.openstack.svc.cluster.local:8786/v2 |
| f005b62e7769497eb54b0c0a3fa3c587 | RegionOne | octavia      | load-balancer   | True    | public    | https://load-balancer.131-153-200-197.nip.io/ |
| f40bb006a4984428b581042a78539ef8 | RegionOne | neutron      | network         | True    | internal  | http://neutron-server.openstack.svc.cluster.local:9696/ |
| f4d8af60368645c7a5f1d9605afb5494 | RegionOne | placement    | placement       | True    | internal  | http://placement-api.openstack.svc.cluster.local:8778/ |
| f5aa8f8ba519480199f224800480e8cb | RegionOne | keystone     | identity        | True    | internal  | http://keystone-api.openstack.svc.cluster.local:5000/ |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------------------+
```

As well as the entry points exposed via the underlying Kubernetes cluster's Ingress Controller:

```
root@instance:~# kubectl cluster-info
Kubernetes control plane is running at https://10.96.240.10:6443
CoreDNS is running at https://10.96.240.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@instance:~# kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
instance   Ready    control-plane   60m   v1.28.4   131.153.200.197   <none>        Ubuntu 22.04.4 LTS   5.15.0-105-generic   containerd://1.7.9
root@instance:~# kubectl get ing -A
NAMESPACE     NAME                                 CLASS        HOSTS                                             ADDRESS        PORTS     AGE
auth-system   keycloak                             atmosphere   keycloak.131-153-200-197.nip.io                   10.98.36.135   80, 443   54m
monitoring    kube-prometheus-stack-alertmanager   atmosphere   alertmanager.131-153-200-197.nip.io               10.98.36.135   80, 443   53m
monitoring    kube-prometheus-stack-grafana        atmosphere   grafana.131-153-200-197.nip.io                    10.98.36.135   80, 443   53m
monitoring    kube-prometheus-stack-prometheus     atmosphere   prometheus.131-153-200-197.nip.io                 10.98.36.135   80, 443   53m
openstack     cloudformation                       atmosphere   cloudformation.131-153-200-197.nip.io             10.98.36.135   80, 443   33m
openstack     compute                              atmosphere   compute.131-153-200-197.nip.io                    10.98.36.135   80, 443   38m
openstack     compute-novnc-proxy                  atmosphere   vnc.131-153-200-197.nip.io                        10.98.36.135   80, 443   38m
openstack     container-infra                      atmosphere   container-infra.131-153-200-197.nip.io            10.98.36.135   80, 443   27m
openstack     container-infra-registry             atmosphere   container-infra-registry.131-153-200-197.nip.io   10.98.36.135   80, 443   27m
openstack     dashboard                            atmosphere   dashboard.131-153-200-197.nip.io                  10.98.36.135   80, 443   23m
openstack     identity                             atmosphere   identity.131-153-200-197.nip.io                   10.98.36.135   80, 443   51m
openstack     image                                atmosphere   image.131-153-200-197.nip.io                      10.98.36.135   80, 443   46m
openstack     key-manager                          atmosphere   key-manager.131-153-200-197.nip.io                10.98.36.135   80, 443   49m
openstack     load-balancer                        atmosphere   load-balancer.131-153-200-197.nip.io              10.98.36.135   80, 443   30m
openstack     network                              atmosphere   network.131-153-200-197.nip.io                    10.98.36.135   80, 443   36m
openstack     orchestration                        atmosphere   orchestration.131-153-200-197.nip.io              10.98.36.135   80, 443   33m
openstack     placement                            atmosphere   placement.131-153-200-197.nip.io                  10.98.36.135   80, 443   41m
openstack     rook-ceph-cluster                    atmosphere   object-store.131-153-200-197.nip.io               10.98.36.135   80, 443   48m
openstack     sharev2                              atmosphere   share.131-153-200-197.nip.io                      10.98.36.135   80, 443   23m
openstack     volumev3                             atmosphere   volume.131-153-200-197.nip.io                     10.98.36.135   80, 443   42m
```

For extra convenience, the wildcard domain used here gives me access to the Horizon dashboard:
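The login credentials come from the generated openrc file shown a bit further below; a quick sketch for retrieving them, with the dashboard URL taken from the Ingress listing above:

```bash
# The admin credentials are generated into ./openrc by the deployment
source openrc
echo "User:     $OS_USERNAME"
echo "Password: $OS_PASSWORD"

# Horizon is exposed through the wildcard nip.io domain
echo "Dashboard: https://dashboard.131-153-200-197.nip.io"
```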

I upload a local Ubuntu 24.04 LTS image into the cluster:

```
root@instance:~# cat openrc
# Ansible managed: Do NOT edit this file manually!
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL="https://identity.131-153-200-197.nip.io/v3"
export OS_AUTH_TYPE=password
export OS_REGION_NAME="RegionOne"
export OS_USER_DOMAIN_NAME=Default
export OS_USERNAME="admin-RegionOne"
export OS_PASSWORD="lzi232PTaHpzoC2HjwSLKepZELQd6ENJ"
export OS_PROJECT_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_CACERT=/usr/local/share/ca-certificates/atmosphere.crt
root@instance:~# wget -c https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
Saving to: ‘noble-server-cloudimg-amd64.img’
noble-server-cloudimg-amd64.img 100%[=====================================================================================================>] 454.00M 17.9MB/s in 30s
2024-06-03 13:22:40 (15.3 MB/s) - ‘noble-server-cloudimg-amd64.img’ saved [476053504/476053504]
root@instance:~# openstack image create --public --container-format bare --disk-format qcow2 --file ~/noble-server-cloudimg-amd64.img ubuntu-24.04
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                            |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                                             |
| created_at       | 2024-06-03T13:23:12Z                                                                                                                                             |
| disk_format      | qcow2                                                                                                                                                            |
| file             | /v2/images/81d8eafa-4054-455c-9640-4e83c0566d21/file                                                                                                             |
| id               | 81d8eafa-4054-455c-9640-4e83c0566d21                                                                                                                             |
| min_disk         | 0                                                                                                                                                                |
| min_ram          | 0                                                                                                                                                                |
| name             | ubuntu-24.04                                                                                                                                                     |
| owner            | 43321f42e8434f8aa53531bd104e2809                                                                                                                                 |
| properties       | locations='[]', os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/ubuntu-24.04', owner_specified.openstack.sha256='' |
| protected        | False                                                                                                                                                            |
| schema           | /v2/schemas/image                                                                                                                                                |
| status           | queued                                                                                                                                                           |
| tags             |                                                                                                                                                                  |
| updated_at       | 2024-06-03T13:23:12Z                                                                                                                                             |
| visibility       | public                                                                                                                                                           |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
root@instance:~# openstack image list
+--------------------------------------+---------------------------+--------+
| ID                                   | Name                      | Status |
+--------------------------------------+---------------------------+--------+
| 04e2f39a-e4ff-426b-aeee-b82acd3bf611 | amphora-x64-haproxy       | active |
| 49a3b09d-c191-4c36-8541-31efaffb404d | cirros                    | active |
| b7e75cdb-37b0-4f2a-badf-cfdee7bca83d | manila-service-image      | active |
| 3c5df46c-7015-411c-8009-afa6695672a6 | ubuntu-2204-kube-v1.27.8s | active |
| 81d8eafa-4054-455c-9640-4e83c0566d21 | ubuntu-24.04              | active |
+--------------------------------------+---------------------------+--------+
```

Next, I set up a network topology for a quick test:
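I did this through Horizon, but the CLI equivalent would look roughly like the sketch below, where the external network name (`public`) is an assumption and the subnet range is inferred from the instance addresses shown afterwards:

```bash
# Tenant network and subnet for the test instances
# (the 10.96.250.0/24 range is inferred from the addresses seen below)
openstack network create network1
openstack subnet create --network network1 \
  --subnet-range 10.96.250.0/24 subnet1

# Router towards the external (floating IP) network -- "public" is assumed here
openstack router create router1
openstack router set --external-gateway public router1
openstack router add subnet router1 subnet1
```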

With the image loaded into Glance, I launch three small Ubuntu 24.04 LTS instances …
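From the CLI this would look something like the following sketch; the keypair name (`mykey`) and the external network (`public`) are assumptions, while the image and flavor match the listings above and below:

```bash
# A keypair so that k0sctl can later SSH into the instances
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

# Three small instances on network1 from the freshly uploaded image
for i in 1 2 3; do
  openstack server create --image ubuntu-24.04 --flavor m1.small \
    --network network1 --key-name mykey "k0s-$i"
done

# One floating IP per instance (they land in 11.12.13.0/24 here)
for i in 1 2 3; do
  fip=$(openstack floating ip create public -f value -c floating_ip_address)
  openstack server add floating ip "k0s-$i" "$fip"
done
```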


```
root@instance:~# openstack server list
+--------------------------------------+-------+--------+--------------------------------------+--------------+----------+
| ID                                   | Name  | Status | Networks                             | Image        | Flavor   |
+--------------------------------------+-------+--------+--------------------------------------+--------------+----------+
| 07d6bf6e-edff-4729-9133-31849ea6fe87 | k0s-3 | ACTIVE | network1=10.96.250.216, 11.12.13.185 | ubuntu-24.04 | m1.small |
| afd10cda-f99c-4877-9948-a7f25b5e756a | k0s-1 | ACTIVE | network1=10.96.250.211, 11.12.13.99  | ubuntu-24.04 | m1.small |
| d70230a1-9fdf-40d9-959d-c9960479b4a5 | k0s-2 | ACTIVE | network1=10.96.250.208, 11.12.13.80  | ubuntu-24.04 | m1.small |
+--------------------------------------+-------+--------+--------------------------------------+--------------+----------+
```

A simple Kubernetes cluster can then be built on these three instances with k0sctl.

GitHub - k0sproject/k0sctl: A bootstrapping and management tool for k0s clusters.

```
root@instance:~# wget -c https://github.com/k0sproject/k0sctl/releases/download/v0.17.8/k0sctl-linux-x64
root@instance:~# mv k0sctl-linux-x64 /usr/local/bin/k0sctl && chmod +x /usr/local/bin/k0sctl
root@instance:~# k0sctl
NAME:
   k0sctl - k0s cluster management tool

USAGE:
   k0sctl [global options] command [command options]

COMMANDS:
   version     Output k0sctl version
   apply       Apply a k0sctl configuration
   kubeconfig  Output the admin kubeconfig of the cluster
   init        Create a configuration template
   reset       Remove traces of k0s from all of the hosts
   backup      Take backup of existing clusters state
   config      Configuration related sub-commands
   completion
   help, h     Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug, -d  Enable debug logging (default: false) [$DEBUG]
   --trace      Enable trace logging (default: false) [$TRACE]
   --no-redact  Do not hide sensitive information in the output (default: false)
   --help, -h   show help
root@instance:~# k0sctl init --k0s > k0sctl.yaml
root@instance:~# cat k0sctl.yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 10.96.250.211
      user: ubuntu
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: controller
  - ssh:
      address: 10.96.250.208
      user: ubuntu
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  - ssh:
      address: 10.96.250.216
      user: ubuntu
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  k0s:
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
        installConfig:
          users:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          kubeProxy:
            disabled: false
            mode: iptables
          kuberouter:
            autoMTU: true
            mtu: 0
            peerRouterASNs: ""
            peerRouterIPs: ""
          podCIDR: 10.244.0.0/16
          provider: kuberouter
          serviceCIDR: 10.96.0.0/12
        podSecurityPolicy:
          defaultPolicy: 00-k0s-privileged
        storage:
          type: etcd
        telemetry:
          enabled: true
root@instance:~# k0sctl apply --config k0sctl.yaml
⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███ ███ ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███ ███ ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███ ███ ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████ ███ ██████████

k0sctl v0.17.8 Copyright 2023, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms: https://k0sproject.io/licenses/eula
INFO ==> Running phase: Set k0s version
INFO Looking up latest stable k0s version
INFO Using k0s version v1.30.1+k0s.0
INFO ==> Running phase: Connect to hosts
INFO [ssh] 10.96.250.216:22: connected
INFO [ssh] 10.96.250.208:22: connected
INFO [ssh] 10.96.250.211:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [ssh] 10.96.250.216:22: is running Ubuntu 24.04 LTS
INFO [ssh] 10.96.250.208:22: is running Ubuntu 24.04 LTS
INFO [ssh] 10.96.250.211:22: is running Ubuntu 24.04 LTS
INFO ==> Running phase: Acquire exclusive host lock
INFO ==> Running phase: Prepare hosts
INFO ==> Running phase: Gather host facts
INFO [ssh] 10.96.250.216:22: using k0s-3 as hostname
INFO [ssh] 10.96.250.211:22: using k0s-1 as hostname
INFO [ssh] 10.96.250.208:22: using k0s-2 as hostname
INFO [ssh] 10.96.250.211:22: discovered ens3 as private interface
INFO [ssh] 10.96.250.208:22: discovered ens3 as private interface
INFO [ssh] 10.96.250.216:22: discovered ens3 as private interface
INFO [ssh] 10.96.250.211:22: discovered 11.12.13.99 as private address
INFO [ssh] 10.96.250.208:22: discovered 11.12.13.80 as private address
INFO [ssh] 10.96.250.216:22: discovered 11.12.13.185 as private address
INFO ==> Running phase: Validate hosts
INFO ==> Running phase: Validate facts
INFO ==> Running phase: Download k0s on hosts
INFO [ssh] 10.96.250.216:22: downloading k0s v1.30.1+k0s.0
INFO [ssh] 10.96.250.211:22: downloading k0s v1.30.1+k0s.0
INFO [ssh] 10.96.250.208:22: downloading k0s v1.30.1+k0s.0
INFO ==> Running phase: Install k0s binaries on hosts
INFO [ssh] 10.96.250.211:22: validating configuration
INFO ==> Running phase: Configure k0s
INFO [ssh] 10.96.250.211:22: installing new configuration
INFO ==> Running phase: Initialize the k0s cluster
INFO [ssh] 10.96.250.211:22: installing k0s controller
INFO [ssh] 10.96.250.211:22: waiting for the k0s service to start
INFO [ssh] 10.96.250.211:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install workers
INFO [ssh] 10.96.250.216:22: validating api connection to https://11.12.13.99:6443
INFO [ssh] 10.96.250.208:22: validating api connection to https://11.12.13.99:6443
INFO [ssh] 10.96.250.211:22: generating a join token for worker 1
INFO [ssh] 10.96.250.211:22: generating a join token for worker 2
INFO [ssh] 10.96.250.216:22: writing join token
INFO [ssh] 10.96.250.208:22: writing join token
INFO [ssh] 10.96.250.216:22: installing k0s worker
INFO [ssh] 10.96.250.208:22: installing k0s worker
INFO [ssh] 10.96.250.216:22: starting service
INFO [ssh] 10.96.250.216:22: waiting for node to become ready
INFO [ssh] 10.96.250.208:22: starting service
INFO [ssh] 10.96.250.208:22: waiting for node to become ready
INFO ==> Running phase: Release exclusive host lock
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 42s
INFO k0s cluster version v1.30.1+k0s.0 is now installed
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using:
INFO      k0sctl kubeconfig
```

Kubernetes via k0s is up and reachable through its own dedicated kubeconfig file:

```
root@instance:~# k0sctl kubeconfig > kubeconfig
root@instance:~# kubectl --kubeconfig=kubeconfig cluster-info
Kubernetes control plane is running at https://10.96.250.211:6443
CoreDNS is running at https://10.96.250.211:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@instance:~# kubectl --kubeconfig=kubeconfig get nodes -o wide
NAME    STATUS   ROLES    AGE     VERSION       INTERNAL-IP    EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
k0s-2   Ready    <none>   2m41s   v1.30.1+k0s   11.12.13.80    <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.17
k0s-3   Ready    <none>   2m41s   v1.30.1+k0s   11.12.13.185   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.17
root@instance:~# kubectl --kubeconfig=kubeconfig get po,svc -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-6997b8f8bd-957bd          1/1     Running   0          2m41s
kube-system   pod/coredns-6997b8f8bd-ddr5t          1/1     Running   0          2m41s
kube-system   pod/konnectivity-agent-2bxgl          1/1     Running   0          2m51s
kube-system   pod/konnectivity-agent-4gfsw          1/1     Running   0          2m51s
kube-system   pod/kube-proxy-2cq5w                  1/1     Running   0          2m51s
kube-system   pod/kube-proxy-m6rnv                  1/1     Running   0          2m51s
kube-system   pod/kube-router-p9s4t                 1/1     Running   0          2m51s
kube-system   pod/kube-router-qhcp4                 1/1     Running   0          2m51s
kube-system   pod/metrics-server-5cd4986bbc-rf4wc   1/1     Running   0          2m57s

NAMESPACE     NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP                  3m15s
kube-system   service/kube-dns         ClusterIP   10.96.0.10    <none>        53/UDP,53/TCP,9153/TCP   3m5s
kube-system   service/metrics-server   ClusterIP   10.97.91.41   <none>        443/TCP                  3m1s
```

The FranceConnect (FC) demonstrator in one (by now familiar) command:

```
root@instance:~# cat test.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fcdemo3
  labels:
    app: fcdemo3
spec:
  replicas: 4
  selector:
    matchLabels:
      app: fcdemo3
  template:
    metadata:
      labels:
        app: fcdemo3
    spec:
      containers:
      - name: fcdemo3
        image: mcas/franceconnect-demo2:latest
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fcdemo-service
spec:
  type: ClusterIP
  selector:
    app: fcdemo3
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
root@instance:~# kubectl --kubeconfig=kubeconfig apply -f test.yaml
deployment.apps/fcdemo3 created
service/fcdemo-service created
root@instance:~# kubectl --kubeconfig=kubeconfig get po,svc
NAME                          READY   STATUS    RESTARTS   AGE
pod/fcdemo3-85f6bd87c-7jvpk   1/1     Running   0          13s
pod/fcdemo3-85f6bd87c-btv9m   1/1     Running   0          13s
pod/fcdemo3-85f6bd87c-nlw9x   1/1     Running   0          13s
pod/fcdemo3-85f6bd87c-v66bs   1/1     Running   0          13s

NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/fcdemo-service   ClusterIP   10.98.123.24   <none>        80/TCP    13s
service/kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP   8m19s
```

Port forwarding here for local access to it …

```
root@instance:~# kubectl --kubeconfig=kubeconfig port-forward service/fcdemo-service 12222:80 --address='0.0.0.0'
Forwarding from 0.0.0.0:12222 -> 3000
```
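From another terminal (or even remotely, since `--address='0.0.0.0'` binds all interfaces), the demonstrator should now answer:

```bash
# The forwarded port proxies to port 3000 inside the fcdemo3 pods
curl -s http://localhost:12222 | head -n 20
```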

All of this knowing that Atmosphere comes with Magnum to deploy a Kubernetes cluster natively …
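So instead of the manual k0sctl route, a Magnum-driven cluster could have looked roughly like the sketch below; the template name, keypair and external network name are assumptions, while the `ubuntu-2204-kube-v1.27.8s` image ships with the deployment, as seen in the image list earlier:

```bash
# A Kubernetes cluster template based on the image Atmosphere ships with
openstack coe cluster template create k8s-v1.27.8 \
  --image ubuntu-2204-kube-v1.27.8s \
  --external-network public \
  --master-flavor m1.small \
  --flavor m1.small \
  --coe kubernetes

# A small one-control-plane / two-worker cluster from that template
openstack coe cluster create k8s-demo \
  --cluster-template k8s-v1.27.8 \
  --master-count 1 \
  --node-count 2 \
  --keypair mykey
```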

To conclude, Atmosphere is a project for the simplified deployment of a containerized OpenStack cluster on top of a base Kubernetes cluster (building on OpenStack Helm), an approach also found in the new incarnation of Canonical MicroStack, for example …

OpenStack for the edge, micro clouds and developers

To be continued!
