
Karim

Posted on • Originally published at deep75.Medium on

OpenStack on Kubernetes made simple with Canonical MicroStack and Pulumi …

MicroStack is a solution developed by Canonical that uses Snap, Juju and Kubernetes to deploy and manage OpenStack. It dramatically reduces the complexity traditionally associated with operating OpenStack clouds. As I described in earlier articles, it has evolved over time and now incorporates Canonical Kubernetes:

Advantages of MicroStack

OpenStack on Kubernetes | Ubuntu | Canonical

  • Simplified deployment: MicroStack offers a streamlined installation process that can bootstrap a cloud deployment in fewer than 6 commands, with an average deployment time of about 40 minutes. This makes it particularly well suited to organizations that want to stand up or extend a cloud environment quickly without deep in-house expertise.
  • Flexibility and customization: MicroStack supports a wide range of plug-ins and extensions, letting organizations build a cloud environment that aligns precisely with their operational goals.

Here is a hands-on, single-node deployment of MicroStack, following these technical recommendations:

Enterprise requirements | Canonical

I start from a dedicated DigitalOcean instance that supports nested virtualization:
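As a quick sanity check (not part of Canonical's instructions; the CPU flags are the standard Linux/KVM ones), you can confirm the instance actually exposes the hardware virtualization flags that nested KVM needs:

```shell
# Count the CPU virtualization flags (vmx = Intel VT-x, svm = AMD-V).
# A result of 0 means nested virtualization is not available on this instance.
grep -E -c '(vmx|svm)' /proc/cpuinfo || true
```
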

To install MicroStack on a single node, follow these steps. First, create a non-root user with sudo enabled:


root@microstack:~# useradd -s /bin/bash -d /home/ubuntu -m ubuntu
root@microstack:~# echo "ubuntu ALL=(ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/ubuntu
ubuntu ALL=(ALL) NOPASSWD: ALL
root@microstack:~# cp -r .ssh/ /home/ubuntu/
root@microstack:~# chown -R ubuntu:ubuntu /home/ubuntu/.ssh/

Then install snapd:

ubuntu@microstack:~$ sudo apt install snapd
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Suggested packages:
  zenity | kdialog
The following packages will be upgraded:
  snapd
1 upgraded, 0 newly installed, 0 to remove and 195 not upgraded.
Need to get 30.0 MB of archives.
After this operation, 5513 kB of additional disk space will be used.
Get:1 http://mirrors.digitalocean.com/ubuntu noble-updates/main amd64 snapd amd64 2.66.1+24.04 [30.0 MB]
Fetched 30.0 MB in 0s (93.2 MB/s)
(Reading database ... 71895 files and directories currently installed.)
Preparing to unpack .../snapd_2.66.1+24.04_amd64.deb ...
Unpacking snapd (2.66.1+24.04) over (2.63+24.04) ...
Setting up snapd (2.66.1+24.04) ...
Installing new version of config file /etc/apparmor.d/usr.lib.snapd.snap-confine.real ...
snapd.failure.service is a disabled or a static unit not running, not starting it.
snapd.snap-repair.service is a disabled or a static unit not running, not starting it.
Processing triggers for dbus (1.14.10-4ubuntu4) ...
Processing triggers for man-db (2.12.0-4build2) ...
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
ubuntu@microstack:~$ sudo systemctl enable --now snapd

Install MicroStack at its latest version via snapd:

Install Canonical MicroStack on Linux | Snap Store

ubuntu@microstack:~$ sudo snap install openstack --channel 2024.1/beta
2024-12-24T08:58:22Z INFO Waiting for automatic snapd restart...
openstack (2024.1/beta) 2024.1 from Canonical✓ installed
ubuntu@microstack:~$ sudo snap list
Name       Version   Rev    Tracking       Publisher   Notes
core24     20240920  609    latest/stable  canonical✓  base
openstack  2024.1    637    2024.1/beta    canonical✓  -
snapd      2.66.1    23258  latest/stable  canonical✓  snapd

MicroStack uses Sunbeam to generate a script that makes sure the machine has all the required dependencies and is configured correctly for OpenStack. Run it directly:

ubuntu@microstack:~$ sunbeam prepare-node-script | bash -x && newgrp snap_daemon
++ lsb_release -sc
+ '[' noble '!=' noble ']'
++ whoami
+ USER=ubuntu
++ id -u
+ '[' 1000 -eq 0 -o ubuntu = root ']'
+ SUDO_ASKPASS=/bin/false
+ sudo -A whoami
+ sudo grep -r ubuntu /etc/sudoers /etc/sudoers.d
+ grep NOPASSWD:ALL
+ echo 'ubuntu ALL=(ALL) NOPASSWD:ALL'
+ sudo install -m 440 /tmp/90-ubuntu-sudo-access /etc/sudoers.d/90-ubuntu-sudo-access
+ rm -f /tmp/90-ubuntu-sudo-access
+ dpkg -s openssh-server
+ dpkg -s curl
+ sudo usermod --append --groups snap_daemon ubuntu
+ '[' -f /home/ubuntu/.ssh/id_rsa ']'
+ ssh-keygen -b 4096 -f /home/ubuntu/.ssh/id_rsa -t rsa -N ''
Generating public/private rsa key pair.
Your identification has been saved in /home/ubuntu/.ssh/id_rsa
Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:NTnupee3yat23zAuoYy5U6VoXqK+JaU5R3L36nwrauM ubuntu@microstack
The key's randomart image is:
+---[RSA 4096]----+
|                 |
|        .        |
|         =       |
|      o o.       |
|     . S.oo.     |
|      B+o++.     |
|     =+oBo.o.o   |
|    .=B.+++oo+.  |
|   .o+E*+++**+o  |
+----[SHA256]-----+
+ cat /home/ubuntu/.ssh/id_rsa.pub
++ hostname --all-ip-addresses
+ ssh-keyscan -H 134.209.225.128 10.19.0.5 10.114.0.2
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.114.0.2:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.114.0.2:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.114.0.2:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.114.0.2:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
10.114.0.2: Connection closed by remote host
+ grep -E 'HTTPS?_PROXY' /etc/environment
+ curl -s -m 10 -x '' api.charmhub.io
+ sudo snap connect openstack:ssh-keys
+ sudo snap install --channel 3.6/stable juju
juju (3.6/stable) 3.6.1 from Canonical✓ installed
+ mkdir -p /home/ubuntu/.local/share
+ mkdir -p /home/ubuntu/.config/openstack
++ snap list openstack --unicode=never --color=never
++ grep openstack
+ snap_output='openstack 2024.1 637 2024.1/beta canonical** -'
++ awk -v col=4 '{print $col}'
+ track=2024.1/beta
+ [[2024.1/beta =~ edge]]
+ [[2024.1/beta == \-]]
+ [[2024.1/beta =~ beta]]
+ risk=beta
+ [[beta != \s\t\a\b\l\e]]
+ sudo snap set openstack deployment.risk=beta
+ echo 'Snap has been automatically configured to deploy from' 'beta channel.'
Snap has been automatically configured to deploy from beta channel.
+ echo 'Override by passing a custom manifest with -m/--manifest.'
Override by passing a custom manifest with -m/--manifest.

At this point you can deploy the OpenStack cloud with the following command, accepting the default values (it takes about 30 minutes, depending on connection speed):

ubuntu@microstack:~$ sunbeam cluster bootstrap --accept-defaults
Node has been bootstrapped with roles: compute, control
ubuntu@microstack:~$ sunbeam cluster list
┏━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┓
┃ Node       ┃ Cluster ┃ Machine ┃ Compute ┃ Control ┃ Storage ┃
┡━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━┩
│ microstack │ ONLINE  │ running │ active  │ active  │         │
└────────────┴─────────┴─────────┴─────────┴─────────┴─────────┘
ubuntu@microstack:~$ sudo systemctl status snap.openstack.clusterd.service
● snap.openstack.clusterd.service - Service for snap application openstack.clusterd
     Loaded: loaded (/etc/systemd/system/snap.openstack.clusterd.service; enabled; preset: enabled)
     Active: active (running) since Tue 2024-12-24 08:58:36 UTC; 1h 10min ago
   Main PID: 4497 (sunbeamd)
      Tasks: 18 (limit: 77123)
     Memory: 34.6M (peak: 39.2M)
        CPU: 11.947s
     CGroup: /system.slice/snap.openstack.clusterd.service
             └─4497 sunbeamd --state-dir /var/snap/openstack/common/state --socket-group snap_daemon --verbose
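Since bootstrap can take a long time to converge, a small generic retry helper (hypothetical, not part of Sunbeam) is handy for polling a command such as `sunbeam cluster list` until it succeeds:

```shell
# retry N CMD...: run CMD up to N times, sleeping between attempts
# (delay configurable via RETRY_DELAY, default 5 seconds).
retry() {
  tries=$1; shift
  attempt=1
  while [ "$attempt" -le "$tries" ]; do
    if "$@"; then return 0; fi
    attempt=$((attempt + 1))
    sleep "${RETRY_DELAY:-5}"
  done
  return 1
}

# Example: poll the cluster status until the command succeeds.
# retry 60 sunbeam cluster list
```
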

MicroStack is now deployed, and we can launch the demo environment, which creates a preconfigured tenant (including dashboard credentials):

ubuntu@microstack:~$ sunbeam configure --accept-defaults --openrc demo-openrc
⠋ Generating openrc for cloud admin usage ...
Writing openrc to demo-openrc ... done
The cloud has been configured for sample usage.
You can start using the OpenStack client or access the OpenStack dashboard at http://172.16.1.204:80/openstack-horizon

This gives me the credentials for the dashboard:

ubuntu@microstack:~$ cat demo-openrc
# openrc for demo
export OS_AUTH_URL=http://172.16.1.204/openstack-keystone/v3
export OS_USERNAME=demo
export OS_PASSWORD=C0jg0mAgdvD5
export OS_USER_DOMAIN_NAME=users
export OS_PROJECT_DOMAIN_NAME=users
export OS_PROJECT_NAME=demo
export OS_AUTH_VERSION=3
export OS_IDENTITY_API_VERSION=3
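An openrc file is just a series of `export` statements: sourcing it puts the credentials into the environment, where the `openstack` CLI picks them up. A minimal sketch, with placeholder values rather than the real generated password:

```shell
# Write a sample openrc (placeholder credentials, for illustration only).
cat > /tmp/sample-openrc <<'EOF'
export OS_AUTH_URL=http://172.16.1.204/openstack-keystone/v3
export OS_USERNAME=demo
export OS_PASSWORD=changeme
export OS_PROJECT_NAME=demo
EOF

# Load it into the current shell; the OpenStack CLI reads OS_* variables.
. /tmp/sample-openrc
echo "Authenticating as $OS_USERNAME against $OS_AUTH_URL"
# openstack server list   # would now use these credentials
```
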

Or for the admin account:

ubuntu@microstack:~$ sunbeam openrc > admin-openrc
ubuntu@microstack:~$ cat admin-openrc
# openrc for access to OpenStack
export OS_USERNAME=admin
export OS_PASSWORD=f7M1ey2dqpHo
export OS_AUTH_URL=http://172.16.1.204/openstack-keystone/v3
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_AUTH_VERSION=3
export OS_IDENTITY_API_VERSION=3

Quick launch of a test instance:

ubuntu@microstack:~$ sunbeam launch ubuntu --name instance1
Launching an OpenStack instance ...
Access the instance by running the following command:
`ssh -i /home/ubuntu/snap/openstack/637/sunbeam ubuntu@172.16.2.31`
ubuntu@microstack:~$ source demo-openrc
ubuntu@microstack:~$ openstack server list
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
| ID                                   | Name      | Status | Networks                                | Image  | Flavor  |
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
| efe46971-56f4-4da4-9c6e-eebee2795b72 | instance1 | ACTIVE | demo-network=172.16.2.31, 192.168.0.166 | ubuntu | m1.tiny |
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+

I access the dashboard here through an SSH tunnel:

$ ssh -L 0.0.0.0:8888:172.16.1.204:80 ubuntu@134.209.225.128 

In MicroStack, Sunbeam can enable several interesting extensions:

ubuntu@microstack:~$ sunbeam enable --help
Usage: sunbeam enable [OPTIONS] COMMAND [ARGS]...

  Enable features.

Options:
  -m, --manifest FILE  Manifest file.
  -h, --help           Show this message and exit.

Commands:
  caas                   Enable Container as a Service feature.
  dns                    Enable dns service.
  images-sync            Enable images-sync service.
  ldap                   Enable ldap service.
  loadbalancer           Enable Loadbalancer service.
  observability          Enable Observability service.
  orchestration          Enable Orchestration service.
  pro                    Enable Ubuntu Pro across deployment.
  resource-optimization  Enable Resource Optimization service (watcher).
  secrets                Enable OpenStack Secrets service.
  telemetry              Enable OpenStack Telemetry applications.
  tls                    Enable tls group.
  validation             Enable OpenStack Integration Test Suite (tempest).
  vault                  Enable Vault.

Enabling several of them …

ubuntu@microstack:~$ sunbeam enable orchestration
OpenStack orchestration application enabled.
ubuntu@microstack:~$ sunbeam enable telemetry
OpenStack telemetry application enabled.
ubuntu@microstack:~$ sunbeam enable observability embedded
Observability enabled.

Including the observability stack, built on Grafana, whose URL and admin credentials can be retrieved as follows:

Observability | Canonical

(base) ubuntu@microstack:~$ sunbeam observability dashboard-url
http://172.16.1.205/observability-grafana
(base) ubuntu@microstack:~$ juju run --model observability grafana/leader get-admin-password
Running operation 5 with 1 task
  - task 6 on unit-grafana-0
Waiting for task 6...
admin-password: 0EAJrXNIt3jd
url: http://172.16.1.205/observability-grafana

Sunbeam relies on a set of manifests; after all these deployments, they can be listed and inspected:

ubuntu@microstack:~$ sunbeam manifest list
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ ID                               ┃ Applied Date        ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│ ecd1a516be9a244376923e9f7b8217ce │ 2024-12-24 09:00:10 │
└──────────────────────────────────┴─────────────────────┘
ubuntu@microstack:~$ sunbeam manifest show ecd1a516be9a244376923e9f7b8217ce
core:
  software:
    charms:
      cinder-ceph-k8s:
        channel: 2024.1/beta
      cinder-k8s:
        channel: 2024.1/beta
      glance-k8s:
        channel: 2024.1/beta
      horizon-k8s:
        channel: 2024.1/beta
      keystone-k8s:
        channel: 2024.1/beta
      microceph:
        channel: squid/beta
        config:
          snap-channel: squid/beta
      neutron-k8s:
        channel: 2024.1/beta
      nova-k8s:
        channel: 2024.1/beta
      openstack-hypervisor:
        channel: 2024.1/beta
        config:
          snap-channel: 2024.1/beta
      ovn-central-k8s:
        channel: 24.03/beta
      ovn-relay-k8s:
        channel: 24.03/beta
      placement-k8s:
        channel: 2024.1/beta
      sunbeam-clusterd:
        channel: 2024.1/beta
        config:
          snap-channel: 2024.1/beta
      sunbeam-machine:
        channel: 2024.1/beta
features:
  caas:
    software:
      charms:
        magnum-k8s:
          channel: 2024.1/beta
  dns:
    software:
      charms:
        designate-bind-k8s:
          channel: 9/beta
        designate-k8s:
          channel: 2024.1/beta
  images-sync:
    software:
      charms:
        openstack-images-sync-k8s:
          channel: 2024.1/beta
  instance-recovery:
    software:
      charms:
        consul-client:
          channel: 1.19/beta
        consul-k8s:
          channel: 1.19/beta
        masakari-k8s:
          channel: 2024.1/beta
  ldap:
    software:
      charms:
        keystone-ldap-k8s:
          channel: 2024.1/beta
  loadbalancer:
    software:
      charms:
        octavia-k8s:
          channel: 2024.1/beta
  orchestration:
    software:
      charms:
        heat-k8s:
          channel: 2024.1/beta
  resource-optimization:
    software:
      charms:
        watcher-k8s:
          channel: 2024.1/beta
  secrets:
    software:
      charms:
        barbican-k8s:
          channel: 2024.1/beta
  telemetry:
    software:
      charms:
        aodh-k8s:
          channel: 2024.1/beta
        ceilometer-k8s:
          channel: 2024.1/beta
        gnocchi-k8s:
          channel: 2024.1/beta
        openstack-exporter-k8s:
          channel: 2024.1/beta
  validation:
    software:
      charms:
        tempest-k8s:
          channel: 2024.1/beta
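The prepare-node script noted earlier that the deployment channels can be overridden "by passing a custom manifest with -m/--manifest". A minimal sketch of such an override, in the same YAML shape as the manifest above (the `2024.1/stable` channels here are hypothetical; check which channels actually exist before pinning):

```yaml
# custom-manifest.yaml: pin a couple of charms to non-default channels.
core:
  software:
    charms:
      keystone-k8s:
        channel: 2024.1/stable
      glance-k8s:
        channel: 2024.1/stable
```

It would then be applied with, for example, `sunbeam cluster bootstrap -m custom-manifest.yaml`.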

The deployed applications can also be checked via Juju:

 ubuntu@microstack:~$ juju status -m admin/controller Model Controller Cloud/Region Version SLA Timestamp controller sunbeam-controller one-deer/default 3.6.1 unsupported 10:04:08Z SAAS Status Store URL ceilometer waiting local microstack/openstack.ceilometer cert-distributor active local microstack/openstack.cert-distributor certificate-authority active local microstack/openstack.certificate-authority cinder-ceph blocked local microstack/openstack.cinder-ceph grafana-dashboards active local microstack/observability.grafana-dashboards keystone-credentials active local microstack/openstack.keystone-credentials keystone-endpoints active local microstack/openstack.keystone-endpoints loki-logging active local microstack/observability.loki-logging nova active local microstack/openstack.nova ovn-relay active local microstack/openstack.ovn-relay prometheus-receive-remote-write active local microstack/observability.prometheus-receive-remote-write rabbitmq active local microstack/openstack.rabbitmq App Version Status Scale Charm Channel Rev Exposed Message controller active 1 juju-controller 3.6/stable 116 no grafana-agent active 1 grafana-agent latest/stable 260 no tracing: off k8s 1.31.3 active 1 k8s 1.31/candidate 141 no Ready microceph unknown 0 microceph squid/beta 84 no openstack-hypervisor waiting 1 openstack-hypervisor 2024.1/beta 221 no (ceph-access) integration incomplete sunbeam-machine active 1 sunbeam-machine 2024.1/beta 49 no Unit Workload Agent Machine Public address Ports Message controller/0* active idle 0 134.209.225.128 k8s/0* active idle 0 134.209.225.128 6443/tcp Ready openstack-hypervisor/0* waiting idle 0 134.209.225.128 (ceph-access) integration incomplete grafana-agent/0* active idle 134.209.225.128 tracing: off sunbeam-machine/0* active idle 0 134.209.225.128 Machine State Address Inst id Base AZ Message 0 started 134.209.225.128 manual: ubuntu@24.04 Manually provisioned machine Offer Application Charm Rev Connected Endpoint Interface Role 
microceph microceph microceph 84 0/0 ceph ceph-client provider ubuntu@microstack:~$ juju status -m openstack Model Controller Cloud/Region Version SLA Timestamp openstack sunbeam-controller one-deer-k8s/localhost 3.6.1 unsupported 10:04:48Z SAAS Status Store URL grafana-dashboards active local microstack/observability.grafana-dashboards loki-logging active local microstack/observability.loki-logging prometheus-receive-remote-write active local microstack/observability.prometheus-receive-remote-write App Version Status Scale Charm Channel Rev Address Exposed Message aodh active 1 aodh-k8s 2024.1/beta 62 10.152.183.238 no aodh-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.134 no aodh-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.200 no ceilometer waiting 1 ceilometer-k8s 2024.1/beta 62 10.152.183.88 no (workload) Not all relations are ready certificate-authority active 1 self-signed-certificates latest/beta 228 10.152.183.220 no cinder active 1 cinder-k8s 2024.1/beta 99 10.152.183.160 no cinder-ceph blocked 1 cinder-ceph-k8s 2024.1/beta 97 10.152.183.202 no (ceph) integration missing cinder-ceph-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.244 no cinder-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.77 no cinder-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.92 no glance active 1 glance-k8s 2024.1/beta 120 10.152.183.187 no glance-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.163 no glance-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.116 no gnocchi blocked 1 gnocchi-k8s 2024.1/beta 61 10.152.183.81 no (ceph) integration missing gnocchi-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.35 no gnocchi-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.72 
no grafana-agent 0.40.4 active 1 grafana-agent-k8s latest/stable 80 10.152.183.169 no heat active 1 heat-k8s 2024.1/beta 79 10.152.183.151 no heat-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.222 no heat-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.249 no horizon active 1 horizon-k8s 2024.1/beta 111 10.152.183.234 no http://172.16.1.204/openstack-horizon horizon-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.131 no horizon-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.140 no keystone active 1 keystone-k8s 2024.1/beta 213 10.152.183.63 no keystone-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.48 no keystone-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.108 no neutron active 1 neutron-k8s 2024.1/beta 119 10.152.183.212 no neutron-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.75 no neutron-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.197 no nova active 1 nova-k8s 2024.1/beta 109 10.152.183.104 no nova-api-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.189 no nova-cell-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.178 no nova-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.24 no nova-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.152 no openstack-exporter active 1 openstack-exporter-k8s 2024.1/beta 72 10.152.183.100 no ovn-central active 1 ovn-central-k8s 24.03/beta 110 10.152.183.194 no ovn-relay active 1 ovn-relay-k8s 24.03/beta 97 172.16.1.201 no placement active 1 placement-k8s 2024.1/beta 92 10.152.183.199 no placement-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.83 no placement-mysql-router 
8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.248 no rabbitmq 3.12.1 active 1 rabbitmq-k8s 3.12/stable 34 172.16.1.202 no traefik 2.11.0 active 1 traefik-k8s latest/beta 223 10.152.183.125 no Serving at 172.16.1.203 traefik-public 2.11.0 active 1 traefik-k8s latest/beta 223 10.152.183.54 no Serving at 172.16.1.204 vault blocked 1 vault-k8s 1.16/stable 280 10.152.183.78 no Please initialize Vault or integrate with an auto-unseal provider Unit Workload Agent Address Ports Message aodh-mysql-router/0* active idle 10.1.0.57 aodh-mysql/0* active idle 10.1.0.6 Primary aodh/0* active idle 10.1.0.90 ceilometer/0* waiting idle 10.1.0.149 (workload) Not all relations are ready certificate-authority/0* active idle 10.1.0.5 cinder-ceph-mysql-router/0* active idle 10.1.0.167 cinder-ceph/0* blocked idle 10.1.0.108 (ceph) integration missing cinder-mysql-router/0* active idle 10.1.0.253 cinder-mysql/0* active idle 10.1.0.145 Primary cinder/0* active idle 10.1.0.56 glance-mysql-router/0* active idle 10.1.0.85 glance-mysql/0* active idle 10.1.0.183 Primary glance/0* active idle 10.1.0.251 gnocchi-mysql-router/0* active idle 10.1.0.196 gnocchi-mysql/0* active idle 10.1.0.213 Primary gnocchi/0* blocked idle 10.1.0.55 (ceph) integration missing grafana-agent/0* active idle 10.1.0.2 heat-mysql-router/0* active idle 10.1.0.54 heat-mysql/0* active idle 10.1.0.9 Primary heat/0* active idle 10.1.0.138 horizon-mysql-router/0* active idle 10.1.0.248 horizon-mysql/0* active idle 10.1.0.185 Primary horizon/0* active idle 10.1.0.35 keystone-mysql-router/0* active idle 10.1.0.243 keystone-mysql/0* active idle 10.1.0.104 Primary keystone/0* active idle 10.1.0.223 neutron-mysql-router/0* active idle 10.1.0.135 neutron-mysql/0* active idle 10.1.0.79 Primary neutron/0* active idle 10.1.0.23 nova-api-mysql-router/0* active idle 10.1.0.93 nova-cell-mysql-router/0* active idle 10.1.0.165 nova-mysql-router/0* active idle 10.1.0.143 nova-mysql/0* active idle 10.1.0.226 
Primary nova/0* active idle 10.1.0.147 openstack-exporter/0* active idle 10.1.0.100 ovn-central/0* active idle 10.1.0.222 ovn-relay/0* active idle 10.1.0.82 placement-mysql-router/0* active idle 10.1.0.224 placement-mysql/0* active idle 10.1.0.148 Primary placement/0* active idle 10.1.0.78 rabbitmq/0* active idle 10.1.0.238 traefik-public/0* active idle 10.1.0.96 Serving at 172.16.1.204 traefik/0* active idle 10.1.0.151 Serving at 172.16.1.203 vault/0* blocked idle 10.1.0.178 Please initialize Vault or integrate with an auto-unseal provider Offer Application Charm Rev Connected Endpoint Interface Role ceilometer ceilometer ceilometer-k8s 62 1/1 ceilometer-service ceilometer provider cert-distributor keystone keystone-k8s 213 2/2 send-ca-cert certificate_transfer provider certificate-authority certificate-authority self-signed-certificates 228 1/1 certificates tls-certificates provider cinder-ceph cinder-ceph cinder-ceph-k8s 97 1/1 ceph-access cinder-ceph-key provider keystone-credentials keystone keystone-k8s 213 1/1 identity-credentials keystone-credentials provider keystone-endpoints keystone keystone-k8s 213 1/1 identity-service keystone provider nova nova nova-k8s 109 1/1 nova-service nova provider ovn-relay ovn-relay ovn-relay-k8s 97 1/1 ovsdb-cms-relay ovsdb-cms provider rabbitmq rabbitmq rabbitmq-k8s 34 1/1 amqp rabbitmq provider 

Or via Canonical Kubernetes (which underpins this OpenStack deployment):

ubuntu@microstack:~$ sudo k8s kubectl cluster-info Kubernetes control plane is running at https://127.0.0.1:6443 CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/coredns:udp-53/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. ubuntu@microstack:~$ sudo k8s kubectl get po,svc -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system pod/cilium-87pxh 1/1 Running 0 62m kube-system pod/cilium-operator-6f7f8cf67-5vfsx 1/1 Running 0 63m kube-system pod/ck-storage-rawfile-csi-controller-0 2/2 Running 0 63m kube-system pod/ck-storage-rawfile-csi-node-5vbjd 4/4 Running 0 63m kube-system pod/coredns-598bfdf87d-qt2j4 1/1 Running 0 63m kube-system pod/metrics-server-7ff9f4d4c9-jqb9x 1/1 Running 0 63m metallb-system pod/metallb-controller-7bb5f6c9b4-pbzdb 1/1 Running 0 63m metallb-system pod/metallb-speaker-dxg5x 1/1 Running 0 62m observability pod/alertmanager-0 2/2 Running 0 13m observability pod/catalogue-0 2/2 Running 0 13m observability pod/grafana-0 3/3 Running 0 13m observability pod/loki-0 3/3 Running 0 12m observability pod/modeloperator-88fc49d74-tjcnm 1/1 Running 0 14m observability pod/prometheus-0 2/2 Running 0 13m observability pod/traefik-0 2/2 Running 0 13m openstack pod/aodh-0 6/6 Running 0 18m openstack pod/aodh-mysql-0 2/2 Running 0 22m openstack pod/aodh-mysql-router-0 2/2 Running 0 19m openstack pod/ceilometer-0 3/3 Running 0 22m openstack pod/certificate-authority-0 1/1 Running 0 61m openstack pod/cinder-0 3/3 Running 0 56m openstack pod/cinder-ceph-0 2/2 Running 0 56m openstack pod/cinder-ceph-mysql-router-0 2/2 Running 0 56m openstack pod/cinder-mysql-0 2/2 Running 0 60m openstack pod/cinder-mysql-router-0 2/2 Running 0 56m openstack pod/glance-0 2/2 Running 0 56m openstack pod/glance-mysql-0 2/2 Running 0 61m openstack pod/glance-mysql-router-0 2/2 Running 0 56m openstack pod/gnocchi-0 3/3 Running 0 19m openstack pod/gnocchi-mysql-0 2/2 Running 0 22m openstack 
pod/gnocchi-mysql-router-0 2/2 Running 0 19m openstack pod/grafana-agent-0 2/2 Running 0 10m openstack pod/heat-0 4/4 Running 0 29m openstack pod/heat-mysql-0 2/2 Running 0 30m openstack pod/heat-mysql-router-0 2/2 Running 0 29m openstack pod/horizon-0 2/2 Running 0 55m openstack pod/horizon-mysql-0 2/2 Running 0 61m openstack pod/horizon-mysql-router-0 2/2 Running 0 55m openstack pod/keystone-0 2/2 Running 0 55m openstack pod/keystone-mysql-0 2/2 Running 0 61m openstack pod/keystone-mysql-router-0 2/2 Running 0 55m openstack pod/modeloperator-56b4d68fb7-tznnv 1/1 Running 0 62m openstack pod/neutron-0 2/2 Running 0 55m openstack pod/neutron-mysql-0 2/2 Running 0 60m openstack pod/neutron-mysql-router-0 2/2 Running 0 55m openstack pod/nova-0 5/5 Running 0 56m openstack pod/nova-api-mysql-router-0 2/2 Running 0 56m openstack pod/nova-cell-mysql-router-0 2/2 Running 0 56m openstack pod/nova-mysql-0 2/2 Running 0 60m openstack pod/nova-mysql-router-0 2/2 Running 0 56m openstack pod/openstack-exporter-0 2/2 Running 0 22m openstack pod/ovn-central-0 4/4 Running 0 61m openstack pod/ovn-relay-0 2/2 Running 0 61m openstack pod/placement-0 2/2 Running 0 55m openstack pod/placement-mysql-0 2/2 Running 0 61m openstack pod/placement-mysql-router-0 2/2 Running 0 55m openstack pod/rabbitmq-0 2/2 Running 0 61m openstack pod/traefik-0 2/2 Running 0 61m openstack pod/traefik-public-0 2/2 Running 0 60m openstack pod/vault-0 2/2 Running 0 27m NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 63m kube-system service/ck-storage-rawfile-csi-controller ClusterIP None <none> <none> 63m kube-system service/ck-storage-rawfile-csi-node ClusterIP 10.152.183.237 <none> 9100/TCP 63m kube-system service/coredns ClusterIP 10.152.183.37 <none> 53/UDP,53/TCP 63m kube-system service/hubble-peer ClusterIP 10.152.183.23 <none> 443/TCP 63m kube-system service/metrics-server ClusterIP 10.152.183.119 <none> 443/TCP 63m metallb-system 
service/metallb-webhook-service ClusterIP 10.152.183.110 <none> 443/TCP 63m
observability service/alertmanager ClusterIP 10.152.183.36 <none> 9093/TCP,9094/TCP 13m
observability service/alertmanager-endpoints ClusterIP None <none> <none> 13m
observability service/catalogue ClusterIP 10.152.183.190 <none> 80/TCP 14m
observability service/catalogue-endpoints ClusterIP None <none> <none> 14m
observability service/grafana ClusterIP 10.152.183.170 <none> 3000/TCP 13m
observability service/grafana-endpoints ClusterIP None <none> <none> 13m
observability service/loki ClusterIP 10.152.183.201 <none> 3100/TCP 13m
observability service/loki-endpoints ClusterIP None <none> <none> 13m
observability service/modeloperator ClusterIP 10.152.183.59 <none> 17071/TCP 14m
observability service/prometheus ClusterIP 10.152.183.147 <none> 9090/TCP 13m
observability service/prometheus-endpoints ClusterIP None <none> <none> 13m
observability service/traefik ClusterIP 10.152.183.198 <none> 65535/TCP 13m
observability service/traefik-endpoints ClusterIP None <none> <none> 13m
observability service/traefik-lb LoadBalancer 10.152.183.60 172.16.1.205 80:30845/TCP,443:31176/TCP 13m
openstack service/aodh ClusterIP 10.152.183.238 <none> 8042/TCP 20m
openstack service/aodh-endpoints ClusterIP None <none> <none> 20m
openstack service/aodh-mysql ClusterIP 10.152.183.134 <none> 3306/TCP,33060/TCP 23m
openstack service/aodh-mysql-endpoints ClusterIP None <none> <none> 22m
openstack service/aodh-mysql-primary ClusterIP 10.152.183.176 <none> 3306/TCP 19m
openstack service/aodh-mysql-replicas ClusterIP 10.152.183.74 <none> 3306/TCP 19m
openstack service/aodh-mysql-router ClusterIP 10.152.183.200 <none> 6446/TCP,6447/TCP,65535/TCP 20m
openstack service/aodh-mysql-router-endpoints ClusterIP None <none> <none> 20m
openstack service/ceilometer ClusterIP 10.152.183.88 <none> 65535/TCP 23m
openstack service/ceilometer-endpoints ClusterIP None <none> <none> 22m
openstack service/certificate-authority ClusterIP 10.152.183.220 <none> 65535/TCP 61m
openstack service/certificate-authority-endpoints ClusterIP None <none> <none> 61m
openstack service/cinder ClusterIP 10.152.183.160 <none> 8776/TCP 58m
openstack service/cinder-ceph ClusterIP 10.152.183.202 <none> 65535/TCP 58m
openstack service/cinder-ceph-endpoints ClusterIP None <none> <none> 57m
openstack service/cinder-ceph-mysql-router ClusterIP 10.152.183.244 <none> 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/cinder-ceph-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/cinder-endpoints ClusterIP None <none> <none> 57m
openstack service/cinder-mysql ClusterIP 10.152.183.77 <none> 3306/TCP,33060/TCP 61m
openstack service/cinder-mysql-endpoints ClusterIP None <none> <none> 61m
openstack service/cinder-mysql-primary ClusterIP 10.152.183.113 <none> 3306/TCP 55m
openstack service/cinder-mysql-replicas ClusterIP 10.152.183.127 <none> 3306/TCP 55m
openstack service/cinder-mysql-router ClusterIP 10.152.183.92 <none> 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/cinder-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/glance ClusterIP 10.152.183.187 <none> 9292/TCP 58m
openstack service/glance-endpoints ClusterIP None <none> <none> 57m
openstack service/glance-mysql ClusterIP 10.152.183.163 <none> 3306/TCP,33060/TCP 61m
openstack service/glance-mysql-endpoints ClusterIP None <none> <none> 61m
openstack service/glance-mysql-primary ClusterIP 10.152.183.245 <none> 3306/TCP 55m
openstack service/glance-mysql-replicas ClusterIP 10.152.183.246 <none> 3306/TCP 55m
openstack service/glance-mysql-router ClusterIP 10.152.183.116 <none> 6446/TCP,6447/TCP,65535/TCP 58m
openstack service/glance-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/gnocchi ClusterIP 10.152.183.81 <none> 65535/TCP 20m
openstack service/gnocchi-endpoints ClusterIP None <none> <none> 20m
openstack service/gnocchi-mysql ClusterIP 10.152.183.35 <none> 3306/TCP,33060/TCP 23m
openstack service/gnocchi-mysql-endpoints ClusterIP None <none> <none> 22m
openstack service/gnocchi-mysql-primary ClusterIP 10.152.183.146 <none> 3306/TCP 19m
openstack service/gnocchi-mysql-replicas ClusterIP 10.152.183.153 <none> 3306/TCP 19m
openstack service/gnocchi-mysql-router ClusterIP 10.152.183.72 <none> 6446/TCP,6447/TCP,65535/TCP 20m
openstack service/gnocchi-mysql-router-endpoints ClusterIP None <none> <none> 20m
openstack service/grafana-agent ClusterIP 10.152.183.169 <none> 3500/TCP,3600/TCP 10m
openstack service/grafana-agent-endpoints ClusterIP None <none> <none> 10m
openstack service/heat ClusterIP 10.152.183.151 <none> 8004/TCP 30m
openstack service/heat-endpoints ClusterIP None <none> <none> 29m
openstack service/heat-mysql ClusterIP 10.152.183.222 <none> 3306/TCP,33060/TCP 30m
openstack service/heat-mysql-endpoints ClusterIP None <none> <none> 30m
openstack service/heat-mysql-primary ClusterIP 10.152.183.98 <none> 3306/TCP 29m
openstack service/heat-mysql-replicas ClusterIP 10.152.183.80 <none> 3306/TCP 29m
openstack service/heat-mysql-router ClusterIP 10.152.183.249 <none> 6446/TCP,6447/TCP,65535/TCP 30m
openstack service/heat-mysql-router-endpoints ClusterIP None <none> <none> 29m
openstack service/horizon ClusterIP 10.152.183.234 <none> 65535/TCP 57m
openstack service/horizon-endpoints ClusterIP None <none> <none> 57m
openstack service/horizon-mysql ClusterIP 10.152.183.131 <none> 3306/TCP,33060/TCP 61m
openstack service/horizon-mysql-endpoints ClusterIP None <none> <none> 61m
openstack service/horizon-mysql-primary ClusterIP 10.152.183.126 <none> 3306/TCP 55m
openstack service/horizon-mysql-replicas ClusterIP 10.152.183.145 <none> 3306/TCP 55m
openstack service/horizon-mysql-router ClusterIP 10.152.183.140 <none> 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/horizon-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/keystone ClusterIP 10.152.183.63 <none> 5000/TCP 57m
openstack service/keystone-endpoints ClusterIP None <none> <none> 57m
openstack service/keystone-mysql ClusterIP 10.152.183.48 <none> 3306/TCP,33060/TCP 61m
openstack service/keystone-mysql-endpoints ClusterIP None <none> <none> 61m
openstack service/keystone-mysql-primary ClusterIP 10.152.183.159 <none> 3306/TCP 55m
openstack service/keystone-mysql-replicas ClusterIP 10.152.183.114 <none> 3306/TCP 55m
openstack service/keystone-mysql-router ClusterIP 10.152.183.108 <none> 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/keystone-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/modeloperator ClusterIP 10.152.183.26 <none> 17071/TCP 62m
openstack service/neutron ClusterIP 10.152.183.212 <none> 9696/TCP 57m
openstack service/neutron-endpoints ClusterIP None <none> <none> 57m
openstack service/neutron-mysql ClusterIP 10.152.183.75 <none> 3306/TCP,33060/TCP 60m
openstack service/neutron-mysql-endpoints ClusterIP None <none> <none> 60m
openstack service/neutron-mysql-primary ClusterIP 10.152.183.55 <none> 3306/TCP 55m
openstack service/neutron-mysql-replicas ClusterIP 10.152.183.34 <none> 3306/TCP 55m
openstack service/neutron-mysql-router ClusterIP 10.152.183.197 <none> 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/neutron-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/nova ClusterIP 10.152.183.104 <none> 8774/TCP 58m
openstack service/nova-api-mysql-router ClusterIP 10.152.183.189 <none> 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/nova-api-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/nova-cell-mysql-router ClusterIP 10.152.183.178 <none> 6446/TCP,6447/TCP,65535/TCP 58m
openstack service/nova-cell-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/nova-endpoints ClusterIP None <none> <none> 57m
openstack service/nova-mysql ClusterIP 10.152.183.24 <none> 3306/TCP,33060/TCP 61m
openstack service/nova-mysql-endpoints ClusterIP None <none> <none> 61m
openstack service/nova-mysql-primary ClusterIP 10.152.183.130 <none> 3306/TCP 56m
openstack service/nova-mysql-replicas ClusterIP 10.152.183.186 <none> 3306/TCP 56m
openstack service/nova-mysql-router ClusterIP 10.152.183.152 <none> 6446/TCP,6447/TCP,65535/TCP 58m
openstack service/nova-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/openstack-exporter ClusterIP 10.152.183.100 <none> 9180/TCP 23m
openstack service/openstack-exporter-endpoints ClusterIP None <none> <none> 22m
openstack service/ovn-central ClusterIP 10.152.183.194 <none> 6641/TCP,6642/TCP 62m
openstack service/ovn-central-endpoints ClusterIP None <none> <none> 61m
openstack service/ovn-relay LoadBalancer 10.152.183.149 172.16.1.201 6642:32293/TCP 62m
openstack service/ovn-relay-endpoints ClusterIP None <none> <none> 62m
openstack service/placement ClusterIP 10.152.183.199 <none> 8778/TCP 57m
openstack service/placement-endpoints ClusterIP None <none> <none> 57m
openstack service/placement-mysql ClusterIP 10.152.183.83 <none> 3306/TCP,33060/TCP 61m
openstack service/placement-mysql-endpoints ClusterIP None <none> <none> 61m
openstack service/placement-mysql-primary ClusterIP 10.152.183.188 <none> 3306/TCP 55m
openstack service/placement-mysql-replicas ClusterIP 10.152.183.165 <none> 3306/TCP 55m
openstack service/placement-mysql-router ClusterIP 10.152.183.248 <none> 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/placement-mysql-router-endpoints ClusterIP None <none> <none> 57m
openstack service/rabbitmq LoadBalancer 10.152.183.150 172.16.1.202 5672:31615/TCP,15672:31040/TCP 61m
openstack service/rabbitmq-endpoints ClusterIP None <none> <none> 61m
openstack service/traefik ClusterIP 10.152.183.125 <none> 65535/TCP 61m
openstack service/traefik-endpoints ClusterIP None <none> <none> 61m
openstack service/traefik-lb LoadBalancer 10.152.183.221 172.16.1.203 80:32485/TCP,443:31534/TCP 61m
openstack service/traefik-public ClusterIP 10.152.183.54 <none> 65535/TCP 61m
openstack service/traefik-public-endpoints ClusterIP None <none> <none> 61m
openstack service/traefik-public-lb LoadBalancer 10.152.183.115 172.16.1.204 80:30599/TCP,443:30927/TCP 60m
openstack service/vault ClusterIP 10.152.183.78 <none> 8200/TCP 27m
openstack service/vault-endpoints ClusterIP None <none> <none> 27m

Once MicroStack has been deployed, you can manage workloads either manually (that is, through the openstack CLI) or with Juju:

Manage workloads with Juju | Canonical

I will, however, use Pulumi. Pulumi is a modern infrastructure-as-code (IaC) platform that lets you manage and provision cloud infrastructure with general-purpose programming languages (TypeScript, JavaScript, Python, Go, .NET, Java) as well as YAML.

Pulumi - Infrastructure as Code, Secrets Management, and AI

Pulumi takes a declarative approach to defining infrastructure: you specify the desired state, and Pulumi creates, updates, and deletes resources to reach it. This is more intuitive than an imperative approach, where every step towards the desired state has to be spelled out explicitly.
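The declarative model can be sketched in a few lines of plain Python: you describe the resources you want, and an engine diffs that desired state against the current state to produce a plan. This is a toy illustration of the idea, not Pulumi's actual engine:

```python
# Toy illustration of declarative IaC: diff desired state against current
# state to produce a create/update/delete plan (not Pulumi's real engine).

def plan(current: dict, desired: dict) -> dict:
    """Return the operations needed to move `current` to `desired`."""
    return {
        "create": sorted(set(desired) - set(current)),
        "delete": sorted(set(current) - set(desired)),
        "update": sorted(
            name for name in set(current) & set(desired)
            if current[name] != desired[name]
        ),
    }

current = {"instance1": {"flavor": "m1.tiny"}}
desired = {
    "instance1": {"flavor": "m1.tiny"},
    "test-pulumi": {"flavor": "m1.small", "image": "ubuntu"},
}

print(plan(current, desired))
# {'create': ['test-pulumi'], 'delete': [], 'update': []}
```

This is exactly the shape of the plan `pulumi preview` prints before an update: resources to create, delete, or change, computed from the declared state rather than from a hand-written sequence of steps.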

The Pulumi command-line interface is the main tool for managing and deploying infrastructure, so I install it at this stage:

(base) ubuntu@microstack:~$ curl -fsSL https://get.pulumi.com | sh
=== Installing Pulumi 3.144.1 ===
+ Downloading https://github.com/pulumi/pulumi/releases/download/v3.144.1/pulumi-v3.144.1-linux-x64.tar.gz...
  % Total % Received % Xferd Average Speed Time Time Time Current
                             Dload Upload Total Spent Left Speed
  0     0    0     0    0 0      0      0 --:--:-- --:--:-- --:--:--    0
100 80.1M  100 80.1M    0 0   103M      0 --:--:-- --:--:-- --:--:-- 433M
+ Extracting to /home/ubuntu/.pulumi/bin
+ Adding /home/ubuntu/.pulumi/bin to $PATH in /home/ubuntu/.bashrc
=== Pulumi is now installed! 🍹 ===
+ Please restart your shell or add /home/ubuntu/.pulumi/bin to your $PATH
+ Get started with Pulumi: https://www.pulumi.com/docs/quickstart
(base) ubuntu@microstack:~$ source .bashrc
(base) ubuntu@microstack:~$ pulumi
Pulumi - Modern Infrastructure as Code

To begin working with Pulumi, run the `pulumi new` command:

    $ pulumi new

This will prompt you to create a new project for your cloud and language of choice.

The most common commands from there are:

    - pulumi up      : Deploy code and/or resource changes
    - pulumi stack   : Manage instances of your project
    - pulumi config  : Alter your stack's configuration or secrets
    - pulumi destroy : Tear down your stack's resources entirely

For more information, please visit the project page: https://www.pulumi.com/docs/

Usage:
  pulumi [command]

Stack Management Commands:
  new        Create a new Pulumi project
  config     Manage configuration
  stack      Manage stacks and view stack state
  console    Opens the current stack in the Pulumi Console
  import     Import resources into an existing stack
  refresh    Refresh the resources in a stack
  state      Edit the current stack's state
  install    Install packages and plugins for the current program or policy pack.

Deployment Commands:
  up         Create or update the resources in a stack
  destroy    Destroy all existing resources in the stack
  preview    Show a preview of updates to a stack's resources
  cancel     Cancel a stack's currently running update, if any

Environment Commands:
  env        Manage environments

Pulumi Cloud Commands:
  login      Log in to the Pulumi Cloud
  logout     Log out of the Pulumi Cloud
  whoami     Display the current logged-in user
  org        Manage Organization configuration

Policy Management Commands:
  policy     Manage resource policies

Plugin Commands:
  plugin     Manage language and resource provider plugins
  schema     Analyze package schemas
  package    Work with Pulumi packages

Other Commands:
  version         Print Pulumi's version number
  about           Print information about the Pulumi environment.
  gen-completion  Generate completion scripts for the Pulumi CLI

Experimental Commands:
  convert    Convert Pulumi programs from a supported source program into other supported languages
  watch      Continuously update the resources in a stack
  logs       Show aggregated resource logs for a stack

Flags:
      --color string                Colorize output. Choices are: always, never, raw, auto (default "auto")
  -C, --cwd string                  Run pulumi as if it had been started in another directory
      --disable-integrity-checking  Disable integrity checking of checkpoint files
  -e, --emoji                       Enable emojis in the output
  -Q, --fully-qualify-stack-names   Show fully-qualified stack names
  -h, --help                        help for pulumi
      --logflow                     Flow log settings to child processes (like plugins)
      --logtostderr                 Log to stderr instead of to files
      --memprofilerate int          Enable more precise (and expensive) memory allocation profiles by setting runtime.MemProfileRate
      --non-interactive             Disable interactive mode for all commands
      --profiling string            Emit CPU and memory profiles and an execution trace to '[filename].[pid].{cpu,mem,trace}', respectively
      --tracing file:               Emit tracing to the specified endpoint. Use the file: scheme to write tracing data to a local file
  -v, --verbose int                 Enable verbose logging (e.g., v=3); anything >3 is very verbose

Use `pulumi [command] --help` for more information about a command.

I use the machine's local file system as the backend for storing Pulumi's state files:

(base) ubuntu@microstack:~$ pulumi login --local
Logged in to microstack as ubuntu (file://~)

I use Miniconda to quickly get a ready-to-use Python environment:

Miniconda - Anaconda documentation

ubuntu@microstack:~$ mkdir -p ~/miniconda3
ubuntu@microstack:~$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
ubuntu@microstack:~$ bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
ubuntu@microstack:~$ rm ~/miniconda3/miniconda.sh
PREFIX=/home/ubuntu/miniconda3
Unpacking payload ...
Installing base environment...
Preparing transaction: ...working... done
Executing transaction: ...working... done
installation finished.
ubuntu@microstack:~$ source ~/miniconda3/bin/activate
(base) ubuntu@microstack:~$ conda init --all
no change /home/ubuntu/miniconda3/condabin/conda
no change /home/ubuntu/miniconda3/bin/conda
no change /home/ubuntu/miniconda3/bin/conda-env
no change /home/ubuntu/miniconda3/bin/activate
no change /home/ubuntu/miniconda3/bin/deactivate
no change /home/ubuntu/miniconda3/etc/profile.d/conda.sh
no change /home/ubuntu/miniconda3/etc/fish/conf.d/conda.fish
no change /home/ubuntu/miniconda3/shell/condabin/Conda.psm1
no change /home/ubuntu/miniconda3/shell/condabin/conda-hook.ps1
no change /home/ubuntu/miniconda3/lib/python3.12/site-packages/xontrib/conda.xsh
no change /home/ubuntu/miniconda3/etc/profile.d/conda.csh
modified /home/ubuntu/.bashrc
modified /home/ubuntu/.zshrc
modified /home/ubuntu/.config/fish/config.fish
modified /home/ubuntu/.xonshrc
modified /home/ubuntu/.tcshrc

==> For changes to take effect, close and re-open your current shell. <==

(base) ubuntu@microstack:~$ source .bashrc
(base) ubuntu@microstack:~$ type pip
pip is /home/ubuntu/miniconda3/bin/pip

Next, I create a project with the OpenStack provider and its Python template for Pulumi:

(base) ubuntu@microstack:~$ mkdir test
(base) ubuntu@microstack:~$ cd test
(base) ubuntu@microstack:~/test$ pulumi new openstack-python
This command will walk you through creating a new Pulumi project.

Enter a value or leave blank to accept the (default), and press <ENTER>.
Press ^C at any time to quit.

Project name (test):
Project description (A minimal OpenStack Python Pulumi program):
Created project 'test'

Stack name (dev):
Enter your passphrase to protect config/secrets:
Re-enter your passphrase to confirm:
Created stack 'dev'

The toolchain to use for installing dependencies and running the program pip
Installing dependencies...

Creating virtual environment...
Finished creating virtual environment
Updating pip, setuptools, and wheel in virtual environment...
Requirement already satisfied: pip in ./venv/lib/python3.12/site-packages (24.3.1)
Collecting setuptools
  Downloading setuptools-75.6.0-py3-none-any.whl.metadata (6.7 kB)
Collecting wheel
  Downloading wheel-0.45.1-py3-none-any.whl.metadata (2.3 kB)
Downloading setuptools-75.6.0-py3-none-any.whl (1.2 MB)
   ━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 29.2 MB/s eta 0:00:00
Downloading wheel-0.45.1-py3-none-any.whl (72 kB)
Installing collected packages: wheel, setuptools
Successfully installed setuptools-75.6.0 wheel-0.45.1
Finished updating
Installing dependencies in virtual environment...
Collecting pulumi<4.0.0,>=3.0.0 (from -r requirements.txt (line 1))
  Downloading pulumi-3.144.1-py3-none-any.whl.metadata (12 kB)
Collecting pulumi-openstack<4.0.0,>=3.0.0 (from -r requirements.txt (line 2))
  Downloading pulumi_openstack-3.15.2-py3-none-any.whl.metadata (9.2 kB)
Collecting protobuf~=4.21 (from pulumi<4.0.0,>=3.0.0->-r requirements.txt (line 1))
  Downloading protobuf-4.25.5-cp37-abi3-manylinux2014_x86_64.whl.metadata (541 bytes)
Collecting grpcio~=1.66.2 (from pulumi<4.0.0,>=3.0.0->-r requirements.txt (line 1))
  Downloading grpcio-1.66.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.9 kB)
Collecting dill~=0.3 (from pulumi<4.0.0,>=3.0.0->-r requirements.txt (line 1))
  Downloading dill-0.3.9-py3-none-any.whl.metadata (10 kB)
Collecting six~=1.12 (from pulumi<4.0.0,>=3.0.0->-r requirements.txt (line 1))
  Downloading six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting semver~=2.13 (from pulumi<4.0.0,>=3.0.0->-r requirements.txt (line 1))
  Downloading semver-2.13.0-py2.py3-none-any.whl.metadata (5.0 kB)
Collecting pyyaml~=6.0 (from pulumi<4.0.0,>=3.0.0->-r requirements.txt (line 1))
  Downloading PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting debugpy~=1.8.7 (from pulumi<4.0.0,>=3.0.0->-r requirements.txt (line 1))
  Downloading debugpy-1.8.11-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.1 kB)
Collecting parver>=0.2.1 (from pulumi-openstack<4.0.0,>=3.0.0->-r requirements.txt (line 2))
  Downloading parver-0.5-py3-none-any.whl.metadata (2.7 kB)
Collecting arpeggio>=1.7 (from parver>=0.2.1->pulumi-openstack<4.0.0,>=3.0.0->-r requirements.txt (line 2))
  Downloading Arpeggio-2.0.2-py2.py3-none-any.whl.metadata (2.4 kB)
Collecting attrs>=19.2 (from parver>=0.2.1->pulumi-openstack<4.0.0,>=3.0.0->-r requirements.txt (line 2))
  Downloading attrs-24.3.0-py3-none-any.whl.metadata (11 kB)
Downloading pulumi-3.144.1-py3-none-any.whl (294 kB)
Downloading pulumi_openstack-3.15.2-py3-none-any.whl (551 kB)
   ━━━━━━━━━━━━━━━━━━━━ 551.5/551.5 kB 19.0 MB/s eta 0:00:00
Downloading debugpy-1.8.11-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.2 MB)
   ━━━━━━━━━━━━━━━━━━━━ 4.2/4.2 MB 99.4 MB/s eta 0:00:00
Downloading dill-0.3.9-py3-none-any.whl (119 kB)
Downloading grpcio-1.66.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.8 MB)
   ━━━━━━━━━━━━━━━━━━━━ 5.8/5.8 MB 10.8 MB/s eta 0:00:00
Downloading parver-0.5-py3-none-any.whl (15 kB)
Downloading protobuf-4.25.5-cp37-abi3-manylinux2014_x86_64.whl (294 kB)
Downloading PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (767 kB)
   ━━━━━━━━━━━━━━━━━━━━ 767.5/767.5 kB 120.7 MB/s eta 0:00:00
Downloading semver-2.13.0-py2.py3-none-any.whl (12 kB)
Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
Downloading Arpeggio-2.0.2-py2.py3-none-any.whl (55 kB)
Downloading attrs-24.3.0-py3-none-any.whl (63 kB)
Installing collected packages: arpeggio, six, semver, pyyaml, protobuf, grpcio, dill, debugpy, attrs, pulumi, parver, pulumi-openstack
Successfully installed arpeggio-2.0.2 attrs-24.3.0 debugpy-1.8.11 dill-0.3.9 grpcio-1.66.2 parver-0.5 protobuf-4.25.5 pulumi-3.144.1 pulumi-openstack-3.15.2 pyyaml-6.0.2 semver-2.13.0 six-1.17.0
Finished installing dependencies
Finished installing dependencies

Your new project is ready to go!

To perform an initial deployment, run `pulumi up`

(base) ubuntu@microstack:~/test$ ls
Pulumi.dev.yaml Pulumi.yaml __main__.py requirements.txt venv
(base) ubuntu@microstack:~/test$ cat Pulumi.yaml
name: test
description: A minimal OpenStack Python Pulumi program
runtime:
  name: python
  options:
    toolchain: pip
    virtualenv: venv
config:
  pulumi:tags:
    value:
      pulumi:template: openstack-python

I edit the main Python file so the stack deploys a new test Ubuntu instance:

(base) ubuntu@microstack:~/test$ cat __main__.py
"""An OpenStack Python Pulumi program"""

import pulumi
from pulumi_openstack import compute

# Create an OpenStack resource (Compute Instance)
instance = compute.Instance('test-pulumi',
                            flavor_name='m1.small',
                            key_pair="sunbeam",
                            image_name='ubuntu')

# Export the IP of the instance
pulumi.export('instance_ip', instance.access_ip_v4)
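Note that `instance.access_ip_v4` in this program is not a plain string at the time the code runs: it is a Pulumi `Output`, a value that only becomes known during deployment and must be transformed with `apply` rather than read directly. A toy mock of the concept (illustration only, not the real `pulumi.Output` class):

```python
# Toy mock of Pulumi's Output concept: a value known only after deployment,
# transformed with .apply() instead of being read directly.

class Output:
    def __init__(self, resolve):
        self._resolve = resolve  # callable producing the value at deploy time

    def apply(self, fn):
        # Compose the transformation; nothing runs until resolution.
        return Output(lambda: fn(self._resolve()))

    def resolve(self):
        # In real Pulumi the engine resolves outputs; here we do it by hand.
        return self._resolve()

# Pretend the engine learns the IP once the instance is created:
instance_ip = Output(lambda: "192.168.0.227")
ssh_cmd = instance_ip.apply(lambda ip: f"ssh ubuntu@{ip}")

print(ssh_cmd.resolve())  # ssh ubuntu@192.168.0.227
```

This is why `pulumi.export('instance_ip', instance.access_ip_v4)` works: the export registers the output, and the engine prints its resolved value once the instance exists.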

I then deploy my stack with Pulumi, using the demo tenant's environment variables:

(base) ubuntu@microstack:~/test$ source ../demo-openrc
(base) ubuntu@microstack:~/test$ pulumi up
Enter your passphrase to unlock config/secrets
(set PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE to remember):
Enter your passphrase to unlock config/secrets
Previewing update (dev):

     Type                           Name         Plan    Info
 +   pulumi:pulumi:Stack            test-dev     create  1 warning
 +   └─ openstack:compute:Instance  test-pulumi  create

Diagnostics:
  pulumi:pulumi:Stack (test-dev):
    warning: provider config warning: Users not using loadbalancer resources can ignore this message.
    Support for neutron-lbaas will be removed on next major release. Octavia will be the only supported
    method for loadbalancer resources. Users using octavia will have to remove 'use_octavia' option from
    the provider configuration block. Users using neutron-lbaas will have to migrate/upgrade to octavia.

Outputs:
    instance_ip: output<string>

Resources:
    + 2 to create

Do you want to perform this update? yes
Updating (dev):

     Type                           Name         Status         Info
 +   pulumi:pulumi:Stack            test-dev     created (15s)  1 warning
 +   └─ openstack:compute:Instance  test-pulumi  created (15s)

Diagnostics:
  pulumi:pulumi:Stack (test-dev):
    warning: provider config warning: Users not using loadbalancer resources can ignore this message.
    Support for neutron-lbaas will be removed on next major release. Octavia will be the only supported
    method for loadbalancer resources. Users using octavia will have to remove 'use_octavia' option from
    the provider configuration block. Users using neutron-lbaas will have to migrate/upgrade to octavia.

Outputs:
    instance_ip: "192.168.0.227"

Resources:
    + 2 created

Duration: 16s
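The `demo-openrc` file sourced above simply exports the standard `OS_*` variables (auth URL, credentials, project) that the Pulumi OpenStack provider, like the `openstack` CLI, reads from the environment. A small sketch of how such a file maps to provider settings (the sample content below is hypothetical, not the actual `demo-openrc`):

```python
# Sketch: parse an openrc-style file into a dict of OS_* variables.
# The sample content below is hypothetical, not the actual demo-openrc.

import shlex

def parse_openrc(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("export "):
            key, _, value = line[len("export "):].partition("=")
            # shlex handles quoted values like "demo project"
            env[key] = shlex.split(value)[0] if value else ""
    return env

sample = """\
export OS_AUTH_URL=http://172.16.1.204:80/openstack-keystone
export OS_USERNAME=demo
export OS_PASSWORD=secret
export OS_PROJECT_NAME=demo
"""

print(parse_openrc(sample)["OS_USERNAME"])  # demo
```

Because the provider picks these variables up from the environment, no credentials need to appear in `Pulumi.dev.yaml` or in the program itself.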

The instance is created…


(base) ubuntu@microstack:~/test$ openstack server list
+--------------------------------------+---------------------+--------+-----------------------------------------+--------+----------+
| ID                                   | Name                | Status | Networks                                | Image  | Flavor   |
+--------------------------------------+---------------------+--------+-----------------------------------------+--------+----------+
| 6ec4c753-92c5-4221-a76d-8045638efd32 | test-pulumi-54bdcae | ACTIVE | demo-network=192.168.0.227              | ubuntu | m1.small |
| efe46971-56f4-4da4-9c6e-eebee2795b72 | instance1           | ACTIVE | demo-network=172.16.2.31, 192.168.0.166 | ubuntu | m1.tiny  |
+--------------------------------------+---------------------+--------+-----------------------------------------+--------+----------+
(base) ubuntu@microstack:~/test$ openstack server show test-pulumi-54bdcae --fit
+-------------------------------------+------------------------------------------------------------------------------+
| Field                               | Value                                                                        |
+-------------------------------------+------------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                       |
| OS-EXT-AZ:availability_zone         | nova                                                                         |
| OS-EXT-SRV-ATTR:host                | None                                                                         |
| OS-EXT-SRV-ATTR:hostname            | test-pulumi-54bdcae                                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                         |
| OS-EXT-SRV-ATTR:instance_name       | None                                                                         |
| OS-EXT-SRV-ATTR:kernel_id           | None                                                                         |
| OS-EXT-SRV-ATTR:launch_index        | None                                                                         |
| OS-EXT-SRV-ATTR:ramdisk_id          | None                                                                         |
| OS-EXT-SRV-ATTR:reservation_id      | None                                                                         |
| OS-EXT-SRV-ATTR:root_device_name    | None                                                                         |
| OS-EXT-SRV-ATTR:user_data           | None                                                                         |
| OS-EXT-STS:power_state              | Running                                                                      |
| OS-EXT-STS:task_state               | None                                                                         |
| OS-EXT-STS:vm_state                 | active                                                                       |
| OS-SRV-USG:launched_at              | 2024-12-24T10:36:58.000000                                                   |
| OS-SRV-USG:terminated_at            | None                                                                         |
| accessIPv4                          |                                                                              |
| accessIPv6                          |                                                                              |
| addresses                           | demo-network=192.168.0.227                                                   |
| config_drive                        |                                                                              |
| created                             | 2024-12-24T10:36:53Z                                                         |
| description                         | test-pulumi-54bdcae                                                          |
| flavor                              | description=, disk='30', ephemeral='0', , id='m1.small', is_disabled=,       |
|                                     | is_public='True', location=, name='m1.small', original_name='m1.small',      |
|                                     | ram='2048', rxtx_factor=, swap='0', vcpus='1'                                |
| hostId                              | 021ebc639163d77a5eb8018996d0b8aad50066a8552682313f3f293f                     |
| host_status                         | None                                                                         |
| id                                  | 6ec4c753-92c5-4221-a76d-8045638efd32                                         |
| image                               | ubuntu (ff3ccb3b-f44f-4b50-a030-20267c302d75)                                |
| key_name                            | sunbeam                                                                      |
| locked                              | False                                                                        |
| locked_reason                       | None                                                                         |
| name                                | test-pulumi-54bdcae                                                          |
| progress                            | 0                                                                            |
| project_id                          | 8b373f844efd47c8b38c4f1bcdcfba2a                                             |
| properties                          |                                                                              |
| security_groups                     | name='default'                                                               |
| server_groups                       | []                                                                           |
| status                              | ACTIVE                                                                       |
| tags                                |                                                                              |
| trusted_image_certificates          | None                                                                         |
| updated                             | 2024-12-24T10:36:59Z                                                         |
| user_id                             | 114709b3342c45f295d116c63c51884a                                             |
| volumes_attached                    |                                                                              |
+-------------------------------------+------------------------------------------------------------------------------+

And we connect to it after attaching a floating IP address:

(base) ubuntu@microstack:~$ ssh -i snap/openstack/637/sunbeam ubuntu@172.16.2.124
Welcome to Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-127-generic x86_64)

 * Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/pro

 System information as of Tue Dec 24 10:42:48 UTC 2024

  System load:  0.0               Processes:              89
  Usage of /:   5.0% of 28.89GB   Users logged in:        0
  Memory usage: 9%                IPv4 address for ens3:  192.168.0.227
  Swap usage:   0%

Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

The list of available updates is more than a week old.
To check for new updates run: sudo apt update

New release '24.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Last login: Tue Dec 24 10:42:48 2024 from 172.16.2.1
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

The instance is just as easily removed by destroying the Pulumi stack:

(base) ubuntu@microstack:~/test$ pulumi destroy
Enter your passphrase to unlock config/secrets
(set PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE to remember):
Enter your passphrase to unlock config/secrets
Previewing destroy (dev):

     Type                           Name         Plan
 -   pulumi:pulumi:Stack            test-dev     delete
 -   └─ openstack:compute:Instance  test-pulumi  delete

Outputs:
  - instance_ip: "192.168.0.227"

Resources:
    - 2 to delete

Do you want to perform this destroy? yes
Destroying (dev):

     Type                           Name         Status
 -   pulumi:pulumi:Stack            test-dev     deleted (0.00s)
 -   └─ openstack:compute:Instance  test-pulumi  deleted (10s)

Outputs:
  - instance_ip: "192.168.0.227"

Resources:
    - 2 deleted

Duration: 11s

The resources in the stack have been deleted, but the history and configuration
associated with the stack are still maintained. If you want to remove the stack
completely, run `pulumi stack rm dev`.

(base) ubuntu@microstack:~/test$ openstack server list
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
| ID                                   | Name      | Status | Networks                                | Image  | Flavor  |
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
| efe46971-56f4-4da4-9c6e-eebee2795b72 | instance1 | ACTIVE | demo-network=172.16.2.31, 192.168.0.166 | ubuntu | m1.tiny |
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
(base) ubuntu@microstack:~/test$ pulumi stack rm dev
This will permanently remove the 'dev' stack!
Please confirm that this is what you'd like to do by typing `dev`: dev
Stack 'dev' has been removed!
(base) ubuntu@microstack:~/test$ pulumi stack ls
NAME LAST UPDATE RESOURCE COUNT

The Canonical Observability Stack (COS) was deployed earlier. MicroStack automatically propagates metrics and default dashboards, so you can monitor the health of a single- or multi-node Sunbeam deployment in Grafana (whose credentials were retrieved via Juju) with no additional configuration:

All of this, in this particular case, comes at a substantial cost in resource consumption…

For more advanced deployments, MicroStack also supports multi-node clusters. You can follow the detailed tutorial in Canonical's official documentation to deploy a multi-node OpenStack cluster with Sunbeam and MAAS:

As we have seen, MicroStack is designed to be highly customizable, allowing the integration of various storage backends such as Cinder (block storage) and Swift (object storage).

Object Storage | Canonical

Organizations can tailor the platform to their specific needs by integrating dedicated plug-ins and extensions. MicroStack delivers a lightweight private cloud that is easy to install and operate, making it a good fit for teams that want to stand up a cloud environment quickly, without the complexity traditionally associated with OpenStack. With its flexibility and customizability, MicroStack is a robust, adaptable solution for a wide range of cloud infrastructure needs, including edge computing…
