Karim

Posted on • Originally published at deep75.Medium

Deploying and globally exposing a multi-cluster application with K8GB and Liqo…


https://docs.liqo.io/en/v0.5.2/examples/global-ingress.html

Global server load balancing (GSLB) is a method of distributing Internet traffic across a network of servers around the world, resulting in a faster and more reliable user experience, as this article explains very well:

[What is Global Server Load Balancing? Definition & FAQs | Avi Networks](https://avinetworks.com/glossary/global-server-load-balancing-2/#:~:text=Global%20server%20load%20balancing%20(GSLB,servers%20that%20are%20dispersed%20geographically.)

The question is whether it is possible to get a global load balancer spanning several Kubernetes clusters, and that is exactly what the open-source K8GB project offers: the ability to load-balance HTTP requests across multiple Kubernetes clusters.

In short, it provides the ability to route HTTP requests to a local load balancer (Kubernetes Ingress controller instances) based on the health of the services (Pods) in several, potentially geographically dispersed, Kubernetes clusters, whether on-premises or in the cloud, with additional options for the criteria to use (round robin, weighting, active/passive, etc.) when deciding which local load balancer/ingress instance the name should resolve to…

k8gb

The goal of this project is to provide a cloud-native GSLB implementation that meets the following requirements:

  • Is lightweight in terms of resource requirements and runtime complexity
  • Runs well inside a Kubernetes cluster
  • Can run as multiple instances (potentially across several data centers or clouds) with shared state, for high availability/redundancy
  • Relies on other proven and popular open-source projects where appropriate, without reinventing the wheel where it is not necessary…
  • Lets end users define their GSLB configuration through Kubernetes-native means (resource annotations, CRDs, etc.; see the sketch after this list)
  • Provides observability of the operational health of the solution…
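For instance, a GSLB configuration can be declared directly as a Gslb custom resource. The snippet below is a hedged sketch modeled on the sample resources used later in this walkthrough (the host, service name, and port come from that demo and are illustrative):

```bash
# Declare a round-robin Gslb for the demo hostname (sketch; mirrors the sample CRs used below)
kubectl apply -f - <<EOF
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: test-gslb
  namespace: test-gslb
spec:
  ingress:
    ingressClassName: nginx
    rules:
      - host: roundrobin.cloud.example.com
        http:
          paths:
            - path: /
              pathType: ImplementationSpecific
              backend:
                service:
                  name: frontend-podinfo
                  port:
                    number: 9898
  strategy:
    type: roundRobin        # the demo also uses a failover strategy later on
    dnsTtlSeconds: 30
EOF
```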

I set out to explore this solution by creating, once again, an Ubuntu 22.04 LTS instance on DigitalOcean:

doctl compute droplet create \
  --image ubuntu-22-04-x64 \
  --size s-8vcpu-16gb \
  --region ams3 \
  --vpc-uuid 2812643c-3dd8-484a-b9f4-1fce42903731 \
  k8gb

For a quick demo, I clone the project's GitHub repository onto the instance, grabbing the k3d (K3s in Docker) binary along the way, once Docker is installed…

root@k8gb:~# curl -fsSL https://get.docker.com | sh - # Executing docker install script, commit: 4f282167c425347a931ccfd95cc91fab041d414f + sh -c apt-get update -qq >/dev/null + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null + sh -c mkdir -p /etc/apt/keyrings && chmod -R 0755 /etc/apt/keyrings + sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg + sh -c chmod a+r /etc/apt/keyrings/docker.gpg + sh -c echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" > /etc/apt/sources.list.d/docker.list + sh -c apt-get update -qq >/dev/null + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-scan-plugin >/dev/null + version_gte 20.10 + [-z] + return 0 + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce-rootless-extras >/dev/null + sh -c docker version Client: Docker Engine - Community Version: 20.10.21 API version: 1.41 Go version: go1.18.7 Git commit: baeda1f Built: Tue Oct 25 18:01:58 2022 OS/Arch: linux/amd64 Context: default Experimental: true Server: Docker Engine - Community Engine: Version: 20.10.21 API version: 1.41 (minimum version 1.12) Go version: go1.18.7 Git commit: 3056208 Built: Tue Oct 25 17:59:49 2022 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.6.9 GitCommit: 1c90a442489720eec95342e1789ee8a5e1b9536f runc: Version: 1.1.4 GitCommit: v1.1.4-0-g5fd4c4d docker-init: Version: 0.19.0 GitCommit: de40ad0 ================================================================================ To run Docker as a non-privileged user, consider setting up the Docker daemon in rootless mode for your user: dockerd-rootless-setuptool.sh install Visit https://docs.docker.com/go/rootless/ to learn about rootless mode. To run the Docker daemon as a fully privileged service, but granting non-root users access, refer to https://docs.docker.com/go/daemon-access/ WARNING: Access to the remote API on a privileged Docker daemon is equivalent to root access on the host. Refer to the 'Docker daemon attack surface' documentation for details: https://docs.docker.com/go/attack-surface/ ================================================================================ root@k8gb:~# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES root@k8gb:~# git clone https://github.com/k8gb-io/k8gb Cloning into 'k8gb'... remote: Enumerating objects: 11114, done. remote: Counting objects: 100% (12/12), done. remote: Compressing objects: 100% (12/12), done. remote: Total 11114 (delta 2), reused 2 (delta 0), pack-reused 11102 Receiving objects: 100% (11114/11114), 10.55 MiB | 15.73 MiB/s, done. Resolving deltas: 100% (5805/5805), done. root@k8gb:~# cd k8gb/ root@k8gb:~/k8gb# curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash Preparing to install k3d into /usr/local/bin k3d installed into /usr/local/bin/k3d Run 'k3d --help' to see what you can do with it. root@k8gb:~/k8gb# root@k8gb:~/k8gb# k3d --help https://k3d.io/ k3d is a wrapper CLI that helps you to easily create k3s clusters inside docker. Nodes of a k3d cluster are docker containers running a k3s image. All Nodes of a k3d cluster are part of the same docker network. 
Usage: k3d [flags] k3d [command] Available Commands: cluster Manage cluster(s) completion Generate completion scripts for [bash, zsh, fish, powershell | psh] config Work with config file(s) help Help about any command image Handle container images. kubeconfig Manage kubeconfig(s) node Manage node(s) registry Manage registry/registries version Show k3d and default k3s version Flags: -h, --help help for k3d --timestamps Enable Log timestamps --trace Enable super verbose output (trace logging) --verbose Enable verbose output (debug logging) --version Show k3d and default k3s version Use "k3d [command] --help" for more information about a command. root@k8gb:~/k8gb# snap install kubectl --classic snap "kubectl" is already installed, see 'snap help refresh' root@k8gb:~/k8gb# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 11156 100 11156 0 0 51694 0 --:--:-- --:--:-- --:--:-- 51888 Downloading https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz Verifying checksum... Done. Preparing to install helm into /usr/local/bin helm installed into /usr/local/bin/helm 

With the basic tooling in place, I launch the demo, which creates three k3s clusters locally: an edge DNS cluster plus two local k3s clusters via k3d. It exposes the associated CoreDNS service for UDP DNS traffic and installs K8GB with test applications and two sample Gslb resources on top.

This setup is suited to local scenarios and works without depending on an external DNS provider.

root@k8gb:~/k8gb# make deploy-full-local-setup Creating 3 k8s clusters make create-local-cluster CLUSTER_NAME=edge-dns make[1]: Entering directory '/root/k8gb' Create local cluster edge-dns k3d cluster create -c k3d/edge-dns.yaml INFO[0000] Using config file k3d/edge-dns.yaml (k3d.io/v1alpha4#simple) INFO[0000] Prep: Network INFO[0000] Re-using existing network 'k3d-action-bridge-network' (ee0b6bae78a46fd6d885e84a0af19bc4cfc93c3af1aee9c0815e198b1baeaf4a) INFO[0000] Created image volume k3d-edgedns-images INFO[0000] Starting new tools node... INFO[0000] Starting Node 'k3d-edgedns-tools' INFO[0001] Creating node 'k3d-edgedns-server-0' INFO[0001] Using the k3d-tools node to gather environment information INFO[0001] HostIP: using network gateway 172.18.0.1 address INFO[0001] Starting cluster 'edgedns' INFO[0001] Starting servers... INFO[0001] Starting Node 'k3d-edgedns-server-0' INFO[0009] All agents already running. INFO[0009] All helpers already running. INFO[0009] Injecting records for hostAliases (incl. host.k3d.internal) and for 1 network members into CoreDNS configmap... INFO[0011] Cluster 'edgedns' created successfully! INFO[0011] You can now use it like this: kubectl cluster-info make[1]: Leaving directory '/root/k8gb' make[1]: Entering directory '/root/k8gb' Create local cluster test-gslb1 k3d cluster create -c k3d/test-gslb1.yaml INFO[0000] Using config file k3d/test-gslb1.yaml (k3d.io/v1alpha4#simple) INFO[0000] Prep: Network INFO[0000] Re-using existing network 'k3d-action-bridge-network' (ee0b6bae78a46fd6d885e84a0af19bc4cfc93c3af1aee9c0815e198b1baeaf4a) INFO[0000] Created image volume k3d-test-gslb1-images INFO[0000] Starting new tools node... INFO[0000] Starting Node 'k3d-test-gslb1-tools' INFO[0001] Creating node 'k3d-test-gslb1-server-0' INFO[0001] Creating node 'k3d-test-gslb1-agent-0' INFO[0001] Using the k3d-tools node to gather environment information INFO[0001] HostIP: using network gateway 172.18.0.1 address INFO[0001] Starting cluster 'test-gslb1' INFO[0001] Starting servers... INFO[0001] Starting Node 'k3d-test-gslb1-server-0' INFO[0009] Starting agents... INFO[0009] Starting Node 'k3d-test-gslb1-agent-0' INFO[0014] All helpers already running. INFO[0014] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap... INFO[0016] Cluster 'test-gslb1' created successfully! INFO[0016] You can now use it like this: kubectl cluster-info make[1]: Leaving directory '/root/k8gb' make[1]: Entering directory '/root/k8gb' Create local cluster test-gslb2 k3d cluster create -c k3d/test-gslb2.yaml INFO[0000] Using config file k3d/test-gslb2.yaml (k3d.io/v1alpha4#simple) INFO[0000] Prep: Network INFO[0000] Re-using existing network 'k3d-action-bridge-network' (ee0b6bae78a46fd6d885e84a0af19bc4cfc93c3af1aee9c0815e198b1baeaf4a) INFO[0000] Created image volume k3d-test-gslb2-images INFO[0000] Starting new tools node... INFO[0000] Starting Node 'k3d-test-gslb2-tools' INFO[0001] Creating node 'k3d-test-gslb2-server-0' INFO[0001] Creating node 'k3d-test-gslb2-agent-0' INFO[0001] Using the k3d-tools node to gather environment information INFO[0001] HostIP: using network gateway 172.18.0.1 address INFO[0001] Starting cluster 'test-gslb2' INFO[0001] Starting servers... INFO[0001] Starting Node 'k3d-test-gslb2-server-0' INFO[0009] Starting agents... INFO[0009] Starting Node 'k3d-test-gslb2-agent-0' INFO[0013] All helpers already running. INFO[0013] Injecting records for hostAliases (incl. 
host.k3d.internal) and for 5 network members into CoreDNS configmap... INFO[0016] Cluster 'test-gslb2' created successfully! INFO[0016] You can now use it like this: kubectl cluster-info make[1]: Leaving directory '/root/k8gb' make deploy-stable-version DEPLOY_APPS=true make[1]: Entering directory '/root/k8gb' Deploying EdgeDNS kubectl --context k3d-edgedns apply -f deploy/edge/ secret/ddns-key created deployment.apps/edge created service/bind created configmap/zone created make[2]: Entering directory '/root/k8gb' Deploy local cluster test-gslb1 kubectl config use-context k3d-test-gslb1 Switched to context "k3d-test-gslb1". Create namespace kubectl apply -f deploy/namespace.yaml namespace/k8gb created Deploy GSLB operator from v0.10.0 make deploy-k8gb-with-helm make[3]: Entering directory '/root/k8gb' # create rfc2136 secret kubectl -n k8gb create secret generic rfc2136 --from-literal=secret=96Ah/a2g0/nLeFGK+d/0tzQcccf9hCEIy34PoXX2Qg8= || true secret/rfc2136 created helm repo add --force-update k8gb https://www.k8gb.io "k8gb" has been added to your repositories cd chart/k8gb && helm dependency update walk.go:74: found symbolic link in path: /root/k8gb/chart/k8gb/LICENSE resolves to /root/k8gb/LICENSE. Contents of linked file included and used Getting updates for unmanaged Helm repositories... ...Successfully got an update from the "https://absaoss.github.io/coredns-helm" chart repository Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "k8gb" chart repository Update Complete. ⎈Happy Helming!⎈ Saving 1 charts Downloading coredns from repo https://absaoss.github.io/coredns-helm Deleting outdated charts helm -n k8gb upgrade -i k8gb k8gb/k8gb -f "" \ --set k8gb.clusterGeoTag='eu' --set k8gb.extGslbClustersGeoTags='us' \ --set k8gb.reconcileRequeueSeconds=10 \ --set k8gb.dnsZoneNegTTL=10 \ --set k8gb.imageTag=v0.10.0 \ --set k8gb.log.format=simple \ --set k8gb.log.level=debug \ --set rfc2136.enabled=true \ --set k8gb.edgeDNSServers[0]=172.18.0.1:1053 \ --set externaldns.image=absaoss/external-dns:rfc-ns1 \ --wait --timeout=2m0s Release "k8gb" does not exist. Installing it now. NAME: k8gb LAST DEPLOYED: Fri Nov 11 19:54:26 2022 NAMESPACE: k8gb STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: done _ ___ _ | | _( _ ) ___| |__ | |/ / _ \ / _` | '_ \ | < (_) | (_| | |_) | |_|\_\ ___/ \__ , |_.__/ & all dependencies are installed |___/ 1. Check if your DNS Zone is served by K8GB CoreDNS $ kubectl -n k8gb run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools --command -- /usr/bin/dig @k8gb-coredns SOA . +short If everything is fine then you are expected to see similar output: 

ns1.dns. hostmaster.dns. 1616173200 7200 1800 86400 3600

make[3]: Leaving directory '/root/k8gb' Deploy Ingress helm repo add --force-update nginx-stable https://kubernetes.github.io/ingress-nginx "nginx-stable" has been added to your repositories helm repo update Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "k8gb" chart repository ...Successfully got an update from the "nginx-stable" chart repository Update Complete. ⎈Happy Helming!⎈ helm -n k8gb upgrade -i nginx-ingress nginx-stable/ingress-nginx \ --version 4.0.15 -f deploy/ingress/nginx-ingress-values.yaml Release "nginx-ingress" does not exist. Installing it now. NAME: nginx-ingress LAST DEPLOYED: Fri Nov 11 19:54:37 2022 NAMESPACE: k8gb STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: The ingress-nginx controller has been installed. It may take a few minutes for the LoadBalancer IP to be available. You can watch the status by running 'kubectl --namespace k8gb get services -o wide -w nginx-ingress-ingress-nginx-controller' An example Ingress that makes use of the controller: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example namespace: foo spec: ingressClassName: nginx rules: - host: www.example.com http: paths: - backend: service: name: exampleService port: number: 80 path: / # This section is only required if TLS is to be enabled for the Ingress tls: - hosts: - www.example.com secretName: example-tls If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided: apiVersion: v1 kind: Secret metadata: name: example-tls namespace: foo data: tls.crt: <base64 encoded cert> tls.key: <base64 encoded key> type: kubernetes.io/tls make[3]: Entering directory '/root/k8gb' Deploy GSLB cr kubectl apply -f deploy/crds/test-namespace.yaml namespace/test-gslb created sed -i 's/cloud\.example\.com/cloud.example.com/g' "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr.yaml" kubectl apply -f "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr.yaml" gslb.k8gb.absa.oss/test-gslb created git checkout -- "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr.yaml" sed -i 's/cloud\.example\.com/cloud.example.com/g' "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml" kubectl apply -f "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml" gslb.k8gb.absa.oss/test-gslb-failover created git checkout -- "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml" Deploy podinfo kubectl apply -f deploy/test-apps service/unhealthy-app created deployment.apps/unhealthy-app created helm repo add podinfo https://stefanprodan.github.io/podinfo "podinfo" has been added to your repositories helm upgrade --install frontend --namespace test-gslb -f deploy/test-apps/podinfo/podinfo-values.yaml \ --set ui.message="` kubectl -n k8gb describe deploy k8gb | awk '/CLUSTER_GEO_TAG/ { printf $2 }'`" \ --set image.repository="ghcr.io/stefanprodan/podinfo" \ podinfo/podinfo \ --version 5.1.1 Release "frontend" does not exist. Installing it now. NAME: frontend LAST DEPLOYED: Fri Nov 11 19:54:41 2022 NAMESPACE: test-gslb STATUS: deployed REVISION: 1 NOTES: 1. 
Get the application URL by running these commands: echo "Visit http://127.0.0.1:8080 to use your application" kubectl -n test-gslb port-forward deploy/frontend-podinfo 8080:9898 make[3]: Leaving directory '/root/k8gb' Wait until Ingress controller is ready kubectl -n k8gb wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=600s pod/nginx-ingress-ingress-nginx-controller-7cs7g condition met pod/nginx-ingress-ingress-nginx-controller-qx5db condition met test-gslb1 deployed! make[2]: Leaving directory '/root/k8gb' make[2]: Entering directory '/root/k8gb' Deploy local cluster test-gslb2 kubectl config use-context k3d-test-gslb2 Switched to context "k3d-test-gslb2". Create namespace kubectl apply -f deploy/namespace.yaml namespace/k8gb created Deploy GSLB operator from v0.10.0 make deploy-k8gb-with-helm make[3]: Entering directory '/root/k8gb' # create rfc2136 secret kubectl -n k8gb create secret generic rfc2136 --from-literal=secret=96Ah/a2g0/nLeFGK+d/0tzQcccf9hCEIy34PoXX2Qg8= || true secret/rfc2136 created helm repo add --force-update k8gb https://www.k8gb.io "k8gb" has been added to your repositories cd chart/k8gb && helm dependency update walk.go:74: found symbolic link in path: /root/k8gb/chart/k8gb/LICENSE resolves to /root/k8gb/LICENSE. Contents of linked file included and used Getting updates for unmanaged Helm repositories... ...Successfully got an update from the "https://absaoss.github.io/coredns-helm" chart repository Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "k8gb" chart repository ...Successfully got an update from the "podinfo" chart repository ...Successfully got an update from the "nginx-stable" chart repository Update Complete. ⎈Happy Helming!⎈ Saving 1 charts Downloading coredns from repo https://absaoss.github.io/coredns-helm Deleting outdated charts helm -n k8gb upgrade -i k8gb k8gb/k8gb -f "" \ --set k8gb.clusterGeoTag='us' --set k8gb.extGslbClustersGeoTags='eu' \ --set k8gb.reconcileRequeueSeconds=10 \ --set k8gb.dnsZoneNegTTL=10 \ --set k8gb.imageTag=v0.10.0 \ --set k8gb.log.format=simple \ --set k8gb.log.level=debug \ --set rfc2136.enabled=true \ --set k8gb.edgeDNSServers[0]=172.18.0.1:1053 \ --set externaldns.image=absaoss/external-dns:rfc-ns1 \ --wait --timeout=2m0s Release "k8gb" does not exist. Installing it now. NAME: k8gb LAST DEPLOYED: Fri Nov 11 19:55:03 2022 NAMESPACE: k8gb STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: done _ ___ _ | | _( _ ) ___| |__ | |/ / _ \ / _` | '_ \ | < (_) | (_| | |_) | |_|\_\ ___/ \__ , |_.__/ & all dependencies are installed |___/ 1. Check if your DNS Zone is served by K8GB CoreDNS $ kubectl -n k8gb run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools --command -- /usr/bin/dig @k8gb-coredns SOA . +short If everything is fine then you are expected to see similar output: 

ns1.dns. hostmaster.dns. 1616173200 7200 1800 86400 3600

make[3]: Leaving directory '/root/k8gb' Deploy Ingress helm repo add --force-update nginx-stable https://kubernetes.github.io/ingress-nginx "nginx-stable" has been added to your repositories helm repo update Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "k8gb" chart repository ...Successfully got an update from the "podinfo" chart repository ...Successfully got an update from the "nginx-stable" chart repository Update Complete. ⎈Happy Helming!⎈ helm -n k8gb upgrade -i nginx-ingress nginx-stable/ingress-nginx \ --version 4.0.15 -f deploy/ingress/nginx-ingress-values.yaml Release "nginx-ingress" does not exist. Installing it now. NAME: nginx-ingress LAST DEPLOYED: Fri Nov 11 19:55:14 2022 NAMESPACE: k8gb STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: The ingress-nginx controller has been installed. It may take a few minutes for the LoadBalancer IP to be available. You can watch the status by running 'kubectl --namespace k8gb get services -o wide -w nginx-ingress-ingress-nginx-controller' An example Ingress that makes use of the controller: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example namespace: foo spec: ingressClassName: nginx rules: - host: www.example.com http: paths: - backend: service: name: exampleService port: number: 80 path: / # This section is only required if TLS is to be enabled for the Ingress tls: - hosts: - www.example.com secretName: example-tls If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided: apiVersion: v1 kind: Secret metadata: name: example-tls namespace: foo data: tls.crt: <base64 encoded cert> tls.key: <base64 encoded key> type: kubernetes.io/tls make[3]: Entering directory '/root/k8gb' Deploy GSLB cr kubectl apply -f deploy/crds/test-namespace.yaml namespace/test-gslb created sed -i 's/cloud\.example\.com/cloud.example.com/g' "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr.yaml" kubectl apply -f "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr.yaml" gslb.k8gb.absa.oss/test-gslb created git checkout -- "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr.yaml" sed -i 's/cloud\.example\.com/cloud.example.com/g' "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml" kubectl apply -f "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml" gslb.k8gb.absa.oss/test-gslb-failover created git checkout -- "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml" Deploy podinfo kubectl apply -f deploy/test-apps service/unhealthy-app created deployment.apps/unhealthy-app created helm repo add podinfo https://stefanprodan.github.io/podinfo "podinfo" already exists with the same configuration, skipping helm upgrade --install frontend --namespace test-gslb -f deploy/test-apps/podinfo/podinfo-values.yaml \ --set ui.message="` kubectl -n k8gb describe deploy k8gb | awk '/CLUSTER_GEO_TAG/ { printf $2 }'`" \ --set image.repository="ghcr.io/stefanprodan/podinfo" \ podinfo/podinfo \ --version 5.1.1 Release "frontend" does not exist. Installing it now. NAME: frontend LAST DEPLOYED: Fri Nov 11 19:55:20 2022 NAMESPACE: test-gslb STATUS: deployed REVISION: 1 NOTES: 1. 
Get the application URL by running these commands: echo "Visit http://127.0.0.1:8080 to use your application" kubectl -n test-gslb port-forward deploy/frontend-podinfo 8080:9898 make[3]: Leaving directory '/root/k8gb' Wait until Ingress controller is ready kubectl -n k8gb wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=600s pod/nginx-ingress-ingress-nginx-controller-tvqn5 condition met pod/nginx-ingress-ingress-nginx-controller-nnw6p condition met test-gslb2 deployed! make[2]: Leaving directory '/root/k8gb' make[1]: Leaving directory '/root/k8gb' 

The various clusters are now present locally:

root@k8gb:~# k3d cluster list
NAME         SERVERS   AGENTS   LOADBALANCER
edgedns      1/1       0/0      false
test-gslb1   1/1       1/1      false
test-gslb2   1/1       1/1      false
root@k8gb:~# kubectl cluster-info --context k3d-edgedns && kubectl cluster-info --context k3d-test-gslb1 && kubectl cluster-info --context k3d-test-gslb2
Kubernetes control plane is running at https://0.0.0.0:43459
CoreDNS is running at https://0.0.0.0:43459/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Kubernetes control plane is running at https://0.0.0.0:36415
CoreDNS is running at https://0.0.0.0:36415/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Kubernetes control plane is running at https://0.0.0.0:43505
CoreDNS is running at https://0.0.0.0:43505/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

A monitoring layer based on Prometheus is also provided:

root@k8gb:~/k8gb# make deploy-prometheus 

The test-gslb1 cluster exposes its external DNS on the default port :5053, while test-gslb2 does so on port :5054.

The edgedns cluster runs BIND and acts as the EdgeDNS holding the delegated zone for our test setup, answering on port :1053.
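As a quick sanity check, each cluster's CoreDNS can also be queried directly on its exposed port (a hedged sketch using the same test hostname; the EdgeDNS on :1053 is queried just below):

```bash
# Ask each k8gb CoreDNS, exposed on the host on the ports described above,
# for the round-robin test record; both should answer with healthy ingress node IPs.
dig @localhost -p 5053 roundrobin.cloud.example.com +short   # test-gslb1
dig @localhost -p 5054 roundrobin.cloud.example.com +short   # test-gslb2
```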

root@k8gb:~# dig @localhost -p 1053 roundrobin.cloud.example.com +short +tcp
;; Connection to ::1#1053(::1) for roundrobin.cloud.example.com failed: connection refused.
;; Connection to ::1#1053(::1) for roundrobin.cloud.example.com failed: connection refused.
;; Connection to ::1#1053(::1) for roundrobin.cloud.example.com failed: connection refused.
172.18.0.4
172.18.0.6
172.18.0.3
172.18.0.5
root@k8gb:~# for c in k3d-test-gslb{1,2}; do kubectl get no -ocustom-columns="NAME:.metadata.name,IP:status.addresses[0].address" --context $c; done
NAME                      IP
k3d-test-gslb1-server-0   172.18.0.3
k3d-test-gslb1-agent-0    172.18.0.4
NAME                      IP
k3d-test-gslb2-server-0   172.18.0.5
k3d-test-gslb2-agent-0    172.18.0.6

and I can test it by passing a specific Host header for the test URLs:

root@k8gb:~# curl localhost:80 -H "Host:roundrobin.cloud.example.com" && curl localhost:81 -H "Host:roundrobin.cloud.example.com"
{ "hostname": "frontend-podinfo-7cf64f696-952m9", "version": "5.1.1", "revision": "", "color": "#34577c", "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", "message": "eu", "goos": "linux", "goarch": "amd64", "runtime": "go1.15.6", "num_goroutine": "9", "num_cpu": "8" }
{ "hostname": "frontend-podinfo-59888d86d6-fm77n", "version": "5.1.1", "revision": "", "color": "#34577c", "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", "message": "us", "goos": "linux", "goarch": "amd64", "runtime": "go1.15.6", "num_goroutine": "9", "num_cpu": "8"

Both clusters have podinfo installed on top, and each cluster is tagged to serve a different region. In this demonstration, we access podinfo with wget -qO - failover.cloud.example.com and, depending on which cluster podinfo is running in, it returns either eu or us.

root@k8gb:~/k8gb# make init-failover
kubectl config use-context k3d-test-gslb2
Switched to context "k3d-test-gslb2".
kubectl apply -f "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml"
gslb.k8gb.absa.oss/test-gslb-failover unchanged
kubectl config use-context k3d-test-gslb1
Switched to context "k3d-test-gslb1".
kubectl apply -f "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml"
gslb.k8gb.absa.oss/test-gslb-failover unchanged
make start-test-app
make[1]: Entering directory '/root/k8gb'
kubectl scale deployment frontend-podinfo -n test-gslb --replicas=2
deployment.apps/frontend-podinfo scaled
make[1]: Leaving directory '/root/k8gb'
root@k8gb:~/k8gb# make test-failover
{ "hostname": "frontend-podinfo-7cf64f696-c8s5s", "version": "5.1.1", "revision": "", "color": "#34577c", "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", "message": "eu", "goos": "linux", "goarch": "amd64", "runtime": "go1.15.6", "num_goroutine": "8", "num_cpu": "8" }
pod "busybox" deleted
root@k8gb:~/k8gb# make stop-test-app
kubectl scale deployment frontend-podinfo -n test-gslb --replicas=0
deployment.apps/frontend-podinfo scaled
root@k8gb:~/k8gb# make test-failover
{ "hostname": "frontend-podinfo-59888d86d6-fm77n", "version": "5.1.1", "revision": "", "color": "#34577c", "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", "message": "us", "goos": "linux", "goarch": "amd64", "runtime": "go1.15.6", "num_goroutine": "13", "num_cpu": "8" }
pod "busybox" deleted
root@k8gb:~/k8gb# make start-test-app
kubectl scale deployment frontend-podinfo -n test-gslb --replicas=2
deployment.apps/frontend-podinfo scaled
root@k8gb:~/k8gb# make test-failover
{ "hostname": "frontend-podinfo-7cf64f696-jn54t", "version": "5.1.1", "revision": "", "color": "#34577c", "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", "message": "eu", "goos": "linux", "goarch": "amd64", "runtime": "go1.15.6", "num_goroutine": "9", "num_cpu": "8" }
pod "busybox" deleted
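For reference, the test-failover target essentially resolves the failover hostname through the local k8gb CoreDNS and fetches the page from inside the cluster. A rough manual equivalent, as a sketch reusing the k8gb-coredns Service deployed by the chart, would look like this:

```bash
# Resolve failover.cloud.example.com via the k8gb CoreDNS of the current context,
# then fetch the page; the "message" field shows which geo tag answered.
COREDNS_IP=$(kubectl -n k8gb get svc k8gb-coredns -o jsonpath='{.spec.clusterIP}')
kubectl run -i --rm busybox --restart=Never --image=busybox \
  --overrides "{\"spec\":{\"dnsConfig\":{\"nameservers\":[\"${COREDNS_IP}\"]},\"dnsPolicy\":\"None\"}}" \
  -- wget -qO - failover.cloud.example.com
```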

The round-robin part can also be tested with podinfo, which will return either the us or the eu region:

root@k8gb:~/k8gb# make test-round-robin
{ "hostname": "frontend-podinfo-7cf64f696-shlbd", "version": "5.1.1", "revision": "", "color": "#34577c", "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", "message": "eu", "goos": "linux", "goarch": "amd64", "runtime": "go1.15.6", "num_goroutine": "9", "num_cpu": "8" }
pod "busybox" deleted
root@k8gb:~/k8gb# make test-round-robin
{ "hostname": "frontend-podinfo-59888d86d6-fm77n", "version": "5.1.1", "revision": "", "color": "#34577c", "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", "message": "us", "goos": "linux", "goarch": "amd64", "runtime": "go1.15.6", "num_goroutine": "12", "num_cpu": "8" }
pod "busybox" deleted
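The rotation can also be observed from the host by repeating the resolution against the EdgeDNS; with the roundRobin strategy the order of the returned records should rotate between the two clusters (a sketch):

```bash
# Repeat the query a few times; the first returned record should alternate
# between test-gslb1 and test-gslb2 node IPs as the record order rotates.
for i in 1 2 3 4; do
  dig @localhost -p 1053 roundrobin.cloud.example.com +short +tcp | head -1
done
```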

I tear down this test environment before moving on…

root@k8gb:~/k8gb# make destroy-full-local-setup 

Liqo is an open-source project that enables dynamic and seamless Kubernetes multi-cluster topologies, supporting heterogeneous on-premises, cloud, and edge infrastructures.

What does it provide?

  • Peering: automatic, peer-to-peer establishment of resource and service consumption relationships between independent and heterogeneous clusters

  • No complex VPNs or certification authorities to manage: everything is negotiated transparently for you.

  • Seamless offloading of workloads to remote clusters. Multi-cluster becomes native and transparent: an entire remote cluster collapses into a virtual node that complies with standard Kubernetes approaches and tools.

  • Seamless pod-to-pod and pod-to-service connectivity, regardless of the underlying configurations and CNI plugins.

  • Native access to services exported by remote clusters, with interconnected application components spread across multiple infrastructures and all cross-cluster traffic flowing through secured network tunnels.

  • Storage fabric: support for remotely executing stateful workloads, following the data-gravity approach. Standard high-availability deployment techniques (e.g. for databases) extend seamlessly to multi-cluster scenarios for stronger guarantees, all without the complexity of managing multiple independent cluster and application replicas.

I grab the liqoctl client locally as well as a copy of the GitHub repository:
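The liqoctl binary itself can be fetched, for instance, like this (a sketch based on the Liqo install instructions; the release version and asset name may differ):

```bash
# Download the liqoctl CLI for linux/amd64 and install it into the PATH
# (LIQO_VERSION matches the docs version linked above; adjust as needed).
LIQO_VERSION=v0.5.2
curl --fail -LS "https://github.com/liqotech/liqo/releases/download/${LIQO_VERSION}/liqoctl-linux-amd64.tar.gz" | tar -xz
install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl
```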

root@k8gb:~# liqoctl liqoctl is a CLI tool to install and manage Liqo. Liqo is a platform to enable dynamic and decentralized resource sharing across Kubernetes clusters, either on-prem or managed. Liqo allows to run pods on a remote cluster seamlessly and without any modification of Kubernetes and the applications. With Liqo it is possible to extend the control and data plane of a Kubernetes cluster across the cluster's boundaries, making multi-cluster native and transparent: collapse an entire remote cluster to a local virtual node, enabling workloads offloading, resource management and cross-cluster communication compliant with the standard Kubernetes approach. Usage: liqoctl [command] Available Commands: completion Generate the autocompletion script for the specified shell generate Generate data/commands to perform additional operations help Help about any command install Install/upgrade Liqo in the selected cluster move Move an object to a different cluster offload Offload a resource to remote clusters peer Enable a peering towards a remote cluster status Show the status of Liqo uninstall Uninstall Liqo from the selected cluster unoffload Unoffload a resource from remote clusters unpeer Disable a peering towards a remote cluster version Print the liqo CLI version and the deployed Liqo version Flags: --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use -h, --help help for liqoctl --kubeconfig string Path to the kubeconfig file to use for CLI requests --user string The name of the kubeconfig user to use -v, --verbose Enable verbose logs (default false) Use "liqoctl [command] --help" for more information about a command. root@k8gb:~# git clone https://github.com/liqotech/liqo Cloning into 'liqo'... remote: Enumerating objects: 28194, done. remote: Counting objects: 100% (261/261), done. remote: Compressing objects: 100% (170/170), done. remote: Total 28194 (delta 132), reused 173 (delta 81), pack-reused 27933 Receiving objects: 100% (28194/28194), 34.98 MiB | 23.34 MiB/s, done. Resolving deltas: 100% (18901/18901), done. root@k8gb:~# cd liqo/examples/global-ingress/ root@k8gb:~/liqo/examples/global-ingress# ls manifests setup.sh 

The setup script provided in the example from the GitHub repository creates three k3s clusters and deploys the appropriate infrastructure applications on top of them, as detailed below:

  • edgedns: this cluster is used to deploy the DNS service. In a production environment, it should be an external DNS service (e.g. AWS Route53). It includes the Bind server (manifests in the manifests/edge folder).
  • gslb-eu and gslb-us: these clusters are used to deploy the application.

They include:

  • ExternalDNS: responsible for configuring the DNS records.
  • Ingress Nginx: responsible for handling local ingress traffic.
  • K8GB: configures the multi-cluster ingress.
  • Liqo: enables the application to span multiple clusters and takes care of reflecting the required resources.
root@k8gb:~/liqo/examples/global-ingress# ./setup.sh
SUCCESS No cluster "edgedns" is running.
SUCCESS No cluster "gslb-eu" is running.
SUCCESS No cluster "gslb-us" is running.
SUCCESS Cluster "edgedns" has been created.
SUCCESS Cluster "gslb-eu" has been created.
SUCCESS Cluster "gslb-us" has been created.
SUCCESS Bind server has been deployed.
SUCCESS K8gb has been installed on cluster.
SUCCESS Ingress-nginx has been installed on cluster.
SUCCESS Liqo has been installed on cluster "gslb-eu".
SUCCESS K8gb has been installed on cluster.
SUCCESS Ingress-nginx has been installed on cluster.
SUCCESS Liqo has been installed on cluster "gslb-us".

With Liqo installed, I can proceed with peering the clusters:

root@k8gb:~/liqo/examples/global-ingress# export KUBECONFIG_DNS=$(k3d kubeconfig write edgedns)
export KUBECONFIG=$(k3d kubeconfig write gslb-eu)
export KUBECONFIG_US=$(k3d kubeconfig write gslb-us)
root@k8gb:~/liqo/examples/global-ingress# PEER_US=$(liqoctl generate peer-command --only-command --kubeconfig $KUBECONFIG_US)
root@k8gb:~/liqo/examples/global-ingress# echo "$PEER_US" | bash
INFO Peering enabled
INFO Authenticated to cluster "gslb-us"
INFO Outgoing peering activated to the remote cluster "gslb-us"
INFO Network established to the remote cluster "gslb-us"
INFO Node created for remote cluster "gslb-us"
INFO Peering successfully established
root@k8gb:~/liqo/examples/global-ingress# kubectl get foreignclusters
NAME      TYPE        OUTGOING PEERING   INCOMING PEERING   NETWORKING    AUTHENTICATION   AGE
gslb-us   OutOfBand   Established        None               Established   Established      18s

A new virtual node (liqo-gslb-us) is now visible in the gslb-eu cluster:

root@k8gb:~/liqo/examples/global-ingress# kubectl get node --selector=liqo.io/type=virtual-node
NAME           STATUS   ROLES   AGE   VERSION
liqo-gslb-us   Ready    agent   93s   v1.22.6+k3s1

Now that the Liqo peering is established and the virtual node is ready, we can deploy the podinfo demo application. This application serves a web page showing various pieces of information, including the pod name, which makes it easy to identify which replica generated the HTTP response:

root@k8gb:~/liqo/examples/global-ingress# kubectl create namespace podinfo
namespace/podinfo created
root@k8gb:~/liqo/examples/global-ingress# liqoctl offload namespace podinfo --namespace-mapping-strategy EnforceSameName
INFO Offloading of namespace "podinfo" correctly enabled
INFO Offloading completed successfully
root@k8gb:~/liqo/examples/global-ingress# helm upgrade --install podinfo --namespace podinfo \
  podinfo/podinfo -f manifests/values/podinfo.yaml
Release "podinfo" does not exist. Installing it now.
NAME: podinfo
LAST DEPLOYED: Fri Nov 11 21:46:59 2022
NAMESPACE: podinfo
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
   http://liqo.cloud.example.com/
root@k8gb:~/liqo/examples/global-ingress# kubectl get ingress -n podinfo
NAME      CLASS   HOSTS                    ADDRESS                 PORTS   AGE
podinfo   nginx   liqo.cloud.example.com   172.21.0.3,172.21.0.4   80      42s

Each local K8GB installation creates a Gslb resource with the Ingress information and the given strategy (RoundRobin in this case), and ExternalDNS populates the DNS records accordingly:

root@k8gb:~/liqo/examples/global-ingress# kubectl get gslbs.k8gb.absa.oss -n podinfo podinfo -o yaml apiVersion: k8gb.absa.oss/v1beta1 kind: Gslb metadata: annotations: k8gb.io/strategy: roundRobin meta.helm.sh/release-name: podinfo meta.helm.sh/release-namespace: podinfo creationTimestamp: "2022-11-11T21:47:01Z" finalizers: - k8gb.absa.oss/finalizer generation: 2 name: podinfo namespace: podinfo ownerReferences: - apiVersion: networking.k8s.io/v1 blockOwnerDeletion: true controller: true kind: Ingress name: podinfo uid: e4fe5d03-5aa6-4883-b204-5e63ef8a0b31 resourceVersion: "1858" uid: a450617c-7922-4272-865b-4cbc7e09ce03 spec: ingress: ingressClassName: nginx rules: - host: liqo.cloud.example.com http: paths: - backend: service: name: podinfo port: number: 9898 path: / pathType: ImplementationSpecific strategy: dnsTtlSeconds: 30 splitBrainThresholdSeconds: 300 type: roundRobin status: geoTag: eu healthyRecords: liqo.cloud.example.com: - 172.21.0.3 - 172.21.0.4 - 172.21.0.5 - 172.21.0.6 serviceHealth: liqo.cloud.example.com: Healthy root@k8gb:~/liqo/examples/global-ingress# kubectl get gslbs.k8gb.absa.oss -n podinfo podinfo -o yaml --kubeconfig $KUBECONFIG_US apiVersion: k8gb.absa.oss/v1beta1 kind: Gslb metadata: annotations: k8gb.io/strategy: roundRobin meta.helm.sh/release-name: podinfo meta.helm.sh/release-namespace: podinfo creationTimestamp: "2022-11-11T21:47:01Z" finalizers: - k8gb.absa.oss/finalizer generation: 2 name: podinfo namespace: podinfo ownerReferences: - apiVersion: networking.k8s.io/v1 blockOwnerDeletion: true controller: true kind: Ingress name: podinfo uid: 7519e30f-0a46-4c23-86b4-5970c9c030bb resourceVersion: "1692" uid: 1c3422c8-8c4f-4d2c-a659-9f0907694ab0 spec: ingress: ingressClassName: nginx rules: - host: liqo.cloud.example.com http: paths: - backend: service: name: podinfo port: number: 9898 path: / pathType: ImplementationSpecific strategy: dnsTtlSeconds: 30 splitBrainThresholdSeconds: 300 type: roundRobin status: geoTag: us healthyRecords: liqo.cloud.example.com: - 172.21.0.5 - 172.21.0.6 - 172.21.0.3 - 172.21.0.4 serviceHealth: liqo.cloud.example.com: Healthy 

In both clusters, the Gslb resources are nearly identical; they differ only in the geoTag field: "eu" or "us" in this case…
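A quick way to compare just that field across the two clusters (a sketch, reusing the kubeconfig variables exported earlier):

```bash
# Print the geoTag reported by each cluster's Gslb status
kubectl get gslbs.k8gb.absa.oss -n podinfo podinfo -o jsonpath='{.status.geoTag}{"\n"}'
kubectl get gslbs.k8gb.absa.oss -n podinfo podinfo -o jsonpath='{.status.geoTag}{"\n"}' --kubeconfig "$KUBECONFIG_US"
```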

root@k8gb:~/liqo/examples/global-ingress# kubectl get po,svc -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system pod/coredns-96cc4f57d-ss69w 1/1 Running 0 18m k8gb pod/k8gb-6ff7b8d894-mqv4g 1/1 Running 0 18m k8gb pod/external-dns-6bd44c4856-47k5q 1/1 Running 0 18m k8gb pod/nginx-ingress-ingress-nginx-controller-68jkc 1/1 Running 0 18m k8gb pod/nginx-ingress-ingress-nginx-controller-w6jgj 1/1 Running 0 18m liqo pod/liqo-network-manager-7f775cdbf9-q2dcb 1/1 Running 0 18m liqo pod/liqo-route-tmqfl 1/1 Running 0 18m liqo pod/liqo-controller-manager-db9cf996d-qvdtl 1/1 Running 0 18m liqo pod/liqo-route-h5mk7 1/1 Running 0 18m liqo pod/liqo-gateway-59b8745db9-t9gtw 1/1 Running 0 18m liqo pod/liqo-crd-replicator-68cb95df4f-9r2cm 1/1 Running 0 18m liqo pod/liqo-proxy-54bf9dbb8d-h48vm 1/1 Running 0 18m liqo pod/liqo-auth-6f6d6946fb-pqdj9 1/1 Running 0 18m liqo pod/liqo-metric-agent-94f8b96fc-62p7s 1/1 Running 0 18m k8gb pod/k8gb-coredns-5bc6689949-pv8bs 1/1 Running 0 18m liqo-tenant-gslb-us-726836 pod/virtual-kubelet-5657cbd4bb-p79jv 1/1 Running 0 15m podinfo pod/podinfo-68f5575b95-wqqbg 1/1 Running 0 10m podinfo pod/podinfo-68f5575b95-vlbgp 1/1 Running 0 10m NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 10.30.0.1 <none> 443/TCP 19m kube-system service/kube-dns ClusterIP 10.30.0.10 <none> 53/UDP,53/TCP,9153/TCP 19m k8gb service/k8gb-coredns ClusterIP 10.30.154.126 <none> 53/UDP 18m liqo service/liqo-network-manager ClusterIP 10.30.183.107 <none> 6000/TCP 18m liqo service/liqo-metric-agent ClusterIP 10.30.194.131 <none> 443/TCP 18m liqo service/liqo-auth NodePort 10.30.38.82 <none> 443:31758/TCP 18m liqo service/liqo-controller-manager ClusterIP 10.30.137.139 <none> 9443/TCP 18m liqo service/liqo-proxy ClusterIP 10.30.181.12 <none> 8118/TCP 18m liqo service/liqo-gateway NodePort 10.30.49.231 <none> 5871:31366/UDP 18m podinfo service/podinfo ClusterIP 10.30.150.228 <none> 9898/TCP,9999/TCP 10m root@k8gb:~/liqo/examples/global-ingress# k3d cluster list NAME SERVERS AGENTS LOADBALANCER edgedns 1/1 0/0 false gslb-eu 1/1 1/1 false gslb-us 1/1 1/1 false root@k8gb:~/liqo/examples/global-ingress# kubectl get po,svc -A --kubeconfig $KUBECONFIG_US NAMESPACE NAME READY STATUS RESTARTS AGE kube-system pod/coredns-96cc4f57d-88b9s 1/1 Running 0 21m k8gb pod/external-dns-768c97f5c8-dqstb 1/1 Running 0 20m k8gb pod/k8gb-864c495ddb-9jv2b 1/1 Running 0 20m k8gb pod/nginx-ingress-ingress-nginx-controller-g26vh 1/1 Running 0 20m k8gb pod/nginx-ingress-ingress-nginx-controller-jrjjl 1/1 Running 0 20m liqo pod/liqo-route-796x7 1/1 Running 0 20m liqo pod/liqo-crd-replicator-68cb95df4f-zbdsb 1/1 Running 0 20m liqo pod/liqo-proxy-54bf9dbb8d-l77mr 1/1 Running 0 20m liqo pod/liqo-metric-agent-94f8b96fc-rc8vz 1/1 Running 0 20m liqo pod/liqo-controller-manager-d54767454-z2zg8 1/1 Running 0 20m liqo pod/liqo-gateway-59b8745db9-4n6gv 1/1 Running 0 20m liqo pod/liqo-network-manager-7f775cdbf9-7tn7j 1/1 Running 0 20m liqo pod/liqo-route-zcqmc 1/1 Running 0 20m k8gb pod/k8gb-coredns-5bc6689949-twc27 1/1 Running 0 20m liqo pod/liqo-auth-95c87b89b-nk9jb 1/1 Running 0 20m podinfo pod/podinfo-68f5575b95-wqqbg 1/1 Running 0 13m NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 10.30.0.1 <none> 443/TCP 21m kube-system service/kube-dns ClusterIP 10.30.0.10 <none> 53/UDP,53/TCP,9153/TCP 21m k8gb service/k8gb-coredns ClusterIP 10.30.180.140 <none> 53/UDP 20m liqo service/liqo-network-manager ClusterIP 10.30.133.122 <none> 6000/TCP 20m liqo 
service/liqo-metric-agent ClusterIP 10.30.157.244 <none> 443/TCP 20m liqo service/liqo-proxy ClusterIP 10.30.27.174 <none> 8118/TCP 20m liqo service/liqo-auth NodePort 10.30.128.127 <none> 443:31335/TCP 20m liqo service/liqo-controller-manager ClusterIP 10.30.138.46 <none> 9443/TCP 20m liqo service/liqo-gateway NodePort 10.30.156.207 <none> 5871:30349/UDP 20m podinfo service/podinfo ClusterIP 10.30.48.175 <none> 9898/TCP,9999/TCP 13m 

Since podinfo is an HTTP service, we can contact it using the curl command with the -v option to see which node is being targeted. The DNS server is used to resolve the hostname to the service's IP address…

root@k8gb:~/liqo/examples/global-ingress# HOSTNAME="liqo.cloud.example.com" K8GB_COREDNS_IP=$(kubectl get svc k8gb-coredns -n k8gb -o custom-columns='IP:spec.clusterIP' --no-headers) kubectl run -it --rm curl --restart=Never --image=curlimages/curl:7.82.0 --command \ --overrides "{\"spec\":{\"dnsConfig\":{\"nameservers\":[\"${K8GB_COREDNS_IP}\"]},\"dnsPolicy\":\"None\"}}" \ -- curl $HOSTNAME -v * Trying 172.21.0.3:80... * Connected to liqo.cloud.example.com (172.21.0.3) port 80 (#0) > GET / HTTP/1.1 > Host: liqo.cloud.example.com > User-Agent: curl/7.82.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Fri, 11 Nov 2022 21:55:06 GMT < Content-Type: application/json; charset=utf-8 < Content-Length: 392 < Connection: keep-alive < X-Content-Type-Options: nosniff < { "hostname": "podinfo-68f5575b95-vlbgp", "version": "6.2.3", "revision": "8615cb75d926ea0ba5353b1d56867868c737bf5e", "color": "#34577c", "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", "message": "greetings from podinfo v6.2.3", "goos": "linux", "goarch": "amd64", "runtime": "go1.19.3", "num_goroutine": "8", "num_cpu": "8" * Connection #0 to host liqo.cloud.example.com left intact }pod "curl" deleted root@k8gb:~/liqo/examples/global-ingress# kubectl run -it --rm curl --restart=Never --image=curlimages/curl:7.82.0 --command \ --overrides "{\"spec\":{\"dnsConfig\":{\"nameservers\":[\"${K8GB_COREDNS_IP}\"]},\"dnsPolicy\":\"None\"}}" \ -- curl $HOSTNAME -v * Trying 172.21.0.4:80... * Connected to liqo.cloud.example.com (172.21.0.4) port 80 (#0) > GET / HTTP/1.1 > Host: liqo.cloud.example.com > User-Agent: curl/7.82.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Fri, 11 Nov 2022 21:55:31 GMT < Content-Type: application/json; charset=utf-8 < Content-Length: 392 < Connection: keep-alive < X-Content-Type-Options: nosniff < { "hostname": "podinfo-68f5575b95-wqqbg", "version": "6.2.3", "revision": "8615cb75d926ea0ba5353b1d56867868c737bf5e", "color": "#34577c", "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", "message": "greetings from podinfo v6.2.3", "goos": "linux", "goarch": "amd64", "runtime": "go1.19.3", "num_goroutine": "8", "num_cpu": "8" * Connection #0 to host liqo.cloud.example.com left intact }pod "curl" deleted 

By launching this pod several times, we can see different IP addresses and different frontend pods responding (as defined in the GSLB policy, handled here by K8GB)…
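A small loop makes this rotation easy to observe, reusing the HOSTNAME and K8GB_COREDNS_IP variables set above and keeping only the responding pod name (a sketch; the grep filter is illustrative):

```bash
# Launch a few one-shot curl pods resolving through the k8gb CoreDNS;
# the returned "hostname" field should alternate between the eu and us podinfo replicas.
for i in 1 2 3 4; do
  kubectl run -i --rm "curl-$i" --restart=Never --image=curlimages/curl:7.82.0 --command \
    --overrides "{\"spec\":{\"dnsConfig\":{\"nameservers\":[\"${K8GB_COREDNS_IP}\"]},\"dnsPolicy\":\"None\"}}" \
    -- curl -s "$HOSTNAME" | grep hostname
done
```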

K8GB can also be combined with another open-source project: Admiralty.

Indeed, combining K8GB and Admiralty provides powerful global multi-cluster capabilities: Admiralty schedules resources globally, while K8GB takes care of global load balancing, as this example shows =>

Integration with Admiralty

The ability to load-balance HTTP requests across multiple Kubernetes clusters running in several data centers/clouds is a key requirement for a resilient system.

Yet there do not seem to be many existing open-source GSLB (Global Server Load Balancer) solutions that address this requirement in a cloud-native, Kubernetes-friendly way. The following projects are examples of other GSLB implementations that could be leveraged or used as a reference:

Open Source =>

Commercial =>

To be continued!
