It is very easy today to establish a connection between a container in Kubernetes and a relational database server: create a SQL user and open a TCP connection. In cloud computing, in the case of Scaleway Elements, the equivalent is connecting a container in a Scaleway Kubernetes Kapsule cluster to a Scaleway Relational Database (RDB) instance.
Several important points should be taken into account when setting up this connectivity.
Which network topology should we choose? How do we authenticate and authorize the connection to the RDB instance? Can an RDB instance be exposed publicly?
Which architecture would be the most efficient, maintainable and scalable?
Scenarios
The RDB service supports the following scenarios for accessing an RDB instance:
- A virtual instance in the same project
- A virtual instance in a different project
- A client application through the internet
The Private Network scenario is currently not possible; a feature request is open and in progress.
The scenarios that concern us are the first two:
- A Kapsule cluster and a RDB instance in the same project
- A Kapsule cluster and a RDB instance in a different project
Let's discover the possible architectures that could be used to implement each scenario.
Direct communication with public IP and authorized networks
In this architecture, our RDB instance is isolated on the Scaleway network and reachable through its public IP address only by the Kapsule cluster that requires access to it. Pods access the RDB database using a username/password.
Public Gateway is not yet available for Kubernetes Kapsule and RDB instances (see the feature request), so you need to whitelist all the Kubernetes node IPs.
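For illustration, the node public IPs to whitelist can be listed with the Scaleway CLI or kubectl. This is only a sketch; the cluster name kapsule-dev is an assumption (we create the cluster later in this article):

# List the Kapsule nodes and their public IPs (cluster name "kapsule-dev" assumed)
CLUSTER_ID=$(scw k8s cluster list | grep kapsule-dev | awk '{ print $1 }')
scw k8s node list cluster-id=$CLUSTER_ID

# Or, once kubectl is configured against the cluster:
kubectl get nodes -o wide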
Direct communication in separate projects
In this architecture, our RDB instance is isolated in its own project and reachable through its public IP address only by the Kapsule cluster that requires access to it. Pods access the RDB database using a username/password.
Each architecture has its own advantages and disadvantages, but both apply project-isolation best practices for securing sensitive data in RDB.
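For the separate-projects scenario, a minimal Terraform sketch could use a provider alias pointing at the project that hosts the database. This is not the implementation we build below; the project_id argument and the database_project_id variable are assumptions for illustration:

provider "scaleway" {
  zone   = var.zone
  region = var.region
}

# Hypothetical alias for the project that hosts the RDB instance
provider "scaleway" {
  alias      = "database"
  zone       = var.zone
  region     = var.region
  project_id = var.database_project_id
}

# The RDB resources would then select that provider explicitly
resource "scaleway_rdb_instance" "rdb" {
  provider  = scaleway.database
  name      = "postgresql-dev"
  node_type = "db-gp-xs"
  engine    = "PostgreSQL-13"
  user_name = "root"
  password  = var.rdb_user_root_password
}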
Let's implement scenario 1.
Prerequisites
- Download, install, and configure the Scaleway CLI.
- Terraform
- Kubectl
- Kustomize
Architecture
The overall architecture that we will implement during this article is as follows:
Objectives
During this workshop:
With Terraform, we will create:
- a Kapsule cluster
- a highly available RDB instance
- a virtual instance to act as a bastion to access RDB from outside
- a security group for the virtual instance
With Kubectl, we will deploy:
- a Metabase application
- a load balancer with Traefik 2
- an SSL certificate
The article is divided into four sections:
- Configuring the bastion
- Creating a Kubernetes Kapsule cluster using Terraform
- Securing sensitive data in Scaleway RDB
- Securing the connectivity between a Kubernetes Kapsule application and an RDB database
Configuring the bastion
In this section we will deploy the following SCW resources:
- a virtual instance to act as a bastion to access RDB from outside
- a security group to allow only authorized users to access the virtual instance
Security Groups
Security Groups allow us to restrict the inbound and outbound network traffic to and from a virtual instance. In our case, we implement the following rule:
- A rule restricting SSH access (port 22) to the virtual instance to authorized networks only.
Create a Terraform file infra/plan/sg.tf:
resource "scaleway_instance_security_group" "bastion" { name = "bastion" inbound_default_policy = "drop" outbound_default_policy = "accept" dynamic "inbound_rule" { for_each = var.authorized_source_ranges content { action = "accept" port = "22" ip = inbound_rule.value } } }
Bastion
In order to access the RDB instance from outside, we need to create a bastion host. We can achieve that by creating a virtual instance. During the initialization of the instance we install the PostgreSQL client.
We also create an SSH key to connect to the virtual instance. You can remove this resource if you want to manage the keys outside of Terraform.
Create a Terraform file infra/plan/bastion.tf:
resource "scaleway_instance_ip" "bastion" {} resource "scaleway_instance_server" "bastion" { depends_on = [scaleway_account_ssh_key.bastion] name = "bastion" type = var.bastion_instance_type image = "ubuntu_xenial" security_group_id = scaleway_instance_security_group.bastion.id ip_id = scaleway_instance_ip.bastion.id tags = ["bastion", var.env] user_data = { env = var.env cloud-init = file("${path.module}/cloud-init.sh") } } resource "scaleway_account_ssh_key" "bastion" { name = "bastion" public_key = var.bastion_public_ssh_key }
Create a script file infra/plan/cloud-init.sh:
#!/bin/sh
apt update
apt install -y unzip

# install awscli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# install psql
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
apt update
apt install -y postgresql-client-13
Let's configure our Terraform project.
Create a Terraform file infra/plan/variable.tf:
variable "zone" { type = string } variable "region" { type = string } variable "env" { type = string } variable "authorized_source_ranges" { type = list(string) description = "Addresses or CIDR blocks which are allowed to connect to Virtual Instance." } variable "bastion_instance_type" { type = string } variable "bastion_public_ssh_key" { type = string }
Add an infra/plan/version.tf file:
terraform {
  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = "2.1.0"
    }
  }
  required_version = ">= 0.13"
}
Add an infra/plan/provider.tf file:
provider "scaleway" { zone = var.zone region = var.region }
Add an infra/plan/backend.tf file:
terraform {
  backend "s3" {
  }
}
And an infra/plan/output.tf file:
output "bastion_ip" { value = scaleway_instance_ip.bastion.address }
Now, export the following variables and create a bucket to store your Terraform state.
cat <<EOF >~/.aws/credentials
[default]
aws_access_key_id=<SCW_ACCESS_KEY>
aws_secret_access_key=<SCW_SECRET_KEY>
region=fr-par
EOF

export SCW_ACCESS_KEY=<SCW_ACCESS_KEY>
export SCW_SECRET_KEY=<SCW_SECRET_KEY>
export SCW_REGION=fr-par

sed -i "s/<SCW_ACCESS_KEY>/$SCW_ACCESS_KEY/g; s/<SCW_SECRET_KEY>/${SCW_SECRET_KEY}/g;" ~/.aws/credentials

export ENV=dev

aws s3api create-bucket --bucket company-$ENV-terraform-backend --endpoint-url https://s3.$SCW_REGION.scw.cloud
aws s3api put-bucket-versioning --bucket company-$ENV-terraform-backend --versioning-configuration Status=Enabled --endpoint-url https://s3.$SCW_REGION.scw.cloud
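As a quick check, you can list the buckets through the same Scaleway S3 endpoint:

# Verify that the state bucket exists
aws s3 ls --endpoint-url https://s3.$SCW_REGION.scw.cloud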
Create an infra/plan/config/dev/terraform.tfvars file:
zone = "fr-par-1" region = "fr-par" env = "dev" authorized_source_ranges = ["<AUTHORIZED_NETWORK>"] bastion_instance_type = "STARDUST1-S" bastion_public_ssh_key = "<BASTION_PUBLIC_SSH_KEY>"
Create an infra/plan/config/dev/s3.backend file and deploy the infrastructure:
bucket = "company-dev-terraform-backend" key = "terraform.tfstate" region = "fr-par" endpoint = "https://s3.fr-par.scw.cloud" skip_credentials_validation = true skip_region_validation = true
cd infra/plan

sed -i "s,<AUTHORIZED_NETWORK>,$(curl -s http://checkip.amazonaws.com/),g; s,<BASTION_PUBLIC_SSH_KEY>,$(cat <PATH_TO_SSH_PUB>/id_rsa.pub),g;" config/$ENV/terraform.tfvars

terraform init --backend-config=config/$ENV/s3.backend -reconfigure
terraform validate
terraform apply -var-file=config/$ENV/terraform.tfvars
Let's check if all the resources have been created and are working correctly:
- Reserved IP
- Security Groups
- Virtual instance
Let's check the connection
ssh -i <PRIVATE_KEY_PATH> root@$(terraform output --raw bastion_ip)

Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-159-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu Nov 4 21:13:14 UTC 2021

  System load:  0.09              Processes:           83
  Usage of /:   19.4% of 8.86GB   Users logged in:     0
  Memory usage: 15%               IP address for ens2: 10.69.86.243
  Swap usage:   0%
Creating a Kubernetes Kapsule cluster using Terraform
In the previous section we created our network stack. In this part we will create and configure the Kubernetes Kapsule cluster.
The following resources will be created:
- Kubernetes Kapsule cluster
- Kubernetes Kapsule pools
- Traefik 2 load balancer
Kubernetes Kapsule cluster
Our Kubernetes Kapsule cluster is hosted in the Scaleway network. Each node of a pool has its own public IP, and there is no mechanism that permits access to the private IPs of the cluster from outside. So all communication between RDB and Kapsule will go through the public internet.
Scaleway is working on a feature to attach a Kapsule cluster to a private network. See the feature request.
Create the Terraform file infra/plan/kapsule.tf:
resource "scaleway_k8s_cluster" "kapsule" { name = "kapsule-${var.env}" description = "${var.env} cluster" version = var.kapsule_cluster_version cni = "calico" tags = [var.env] autoscaler_config { disable_scale_down = false scale_down_delay_after_add = "5m" estimator = "binpacking" expander = "random" ignore_daemonsets_utilization = true balance_similar_node_groups = true expendable_pods_priority_cutoff = -5 } auto_upgrade { enable = true maintenance_window_start_hour = 4 maintenance_window_day = "sunday" } } resource "scaleway_k8s_pool" "default" { cluster_id = scaleway_k8s_cluster.kapsule.id name = "default" node_type = var.kapsule_pool_node_type size = var.kapsule_pool_size autoscaling = true autohealing = true min_size = var.kapsule_pool_min_size max_size = var.kapsule_pool_max_size }
Complete the file infra/plan/variable.tf:
variable "kapsule_cluster_version" { type = string } variable "kapsule_pool_size" { type = number } variable "kapsule_pool_min_size" { type = number } variable "kapsule_pool_max_size" { type = number } variable "kapsule_pool_node_type" { type = string }
Complete the file infra/plan/config/$ENV/terraform.tfvars:
kapsule_cluster_version = "1.22"
kapsule_pool_size       = 2
kapsule_pool_min_size   = 2
kapsule_pool_max_size   = 4
kapsule_pool_node_type  = "gp1-xs"
Let's deploy our cluster
cd infra/plan
terraform apply -var-file=config/$ENV/terraform.tfvars
Let's check if the cluster and the pools have been created and are working correctly:
- Kubernetes Kapsule Cluster
- Kubernetes Kapsule Pool
- Kubernetes Kapsule NodePools
Enable the Traefik load balancer using the Scaleway CLI:
scw k8s cluster update $(scw k8s cluster list | grep kapsule-${ENV} | awk '{ print $1 }') ingress=traefik2 region=$SCW_REGION
Let's check if Traefik 2 has been enabled
Get the Kube config file and test the cluster access:
mkdir -p ~/.kube/$ENV
scw k8s kubeconfig get $(scw k8s cluster list | grep kapsule-${ENV} | awk '{ print $1 }') region=$SCW_REGION > ~/.kube/$ENV/config
export KUBECONFIG=~/.kube/$ENV/config

kubectl get nodes -o wide
NAME                                             STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP     OS-IMAGE                        KERNEL-VERSION     CONTAINER-RUNTIME
scw-kapsule-dev-default-ad56eb1786504a99957ea8   Ready    <none>   100m   v1.22.3   10.66.32.141   212.47.252.20   Ubuntu 20.04.1 LTS fc08d0ff0a   5.4.0-80-generic   containerd://1.5.5
scw-kapsule-dev-default-c4056d33bede4f3883fade   Ready    <none>   100m   v1.22.3   10.66.242.89   51.15.205.158   Ubuntu 20.04.1 LTS fc08d0ff0a   5.4.0-80-generic   containerd://1.5.5
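Now that kubectl is configured, you can also confirm that the Traefik 2 ingress controller is running (Kapsule deploys it in the kube-system namespace, matching the Service selector used below):

# List the Traefik pods deployed by Kapsule
kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik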
To expose Traefik 2 with a Scaleway LoadBalancer, create the file infra/k8s/traefik-loadbalancer.yml:
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s.scw.cloud/ingress: traefik2
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 8000
    - port: 443
      name: https
      targetPort: 8443
  selector:
    app.kubernetes.io/name: traefik
Deploy the configuration:
kubectl create -f infra/k8s/traefik-loadbalancer.yml
Verify that the LoadBalancer has been deployed correctly:
kubectl -n kube-system get svc traefik-ingress
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
traefik-ingress   LoadBalancer   10.43.99.132   51.159.74.228   80:31912/TCP,443:30860/TCP   16s
See Exposing services in Scaleway Kubernetes for more details.
To avoid losing the IP, let's reserve it:
export TRAEFIK_EXTERNAL_IP=$(kubectl get svc traefik-ingress -n kube-system -o json | jq -r .status.loadBalancer.ingress[0].ip)
kubectl patch svc traefik-ingress -n kube-system --type merge --patch "{\"spec\":{\"loadBalancerIP\": \"$TRAEFIK_EXTERNAL_IP\"}}"
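To confirm that the IP is now pinned on the Service:

# The output should match $TRAEFIK_EXTERNAL_IP
kubectl -n kube-system get svc traefik-ingress -o jsonpath='{.spec.loadBalancerIP}'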
Securing sensitive data in Scaleway RDB
Our Kubernetes Kapsule cluster is now active. In this section we will configure the RDB instance.
The following resources will be created:
- A highly available RDB Instance
- A database and a user for metabase
- ACLs to restrict traffic to only the Kubernetes Kapsule pool nodes and the bastion public IP
RDB instance
- The RDB Instance used is a PostgreSQL database server
- The Multiple zones option is enabled to ensure high availability
- Automated backup is enabled
- We create a database and a user for later
Create a Terraform file infra/plan/rdb.tf:
resource "random_string" "db_name_suffix" { length = 4 special = false upper = false } resource "scaleway_rdb_instance" "rdb" { name = "postgresql-${var.env}" node_type = var.rdb_instance_node_type volume_type = var.rdb_instance_volume_type engine = var.rdb_instance_engine is_ha_cluster = var.rdb_is_ha_cluster disable_backup = var.rdb_disable_backup volume_size_in_gb = var.rdb_instance_volume_size_in_gb user_name = "root" password = var.rdb_user_root_password } resource "scaleway_rdb_database" "metabase" { instance_id = scaleway_rdb_instance.rdb.id name = "metabase" } resource "scaleway_rdb_user" "metabase" { instance_id = scaleway_rdb_instance.rdb.id name = "metabase" password = var.rdb_user_metabase_password is_admin = false }
Complete the file infra/plan/variable.tf:
variable "rdb_is_ha_cluster" { type = bool } variable "rdb_disable_backup" { type = bool } variable "rdb_instance_node_type" { type = string } variable "rdb_instance_engine" { type = string } variable "rdb_instance_volume_size_in_gb" { type = string } variable "rdb_user_root_password" { type = string } variable "rdb_user_metabase_password" { type = string } variable "rdb_instance_volume_type" { type = string }
RDB ACLs
Each node in the Kubernetes Kapsule cluster has an attached ephemeral IP. For now, we can neither reserve a node IP nor use a Public Gateway with Kubernetes Kapsule. If we put the current node IPs in the ACL manually, they can change whenever autoscaling occurs. A temporary workaround is to create a Kubernetes CronJob that refreshes the RDB ACLs every minute with the current node IPs, as sketched below.
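A minimal sketch of the script such a CronJob could run is shown next. It assumes the pod has a service account allowed to list nodes, a configured Scaleway CLI, and an RDB_INSTANCE_ID environment variable; the exact scw rdb acl add argument syntax is an assumption to verify against scw rdb acl add --help.

#!/bin/sh
# Hypothetical ACL refresh script for a Kubernetes CronJob (run every minute).
set -e

# Collect the public IPs of the current nodes
NODE_IPS=$(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}')

for ip in $NODE_IPS; do
  # Add or refresh an ACL rule for each node IP
  # (argument names below are assumed; check `scw rdb acl add --help`)
  scw rdb acl add instance-id=$RDB_INSTANCE_ID rules.0.ip=$ip/32 rules.0.description="Kapsule node $ip"
done

For now, we simply pass the current node IPs to Terraform as ACL rules.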
Complete the file infra/plan/rdb.tf with the following resources:
resource "scaleway_rdb_acl" "rdb" { instance_id = scaleway_rdb_instance.rdb.id dynamic "acl_rules" { for_each = concat(var.rdb_acl_rules, [{ ip = "${scaleway_instance_ip.bastion.address}/32" description = "Bastion IP" }]) content { ip = acl_rules.value["ip"] description = acl_rules.value["description"] } } }
Complete the infra/plan/output.tf:
output "rdb_endpoint_ip" { value = scaleway_rdb_instance.rdb.endpoint_ip } output "rdb_endpoint_port" { value = scaleway_rdb_instance.rdb.endpoint_port }
Complete the file infra/plan/variable.tf:
variable "rdb_acl_rules" { type = list(object({ ip = string description = string })) }
Complete the file infra/plan/config/$ENV/terraform.tfvars:
rdb_instance_node_type         = "db-gp-xs"
rdb_instance_engine            = "PostgreSQL-13"
rdb_is_ha_cluster              = true
rdb_disable_backup             = false
rdb_instance_volume_type       = "bssd"
rdb_instance_volume_size_in_gb = "50"
rdb_user_root_password         = "<RDB_ROOT_USER_PWD>"
rdb_user_metabase_password     = "<RDB_METABASE_USER_PWD>"
rdb_acl_rules = [{
  ip          = "<KAPSULE_NODEPOOL_IP_1>"
  description = "Kapsule dev node 1"
}, {
  ip          = "<KAPSULE_NODEPOOL_IP_2>"
  description = "Kapsule dev node 2"
}]
Let's deploy our RDB instance
cd infra/plan

# Set the database passwords (ideally coming from your secret store)
export RDB_ROOT_USER_PWD=<RDB_ROOT_USER_PWD>
export RDB_METABASE_USER_PWD=<RDB_METABASE_USER_PWD>

KAPSULE_NODEPOOL_IP_1=$(scw k8s node list cluster-id=$(scw k8s cluster list | grep kapsule-${ENV} | awk '{ print $1 }') | awk '{ print $4 }' | sed -n '2 p')
KAPSULE_NODEPOOL_IP_2=$(scw k8s node list cluster-id=$(scw k8s cluster list | grep kapsule-${ENV} | awk '{ print $1 }') | awk '{ print $4 }' | sed -n '3 p')

sed -i "s/<RDB_ROOT_USER_PWD>/$RDB_ROOT_USER_PWD/g; s/<RDB_METABASE_USER_PWD>/${RDB_METABASE_USER_PWD}/g; s,<KAPSULE_NODEPOOL_IP_1>,${KAPSULE_NODEPOOL_IP_1}/32,g; s,<KAPSULE_NODEPOOL_IP_2>,${KAPSULE_NODEPOOL_IP_2}/32,g;" config/$ENV/terraform.tfvars

terraform apply -var-file=config/$ENV/terraform.tfvars
The secrets should be stored in a secret store such as Vault. You can deploy a Vault in your Kapsule cluster or on a virtual instance and retrieve the secrets in Terraform using the Vault provider. This avoids keeping passwords in plain text in your variable files (note that values read by Terraform still end up in the state, so the state bucket itself must be protected).
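A minimal sketch of that approach, assuming a reachable Vault server and a hypothetical secret/rdb path holding the passwords; the metabase user's password argument in rdb.tf would then reference the data source instead of a variable:

provider "vault" {
  address = "https://vault.example.com:8200" # hypothetical Vault address
}

data "vault_generic_secret" "rdb" {
  path = "secret/rdb" # hypothetical secret path
}

# In rdb.tf, the user password would then become:
#   password = data.vault_generic_secret.rdb.data["metabase_password"]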
Grant the metabase RDB user privileges on the metabase database:
scw rdb privilege set instance-id=$(scw rdb instance list | grep postgresql-${ENV} | awk '{ print $1 }') database-name=metabase user-name=metabase permission=all
Let's check if all the resources have been created and are working correctly:
Let's check the connection
ssh -i <PRIVATE_KEY_PATH> root@$(terraform output --raw bastion_ip)
Run the psql command:
psql -h <RDB_ENDPOINT_IP> -p <RDB_ENDPOINT_PORT> -U metabase -d metabase

psql (13.3 (Ubuntu 13.3-1.pgdg16.04+1))
SSL connection (protocol: TLSv1.2, cipher: <CIPHER>, bits: 256, compression: off)
Type "help" for help.

metabase=>
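Inside the psql session, you can confirm the connected user, database and SSL parameters with the \conninfo meta-command:

-- run at the metabase=> prompt
\conninfo
SELECT current_user, current_database();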
Securing the connectivity between a Kubernetes Kapsule application and an RDB database
Our RDB instance is now available. In this section, we put it all together: we deploy Metabase to Kubernetes and connect it to the RDB database. Our objectives are to:
- Deploy the metabase application.
- Create a DNS zone and a DNS record for the metabase application.
- Create the SSL certificates
Kustomize files
Let's create the Kustomize base files.
Create the metabase deployment infra/k8s/base/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metabase
  labels:
    app: metabase
spec:
  selector:
    matchLabels:
      app: metabase
  replicas: 1
  template:
    metadata:
      labels:
        app: metabase
    spec:
      containers:
        - name: metabase
          image: metabase/metabase
          imagePullPolicy: IfNotPresent
Create the metabase service infra/k8s/base/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: metabase
  labels:
    app: metabase
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    app: metabase
Create a secret to save the RDB user password in infra/k8s/base/database-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: metabase
type: Opaque
data:
  password: metabase
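Note that values under data must be base64-encoded; the password above is only a placeholder that we override per environment further down. A real value would be generated like this (placeholder password shown):

# Base64-encode the metabase database password for the Secret
echo -n '<RDB_METABASE_USER_PWD>' | base64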
Create the ingress file infra/k8s/base/ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: metabase
spec:
  rules:
    - host: metabase.<DOMAIN_NAME>
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: metabase
                port:
                  number: 8000
Replace <DOMAIN_NAME> with your domain name.
Create the kustomization file infra/k8s/base/kustomization.yaml:
namespace: metabase
resources:
  - deployment.yaml
  - service.yaml
  - database-secret.yaml
  - ingress.yaml
Now let's create the files for the dev environment:
infra/k8s/envs/dev/database-secret.patch.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: metabase
type: Opaque
data:
  password: <MB_DB_PASS>
infra/k8s/envs/dev/deployment.patch.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metabase
  labels:
    app: metabase
spec:
  selector:
    matchLabels:
      app: metabase
  replicas: 1
  template:
    metadata:
      labels:
        app: metabase
    spec:
      containers:
        - name: metabase
          image: metabase/metabase
          imagePullPolicy: IfNotPresent
          env:
            - name: MB_DB_TYPE
              value: postgres
            - name: MB_DB_HOST
              value: "<MB_DB_HOST>"
            - name: MB_DB_PORT
              value: "<MB_DB_PORT>"
            - name: MB_DB_DBNAME
              value: metabase
            - name: MB_DB_USER
              value: metabase
            - name: MB_DB_PASS
              valueFrom:
                secretKeyRef:
                  name: metabase
                  key: password
infra/k8s/envs/dev/ingress.patch.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: metabase
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
    - host: metabase.dev.<DOMAIN_NAME>
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: metabase
                port:
                  number: 80
  tls:
    - secretName: metabase-tls
      hosts:
        - metabase.dev.<DOMAIN_NAME>
Replace <DOMAIN_NAME> with your domain name.
infra/k8s/envs/dev/kustomization.yaml:
namespace: metabase
resources:
  - ../../base
patchesStrategicMerge:
  - database-secret.patch.yaml
  - deployment.patch.yaml
  - ingress.patch.yaml
Let's prepare the Kubernetes files:
# The Terraform outputs live in infra/plan, so read them from there first
cd infra/plan
RDB_ENDPOINT_IP=$(terraform output --raw rdb_endpoint_ip)
RDB_ENDPOINT_PORT=$(terraform output --raw rdb_endpoint_port)

cd ../k8s/envs/dev
RDB_METABASE_USER_PWD=$(echo -n $RDB_METABASE_USER_PWD | base64 -w 0)
sed -i "s/<MB_DB_PASS>/$RDB_METABASE_USER_PWD/g" database-secret.patch.yaml
sed -i "s/<MB_DB_HOST>/$RDB_ENDPOINT_IP/g; s/<MB_DB_PORT>/$RDB_ENDPOINT_PORT/g;" deployment.patch.yaml
Create the DNS record
To add an external domain name to Scaleway, please follow the official documentation: How to add an external domain to DNS.
Create the DNS zone:
scw dns zone create domain=<DOMAIN_NAME> subdomain=dev

Domain        <DOMAIN_NAME>
Subdomain     dev
Ns.0          ns0.dom.scw.cloud
Ns.1          ns1.dom.scw.cloud
NsDefault.0   ns0.dom.scw.cloud
NsDefault.1   ns1.dom.scw.cloud
Status        active
UpdatedAt     now
ProjectID     <PROJECT_ID>
Create the DNS record:
scw dns record add dev.<DOMAIN_NAME> name=metabase data=$TRAEFIK_EXTERNAL_IP type=A

DATA            NAME       PRIORITY   TTL   TYPE   COMMENT   ID
51.159.74.228   metabase   0          300   A      -         <DNS_RECORD_ID>
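Once the record is created, you can check that it resolves to the Traefik load balancer IP (assuming dig is available locally; DNS propagation may take a moment):

dig +short metabase.dev.<DOMAIN_NAME>
# expected output: the value of $TRAEFIK_EXTERNAL_IP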
Create the SSL certificates
To make Metabase more secure, we need to deploy cert-manager to create Let's Encrypt TLS certificates.
More information in the documentation: Deploying Cert Manager.
Use the command below to install cert-manager and its needed CRD (Custom Resource Definitions):
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.6.0/cert-manager.yaml

customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
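Before creating the issuer, it is worth checking that the cert-manager pods are running in the cert-manager namespace created above:

kubectl -n cert-manager get pods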
Create a cluster issuer that allows you to specify:
- the Let's Encrypt server (replace the production environment with the staging one if you only want to test).
- the email address used by Let's Encrypt to warn you about certificate expiration.
Create the file infra/k8s/cluster-issuer.yaml:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: <MAILING_LIST>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: issuer-account-key
    # Add a single challenge solver, HTTP01
    solvers:
      - http01:
          ingress:
            class: traefik
kubectl create -f infra/k8s/cluster-issuer.yaml

kubectl get ClusterIssuer letsencrypt-prod
NAME               READY   AGE
letsencrypt-prod   True    32s
Now we can deploy metabase:
cd infra/k8s/envs/dev

kubectl create namespace metabase
kubectl config set-context --current --namespace=metabase
kustomize build . | kubectl apply -f -

kubectl get ingress
NAME                        CLASS     HOSTS                        ADDRESS   PORTS     AGE
cm-acme-http-solver-n8x5s   traefik   metabase.dev.<DOMAIN_NAME>             80        1s
metabase                    traefik   metabase.dev.<DOMAIN_NAME>             80, 443   5s

kubectl get ingress
NAME       CLASS     HOSTS                        ADDRESS   PORTS     AGE
metabase   traefik   metabase.dev.<DOMAIN_NAME>             80, 443   2m16s

kubectl get deploy metabase
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
metabase   1/1     1            1           2m

kubectl get svc metabase
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
metabase   ClusterIP   10.39.118.166   <none>        80/TCP    2m40s

kubectl get secret metabase
NAME       TYPE     DATA   AGE
metabase   Opaque   1      7m30s
Let's check that the connection to the database has been established:
kubectl logs deploy/metabase
[..]
2021-11-04 19:30:15,524 INFO db.setup :: Verifying postgres Database Connection ...
2021-11-04 19:30:16,029 INFO db.setup :: Successfully verified PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) application database connection. ✅
2021-11-04 19:30:16,030 INFO db.setup :: Running Database Migrations...
[..]
2021-11-04 19:30:56,208 INFO metabase.core :: Metabase Initialization COMPLETE
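You can also verify that cert-manager issued the TLS certificate referenced by the Ingress (the Certificate object is created automatically from the metabase-tls secret name; the exact resource name may differ):

kubectl -n metabase get certificate
kubectl -n metabase get secret metabase-tls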
After a few minutes, you will see that Metabase is accessible over HTTPS!
Ok then!
Conclusion
Congratulations! You have completed this long workshop. In this series we have:
- Created a highly-available Scaleway RDB instance
- Configured a Scaleway Kubernetes Kapsule cluster with fine-grained access control to the RDB instance
- Tested the connectivity between a Kubernetes container and an RDB database
- Secured access to the Metabase application
That's it!
Cleanup
To clean up the resources, run the following commands:
cd infra/plan
terraform destroy -var-file=config/$ENV/terraform.tfvars

scw dns record delete dev.<DOMAIN_NAME> name=metabase data=$TRAEFIK_EXTERNAL_IP type=A
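Note that the Scaleway Load Balancer behind the traefik-ingress Service was created by the Kubernetes cloud controller, not by Terraform, so it may survive the destroy. Deleting the Service before destroying the cluster (or checking the console afterwards) avoids keeping an orphaned load balancer:

kubectl -n kube-system delete svc traefik-ingress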
Final Words
I will update the article as soon as Private Networks and/or Public Gateway are available for Kapsule and RDB.
If you have any questions or feedback, please feel free to leave a comment.
Otherwise, I hope I have helped you answer some of the hard questions about connecting a Kapsule cluster to an RDB instance.
By the way, do not hesitate to share with peers 😊
Thanks for reading!