Background
It is rare for organizations to provide a dedicated Kubernetes cluster to each tenant, because sharing a cluster is generally more cost-effective: it reduces expenses and streamlines management. Sharing clusters, however, also brings challenges such as managing noisy neighbors and ensuring security.
Clusters can be shared in many ways. In some cases, different applications may run in the same cluster. In other cases, multiple instances of the same application may run in the same cluster, one for each end user. All these types of sharing are frequently described using the umbrella term multi-tenancy.
While Kubernetes does not have first-class concepts of end users or tenants, it provides several features to help manage different tenancy requirements. These are discussed below.
A common form of multi-tenancy is to share a cluster between multiple teams within an organization, each of whom may operate one or more workloads. In this scenario, members of the teams often have direct access to Kubernetes resources via tools such as kubectl, or indirect access through GitOps controllers or other types of release automation tools. There is often some level of trust between members of different teams, but Kubernetes policies such as RBAC, quotas, and network policies are essential to safely and fairly share clusters.
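As an illustration of the kind of policy involved, the sketch below shows a ResourceQuota that caps how much CPU and memory, and how many Pods, one team's namespace can consume. The namespace name and limits are hypothetical and would need to match your own environment.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a        # illustrative team namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU requested by all pods in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"             # maximum number of pods in the namespace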
The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor running multiple instances of a workload for customers. This business model is so strongly associated with this deployment style that many people call it "SaaS tenancy." In this scenario, the customers do not have access to the cluster; Kubernetes is invisible from their perspective and is only used by the vendor to manage the workloads. Cost optimization is frequently a critical concern, and Kubernetes policies are used to ensure that the workloads are strongly isolated from each other.
Tenant Isolation
There are several ways to design and build multi-tenant solutions with Kubernetes, isolating tenants at the control plane, the data plane, or both. Each approach comes with its own tradeoffs that affect the isolation level, implementation effort, operational complexity, and cost of the service.
Kubernetes control plane isolation ensures that different tenants cannot access or affect each other's Kubernetes API resources, and the Flomesh Service Mesh (FSM) Ingress controller extends this model by providing isolated Ingress controllers per Kubernetes Namespace.
In Kubernetes, a Namespace provides a mechanism for isolating groups of API resources within a single cluster. This isolation has two key dimensions:
First, object names within a namespace can overlap with names in other namespaces, similar to files in folders. This allows tenants to name their resources without having to consider what other tenants are doing.
Second, many Kubernetes security policies are scoped to namespaces. For example, RBAC Roles and Network Policies are namespace-scoped resources, and RBAC can restrict Users and Service Accounts to a single namespace.
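To make this concrete, here is a minimal sketch of a namespace-scoped Role and RoleBinding that confine a hypothetical user alice to common workloads in a team-a namespace (all names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-developer-binding
  namespace: team-a
subjects:
  - kind: User
    name: alice                          # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io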
In a multi-tenant environment, a Namespace helps segment a tenant's workload into a logical and distinct management unit. A common practice is to isolate every workload in its own namespace, even if multiple workloads are operated by the same tenant. This ensures that each workload has its own identity and can be configured with an appropriate security policy.
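One common example of such a policy is a NetworkPolicy that only accepts traffic from pods in the same namespace. The sketch below assumes the cluster's CNI plugin enforces NetworkPolicies; the namespace name is illustrative.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: team-a
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods in this same namespace may connect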
In this article, we will learn how to use the Flomesh Service Mesh (FSM) Ingress controller to physically isolate Ingress controllers when hosting multiple tenants in your Kubernetes cluster.
Flomesh Service Mesh (FSM)
FSM is an open-source product from Flomesh for managing Kubernetes north-south traffic, the Gateway API, and multi-cluster deployments. It uses the programmable proxy Pipy at its core and provides an Ingress controller, a Gateway API controller, a load balancer, cross-cluster service registration and discovery, and more.
The FSM Ingress Controller supports a multi-tenancy model through its NamespacedIngress CRD, which deploys a physically isolated Ingress controller for each requested Namespace.
For example, the YAML below defines an Ingress controller that listens on port 100 and creates a LoadBalancer-type Service for it, also on port 100.
apiVersion: flomesh.io/v1alpha1
kind: NamespacedIngress
metadata:
  name: namespaced-ingress-100
  namespace: test-100
spec:
  serviceType: LoadBalancer
  ports:
    - name: http
      port: 100
      protocol: TCP
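Assuming the manifest above is saved as namespaced-ingress-100.yaml (a hypothetical file name), it could be applied like any other Kubernetes resource once the target namespace exists:

$ kubectl create namespace test-100
$ kubectl apply -f namespaced-ingress-100.yaml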
Install FSM
FSM provides a standard Helm chart, which can be installed via the Helm CLI.
$ helm repo add fsm https://flomesh-io.github.io/fsm
$ helm repo update
$ helm install fsm fsm/fsm --namespace flomesh --create-namespace --set fsm.ingress.namespaced=true
Verify that all pods are up and running properly.
$ kubectl get po -n flomesh
NAME                                          READY   STATUS    RESTARTS   AGE
fsm-manager-6857f96858-sjksm                  1/1     Running   0          55s
fsm-repo-59bbbfdc5f-w7vg6                     1/1     Running   0          55s
fsm-bootstrap-8576c5ff4f-7qr7k                1/1     Running   0          55s
fsm-cluster-connector-local-8f8fb87f6-h7z9j   1/1     Running   0          32s
Create Sample Application
In this demo, we will deploy the httpbin service in a namespace named httpbin.
# Create Namespace
kubectl create ns httpbin

# Deploy sample
kubectl apply -f https://raw.githubusercontent.com/flomesh-io/osm-edge-docs/main/manifests/samples/httpbin/httpbin.yaml -n httpbin
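Before continuing, it is worth checking that the sample application came up; the pod name and age will differ in your cluster.

# Verify the httpbin pod reaches the Running state
kubectl get pods -n httpbin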
Create a Standalone Ingress Controller
The next step is to create a separate Ingress controller for the httpbin namespace.
$ kubectl apply -f - <<EOF
apiVersion: flomesh.io/v1alpha1
kind: NamespacedIngress
metadata:
  name: namespaced-ingress-httpbin
  namespace: httpbin
spec:
  serviceType: LoadBalancer
  http:
    port:
      name: http
      port: 81
      nodePort: 30081
  resources:
    limits:
      cpu: 500m
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 20Mi
EOF
After executing the above command, you will see an Ingress Controller running successfully under the namespace httpbin.
kubectl get po -n httpbin -l app=fsm-ingress-pipy
NAME                                        READY   STATUS    RESTARTS   AGE
fsm-ingress-pipy-httpbin-5594ffcfcc-zl5gl   1/1     Running   0          58s
At this point, there should be a corresponding Service under this namespace.
$ kubectl get svc -n httpbin -l app.kubernetes.io/component=controller
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
fsm-ingress-pipy-httpbin   LoadBalancer   10.43.62.120   192.168.1.11   81:30081/TCP   2m49s
Once you have the Ingress Controller, it's time to create the Ingress resource.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  namespace: httpbin
spec:
  ingressClassName: pipy
  rules:
    - host: httpbin.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  number: 14001
EOF
Now that we have created the Ingress resource, let's run a quick curl to see if things are working as expected.
In my local demo setup, the LoadBalancer IP is 192.168.1.11; yours might be different, so make sure you run curl against the external IP of your own setup.
curl -sI http://192.168.1.11:81/get -H "Host: httpbin.org"
HTTP/1.1 200 OK
server: gunicorn/19.9.0
date: Mon, 03 Oct 2022 12:02:04 GMT
content-type: application/json
content-length: 239
access-control-allow-origin: *
access-control-allow-credentials: true
connection: keep-alive
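When you are done with the demo, deleting the httpbin namespace is a simple way to clean up, since the sample application, the Ingress, and the NamespacedIngress (with its dedicated controller) all live inside it.

# Remove all demo resources created in this walkthrough
kubectl delete namespace httpbin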
Conclusion
In this blog post, you learned about Kubernetes multi-tenancy, the features Kubernetes provides to support it, tenant isolation levels, and how to use the Flomesh Service Mesh (FSM) Ingress controller to set up isolated Ingress controllers for namespaces.
Flomesh Service Mesh (FSM) is a Kubernetes north-south traffic manager from Flomesh that provides Ingress controllers, a Gateway API implementation, load balancing, and cross-cluster service registration and discovery. FSM uses Pipy, a programmable network proxy, as its data plane and is suitable for cloud, edge, and IoT environments.