<br/>

1. Install NGINX Ingress Controller in your Cluster
2. Install NGINX Cafe Demo Application in your Cluster
3. Install NGINX Plus on the Loadbalancer Server(s)
4. Configure NGINX Plus for HTTP MultiCluster Load Balancing
5. Install the NKL (NGINX Kubernetes Loadbalancer) Controller in your Cluster
6. Install the NKL LoadBalancer or NodePort Service manifest
7. Test out NKL
<br/>

- Demo application; this install guide uses the NGINX Cafe example, found here: https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/complete-example
- A bare-metal Linux server or VM for the external NGINX LB Server, connected to a network external to the cluster. Two of these are required if High Availability is needed, as shown here.
- NGINX Plus software loaded on the LB Server(s). This install guide follows the instructions for installing NGINX Plus on CentOS 7, located here: https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-plus/
- The NGINX Kubernetes Loadbalancer (NKL) Controller, new software from NGINX for this Solution.
<br/>
A standard K8s cluster is all that is required; use two or more Clusters if you want MultiCluster Load Balancing.
<br/>
The NGINX Ingress Controller in this Solution is the destination target for traffic (north-south) that is being sent to the cluster(s). The installation of the actual Ingress Controller is outside the scope of this guide, but the links to the docs are included for your reference. The NIC installation using Manifests must follow the documents exactly as written, as this Solution depends on the `nginx-ingress` namespace and service objects. **Only the very last step is changed.**
**NOTE:** This Solution only works with `nginx-ingress` from NGINX. It will not work with the K8s Community version of Ingress, called ingress-nginx.
If you are unsure which Ingress Controller you are running, check out the blog on nginx.com.
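A quick way to check from the command line is to look at the controller image (a sketch, assuming the controller was installed into the `nginx-ingress` namespace as described above):

```bash
# Print the container image(s) used by the Ingress Controller pods.
# An nginx-ingress image published by NGINX (nginxinc) indicates nginx-ingress from NGINX,
# not the community ingress-nginx project.
kubectl get pods -n nginx-ingress -o jsonpath='{.items[*].spec.containers[*].image}'
```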
>Important! Do not complete the very last step in the NIC deployment with Manifests, `do not deploy the loadbalancer.yaml or nodeport.yaml Service file!` You will apply a different loadbalancer or nodeport Service manifest later, after the NKL Controller is up and running. `The nginx-ingress Service file must be changed` - it is not the default file.
This is not a component of the actual Solution, but it is useful to have a well-known application running in the cluster, as a known-good target for test commands. The example provided here is used by the Solution to demonstrate proper traffic flows.
Note: If you choose a different Application to test with, `the NGINX configurations and health checks provided here may not work,` and will need to be modified to work correctly.
<br/>
- Use the provided Cafe Demo manifests in the docs/cafe-demo folder:
```bash
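# The commands in this block are a sketch -- the exact file names in the
# docs/cafe-demo folder are assumptions, so adjust them to the files you find there.
kubectl apply -f cafe-secret.yaml
kubectl apply -f cafe.yaml
kubectl apply -f cafe-virtualserver.yaml
```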
The Cafe Demo images are available on Docker Hub: https://hub.docker.com/r/nginxinc/ingress-demo
**IMPORTANT** - Do not use the `cafe-ingress.yaml` file. Rather, use the `cafe-virtualserver.yaml` file that is provided here. It uses the NGINX Plus CRDs to define a VirtualServer, and the related VirtualServer Routes needed. If you are using NGINX OSS Ingress Controller, you will need to use the appropriate manifests, which is not covered in this Solution.
<br/>
This Solution followed the `Installation of NGINX Plus on Centos/Redhat/Oracle` instructions.
>NOTE: This Solution will only work with NGINX Plus, as NGINX OpenSource does not have the API that is used in this Solution. Installation on unsupported Linux Distros is not recommended.
If you need a license for NGINX Plus, a 30-day Trial license is available here:
After a new installation of NGINX Plus, make the following configuration changes:
- High Availability: If you have 2 or more NGINX Plus LB Servers, you can use Zone Sync to synchronize the KeyValue SplitRatio data between the NGINX LB Servers automatically. Use the `zonesync.conf` example file provided, changing the IP addresses to match your NGINX LB Servers. Place this file in the /etc/nginx/stream folder on each LB Server, and reload NGINX. Note: This example does not provide any security for the Zone Sync traffic; secure it as necessary with TLS or an IP allowlist.
```bash
cat zonesync.conf
# NGINX K8sLB Zone Sync configuration, for KeyVal split
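# The remainder of this file is a sketch (assumed, not the repo's exact file).
# It is meant to be included inside the stream{} context (files in /etc/nginx/stream),
# and enables NGINX Plus Zone Sync between the two LB Servers.
server {
   listen 9001;
   zone_sync;
   zone_sync_server 10.1.1.4:9001;   # NGINX LB Server 1 (example IP)
   zone_sync_server 10.1.1.5:9001;   # NGINX LB Server 2 (example IP)
}
```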
- Checking the NGINX Plus Dashboard, Cluster Tab, you will see the Status of the `split` Zone as Green, and messages sent/received if Zone Sync is operating correctly:

### This is the new K8s Controller from NGINX, which is configured to watch the K8s environment and the `nginx-ingress` Service object, and send API updates to the NGINX LB Server(s) when there are changes. It only requires three things:
1. New Kubernetes namespace and RBAC
2. NKL ConfigMap, to configure the Controller
3. NKL Deployment, to deploy and run the Controller
<br/>
- Create the new `nkl` K8s namespace:
```bash
kubectl create namespace nkl
```
- Apply the manifests for NKL's Secret, Service, ClusterRole, and ClusterRoleBinding:
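A minimal sketch of that step is shown below; the manifest file names are assumptions, so substitute the Secret, Service, and RBAC manifests actually shipped with NKL:

```bash
# File names below are assumptions -- use the manifests provided in the NKL repo
kubectl apply -f nkl-secret.yaml
kubectl apply -f nkl-service.yaml
kubectl apply -f nkl-rbac.yaml
```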
- Modify the ConfigMap manifest to match your Network environment. Change the `nginx-hosts` IP address to match your NGINX LB Server IP. If you have 2 or more LB Servers, separate them with a comma. Important! - keep the port number for the Plus API endpoint, and the `/api` URL as shown.
```yaml
apiVersion: v1
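# The remainder of this ConfigMap is a sketch -- the metadata names below are
# assumptions, but the data key is the nginx-hosts value described above.
kind: ConfigMap
metadata:
  name: nkl-config        # assumed name; use the name in the provided nkl-configmap.yaml
  namespace: nkl
data:
  nginx-hosts: "http://10.1.1.4:9000/api"   # your NGINX LB Server IP(s); keep the port and /api path
```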

Apply the updated ConfigMap:

```bash
kubectl apply -f nkl-configmap.yaml
```
- Deploy the NKL Controller:
```bash
kubectl apply -f nkl-deployment.yaml
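# Optional check (a sketch): confirm the NKL Controller pod is running
# in the nkl namespace created earlier
kubectl get pods -n nkl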
```

Instead, use the `loadbalancer-cluster1.yaml` or `nodeport-cluster1.yaml` manifests:

```yaml
# NKL LoadBalancer Service file
# Spec -ports name must be in the format of
# nkl-<upstream-block-name>
# The nginxinc.io Annotation must be added
# externalIPs are set to Nginx LB Servers
# Chris Akker, Apr 2023
#
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    nginxinc.io/nkl-cluster1-https: "http"   # Must be added
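  # The spec below is a sketch, assumed from the NodePort example later in this guide
  # and the externalIPs comment above -- use the loadbalancer-cluster1.yaml file provided.
spec:
  type: LoadBalancer
  externalIPs:
  - 10.1.1.4                    # your NGINX LB Server IP(s)
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    name: nkl-cluster1-https    # must match the NGINX upstream block name
  selector:
    app: nginx-ingress
```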
- Orange is the LoadBalancer Service `External-IP`, which is set to your NGINX LB Server IP(s).
- Blue is the `NodePort mapping` created by K8s. The new NKL Controller updates the NGINX LB Server upstreams with these, as shown on the dashboard.
<br/>
Review the new `nodeport-cluster1.yaml` Service definition file:
```yaml
# NKL Nodeport Service file
# NodePort -ports name must be in the format of
# nkl-<upstream-block-name>
# The nginxinc.io Annotation must be added
# Chris Akker, Apr 2023
#
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    nginxinc.io/nkl-cluster1-https: "http"   # Must be added
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    name: nkl-cluster1-https
  selector:
    app: nginx-ingress
```

Open a browser tab to https://cafe.example.com/coffee.
The Dashboard's `HTTP Upstreams Requests counters` will increase as you refresh the browser page.
- Using a Terminal and `./kube Context set for Cluster1`, delete the `nginx-ingress loadbalancer service` or `nginx-ingress nodeport service` definition.
```bash
kubectl delete -f loadbalancer-cluster1.yaml
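# or, if you deployed the NodePort version instead:
kubectl delete -f nodeport-cluster1.yaml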
```

If you refresh the cafe.example.com browser page, 1/2 of the requests will respond with `502 Bad Gateway`. There are NO upstreams in Cluster1 for NGINX to send the requests to!
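If you want to watch this from the command line as well, a short loop like the one below shows the alternating responses (a sketch; it assumes cafe.example.com resolves to your NGINX LB Server and that the demo uses a self-signed certificate, hence `-k`):

```bash
# Send ten requests to /coffee and print only the HTTP status codes;
# roughly half should return 502 while the Cluster1 Service is deleted
for i in $(seq 1 10); do
  curl -k -s -o /dev/null -w "%{http_code}\n" https://cafe.example.com/coffee
done
```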
---
- Add the `nginx-ingress` Service back to Cluster1:
```bash
kubectl apply -f loadbalancer-cluster1.yaml
```
or
```bash
kubectl apply -f nodeport-cluster1.yaml
```
- Verify the nginx-ingress Service is re-created. Notice that the Port Numbers have changed!
```bash
kubectl get svc nginx-ingress -n nginx-ingress
```
`The NKL Controller detects this change, and modifies the LB Server(s) upstreams to match.` The Dashboard will show you the new Port numbers, matching the new LoadBalancer or NodePort definitions. The NKL logs show these messages, confirming the changes: