I'm running a bare-metal Kubernetes install and I'm trying to make my test nginx application (created simply with kubectl create deployment nginx --image=nginx) reachable remotely via every node. The idea is that I can then use a bare-metal HAProxy installation to route the traffic appropriately.
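(For completeness, my understanding is that the create deployment command above produces something roughly equivalent to the manifest below. I'm reconstructing it from memory, so treat it as a sketch; the key point is the app: nginx label, which the Service selector later matches on.)

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx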
From everything I've read, this configuration should work and allow access to the port on every node. Additionally, running netstat does seem to show that the NodePort is listening on all nodes -
    user@kube2:~$ netstat -an | grep :30196
    tcp6       0      0 :::30196                :::*                    LISTEN

My service.yaml file -
    apiVersion: v1
    kind: Service
    metadata:
      name: test-svc
      namespace: default
    spec:
      type: NodePort
      externalTrafficPolicy: Cluster
      ports:
        - port: 80
          targetPort: 80
          protocol: TCP
          name: http
        - port: 443
          targetPort: 443
          protocol: TCP
          name: https
      selector:
        app: nginx

My node networking configuration -
    kube1 - 192.168.1.130 (master)
    kube2 - 192.168.1.131
    kube3 - 192.168.1.132

My service running -
    user@kube1:~$ kubectl get svc -o wide
    NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
    kubernetes   ClusterIP  10.96.0.1        <none>        443/TCP                      18m   <none>
    test-svc     NodePort   10.103.126.143   <none>        80:30196/TCP,443:32580/TCP   14m   app=nginx

However, despite all of the above, the service is only accessible on the node the pod is running on (kube3/192.168.1.132). Any ideas why this would be, or am I just misunderstanding Kubernetes?
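To make the symptom concrete, this is a sketch of the kind of test I'm running from a machine on the same LAN (the IPs and HTTP NodePort are the ones shown above):

    # Testing the HTTP NodePort on each node from outside the cluster
    curl -m 5 http://192.168.1.130:30196   # kube1 (master)             - no response
    curl -m 5 http://192.168.1.131:30196   # kube2                      - no response
    curl -m 5 http://192.168.1.132:30196   # kube3 (runs the nginx pod) - responds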
I've had a look at load balancers and Ingress, but what doesn't make sense is this: if I routed all traffic to my master (kube1) to distribute, what happens if kube1 goes down? Surely I'd then need a load balancer in front of my load balancer?!
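For context, this is the sort of HAProxy configuration I was planning to put in front of the cluster, spreading traffic across the NodePort on all three nodes (a rough sketch only; the frontend/backend names are placeholders and the port is the HTTP NodePort from above):

    # Fragment of the prospective haproxy.cfg on the external load balancer
    frontend http_in
        bind *:80
        mode http
        default_backend k8s_nginx_nodeport

    backend k8s_nginx_nodeport
        mode http
        balance roundrobin
        option httpchk GET /
        server kube1 192.168.1.130:30196 check
        server kube2 192.168.1.131:30196 check
        server kube3 192.168.1.132:30196 check

With health checks on every node, any single node should be able to go down without taking the app offline - which is exactly why I want the NodePort reachable on all of them.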
Hope someone can help!
Thanks, Chris.