I'm running a bare-metal Kubernetes install and I'm trying to make my test nginx application (created simply with kubectl create deployment nginx --image=nginx) reachable remotely via all nodes. The idea is that I can then use a bare-metal HAProxy installation to route the traffic appropriately.

From everything I've read, this configuration should work and allow access via the port on every node. Additionally, netstat does seem to show that the NodePort is listening on all nodes -

user@kube2:~$ netstat -an | grep :30196
tcp6       0      0 :::30196                :::*                    LISTEN

My service.yaml file -

apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx

My node networking configuration -

kube1 - 192.168.1.130 (master)
kube2 - 192.168.1.131
kube3 - 192.168.1.132

My service running -

user@kube1:~$ kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP                      18m   <none>
test-svc     NodePort    10.103.126.143   <none>        80:30196/TCP,443:32580/TCP   14m   app=nginx

However, despite all the above, my service is only accessible on the node it is running on (kube3/192.168.1.132). Any ideas why this would be, or am I just misunderstanding Kubernetes?

I'd had a look at load balancers and Ingress, but here's what doesn't make sense: if I routed all traffic through my master (kube1) to distribute it, what happens if kube1 goes down? Surely I'd need a load balancer in front of my load balancer?!

Hope someone can help!

Thanks, Chris.

3 Answers

If you want to expose a service outside the cluster, use a Service of type LoadBalancer, or an Ingress. However, the LoadBalancer approach has its own limitations: you cannot configure a LoadBalancer to terminate HTTPS traffic, serve virtual hosts, or do path-based routing. In Kubernetes 1.2 a separate resource called Ingress was introduced for this purpose. Here is an example of a LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-app
  name: nginx-svc
  namespace: default
spec:
  type: LoadBalancer   # use LoadBalancer as type here
  ports:
  - port: 80
  selector:
    app: nginx-app

$ kubectl get services -l app=nginx-app -o wide
NAME        TYPE           CLUSTER-IP   EXTERNAL-IP                                                 PORT(S)        AGE   SELECTOR
nginx-svc   LoadBalancer   <ip>         a54a62300696611e88ba00af02406931-1787163476.myserver.com   80:31196/TCP   9m    app=nginx-app

Then test the URL:

$ curl a54a62300696611e88ba00af02406931-1787163476.myserver.com
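If you do need HTTPS termination, virtual hosts, or path-based routing, an Ingress for the question's test-svc might look like this (a minimal sketch, assuming an ingress controller is already running in the cluster; the resource name and host are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress            # hypothetical name
spec:
  rules:
  - host: nginx.example.com     # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-svc      # the NodePort Service from the question
            port:
              number: 80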
  • Thanks @asktyagi. I looked into load balancers, but they seem tricky to set up on bare metal; aren't they primarily designed for existing cloud services? The only thing I can think to do is use NodePorts and have an external load balancer. Also, I was having problems where not all nodes would respond, but since moving from Ubuntu to CentOS I seem to have resolved this issue. Commented Jul 4, 2019 at 20:33

In order to access your local Kubernetes cluster Pods from outside, a NodePort needs to be created. The NodePort publishes your service on every node using its public IP and a port. You can then access the service using any of the cluster's node IPs and the assigned port.

Defining a NodePort in Kubernetes:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
  - port: 8082        # Cluster IP port, i.e. http://10.103.75.9:8082
    targetPort: 8080  # Application port
    nodePort: 30000   # External (VirtualBox) IPs, i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
    protocol: TCP
    name: http
  selector:
    app: nginx
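As a quick check, the service should then answer on every node's IP at the nodePort. In the question's setup (reusing its node IPs and assigned port 30196), that would be:

$ curl http://192.168.1.130:30196/
$ curl http://192.168.1.131:30196/
$ curl http://192.168.1.132:30196/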

See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).

The nginx ingress controller can also be replaced with Istio if you want to benefit from a service mesh architecture for:

  • Load balancing traffic, external or internal
  • Controlling failures, retries, and routing
  • Applying limits and monitoring network traffic between services
  • Securing communication

See Installing Istio in Kubernetes under VirtualBox (without Minikube).

  • Thanks Javier. So in terms of actually just having a single IP address for my service, I'd need an external load balancer, is that right? Thanks, Chris. Commented Jul 4, 2019 at 20:31
  • Yes Chris, you will need an external load balancer. You can install an nginx outside the cluster and configure it as a proxy for the NodePorts (see the sketch below). Just make sure to health check the endpoints. Commented Jul 5, 2019 at 0:13
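For reference, a minimal sketch of such an external proxy, written for HAProxy since that is what the question plans to use. It assumes the node IPs and NodePort 30196 from the question; the frontend/backend names are hypothetical:

frontend nginx_http
    bind *:80
    mode http
    default_backend k8s_nginx_http

backend k8s_nginx_http
    mode http
    balance roundrobin
    option httpchk GET /      # health check each node's NodePort
    server kube1 192.168.1.130:30196 check
    server kube2 192.168.1.131:30196 check
    server kube3 192.168.1.132:30196 check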

Yet another option is to expose the Nginx Ingress controller over NodePort (although this is not recommended for production clusters). The NodePort type still gives you load-balancing capabilities, and you can control which specific Pod (backing the Service endpoints) the traffic is sent to with 'service.spec.sessionAffinity' and container probes.
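For illustration, a minimal sketch of sessionAffinity applied to the question's test-svc (the timeout value is an assumption; 10800 seconds is the Kubernetes default):

apiVersion: v1
kind: Service
metadata:
  name: test-svc
spec:
  type: NodePort
  sessionAffinity: ClientIP       # keep a given client IP on the same Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800       # assumed value; this is the Kubernetes default
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx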

If you had more than one replica of the nginx Pod in your Deployment spec (example here), you could control pod-to-node assignment via the pod affinity and anti-affinity feature.
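As a sketch (reusing the question's app=nginx label; the replica count is illustrative), an anti-affinity rule that spreads the replicas across nodes could be added to the Deployment's pod template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx
            topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
      - name: nginx
        image: nginx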
