
I have deployed a Kubernetes cluster (with a single node acting as master) onto an EC2 instance. After that, I created an nginx Deployment and exposed it with a Service of type NodePort. The nginx service is reachable on ec2privateIP:31336, and I can also access it via ec2publicIP:31336 from my computer.

At this stage, I have the following questions: 1) What do I do next in order to access the HTTP service from outside the cluster, i.e. get a successful "curl ec2publicIP:80"? Any guidance would be extremely helpful.

Note:
- My EC2 security group is configured to allow HTTP traffic.
- After logging into the nginx pod, I am able to ping google.com, but "apt-get update" times out.
- I have enabled IP forwarding on my EC2 instance.

2) Which would be the best and simplest option among NodePort, an Ingress controller, or an ELB as the type for Kubernetes Services?

3) Also, where does iptables fit into this? Can I avoid manually changing its rules by using any of the above, or some other tool/package that takes care of the networking?

Your response would be highly appreciated.

nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-nginx
spec:
  selector:
    matchLabels:
      run: demo-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: demo-nginx
    spec:
      containers:
      - name: demo-nginx
        image: k8s.gcr.io/nginx:1.7.9
        ports:
        - containerPort: 80

nginx-services.yaml:

apiVersion: v1
kind: Service
metadata:
  name: demo-nginx
  labels:
    run: demo-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: demo-nginx
  type: NodePort

1 Answer


I guess you want to create a Kubernetes Service that will sit in front of your Pod. Pods come and go on ephemeral IPs, and the Service is the stable load-balancing front end that maps a known external port (e.g. 80 or 443) to the container ports of the Pods behind it.
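
As a minimal sketch of that mapping, reusing the labels from your manifests (the explicit nodePort value is just illustrative; by default Kubernetes assigns one from the 30000-32767 range, which is how you ended up with 31336):

apiVersion: v1
kind: Service
metadata:
  name: demo-nginx
spec:
  type: NodePort
  selector:
    run: demo-nginx
  ports:
  - port: 80          # the Service's cluster-internal port
    targetPort: 80    # the containerPort the Pod listens on
    nodePort: 31336   # the port opened on every node; illustrative value
    protocol: TCP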

Also, you don't want to run Pods on their own. It's better to run them as part of a Deployment, which will take care of restarting them if they die.

Here is a very simple single-Pod Deployment with a Service implemented as AWS ELB. It all sits in its own namespace:

kind: Namespace
apiVersion: v1
metadata:
  name: demo
  labels:
    name: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  namespace: demo
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/demo:latest  # <<< Update
        ports:
        - containerPort: 80
          name: backend-http
        env:
        - name: SOME_API
          value: https://example.com/some-api
---
kind: Service
apiVersion: v1
metadata:
  name: demo
  namespace: demo
  annotations:
    # The backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - name: elb-http
    protocol: TCP
    port: 80
    targetPort: backend-http

As you will notice, the template refers to port 80 even though the externally visible node port will be some random number assigned by k8s. The Pod itself listens on port 80, so that is what the containerPort declares, and the Service's targetPort finds it via the backend-http name.
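
To sanity-check that wiring after deploying (names taken from the manifest above), you can inspect the Service and its endpoints:

# The ENDPOINTS column should list <podIP>:80, i.e. the named port resolved
kubectl -n demo get endpoints demo
# TargetPort should show backend-http, and LoadBalancer Ingress should show
# the ELB hostname once AWS has finished provisioning it
kubectl -n demo describe service demo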

You can deploy it with kubectl apply and it will create the whole lot.
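
Assuming you save the manifest above as demo.yaml (a filename of my choosing), the deploy and the check from your original question would look roughly like this:

kubectl apply -f demo.yaml
# Wait for AWS to provision the ELB, then grab its DNS name
kubectl -n demo get service demo -o wide
# Once the EXTERNAL-IP column shows an ELB hostname:
curl http://<elb-hostname>/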

Hope that helps :)

  • I am using the deployment only. Commented Dec 19, 2018 at 14:41
  • The ELB is stuck in the "pending" state. Does the Service type 'LoadBalancer' require any prerequisites? Commented Dec 19, 2018 at 17:36
  • @tanmoy ELB may take some time to create. Has it finished yet? Commented Dec 19, 2018 at 19:05
  • No... it's been in the pending state forever. But this post suggests there are some prerequisites to be met. Commented Dec 19, 2018 at 19:34
  • @MLu I'm still struggling to get the ingress controller up with the ELB, but it's not working. Do you have any code that I can follow? Commented Dec 28, 2018 at 14:00
