29

Currently I'm working on a small hobby project which I'll make open source once it's ready. This service is running on Google Container Engine. I chose GCE to avoid configuration hassle, because the costs are affordable, and to learn new stuff.

My pods are running fine and I created a service with type LoadBalancer to expose the service on ports 80 and 443. This works perfectly.

However, I discovered that for each LoadBalancer service, a new Google Compute Engine load balancer is created. This load balancer is pretty expensive and really overkill for a hobby project on a single instance.

To cut the costs I'm looking for a way to expose the ports without the load balancer.

What I've tried so far:

Is there a way to expose port 80 and 443 for a single instance on Google Container Engine without a load balancer?

5 Answers

12

Yep, through externalIPs on the service. Example service I've used:

apiVersion: v1
kind: Service
metadata:
  name: bind
  labels:
    app: bind
    version: 3.0.0
spec:
  ports:
    - port: 53
      protocol: UDP
  selector:
    app: bind
    version: 3.0.0
  externalIPs:
    - a.b.c.d
    - a.b.c.e

Please be aware that the IPs listed in the config file must be the internal IP on GCE.
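
If you're not sure which address that is, one quick way to check (assuming kubectl is already pointed at your cluster) is:

kubectl get nodes -o wide

The INTERNAL-IP column is the value to put under externalIPs; the EXTERNAL-IP column is the public address you actually browse to.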

  • Thanks! But I think I missed something. The service is deployed but unreachable from the internet. I set the correct firewall rules, and the service is displaying the correct externalIp. Commented Sep 16, 2016 at 15:28
  • Sorry for the late reply, I forgot that I spent time on the exact same issue. The IPs listed need to be the internal IP, not the external one (at least on GCE). Commented Sep 18, 2016 at 20:25
  • Thanks, that was the solution! Unfortunately I'm not allowed to upvote yet... I dropped this comment to let you know that this answer combined with the comment above (which was the key) solved my issue! Commented Sep 19, 2016 at 6:14
  • Would you (or @RubenErnst) mind expanding on the answer a bit? In particular, "the IPs listed on GCE must be the internal IP." Which IP do you mean? Are you able to get this working with a static IP assigned to your single-node cluster? Commented Apr 18, 2017 at 0:02
  • I agree with @Brett. This is not a full-fledged solution, yet. The internal node IPs are ephemeral. Eventually, the website/webservice will go down because the service will point to a node IP that no longer exists. Until this can be made to work with a true, static external IP, it is a ticking bomb. Commented Dec 27, 2019 at 2:49
4

In addition to ConnorJC's great and working solution: The same solution is also described in this question: Kubernetes - can I avoid using the GCE Load Balancer to reduce cost?

The "internalIp" refers to the compute instance's (a.k.a. the node's) internal ip (as seen on Google Cloud Platform -> Google Compute Engine -> VM Instances)

This comment gives a hint as to why the internal and not the external IP should be configured.
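
A rough CLI equivalent of looking this up in the Console (a sketch; the projection keys below assume the standard Compute Engine API field names):

gcloud compute instances list \
    --format="table(name, networkInterfaces[0].networkIP, networkInterfaces[0].accessConfigs[0].natIP)"

Here networkIP is the internal address to put into the service's externalIPs, while natIP is the public address you will browse to.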

Furthermore, after having configured the service for ports 80 and 443, I had to create a firewall rule allowing traffic to my instance node:

gcloud compute firewall-rules create your-name-for-this-fw-rule --allow tcp:80,tcp:443 --source-ranges=0.0.0.0/0 

After this setup, I could access my service through http(s)://<the node's external IP>.
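
If you would rather not open ports 80/443 on every instance in the network, the same rule can be scoped to the GKE nodes with --target-tags; the tag below is only a placeholder, the real one is the auto-generated gke-... network tag visible on the node's VM instance:

gcloud compute firewall-rules create your-name-for-this-fw-rule \
    --allow tcp:80,tcp:443 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=your-gke-node-tag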

  • Using the node internal IP did the trick. 👍 Such confusion with the naming! Commented Feb 23, 2019 at 0:40
3

If you only have exactly one pod, you can use hostNetwork: true to achieve this:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true   # <---------
      containers:
        - name: caddy
          image: your_image
          env:
            - name: STATIC_BACKEND   # example env in my custom image
              value: $(STATIC_SERVICE_HOST):80

Note that by doing this your pod will inherit the host's DNS resolver and not Kubernetes'. That means you can no longer resolve cluster services by DNS name. For example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IPs, which are injected as environment variables.
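
If you do need cluster DNS from a hostNetwork pod, Kubernetes has a dnsPolicy value for exactly this case; a minimal sketch of the relevant pod spec fields (the rest of the Deployment stays as above):

    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet   # keep resolving cluster services by name
      containers:
        - name: caddy
          image: your_image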

This solution is better than using the service's externalIP, as it bypasses kube-proxy and you will receive the correct source IP.

2

To synthesize @ConnorJC's and @derMikey's answers into exactly what worked for me:

Given a cluster pool running on a Compute Engine instance:

# gcloud compute instances list
gce vm name: gke-my-app-cluster-pool-blah
internal ip: 10.123.0.1
external ip: 34.56.7.001   # will be publicly exposed

I made the service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  name: my-app-service
spec:
  clusterIP: 10.22.222.222
  externalIPs:
    - 10.123.0.1   # the instance internal ip
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: my-app
  type: ClusterIP

and then opened the firewall to traffic from any source IP (0.0.0.0/0):

gcloud compute firewall-rules create open-my-app --allow tcp:80,tcp:443 --source-ranges=0.0.0.0/0 

and then my-app was accessible via the GCE instance's public IP, 34.56.7.001 (not the cluster IP).
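
A quick sanity check from outside the cluster (using the example public IP from above; only port 80 is exposed by the service in this example):

curl -i http://34.56.7.001/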

0

I prefer not to use the cloud load balancers, until necessary, because of cost and vendor lock-in.

Instead I use this: https://kubernetes.github.io/ingress-nginx/deploy/

It's a pod that runs a load balancer for you. That page has GKE-specific installation notes.
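
If the point is to avoid the cloud load balancer, note that the controller's Service defaults to type LoadBalancer (see the comment below); a rough sketch of installing it with Helm and overriding that default (chart values may differ between releases):

helm upgrade --install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    --set controller.service.type=NodePort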

  • I have some bad news for you. nginx-ingress creates a load balancer by default when you install it. I'm here because I did that, and want to cut cost. Commented Dec 27, 2019 at 2:46
