
I'm attempting to update the image for my Deployment. To do this, I am executing kubectl edit deployment web and changing the spec.template.spec.containers[0].image field from:

gcr.io/my-project-id-1234/app:v1 

To:

gcr.io/my-project-id-1234/app:v2 
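For reference, the same update can be done imperatively instead of with kubectl edit. This is a sketch: the container name app below is an assumption, since the question doesn't show the Pod template — substitute the actual container name from your spec.

```shell
# Imperative equivalent of the edit; "app" is assumed to be the
# container's name in the Pod template -- use your actual name.
kubectl set image deployment/web app=gcr.io/my-project-id-1234/app:v2

# Watch until the new Pods are rolled out and ready.
kubectl rollout status deployment/web
```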

From the logs, I know the deployment updates fine. The problem I'm having is with the TLS ingress; here is my configuration:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  tls:
  - secretName: tls-secrets
  backend:
    serviceName: web
    servicePort: 80

And here is the result of kubectl describe ing prior to the update:

$ kubectl describe ing
Name:            tls-ingress
Namespace:       default
Address:         105.78.154.212
Default backend: web:80 (10.0.2.3:8000)
TLS:
  tls-secrets terminates
Rules:
  Host  Path  Backends
  ----  ----  --------
Annotations:
  backends:              {"k8s-be-32171":"HEALTHY"}
  forwarding-rule:       k8s-fw-default-tls-ingress
  https-forwarding-rule: k8s-fws-default-tls-ingress
  https-target-proxy:    k8s-tps-default-tls-ingress
  static-ip:             k8s-fw-default-tls-ingress
  target-proxy:          k8s-tp-default-tls-ingress
  url-map:               k8s-um-default-tls-ingress

Before the update everything works correctly, but shortly after the update traffic stops being routed to my cluster. Describing the ingress now returns:

Name:            tls-ingress
Namespace:       default
Address:         105.78.154.212
Default backend: web:80 (10.0.2.3:8000)
TLS:
  tls-secrets terminates
Rules:
  Host  Path  Backends
  ----  ----  --------
Annotations:
  backends:              {"k8s-be-32171":"UNHEALTHY"}
  forwarding-rule:       k8s-fw-default-tls-ingress
  https-forwarding-rule: k8s-fws-default-tls-ingress
  https-target-proxy:    k8s-tps-default-tls-ingress
  static-ip:             k8s-fw-default-tls-ingress
  target-proxy:          k8s-tp-default-tls-ingress
  url-map:               k8s-um-default-tls-ingress

How do I properly update the Ingress when updating my Deployment like so?

2 Answers


The Ingress points at a Service. The Service points at a set of Pods having some labels. The Deployment defines those labels on the Pods. Here's a list of what to troubleshoot:

  1. Confirm the label selector on your Service matches the labels on the Pods your Deployment is creating. Otherwise the Pods created by the Deployment won't be selected for the Service and your Ingress will be pointing at nothing.

  2. Confirm the Service is exposed as a NodePort. Otherwise the external Load Balancer from Google won't be able to reach inside your cluster.

  3. Confirm the Pods are running and healthy. It's possible for a Deployment to be updated while its Pods are unhealthy or in a CrashLoop. It's important for the application to respond with a 200 status code for GET /.

  4. Create a firewall rule for the health-checks:

    gcloud compute firewall-rules create allow-130-211-0-0-22 \
        --source-ranges 130.211.0.0/22 \
        --allow tcp:30000-32767
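Points 1 and 2 above can be sketched as a matching Service/Deployment pair. This is a hypothetical example, not taken from the question: the label app: web, the container name, and the ports are assumptions — the key points are that the Service's selector matches the Pod template labels and that the Service is type NodePort so the GCE load balancer can reach it.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort          # required for the GCE load balancer
  selector:
    app: web              # must match the Pod template labels below
  ports:
  - port: 80
    targetPort: 8000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web          # selected by the Service above
    spec:
      containers:
      - name: app
        image: gcr.io/my-project-id-1234/app:v2
        ports:
        - containerPort: 8000
```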
  • In the end, my app was too aggressively trying to upgrade to HTTPS and was responding with a 301 instead of a 200 on /. Commented May 9, 2016 at 18:35
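The 301-vs-200 pitfall from the comment above can be sketched with a minimal hypothetical handler: the health-check path returns a plain 200 while everything else is upgraded to HTTPS. This is an illustration of the behavior, not the asker's actual app.

```python
# Minimal sketch of a health-check-friendly handler: GET / returns 200
# for the GCE health check, all other paths get the HTTPS 301 upgrade.
# The hostname example.com is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            # Health check endpoint: must answer 200, never a redirect.
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            # Upgrade everything else to HTTPS with a 301.
            self.send_response(301)
            self.send_header("Location", "https://example.com" + self.path)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# To run: HTTPServer(("0.0.0.0", 8000), HealthAwareHandler).serve_forever()
```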

The Ingress is reporting {"k8s-be-32171":"UNHEALTHY"}, which indicates the health check is failing on the backend service.

Make sure the app Pods are up and running; check with

kubectl get pods <pod-name>

and

kubectl get deployment <deployment-name>

If the app Pods are ready and running, the Ingress should connect to the new Pods' ReplicaSet automatically, but it may take a while depending on the health check parameters.
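A sketch of how to watch that reconnection happen, assuming the resource names from the question (deployment web, ingress tls-ingress):

```shell
# Wait for the new Pods to become ready.
kubectl rollout status deployment/web

# The Service's endpoints should list the new Pod IPs.
kubectl get endpoints web

# The backends annotation should return to HEALTHY once checks pass.
kubectl describe ing tls-ingress
```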
