
This is probably fairly simple, but I'm a bit lost here, so any help would be appreciated.

I followed the instructions to set up a private docker registry over here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
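For context, the regcred secret from that guide was created with something along these lines (the values below are placeholders, not my real registry credentials):

kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>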

The instructions worked, and the following Pod deployment pulled successfully from the private repo:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: esrhost/node-hello
  imagePullSecrets:
  - name: regcred
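Applying and checking it was just the usual (the filename here is simply whatever I saved the manifest as):

kubectl apply -f private-reg-pod.yaml
kubectl get pod private-reg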

Now, when I try to replicate this in a Helm chart, I'm getting the following error:

me@me:~/projects/helm-test/init-test$ helm install --name node-hello . --set service.type=NodePort

Error: release node-hello failed: Deployment in version "v1beta2" cannot be handled as a Deployment: v1beta2.Deployment: Spec: v1beta2.DeploymentSpec: Template: v1.PodTemplateSpec: Spec: v1.PodSpec: ImagePullSecrets: []v1.LocalObjectReference: readObjectStart: expect { or n, parsing 696 ...ecrets":["... at {"apiVersion":"apps/v1beta2","kind":"Deployment","metadata":{"labels":{"app":"init-test","chart":"init-test-0.1.0","heritage":"Tiller","release":"node-hello"},"name":"node-hello-init-test","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"init-test","release":"node-hello"}},"template":{"metadata":{"labels":{"app":"init-test","release":"node-hello"}},"spec":{"containers":[{"image":"esrhost/node-hello:stable","imagePullPolicy":null,"livenessProbe":{"httpGet":{"path":"/","port":"http"}},"name":"init-test","ports":[{"containerPort":8080,"name":"http","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/","port":"http"}},"resources":null}],"imagePullSecrets":["regcred {}"]}}}}

The issue is definitely with imagePullSecrets: the rendered manifest ends with "imagePullSecrets":["regcred {}"], which the API server apparently refuses to parse ("expect { or n").
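In case it helps anyone reproduce this, the rendered manifest can also be inspected without sending it to the API server. A rough sketch, assuming Helm 2 (Tiller shows up in the labels above):

helm install --dry-run --debug --name node-hello . --set service.type=NodePort
# or render the chart templates locally
helm template . --set service.type=NodePort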

My values.yaml is the following:

replicaCount: 1

image:
  repository: esrhost/node-hello
  tag: stable
  pullpolicy: ifnotpresent
  # .Values.image.repoSecret
  repoSecret: regcred

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

Basically the only non-default things are my repository and the repoSecret variable, which I use in the following templates/deployment.yaml file:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "init-test.fullname" . }}
  labels:
    app: {{ template "init-test.name" . }}
    chart: {{ template "init-test.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "init-test.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "init-test.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
      imagePullSecrets:
        - {{ .Values.image.repoSecret }}
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}

As you can see, I've added spec -> template -> spec -> imagePullSecrets and assigned the .Values.image.repoSecret variable to it.

I cannot for the life of me figure out what is causing this error. As far as I'm aware, imagePullSecrets set on a Deployment's pod template are propagated to the Pods it creates, so it shouldn't make a difference that I've assigned the secret there.
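As far as I can tell, that propagation is easy to verify once a Deployment's Pods actually exist. A rough check, using the app label this chart sets (the label selector here is just an illustration):

kubectl get pods -l app=init-test -o jsonpath='{.items[*].spec.imagePullSecrets}'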

TL;DR - Added a Docker registry secret to K8s - works with kubectl. Added the same to a default nginx Helm chart - doesn't work. Confused.

1 Answer


I assume that you are using a Kubernetes cluster of version 1.9.x or higher.

Therefore, you have to edit your deployment.yaml file and replace apiVersion: apps/v1beta2 with apiVersion: apps/v1, as the old apiVersion has been deprecated since Kubernetes v1.9.0 according to the versions ChangeLog.
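In other words, only the first line of the template needs to change; the top of deployment.yaml would then start with:

apiVersion: apps/v1
kind: Deployment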

You can check which API versions your Kubernetes cluster supports with the following command:

kubectl api-versions 
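The output varies by cluster, but on 1.9+ you should see apps/v1 in the list; you can narrow it down with grep, for example:

kubectl api-versions | grep apps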

There was a similar issue discussed here.
