
Ankit Bansal

Originally published at ankitbansalblog.Medium

Building custom images in a Kubernetes cluster with kaniko

One of the challenges we faced while migrating our infrastructure from a custom orchestration engine to Kubernetes was how to build images. Our orchestration engine used to build images on the fly and publish them to a registry. Since Kubernetes requires images to be already built and published, we needed another layer on top that could build images and publish them to a registry.

When working in your local environment, this issue doesn't crop up often, since you usually have a docker daemon available to build and publish images. When operating as a platform, however, that approach doesn't work: multiple people may be building images concurrently, so you would need to think through the whole infrastructure required for building images, e.g. server capacity, autoscaling, security, etc. This is something we wanted to avoid, and while searching through various options we came across kaniko, which provides the capability to build images within the cluster.

Though this was our use case, there are other scenarios where folks would like to take this route and avoid setting up docker on their local machines. In this article, I will discuss the step-by-step process to build and publish an image using kaniko.

Creating Build Context

To build an image using kaniko, the first step is to create a build context and publish it to a compatible storage backend. The build context contains your source code, dockerfile, etc.; essentially, it is the source folder you would use to run the "docker build" command. You need to compress it and publish it to compatible storage. At the time of writing, kaniko only supported GCS and S3.
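
For example, assuming your source code and dockerfile live in an app/ directory and you are uploading to the S3 bucket used in the job manifest later in this article, packaging and publishing the context might look like this:

# Package the build context (source code plus dockerfile) as a gzipped tarball
tar -C app -czf context.tar.gz .

# Upload the tarball to S3 so kaniko can fetch it as its build context
aws s3 cp context.tar.gz s3://imagetestbucket123/context.tar.gz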

Unfortunately, we were on OpenStack, so an additional step for us was a Kubernetes pod that downloads the application zip from OpenStack, creates a context file containing our dockerfile and the application zip, and finally pushes it to S3.

Setting up credentials

Two credentials are required for image creation: one to download the context from storage, and another to push the image to the docker registry. For AWS, you can create a credentials file using your access key ID and secret access key. This file can then be used to create a secret.

[default]
aws_access_key_id=SECRET_ID
aws_secret_access_key=SECRET_TOKEN

kubectl create secret generic aws-secret --from-file=credentials

The other credential needed is for pushing the image to the registry. Any registry can be used to publish the image; we were using Docker Hub. Create a config file containing base64-encoded credentials and use it to create a configmap.

{ "auths": { "https://index.docker.io/v1/": { "auth": "BASE_64_AUTH" }, "HttpHeaders": { "User-Agent": "Docker-Client/18.06.1-ce (linux)" } } } 
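The auth value is the base64 encoding of your registry username and password joined by a colon. Assuming your Docker Hub credentials are in the DOCKER_USERNAME and DOCKER_PASSWORD shell variables (names chosen here for illustration), you can generate it like this:

# Produces the BASE_64_AUTH value: base64 of "username:password"
echo -n "$DOCKER_USERNAME:$DOCKER_PASSWORD" | base64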

kubectl create configmap docker-config --from-file=config.json

Creating a job to publish the image

Kaniko doesn't use a docker daemon to create the image. Instead, it has its own mechanism: it reads the dockerfile line by line and takes a snapshot of the filesystem once each command is done. It has its own published image on gcr.io, named executor, to accomplish this.

Since building and publishing the image is a one-time activity, we create a Kubernetes job to accomplish it. You need to mount the aws-secret secret and the docker-config configmap for authentication purposes. Two environment variables are needed: AWS_REGION, to provide the name of the region in which the context is present, and DOCKER_CONFIG, to specify the docker credentials path. Kaniko will make sure to ignore these folders while creating snapshots.

apiVersion: batch/v1
kind: Job
metadata:
  name: image-publisher
spec:
  template:
    spec:
      containers:
      - name: image-publisher
        image: gcr.io/kaniko-project/executor:latest
        args:
        - "--dockerfile=dockerfile"
        - "--context=s3://imagetestbucket123/context.tar.gz"
        - "--destination=index.docker.io/ankitbansal/httpserver:1.0"
        volumeMounts:
        - name: aws-secret
          mountPath: /root/.aws/
        - name: docker-config
          mountPath: /kaniko/.docker/
        env:
        - name: AWS_REGION
          value: us-east-2
        - name: DOCKER_CONFIG
          value: /kaniko/.docker
      restartPolicy: Never
      volumes:
      - name: aws-secret
        secret:
          secretName: aws-secret
      - name: docker-config
        configMap:
          name: docker-config
  backoffLimit: 2

kubectl create -f job.yaml

You can tail the pod logs and watch the image being built and published.
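Assuming the job name from the manifest above, one way to follow them (kubectl resolves job/image-publisher to its pod):

kubectl logs -f job/image-publisher

The output looks like this: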

INFO[0000] Downloading base image ruby:2.4.5-jessie
INFO[0002] Unpacking rootfs as cmd RUN mkdir -p /u01/app/ && mkdir -p /u01/data/ && mkdir -p /u01/logs/ && groupadd myuser && groupadd builds && useradd -m -b /home -g myuser -G builds myuser && chown -R myuser:myuser /u01/ && chgrp -hR builds /usr/local requires it.
INFO[0020] Taking snapshot of full filesystem...
INFO[0033] Skipping paths under /kaniko, as it is a whitelisted directory
INFO[0033] Skipping paths under /root/.aws, as it is a whitelisted directory
INFO[0033] Skipping paths under /var/run, as it is a whitelisted directory
INFO[0033] Skipping paths under /dev, as it is a whitelisted directory
INFO[0033] Skipping paths under /proc, as it is a whitelisted directory
INFO[0033] Skipping paths under /sys, as it is a whitelisted directory
INFO[0038] ENV APP_HOME=/u01/app
INFO[0038] WORKDIR /u01/app
INFO[0038] cmd: workdir
INFO[0038] Changed working directory to /u01/app
INFO[0038] Creating directory /u01/app
INFO[0038] Taking snapshot of files...
INFO[0038] EXPOSE 8080
INFO[0038] cmd: EXPOSE
INFO[0038] Adding exposed port: 8080/tcp
INFO[0038] RUN mkdir -p /u01/app/ && mkdir -p /u01/data/ && mkdir -p /u01/logs/ && groupadd myuser && groupadd builds && useradd -m -b /home -g myuser -G builds myuser && chown -R myuser:myuser /u01/ && chgrp -hR builds /usr/local
INFO[0038] cmd: /bin/sh
INFO[0038] args: [-c mkdir -p /u01/app/ && mkdir -p /u01/data/ && mkdir -p /u01/logs/ && groupadd myuser && groupadd builds && useradd -m -b /home -g myuser -G builds myuser && chown -R myuser:myuser /u01/ && chgrp -hR builds /usr/local]
INFO[0039] Taking snapshot of full filesystem...
INFO[0050] Skipping paths under /kaniko, as it is a whitelisted directory
INFO[0050] Skipping paths under /root/.aws, as it is a whitelisted directory
INFO[0050] Skipping paths under /var/run, as it is a whitelisted directory
INFO[0051] Skipping paths under /dev, as it is a whitelisted directory
INFO[0051] Skipping paths under /proc, as it is a whitelisted directory
INFO[0051] Skipping paths under /sys, as it is a whitelisted directory
INFO[0056] Using files from context: [/kaniko/buildcontext/appcontent]
INFO[0056] ADD appcontent/ /u01/app
INFO[0056] Taking snapshot of files...
INFO[0056] USER myuser
INFO[0056] cmd: USER
2019/05/12 03:56:32 existing blob: sha256:053381643ee38d023c962f8789bb89be21aca864723989f7f69add5f56bd0472
2019/05/12 03:56:32 existing blob: sha256:e0ac5d162547af1273e1dc1293be3820a8c5b3f8e720d0d1d2edc969456f41aa
2019/05/12 03:56:32 existing blob: sha256:09e4a5c080c5192d682a688c786bffc1702566a0e5127262966fdb3f8c64ef45
2019/05/12 03:56:32 existing blob: sha256:14af2901e14150c042e83f3a47375b29b39f7bc31d8c49ad8d4fa582f4eb0627
2019/05/12 03:56:32 existing blob: sha256:6cc848917b0a4c37d6f00a2db476e407c6b36ce371a07e421e1b3b943ed64cba
2019/05/12 03:56:32 existing blob: sha256:62fe5b9a5ae4df86ade5163499bec6552c354611960eabfc7f1391f9e9f57945
2019/05/12 03:56:32 existing blob: sha256:bf295113f40dde5826c75de78b0aaa190302b3b467a3d6a3f222498b0ad1cea3
2019/05/12 03:56:33 pushed blob sha256:7baebbfb1ec4f9ab9d5998eefc78ebdfc063b9547df4395049c5f8a2a359ee20
2019/05/12 03:56:33 pushed blob sha256:6850b912246a34581de92e13238ac41c3389c136d25155d6cbe1c706baf3bc0e
2019/05/12 03:56:33 pushed blob sha256:0f9697e63b4482512d41f803b518ba3fb97bde20b21bec23c14bccc15f89e9f7
2019/05/12 03:56:42 pushed blob sha256:90e84d259def7af68e106b314e669056cb029d7a5d754d85cf0388419a5f2bcd
2019/05/12 03:56:43 index.docker.io/ankitbansal/httpserver:1.0: digest: sha256:13c218bb98623701eb6fd982b49bc3f90791695ce4306530c75b2094a8fdd468 size: 1896

Once done, you should be able to verify the image in the registry.

Using the image to deploy the app

That's it. Now you should be able to use this image to create a deployment and verify that the image is working fine.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rubyapp-deployment
spec:
  replicas: 1 # tells deployment to run 1 pod matching the template
  selector:
    matchLabels:
      app: rubyapp1
  template: # create pods using the pod definition in this template
    metadata:
      # no pod name in the metadata; a unique name is generated from the deployment name
      labels:
        app: rubyapp1
    spec:
      containers:
      - name: httpserverruby
        image: ankitbansal/httpserver:1.0
        imagePullPolicy: Always
        command: [ruby]
        args: ["httpserver.rb"]
        ports:
        - containerPort: 8080
        resources:
          limits:
            memory: "1Gi"
          requests:
            memory: "1Gi"
        env:
        - name: APP_HOME
          value: "/u01/app"
        - name: PORT
          value: "8080"

and verify the response using curl.
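
A quick way to do that, assuming the deployment above and that the sample app responds on the root path (an assumption about this particular app):

# Forward local port 8080 to port 8080 of a pod in the deployment
kubectl port-forward deployment/rubyapp-deployment 8080:8080

# In another shell, hit the app
curl http://localhost:8080/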

Why not simply use the kubernetes docker daemon

Since Kubernetes already has a docker daemon running on each of its nodes, the question arises: why not use that same daemon to build images? This is not a great option, since it requires the container to be privileged. A privileged container has all the capabilities of the host and is no longer constrained by cgroup limits; essentially, it can do whatever the host can do. This can pose a serious security threat.
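
For context, a minimal sketch of what that risky approach would look like, mounting the node's docker socket into a privileged pod (a hypothetical manifest, not something from this article's setup):

apiVersion: v1
kind: Pod
metadata:
  name: docker-builder # hypothetical pod illustrating the approach to avoid
spec:
  containers:
  - name: builder
    image: docker:18.06
    command: ["docker", "build", "-t", "myimage", "/workspace"]
    securityContext:
      privileged: true # gives the container host-level privileges
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock # exposes the node's docker daemon
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock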
