Caio Campos Borges Rosa for Woovi

How to automate tests with Tekton

Following our series on cloud-native CI/CD, we will set up a simple Tekton pipeline to automate testing on Kubernetes. We will cover the basic flow of pushing code changes and running tests, using GitHub webhook events to trigger our pipeline.

First we need to install the Tekton Operator, so we can concentrate configuration in a single file, making it easy to enable all the features we need in a declarative way.

Installing Tekton Operator

First, install the Operator Lifecycle Manager, a tool to manage operators running in your cluster.

```shell
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.27.0/install.sh | bash -s v0.27.0
```

Install the operator:

```shell
kubectl create -f https://operatorhub.io/install/tektoncd-operator.yaml
```

Wait until the operator is up and running. You can check the status by running:

```shell
kubectl get csv -n operators
```

Configure Tekton

Next, we need to configure Tekton. The operator is configured through a single TektonConfig custom resource.

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
  namespace: tekton-pipelines
spec:
  targetNamespace: tekton-pipelines
  profile: all
  chain:
    disabled: false
  pipeline:
    await-sidecar-readiness: false
    disable-affinity-assistant: true
    disable-creds-init: false
    enable-api-fields: alpha
    enable-bundles-resolver: true
    enable-cluster-resolver: true
    enable-custom-tasks: true
    enable-git-resolver: true
    performance:
      disable-ha: true
      buckets: 1
      replicas: 1
      threads-per-controller: 32
      kube-api-qps: 100.0
      kube-api-burst: 200
  pruner:
    disabled: false
    schedule: "0 * * * *"
    resources:
      - taskrun
      - pipelinerun
    keep: 3
    # keep-since: 1440
    # NOTE: you can use either keep or keep-since, not both
    prune-per-resource: true
  hub:
    params:
      - name: enable-devconsole-integration
        value: "true"
    options:
      disabled: false
  dashboard:
    readonly: false
    options:
      disabled: false
```

We will go over some items in the config file below; you can find the full reference in the TektonConfig documentation.

We set the operator profile to all, which gives us access to every feature of the Tekton operator. If you plan to use less and want a slimmer setup, the available profiles are:

all: This profile will install all components

basic: This profile will install only the TektonPipeline, TektonTrigger, and TektonChain components

lite: This profile will install only the TektonPipeline component

We are disabling the affinity assistant, a feature that coschedules the pods of a PipelineRun that share the same persistent volume onto the same node. It is being deprecated in favor of the coschedule feature flag. We are also disabling sidecar readiness checks, as we won't be using any sidecars.
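For reference, the replacement behavior is controlled by a coschedule flag on the pipeline section of the TektonConfig. A minimal sketch follows; the flag values come from the Tekton feature-flag documentation, so double-check them against your operator version:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    # replaces the affinity assistant: schedule all pods of a
    # PipelineRun that share a workspace onto the same node
    coschedule: workspaces
```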

The pruner is configured to run at the beginning of every hour, deleting old TaskRuns and PipelineRuns to free up resources.

Setting the dashboard to readonly: false allows actions on PipelineRuns and TaskRuns to be taken directly from the dashboard.

Overview

Let's visualize what we want our CI/CD flow to do.
First, we make changes to our codebase and push them to a GitHub repository. The push event, delivered by a GitHub webhook, triggers our pipeline to run the tests.

Overview Tekton

Handling GitHub Events

For external events, we set up an event listener. Tekton uses a custom resource for this, the EventListener, which creates a service exposed via the Kubernetes API. This service receives the GitHub webhook events. We can then write a filter on the event listener to match the events that should trigger a pipeline.

```yaml
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: tekton-github-pr-{{ .Values.projectName }}
spec:
  serviceAccountName: service-account-{{ .Values.projectName }}
  triggers:
    - name: pr-trigger
      interceptors:
        - ref:
            name: "cel"
            kind: ClusterInterceptor
            apiVersion: triggers.tekton.dev
          params:
            - name: "filter"
              value: >
                header.match('x-github-event', 'push')
            - name: "overlays"
              value:
                - key: author
                  expression: body.pusher.name.lowerAscii().replace('/','-').replace('.', '-').replace('_', '-')
                - key: pr-ref
                  expression: body.ref.lowerAscii().replace("/", '-')
      bindings:
        - ref: tb-github-pr-trigger-binding-{{ .Values.projectName }}
      template:
        ref: tt-github-pr-trigger-template-{{ .Values.projectName }}
```

Notice that this resource requires a serviceAccountName. Because the EventListener creates resources in our cluster, the service account ensures it has the correct roles and permissions to do so.
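A minimal sketch of what that service account setup can look like, assuming the aggregated ClusterRole name shipped with the Tekton Triggers release; the my-project suffix is a placeholder:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account-my-project
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: el-rolebinding-my-project
subjects:
  - kind: ServiceAccount
    name: service-account-my-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # ClusterRole provided by the Tekton Triggers installation
  name: tekton-triggers-eventlistener-roles
```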

We use the cel ClusterInterceptor, another custom resource, so we can write filter expressions in CEL. This is how we evaluate the webhook request and route triggers to many kinds of pipelines.

Here we also use overlays to create variables, based on expressions, that can be passed down to our pipelines. In this case, we want the author and the ref so we can customize the generated PipelineRun name.

Bindings and templates are two other resources we will reference in the EventListener.

Bindings

TriggerBindings bind values from the webhook request to variables we can use to control the pipeline flow.

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: tb-github-pr-trigger-binding-{{ .Values.projectName }}
spec:
  params:
    - name: revision
      value: $(body.after)
    - name: repo-url
      value: $(body.repository.ssh_url)
    - name: author
      value: $(extensions.author)
    - name: pr-ref
      value: $(extensions.pr-ref)
    - name: repo-full-name
      value: $(body.repository.full_name)
```

Here, in the TriggerBinding, we pick the information we want from the request captured by the EventListener and assign it to parameters we can pass down to the pipeline. Variables created with overlays in the EventListener must be referenced from the extensions object; the request body and headers can be referenced as well.
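To make the bindings concrete, here is an abridged GitHub push-event payload showing only the fields referenced above (real payloads carry many more fields, and the values here are illustrative):

```json
{
  "ref": "refs/heads/main",
  "after": "6113728f27ae82c7b1a177c8d03f9e96e0adf246",
  "pusher": { "name": "octocat" },
  "repository": {
    "ssh_url": "git@github.com:octocat/hello-world.git",
    "full_name": "octocat/hello-world"
  }
}
```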

Triggering the Pipeline

TriggerTemplate is the resource that ties events to the variables we set up in the TriggerBinding. Here we pass those variables as params to the pipeline, creating a PipelineRun, which is the actual automation executed as pods in Kubernetes.

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: tt-github-pr-trigger-template-{{ .Values.projectName }}
spec:
  params:
    - name: revision
    - name: repo-url
    - name: author
    - name: repo-full-name
    - name: pr-ref
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: pr-$(tt.params.pr-ref)-$(tt.params.author)-
      spec:
        serviceAccountName: service-account-{{ .Values.projectName }}
        pipelineRef:
          name: {{ .Values.projectName }}-pipeline-tests
        workspaces:
          - name: cache
            persistentVolumeClaim:
              claimName: pvc-cache-{{ .Values.projectName }}
          - name: shared-data
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 2Gi
        params:
          - name: repo-url
            value: $(tt.params.repo-url)
          - name: revision
            value: $(tt.params.revision)
          - name: repo-full-name
            value: $(tt.params.repo-full-name)
```

In the TriggerTemplate, we receive the params from the TriggerBinding and pass them to the PipelineRun.
We reference the Pipeline we want to run in pipelineRef.

We also define our workspaces. In this example we pass two workspaces. The shared-data workspace uses a volumeClaimTemplate to dynamically provision storage with the requested resources; this storage is discarded at the end of the PipelineRun. The cache workspace is defined with persistentVolumeClaim, which takes a PersistentVolumeClaim you need to create beforehand.
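That cache claim is not created for you. A minimal sketch of such a PersistentVolumeClaim follows; the name, size, and implicit default storage class are placeholders you should adapt:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # must match the claimName referenced by the PipelineRun
  name: pvc-cache-my-project
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```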

Pipeline

The pipeline is the orchestrated flow of tasks we want to run. It receives the parameters we defined in the TriggerTemplate and applies the logic we want to execute to its tasks. For this example, we will look at a simple test pipeline that clones a repository and runs the test script.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: {{ .Values.projectName }}-pipeline-tests
spec:
  workspaces:
    - name: shared-data
    - name: cache
  params:
    - name: repo-url
      type: string
    - name: revision
      type: string
  tasks:
    - name: fetch-source
      taskRef:
        resolver: cluster
        params:
          - name: kind
            value: task
          - name: name
            value: task-git-clone
          - name: namespace
            value: tekton-pipelines
      params:
        - name: url
          value: $(params.repo-url)
        - name: revision
          value: $(params.revision)
        - name: depth
          value: "2"
      workspaces:
        - name: output
          workspace: shared-data
    - name: install-deps
      runAfter: ["fetch-source"]
      taskRef:
        resolver: cluster
        params:
          - name: kind
            value: task
          - name: name
            value: task-install-deps
          - name: namespace
            value: tekton-pipelines
      params:
        - name: install-script
          value: yarn install --prefer-offline --ignore-engines
      workspaces:
        - name: source
          workspace: shared-data
        - name: cache
          workspace: cache
    - name: test-task
      runAfter: ["install-deps"]
      taskRef:
        resolver: cluster
        params:
          - name: kind
            value: task
          - name: name
            value: task-test
          - name: namespace
            value: tekton-pipelines
      params:
        - name: diff
          value: $(tasks.fetch-source.results.diff)
        - name: install-deps
          value: yarn install
        - name: run-test
          value: yarn test
      workspaces:
        - name: source
          workspace: shared-data
        - name: cache
          workspace: cache
```

Here we organize the logic of our pipeline.

To help with organization and reuse of tasks, which are the more atomic resources of a Tekton pipeline, we use the cluster resolver. This way, one task can be shared across all namespaces, eliminating the need to duplicate tasks that are common to multiple pipelines. The cluster resolver takes the namespace the task lives in and the name of the task.

The parameters we define in the TriggerTemplate and pass to the pipeline run are defined in the pipeline and passed to the tasks.

Another great feature of Tekton pipelines is the TaskResult. Notice that test-task consumes a parameter produced as a result by fetch-source, the task we will create to clone a remote repository. The diff parameter is the list of files that were modified in the PR that triggered this pipeline.

The workspaces we define in the TriggerTemplate are also assigned to the tasks. This ensures all pods created for all tasks in the pipeline execute our automations in the same storage space. That way, we can clone the remote repository at the beginning of the pipeline and perform many tasks with the same files.

Creating our tasks

Now we define the tasks, which are the actual work to be done in our pipeline. In Tekton, each task runs as a pod in Kubernetes and is composed of several steps, each step being a container inside that pod.

Fetch Source Task

This task fetches a remote repository. We use depth 2, which means we fetch only the last two commits from the repository, avoiding downloading too much data.

We will also generate some task results we can use in our pipelines. Results are declared in the Task custom resource, and Tekton provides an API to interact with them. To use a result from a task, we reference it as $(tasks.<task-name>.results.<result-name>).

There is an example of a fetch-source task in our public repository.
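The repository example is not reproduced here, but a minimal sketch of such a task could look like the following. The image, shallow-clone strategy, and diff computation are illustrative, and cloning over SSH additionally requires credentials to be configured:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: task-git-clone
  namespace: tekton-pipelines
spec:
  params:
    - name: url
      type: string
    - name: revision
      type: string
    - name: depth
      type: string
      default: "2"
  workspaces:
    - name: output
  results:
    - name: diff
      description: files changed in the last commit
  steps:
    - name: clone
      image: alpine/git
      workingDir: $(workspaces.output.path)
      script: |
        #!/bin/sh
        set -xe
        git clone --depth $(params.depth) $(params.url) .
        git checkout $(params.revision)
        # write the changed-file list to the "diff" task result
        # (requires depth >= 2 so HEAD~1 exists)
        git diff --name-only HEAD~1 HEAD | tr '\n' ' ' > $(results.diff.path)
```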

Test Task

This is a simple task that executes a script command you provide. As with the previous task, we define the workspace where we clone the repository and define one step to install dependencies and another to run the tests. There are many ways to organize this same scenario; this is just an example of tasks and how steps are defined.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: task-test
  namespace: tekton-pipelines
spec:
  description: >-
    A generic task to run any bash command in any given image
  workspaces:
    - name: source
      optional: true
    - name: cache
      optional: true
  params:
    - name: run-test
      type: string
    - name: install-deps
      type: string
    - name: diff
      type: string
      description: diff of the pull request
    - name: image
      type: string
      default: "node:latest"
  steps:
    - name: install
      image: $(params.image)
      workingDir: $(workspaces.source.path)
      script: |
        #!/usr/bin/env bash
        set -xe
        $(params.install-deps)
    - name: test
      image: $(params.image)
      workingDir: $(workspaces.source.path)
      script: |
        #!/usr/bin/env bash
        set -xe
        $(params.run-test)
```

Dashboard

To visualize your Tekton resources, the Tekton Operator creates a service hosting the Tekton Dashboard. To find the IP assigned to the dashboard, run:

```shell
kubectl get services -n tekton-pipelines
```
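If the service has no external IP, you can also reach the dashboard through a port-forward. The service name and port below are the defaults and may differ in your installation:

```shell
kubectl port-forward -n tekton-pipelines svc/tekton-dashboard 9097:9097
# then open http://localhost:9097
```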

Tekton Dashboard

Final Considerations

In this article, we covered the steps to automate a simple testing pipeline using the Tekton Operator. There are many ways to achieve this same result, and Tekton is a powerful tool that offers a lot of resources you can use depending on your needs. Additionally, the community is active and supportive. You can open an issue on the Tekton GitHub.

Woovi is also improving our own CI/CD internal platform. In the WooviOps repo, you can find our basic implementation of the same case we covered in this article and more. If you want to help us improve, we welcome your PRs, comments, or you can just reach out to us on Twitter.

Also, Woovi is hiring!

Photo by Rock'n Roll Monkey on Unsplash
