
Matthieu ROBIN

Kubernetes-based development with Devspace

Modern applications are increasingly based on micro-services. Splitting large applications into smaller pieces makes the whole more maintainable and easier to develop. However, instead of developing one big monolith, we work on a bunch of tiny applications, which makes it more challenging to debug and deploy the system as a whole. Luckily, there are many tools out there to help us out. An interesting comparison of some of them can be found here. In what follows, we want to see how easy it is to do Kubernetes-based development with devspace.

A micro-services application

Suppose we are developing a micro-services application, for example an e-shop. In essence, our e-shop consists of a frontend application that communicates with a backend through an API. For the sake of simplicity, let's say that our backend looks like this:

(diagram: the backend architecture — iam-service, message-queue, faas and database)

User management is handled by the iam-service. Orders are processed via the message-queue. Most of our backend's business logic is packed in serverless functions served by the faas. Our application's state is held in our database. Finally, for some good reasons (e.g. the ease of testing setup), we are developing our software in a monorepo.

With time, our micro-services application will necessarily contain a lot of business logic that will be packed in even more micro-service code or serverless functions. For example, we might need a connector service between our message-queue and our faas, or an assets service with some logic to add new assets in a controlled way. A very convenient way to host our micro-services is to dockerize them and let Kubernetes orchestrate them.

Typically, our IAM service is a third-party product like keycloak or fusionauth, which we can easily deploy on Kubernetes by means of a helm chart. Helm is a very practical package manager for Kubernetes. For example, a typical fusionauth deployment would look something like this:

helm repo add fusionauth https://fusionauth.github.io/charts

helm install fusionauth --create-namespace fusionauth/fusionauth --namespace auth \
  --set database.protocol=postgresql \
  --set database.user=<username> \
  --set database.password=<password> \
  --set database.host=<hostname> \
  --set database.port=5432 \
  --set database.name=<database-name> \
  --set database.root.user=<root-user> \
  --set database.root.password=<root-password> \
  --set app.runtimeMode=production \
  --set search.engine=database

Our message queue is probably redismq, rabbitmq or kubemq, for which helm charts are also readily available.

Then come our own custom services, for which we need to write our own Kubernetes resources (deployments, services, ingresses, etc.), like the manifest sketched below. Finally, we can write some kind of script to install all the necessary helm charts and apply our Kubernetes resources.
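To make this more concrete, here is a minimal sketch of what my-custom-service/manifest.yaml could contain (the container port is an illustrative assumption, not taken from the article):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-custom-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-custom-service
  template:
    metadata:
      labels:
        app: my-custom-service
    spec:
      containers:
      - name: my-custom-service
        image: my-repo/my-custom-service   # built and pushed in a later step
        ports:
        - containerPort: 8080              # assumed port, adjust to your service
---
# a ClusterIP service so that other micro-services can reach it
apiVersion: v1
kind: Service
metadata:
  name: my-custom-service
spec:
  selector:
    app: my-custom-service
  ports:
  - port: 80
    targetPort: 8080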

Because our software deals with sensitive data and is at the heart of our business, we need to be careful when deploying a new release. Therefore, we want to somehow test it before we release it, which is very easy to do on Kubernetes clusters. Indeed, we can imagine we have two environments, one for testing and one for production. The testing (or staging) environment would be synchronized with our software repository's main branch, while the production environment would be the counterpart of our repo's production branch. We develop on the main branch and, as soon as QA is satisfied with the software pushed there, we push it to production.

We are now in the complicated situation where we want to develop our software on a development machine, test it somehow on an almost productive environment, and release it to a production environment. That leads us to three different build and deployment procedures. On a development machine, we surely want to interact with a short-lived database. Moreover, login credentials to our microservices (like the assets service) should be trivial. On staging, we might want to grant unprotected access to some of our services, for the sake of debugging. On production, we want to secure and hide as much as possible.

Finally, if our development environment were close to the production environment, we would minimize the number of surprises following a deployment to staging or production, which would increase our productivity.

Enter devspace

Devspace is a CLI tool that automates both the build and the deployment of container images. In addition, it can replace our makefile or docker-compose configurations and gives us the ability to do Kubernetes-based development. Because of the latter ability, let's assume we have set up a small cluster on our development machine. In one click, you can have Jelastic set up that development cluster for you through a very simple interface.

Or you can manually set up your own kind, minikube, or Docker Desktop cluster.
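For example, with kind, a small config file is enough to describe such a development cluster (a sketch, assuming kind is installed; pass the file to kind create cluster --config kind-config.yaml):

# kind-config.yaml -- one control-plane node and one worker node
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker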

The easiest way to install devspace (not on your Kubernetes cluster, but on the machine from which you develop your code!) is to run

npm install -g devspace 

Then, depending on our use-case, we might run

devspace init 

and follow the instructions. In our particular case, we want to build

  • our API
  • a bunch of custom micro-services

That we do with the following configuration:

version: v1beta10
vars:
- name: SOME_IMPORTANT_VARIABLE
  source: env
  default: the-important-value
images:
  my-custom-service:
    image: my-repo/my-custom-service
    tags:
    - ${DEVSPACE_RANDOM}
    dockerfile: ./my-custom-service/Dockerfile
    context: .
    build:
      docker:
        options:
          target: app
          buildArgs:
            SOME_IMPORTANT_VARIABLE: ${SOME_IMPORTANT_VARIABLE}
  api:
    image: my-repo/api
    tags:
    - ${DEVSPACE_RANDOM}
    dockerfile: ./api/Dockerfile
    context: .

The above configuration defines how to build our API and our micro-services. When they are pushed to their docker registry, both docker images will have the same random tag (defined by the built-in variable DEVSPACE_RANDOM). Instead of using a docker daemon, we can also choose to use custom build commands or kaniko, as sketched below. We can use environment variables, like SOME_IMPORTANT_VARIABLE, and provide the usual options for building docker images.
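As a sketch of that alternative (the exact fields assume the kaniko builder of this configuration version), building the api image in-cluster with kaniko instead of a local docker daemon could look like this:

images:
  api:
    image: my-repo/api
    dockerfile: ./api/Dockerfile
    context: .
    build:
      kaniko:
        cache: true   # reuse layers from previous builds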

Next, we want to deploy

  • our API
  • our custom micro-services
  • various third-party services (iam, message queue, faas, assets)

In order to take care of that, we complete the previous configuration with the following snippet:

deployments:
# for the custom service, we have regular k8s manifests
- name: my-custom-service
  kubectl:
    manifests:
    - my-custom-service/manifest.yaml
# for the api, we have written a helm chart
- name: api
  helm:
    chart:
      name: api/chart
    values:
      image: my-repo/api
      postgres:
        database: my-database
        hostname: postgres
        username: my-username
        password: my-password
# the database service is a 3rd party
- name: postgres
  helm:
    chart:
      name: postgresql
      repo: https://charts.bitnami.com/bitnami
    values:
      postgresqlDatabase: my-database
      postgresqlUsername: my-username
      postgresqlPassword: my-password
# the iam service is a 3rd party
- name: iam-service
  helm:
    chart:
      name: fusionauth/fusionauth
    values:
      database:
        protocol: postgresql
        user: iam-user
        password: iam-password
        host: postgres
        name: iam-database
        root:
          user: root-db-username
          password: root-db-password
      search:
        engine: database

The first deployment, my-custom-service, amounts to

kubectl apply -f my-custom-service/manifest.yaml 

The second deployment, api, is a regular helm installation. Instead of writing our own helm chart, we could have used the built-in component charts, which offer a middle ground between writing a full helm chart of our own and keeping our Kubernetes resource configuration simple; a sketch follows below.
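As an illustration (the field names below assume the component chart support of this configuration version, and the port is a placeholder), deploying the api via a component chart could look like this:

deployments:
- name: api
  helm:
    componentChart: true   # use devspace's built-in chart instead of api/chart
    values:
      containers:
      - image: my-repo/api
      service:
        ports:
        - port: 8080       # assumed port exposed by the api

With our current devspace configuration in place, we can start our development environment: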

devspace dev 

That command builds our docker images and deploys our software to the default namespace of our development Kubernetes cluster. We are now in a situation where we can develop our code on our development machine and push it to our development Kubernetes cluster. With hot reloading or auto-reloading, we can even fix our code and have the changes automatically propagated to our cluster.
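For instance, file synchronization and port forwarding are configured in the dev section of devspace.yaml; the following is only a sketch with assumed paths and ports:

dev:
  ports:
  - imageName: api
    forward:
    - port: 8080            # reach the api on localhost:8080
  sync:
  - imageName: api
    localSubPath: ./api     # local sources on the development machine
    containerPath: /app     # where the api container expects them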

Deploy to multiple environments

Now we have a setup that works for development, and we are not very far away from our staging environment setup. First, our docker images need to be tagged following the pattern <my-repo>/<my-service>:staging-<commit-short-sha>. Second, our staging environment relies on external database and IAM services. Consequently, we don't want to deploy those on staging, and we need to adapt the services that depend on them. In devspace, we can define profiles. Until now, our configuration has not referenced any profile, so it effectively is the development profile. We can define a staging profile, base it on the development configuration, and adapt it as we've just described. To do that, let's add the following configuration to our devspace.yaml:

profiles:
- name: staging
  patches:
  # images -> adapt tag
  - op: replace
    path: /images/0=${DEVSPACE_RANDOM}
    value:
    - staging-${DEVSPACE_GIT_COMMIT}
  # postgres -> remove, we have an external database
  - op: remove
    path: /deployments/name=postgres
  # iam service -> remove, we have an external iam service
  - op: remove
    path: /deployments/name=iam-service
  # api
  # -> we need an ingress
  - op: replace
    path: /deployments/name=api/helm/values/ingress
    value:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: nginx-cert
        cert-manager.io/cluster-issuer: letsencrypt-prod
      hosts:
      - host: api-staging.my-staging-domain.com
        paths:
        - /
      tls:
      - secretName: api-tls
        hosts:
        - api-staging.my-staging-domain.com
  # -> we need up-to-date database accesses
  - op: replace
    path: /deployments/name=api/helm/values/postgres
    value:
      database: my-external-database
      hostname: my-external-database-hostname
      username: my-external-username
      password: my-external-password
  # my-custom-service -> nothing to do

We can of course follow the same philosophy, coupled with the concept of parent profiles, to define our production profile (a sketch is shown after the commands below). Then, building and deploying to staging or production is as simple as

devspace deploy -p staging
devspace deploy -p production
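As a sketch of that idea (the production hostname is made up, and the patch assumes the same ingress structure as in the staging profile), a production profile built on top of staging could look like this:

profiles:
- name: production
  parent: staging            # start from the staging profile...
  patches:                   # ...and patch what differs in production
  # api -> production hostname instead of the staging one
  - op: replace
    path: /deployments/name=api/helm/values/ingress/hosts
    value:
    - host: api.my-domain.com
      paths:
      - /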

Obviously, remotely debugging those profiles is also possible.

We've only scratched the surface...

Many more features are available, like custom command definitions (sketched below), port (reverse-)forwarding, file synchronization, container log streaming, etc., which you can read about here. Used wisely in CI / CD pipelines, devspace can drastically simplify the way you release your software.
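As a last sketch (the command name and its body are made up for illustration), custom commands bundle frequently used invocations directly in devspace.yaml and are then invoked with devspace run <name>, locally or from a CI job:

commands:
- name: deploy-staging                  # run with: devspace run deploy-staging
  command: devspace deploy -p staging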
