Mahinsha Nazeer

Originally published at Medium

Deploying a Simple App on K3S in AWS EC2 with GitHub Actions & ECR

In this session, we’ll walk through the configuration of K3S on an EC2 instance and deploy a multi-container application with a frontend, backend, and database. The application will run inside a Kubernetes cluster using Deployments and StatefulSets in headless mode. For the setup, we’ll use EC2 to host the cluster, GitHub as our code repository, and GitHub Actions to implement CI/CD.

If you’re an absolute beginner and not familiar with configuring EC2, I recommend checking out my blog here:

Step-by-Step Guide to Launching an EC2 Instance on AWS: For Beginners

This will be an end-to-end project deployment designed for those learning K3S, CI/CD, and Docker. You’ll gain hands-on experience in setting up CI/CD pipelines, writing Dockerfiles, and using Docker Compose. We’ll then move on to deploying the application in K3S, working with Kubernetes manifests, and exploring key components such as Deployments, Services (NodePort and ClusterIP), ConfigMaps, Persistent Volumes (PV), Persistent Volume Claims (PVC), and StatefulSets.

K3S is a lightweight Kubernetes distribution developed by Rancher (now SUSE). It’s designed to be:

Lightweight — small binary, minimal dependencies.

Easy to install — single command installation.

Optimized for edge, IoT, and small clusters — runs well on low-resource machines like Raspberry Pi or small EC2 instances.

Fully compliant — supports all standard Kubernetes APIs and workloads.

In short, K3S simplifies Kubernetes and makes it resource-efficient, making it ideal for single-node clusters, test environments, and learning purposes.

Log in to the EC2 machine and install K3S first.

You can install K3S on your machine using the following single command:

sudo apt update -y && sudo apt upgrade -y
curl -sfL https://get.k3s.io | sh -

# Check for Ready node, takes ~30 seconds
sudo k3s kubectl get node


Installation of k3s

Once the installation is completed, the output should be similar to this:


Kubectl node status
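
K3S ships with a bundled kubectl and writes the cluster kubeconfig to /etc/rancher/k3s/k3s.yaml, which is readable only by root. If you would rather run a plain kubectl (or tools like Helm) without sudo, a minimal sketch looks like this, assuming the default K3S paths on the same EC2 instance:

# K3S stores the admin kubeconfig here by default (root-only permissions).
# Copy it for the current user so kubectl works without sudo.
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config

# Verify the node is Ready and the system pods are running
kubectl get nodes -o wide
kubectl get pods -n kube-system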

Once the cluster is up and running, we can move on to the application. You can refer to the following repository for the demo To-Do List app. Before cloning the repository, make sure Docker is installed on the machine to build and test the application. For installing Docker, refer to the following URL:

Ubuntu

# Run the following command first to remove conflicting packages
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Installing Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Now verify the installation.
sudo docker run hello-world

Now, let’s dive into the demo application. The Application Stack:

  • Frontend: React.js
  • Backend API: Node.js + Express
  • Database: MongoDB
  • Containerization & Registry: Docker + AWS ECR
  • Orchestration & Service Management: Kubernetes (K3s)

Next, let’s clone the application repository to your local machine.

GitHub - mahinshanazeer/docker-frontend-backend-db-to_do_app: Simple Application with Frontend + Backend + DB

git clone https://github.com/mahinshanazeer/docker-frontend-backend-db-to_do_app 


Clone the github application

Once the repository is cloned, switch to the application directory and check for the Docker Compose file.


Directory structure

version: "3.8" services: web: build: context: ./frontend args: REACT_APP_API_URL: ${REACT_APP_API_URL} depends_on: - api ports: - "3000:80" networks: - network-backend env_file: - ./frontend/.env api: build: ./backend depends_on: - mongo ports: - "3001:3001" networks: - network-backend mongo: build: ./backend-mongo image: docker-frontend-backend-db-mongo restart: always volumes: - ./backend-mongo/data:/data/db environment: MONGO_INITDB_ROOT_USERNAME: admin MONGO_INITDB_ROOT_PASSWORD: adminhackp2025 networks: - network-backend networks: network-backend: volumes: mongodb_data: 

In the Docker Compose file, you’ll see sections for web, api, and mongo. Let’s dive into each directory and review the Dockerfiles. The Docker Compose file builds the Docker images using the Dockerfiles located in their respective directories.

# /home/ubuntu/docker-frontend-backend-db-to_do_app/frontend/Dockerfile

# ---------- Build Stage ----------
FROM node:16-alpine AS build
WORKDIR /app
# Copy dependency files first
COPY package*.json ./
# Install dependencies
RUN npm install --legacy-peer-deps
# Copy rest of the app
COPY . .
# Build the React app
RUN npm run build

# ---------- Production Stage ----------
FROM nginx:alpine
# Copy custom nginx config if you have one
# COPY nginx.conf /etc/nginx/conf.d/default.conf
# Copy build output from build stage
COPY --from=build /app/build /usr/share/nginx/html
# Expose port 80
EXPOSE 80
# Start nginx
CMD ["nginx", "-g", "daemon off;"]

# /home/ubuntu/docker-frontend-backend-db-to_do_app/backend/Dockerfile

FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001

# /home/ubuntu/docker-frontend-backend-db-to_do_app/backend-mongo/Dockerfile

FROM mongo:6.0
EXPOSE 27017

Open the .env file in the frontend directory and update the IP address to your EC2 public IP. This environment variable is used by the frontend to connect to the backend, which runs on port 3001.

vi /home/ubuntu/docker-frontend-backend-db-to_do_app/frontend/.env

# Edit the IP address; I have updated it to my EC2 public IP
REACT_APP_API_URL=http://54.90.185.176:3001/

We can also cross-check the total number of APIs using the following commands:

grep -R "router." backend/ | grep "(" grep -R "app." backend/ | grep "(" grep -R "app." backend/ | grep "(" | wc -l grep -R "router." backend/ | wc -l 

Let’s test the application by spinning up the containers. Navigate back to the project’s root directory and run the Docker Compose command.

cd /home/ubuntu/docker-frontend-backend-db-to_do_app
docker compose up -d

Once you run the command, Docker will start building the images and spin up the containers as soon as the images are ready.


building docker containers

Wait until you see the ‘built’ and ‘created’ messages. Once the containers are up and running, use docker ps -a to verify the status.


build completed and containers started.

docker ps -a 


docker processes

Once the Docker containers are up and running, verify that the application is working as expected. Open the server’s IP address on port 3000. You can confirm the mapped ports in the Docker Compose file or by checking the docker ps -a output. Here, port 3000 is for the frontend web app, port 3001 is for the backend, and MongoDB runs internally on port 27017 without public access. In this example, load the website by entering 54.90.185.176:3000 in your browser.
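
Besides the browser, you can sanity-check the containers from the terminal with curl. This is only a sketch: the frontend and backend ports come from the Compose file above, but the exact API route (/todos here) is an assumption and may differ in the repository you cloned.

# Frontend: nginx serving the React build should answer with HTTP 200
curl -I http://54.90.185.176:3000/

# Backend: the Express API listens on 3001 (the /todos path is an assumed example)
curl -s http://54.90.185.176:3001/todos

# MongoDB is only reachable on the internal Compose network, not publicly
docker compose ps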


Application interface

If you’re using Chrome, right-click anywhere on the page and open Inspect > Network. Then click Add Todo to verify that the list updates correctly and that the Network tab shows a 200 status response.


checking the network


Application testing

Click the buttons, try adding a new to-do item, and verify the status codes:


Testing

So far, everything looks good. Now, let’s proceed with the Kubernetes deployment. To configure resources in Kubernetes, we’ll need to create manifest files in YAML format. You can create these files as shown below.

mkdir /home/ubuntu/manifest
cd /home/ubuntu/manifest   # create the files inside the manifest directory
touch api-deployment.yaml api-service.yaml image_tag.txt mongo-secret.yaml mongo-service.yaml \
      mongo-statefulset-pv-pvc.yaml web-deployment.yaml web-env-configmap.yaml web-service.yaml

Now edit each file and add the following contents:

  1. api-deployment.yaml:

Defines how the backend API should run inside the cluster.

  • Creates 2 replicas of the API for reliability.

  • Uses environment variables from secrets for MongoDB authentication.

  • Ensures the API pods always restart if they fail.

👉 Importance: Provides scalability and fault tolerance for the backend service.

Rolling Update: Gradually replaces old pods with new ones. Uses fewer resources, minimal downtime if tuned, but users may hit bad pods if the new version is faulty.

👉 Rolling = efficient and native.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: 495549341534.dkr.ecr.us-east-1.amazonaws.com/hackp2025:api-20250907111542
          ports:
            - containerPort: 3001
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: password
      restartPolicy: Always
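
Once this Deployment is applied (the CI/CD job later does this for us), the RollingUpdate behaviour can be watched and, if a new image misbehaves, rolled back with the standard rollout commands. A small sketch, run on the K3S node (prefix with sudo if you are using the bundled kubectl as root):

# Watch the rolling update replace old pods with new ones
kubectl rollout status deployment/api

# Inspect revision history and roll back to the previous image if needed
kubectl rollout history deployment/api
kubectl rollout undo deployment/api
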
  2. api-service.yaml

Exposes the API deployment to the outside world.

  • Type NodePort makes the service reachable at the node's public IP on port 31001.

  • Ensures frontend or external clients can communicate with the backend.

👉 Importance: Acts as a bridge between users/frontend and the backend API.

apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
    - port: 3001        # internal cluster port
      targetPort: 3001  # container port
      nodePort: 31001   # external port on the node

  3. mongo-secret.yaml

Stores sensitive information (username & password) in base64-encoded format.

  • Used by both the API and MongoDB.

  • Keeps credentials out of plain-text manifests.

👉 Importance: Secure way to handle database credentials.

apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  # Base64 encoded values
  username: YWRtaW4=              # "admin"
  password: YWRtaW5oYWNrcDIwMjU=  # "adminhackp2025"
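
The base64 strings in the Secret can be generated (and verified) from any shell. Note the -n flag: without it, a trailing newline gets encoded into the credential.

# Encode the values used in the manifest above
echo -n "admin" | base64              # YWRtaW4=
echo -n "adminhackp2025" | base64     # YWRtaW5oYWNrcDIwMjU=

# Decode to double-check what the Secret actually stores
echo "YWRtaW5oYWNrcDIwMjU=" | base64 -d
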
  4. mongo-service.yaml

Defines the MongoDB service.

- ClusterIP: None makes it a headless service, required for StatefulSets.

  • Allows pods to connect to MongoDB by DNS (e.g., mongo-0.mongo); a quick lookup check follows the manifest below.

👉 Importance: Provides stable networking for MongoDB StatefulSet pods.

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
  clusterIP: None  # headless service for StatefulSet
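
Because clusterIP is None, DNS for this service resolves directly to the MongoDB pod IPs instead of a single virtual service IP. A quick way to confirm that from inside the cluster, sketched with a throwaway busybox pod (the image and the default namespace are assumptions):

# Resolve the headless service; the answer lists pod IPs rather than a ClusterIP
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup mongo

# Once the StatefulSet below is running, individual pods are addressable too, e.g.
# <statefulset-name>-0.mongo.default.svc.cluster.local
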
  5. mongo-statefulset-pv-pvc.yaml

Handles the database persistence and StatefulSet definition.

- PersistentVolume (PV): Reserves storage (5Gi).

- PersistentVolumeClaim (PVC): Ensures pods can claim storage.

- StatefulSet: Guarantees stable network identity and persistent storage for MongoDB.

👉 Importance: Ensures MongoDB data is preserved even if the pod restarts.

Blue/Green Deployment: Runs two environments (Blue = live, Green = new). Traffic is switched instantly once Green is ready. Near-zero downtime and easy rollback, but requires double resources and is more complex for stateful apps.

👉 Blue/Green = safer cutover, higher cost.

# PersistentVolume for Green MongoDB
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-green-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /root/hackpproject/data-green  # separate path for green
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""  # Must match PVC in StatefulSet
---
# StatefulSet for Green MongoDB
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-green
  labels:
    app: mongo
    version: green
spec:
  serviceName: mongo  # existing headless service
  replicas: 1
  selector:
    matchLabels:
      app: mongo
      version: green
  template:
    metadata:
      labels:
        app: mongo
        version: green
    spec:
      containers:
        - name: mongo
          image: 495549341534.dkr.ecr.us-east-1.amazonaws.com/hackp2025:db-20250907111542
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: password
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
        storageClassName: ""  # binds to the pre-created PV
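
After this manifest is applied, it is worth confirming that the claim generated by volumeClaimTemplates actually bound to the pre-created hostPath volume, and that data survives a pod restart. A sketch using the object names from the manifest above:

# The PVC (mongo-data-mongo-green-0) should show STATUS=Bound against mongo-green-pv
kubectl get pv,pvc

# Delete the pod; the StatefulSet recreates mongo-green-0 and reattaches the same volume,
# so the data under /root/hackpproject/data-green on the host is preserved
kubectl delete pod mongo-green-0
kubectl get pods -w
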
  6. web-deployment.yaml

Defines how the frontend (React.js app) should run.

  • Runs 2 replicas for high availability.

  • Pulls API endpoint from ConfigMap.

  • Resource requests/limits ensure fair scheduling.

👉 Importance: Deploys the UI and links it to the backend API via config.

Rolling Update: Gradually replaces old pods with new ones. Uses fewer resources, minimal downtime if tuned, but users may hit bad pods if the new version is faulty.

👉 Rolling = efficient and native.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: 495549341534.dkr.ecr.us-east-1.amazonaws.com/hackp2025:web-20250907111542
          ports:
            - containerPort: 3000
              protocol: TCP
          env:
            - name: REACT_APP_API_URL
              valueFrom:
                configMapKeyRef:
                  name: web-env
                  key: REACT_APP_API_URL
          resources:
            requests:
              cpu: "200m"
              memory: "1024Mi"
            limits:
              cpu: "2"
              memory: "2Gi"
      restartPolicy: Always

  7. web-env-configmap.yaml

Stores non-sensitive environment variables.

  • Defines the API endpoint for the frontend (REACT_APP_API_URL).

  • Can be updated easily without rebuilding Docker images (see the sketch after the manifest below).

👉 Importance: Provides flexibility to change configuration without redeploying code.

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-env
  labels:
    app: web
data:
  REACT_APP_API_URL: http://98.86.216.31:31001
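
To point the frontend at a different backend address later, the ConfigMap can be re-applied and the web pods restarted so they pick up the new environment variable. A sketch (the new IP is a placeholder):

# Re-render and apply the ConfigMap with a new value
kubectl create configmap web-env \
  --from-literal=REACT_APP_API_URL=http://<new-ec2-public-ip>:31001 \
  --dry-run=client -o yaml | kubectl apply -f -

# Environment variables are injected at container start, so roll the pods
kubectl rollout restart deployment/web
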
  8. web-service.yaml

Exposes the frontend to users.

- Type NodePort makes it available externally at the node's IP on port 32000 (a quick check follows the manifest below).

  • Maps port 3000 (service) → port 80 (container).

👉 Importance: Allows end-users to access the web app from their browser.

apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  type: NodePort
  selector:
    app: web            # Must match Deployment labels
  ports:
    - name: http
      port: 3000        # Service port inside cluster
      targetPort: 80    # Container port
      nodePort: 32000   # External port accessible from outside
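
With the service applied, a quick check confirms the NodePort mapping and that nginx answers through it (use your own EC2 public IP, and make sure port 32000 is open in the instance's security group):

# PORT(S) should show 3000:32000/TCP
kubectl get svc web

# The frontend should respond through the NodePort
curl -I http://<ec2-public-ip>:32000/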

We have now moved all the manifest files to /root/hackpproject/manifestfiles.

Once the manifests are finalised, the next step is to create a repository in ECR to push the build artefact images.

Steps to Create an ECR Repository:

  1. Log in to the AWS Console and go to the ECR service.
  2. Click Create repository.
  3. Select Private repository.
  4. Enter the repository name (prodimage in the screenshots). In this case, we are creating a single repository for all three images.
  5. Leave the other options as default and click Create repository.
  6. Authenticate Docker with ECR.
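
If you prefer the terminal over the console, the same repository can be created with the AWS CLI. A sketch: the repository name here matches what the pipeline below pushes to (hackp2025), while the console screenshots use prodimage, so pick one name and keep it consistent everywhere.

# Create the private ECR repository
aws ecr create-repository --repository-name hackp2025 --region us-east-1

# Note the repositoryUri in the output; the images are tagged against it later
aws ecr describe-repositories --region us-east-1 --query 'repositories[].repositoryUri'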


Step 1: Finding the ECR


Step 2: Creating Repository


Step 3: Configuring Repository

Once the registry is created, you can proceed with the CI/CD pipeline.


Repository endpoint

Now, let’s create a GitHub Actions pipeline to deploy the code to the EC2 K3S cluster. The first step is to configure GitHub Actions with access to the repository, ECR, and the EC2 instance via SSH.

Navigate to the project directory and create the folder ‘.github/workflows’ (GitHub Actions only reads workflow files from this path). Inside it, create a file named ‘ci-cd.yml’.

mkdir -p .github/workflows
cd .github/workflows
touch ci-cd.yml
vi ci-cd.yml

The ci-cd.yml file is the core configuration file for GitHub Actions that defines your CI/CD pipeline. Now use the following script in that ci-cd.yml file:

name: CI/CD Pipeline

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    env:
      ECR_REGISTRY: ${{ secrets.ECR_REGISTRY }}

    steps:
      - name: Pulling the repository
        uses: actions/checkout@v3

      - name: Pre-Build Checks (optional)
        run: |
          echo "Checking Docker installation..."
          docker --version
          echo "Checking Docker Compose installation..."
          docker compose version

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to AWS ECR
        uses: aws-actions/amazon-ecr-login@v2
        env:
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Set Image Tag (Timestamp)
        id: set_image_tag
        run: |
          IMAGE_TAG=$(date +'%Y%m%d%H%M%S')
          echo "image_tag=$IMAGE_TAG" >> $GITHUB_OUTPUT
          echo "$IMAGE_TAG" > image_tag.txt
          echo "IMAGE_TAG=$(cat image_tag.txt)" >> $GITHUB_ENV
          cat image_tag.txt

      - name: Upload image_tag.txt to K3s server
        uses: appleboy/scp-action@v0.1.7
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          port: 22
          source: image_tag.txt
          target: /root/hackpproject/manifestfiles

      - name: Clean up old ECR images
        continue-on-error: true
        env:
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          echo "Listing all images in ECR..."
          aws ecr list-images --repository-name hackp2025 --region $AWS_REGION \
            --query 'imageIds[*]' --output json > image_ids.json || echo "No images to delete"
          if [ -s image_ids.json ]; then
            echo "Deleting old images from ECR..."
            aws ecr batch-delete-image --repository-name hackp2025 --region $AWS_REGION \
              --image-ids file://image_ids.json
          else
            echo "No images found, skipping deletion."
          fi

      - name: Install Trivy
        run: |
          wget https://github.com/aquasecurity/trivy/releases/download/v0.18.3/trivy_0.18.3_Linux-64bit.deb
          sudo dpkg -i trivy_0.18.3_Linux-64bit.deb

      - name: Build and Push Docker Images
        run: |
          echo "Building Docker images..."
          docker compose -f docker-compose.yml build
          docker images
          echo "Tagging images with timestamp: $IMAGE_TAG"

          docker tag hackkptask1-api:latest $ECR_REGISTRY/hackp2025:api-$IMAGE_TAG
          trivy image $ECR_REGISTRY/hackp2025:api-$IMAGE_TAG || echo "⚠️ Vulnerabilities found in API image. Proceeding anyway."
          docker push $ECR_REGISTRY/hackp2025:api-$IMAGE_TAG

          docker tag hackkptask1-web:latest $ECR_REGISTRY/hackp2025:web-$IMAGE_TAG
          trivy image $ECR_REGISTRY/hackp2025:web-$IMAGE_TAG || echo "⚠️ Vulnerabilities found in Web image. Proceeding anyway."
          docker push $ECR_REGISTRY/hackp2025:web-$IMAGE_TAG

          docker tag docker-frontend-backend-db-mongo:latest $ECR_REGISTRY/hackp2025:db-$IMAGE_TAG
          trivy image $ECR_REGISTRY/hackp2025:db-$IMAGE_TAG || echo "⚠️ Vulnerabilities found in Mongo image. Proceeding anyway."
          docker push $ECR_REGISTRY/hackp2025:db-$IMAGE_TAG

      - name: Deploy to K3s via SSH
        uses: appleboy/ssh-action@v0.1.7
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          port: 22
          script: |
            IMAGE_TAG=$(cat /root/hackpproject/manifestfiles/image_tag.txt)
            ECR_REGISTRY="495549341534.dkr.ecr.us-east-1.amazonaws.com"
            MANIFEST_DIR="/root/hackpproject/manifestfiles"
            echo "IMAGE_TAG=$IMAGE_TAG"
            echo "ECR_REGISTRY=$ECR_REGISTRY"

            # Replace only the part after "image: "
            sudo sed -i "s|image: .*hackp2025:web.*|image: ${ECR_REGISTRY}/hackp2025:web-${IMAGE_TAG}|g" $MANIFEST_DIR/web-deployment.yaml
            sudo sed -i "s|image: .*hackp2025:api.*|image: ${ECR_REGISTRY}/hackp2025:api-${IMAGE_TAG}|g" $MANIFEST_DIR/api-deployment.yaml
            sudo sed -i "s|image: .*hackp2025:db.*|image: ${ECR_REGISTRY}/hackp2025:db-${IMAGE_TAG}|g" $MANIFEST_DIR/mongo-statefulset-pv-pvc.yaml

            aws ecr get-login-password --region us-east-1 \
              | sudo docker login --username AWS --password-stdin 495549341534.dkr.ecr.us-east-1.amazonaws.com

            # Apply manifests
            sudo kubectl delete all --all -n hackpproject
            sudo kubectl apply -f $MANIFEST_DIR
            sudo kubectl rollout status deployment/web
            sudo kubectl rollout status deployment/api

            # Push updated manifest files back to the repo
            cd $MANIFEST_DIR

            # Configure git identity for commits
            git config user.name "hackp25project"
            git config user.email "github-push@hackp25"

            # Ensure we're on main
            git checkout main

            # Fetch latest changes
            git pull --rebase origin main

            # Commit and push if changes exist
            if [ -n "$(git status --porcelain)" ]; then
              git add .
              git commit -m "Update manifests for image tag: ${ECR_REGISTRY}/hackp2025:web-${IMAGE_TAG}"
              git push origin main
            else
              echo "No changes to commit."
            fi

Now we need to configure the secrets. Follow the screenshots below:


Step 1: Configuring secrets


Step 2: Configuring secrets


Step 3: Adding Repository secret

Note: You can ignore the KUBECONFIG and GH_SSH_KEY secrets, as they are not required for this specific use case. For reference, please see the screenshot I created using ChatGPT, which outlines each secret, its purpose, and how to obtain it.
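
If you prefer not to click through the UI, the same repository secrets can be added with the GitHub CLI. A sketch, assuming gh is installed and authenticated against your copy of the repository; the values are placeholders except ECR_REGISTRY, which matches the registry used throughout this post:

# Repository secrets consumed by the workflow above
gh secret set AWS_REGION --body "us-east-1"
gh secret set AWS_ACCESS_KEY_ID --body "<your-access-key-id>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<your-secret-access-key>"
gh secret set ECR_REGISTRY --body "495549341534.dkr.ecr.us-east-1.amazonaws.com"
gh secret set EC2_HOST --body "<ec2-public-ip>"
gh secret set EC2_USER --body "<ssh-user-on-the-ec2-instance>"
gh secret set EC2_SSH_KEY < ~/.ssh/<private-key-used-for-the-instance>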

For more details on configuring and using the AWS CLI, you can refer to my previous blog. I’ll include the link below for your reference:

Configuring AWS CLI for Terraform / Script Automation

I have not included the SSH key generation and key management steps, as they are common knowledge. Including them would make the guide unnecessarily long. It is expected that users are familiar with configuring SSH keys; otherwise, I recommend reviewing Linux fundamentals before starting with Kubernetes.

Next, we need to authorise the EC2 instance to access ECR using the AWS CLI.

aws ecr get-login-password --region us-east-1 > ecr_pass
aws ecr get-login-password --region us-east-1 \
  | sudo docker login --username AWS --password-stdin 495549341534.dkr.ecr.us-east-1.amazonaws.com

Also, create a secret for the MongoDB credentials. This command creates a Kubernetes Secret named mongo-secret that securely stores the MongoDB username and password. Instead of hardcoding credentials in manifests, workloads such as the backend API and the MongoDB StatefulSet can reference this secret for authentication, ensuring better security and centralised management of sensitive data. The secret is consumed in the MongoDB and API manifests shown earlier.

kubectl create secret generic mongo-secret \
  --from-literal=username=admin \
  --from-literal=password=adminhackp2025

Now that the configuration is complete, commit the changes and push them to GitHub; the pipeline will start automatically:

git add .github/
git commit -m "CICD configuration updated"
git push origin main   # my remote is configured as origin and the branch is main

Once pushed to the remote, return to the GitHub repository and open the Actions tab.


Github Actions

Here, you can see some jobs marked with a red cross and others with a green tick. The red cross indicates failed jobs, while the green tick indicates successful ones.


Github Actions failure

Click on the job, then select option (4) to view the complete details. Troubleshoot based on the errors identified in the logs.


Debugging

If you need to make changes to the ci-cd.yml file, follow the steps below and rerun all jobs. By default, the pipeline restarts automatically whenever you commit changes to ci-cd.yml in the GitHub repository.


Editing CICD yml file

Once the CI/CD jobs execute successfully, you will see a green tick mark on the left.


Job successful

Now, return to the server and run kubectl get all to verify the list of all deployed components.


kubectl get all

Instead of port 3000, the application is exposed on port 32000. Open the application URL http://98.86.216.31:32000/ and verify that it is working correctly, just as we did earlier with Docker Compose.


Demo Application loading fine

We successfully set up a CI/CD pipeline using GitHub Actions to deploy the application on an EC2-hosted K3S cluster. The workflow automated the build, scan, and push of Docker images to AWS ECR, followed by deployment to the cluster.

After verifying the deployments with kubectl, we confirmed the application is accessible at http://98.86.216.31:32000/ and working as expected.
