
GitHub Actions Pipeline

At this stage, my setup is complete, and I’m ready to start building images. To automate the image factory, I connected two GitHub Actions workflows in a chain. While additional workflows could be introduced for more specialized images, these are general-purpose images with a basic setup, so a simple two-workflow approach suffices.

Workflows

Base Image

The first workflow focuses on creating base images. These images are hardened according to enterprise policies or industry standards, such as NIST, PCI, or other relevant frameworks. In addition to hardening, I ensure that essential infrastructure or application monitoring tools are integrated into the image.

For base images, I worked with the following operating systems:

  • Red Hat 9.3
  • Ubuntu 24.04
  • Windows Server 2022

In terms of platforms, the workflow was configured to build images for AWS, Azure, and vSphere for my presentation.

Once a base image was successfully built, its metadata was sent to its corresponding HCP Packer bucket.
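For the metadata push to work from CI, the build job mainly needs HCP credentials in its environment; Packer reads HCP_CLIENT_ID and HCP_CLIENT_SECRET when the template contains an hcp_packer_registry block. A minimal sketch of the job-level env, assuming the credentials are stored as repository secrets with these names:

env:
  HCP_CLIENT_ID: ${{ secrets.HCP_CLIENT_ID }}          # HCP service principal ID (secret name assumed)
  HCP_CLIENT_SECRET: ${{ secrets.HCP_CLIENT_SECRET }}  # HCP service principal key (secret name assumed)
  HCP_PACKER_BUCKET: ${{matrix.platform}}-${{matrix.image}}  # mirrors the bucket naming used by the build job shown later

With that in place, each platform + image combination lands in its own bucket.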

App Image


If the base images are successfully built, the next stage focuses on creating app images. This step layers additional components onto the base image. For Linux images, this might involve setting up Docker or Podman to enable running containerized workloads. For Windows images, configuring IIS could be a suitable solution for hosting web applications.
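One way to wire that trigger (my exact chain may differ slightly) is the workflow_run event, so the app workflow only fires after a successful base run. A minimal sketch, with the workflow names assumed:

name: App Image

on:
  workflow_run:
    workflows: ["Base Image"]   # name of the base workflow (assumed)
    types: [completed]

jobs:
  app-build:
    # only continue the chain when the base build succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4.1.1
      - run: echo "layer Docker/Podman or IIS onto the published base image here"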

In my presentation, this is where I concluded the process. However, I did build a Nomad-specific image based on my Docker images to demonstrate the concept.

In a real-world scenario, I would go further by adding additional layers to specialize the image. This approach simplifies the "last mile" provisioning and deployment, ensuring that images are fine-tuned for their specific use cases.

Workflow

Credit

An end user can also select which image or platform to build if they want to do a one-off run. Most of the process originated from this repository: packer-images

Inputs

For both base and app workflows, the following is set up:

inputs:
  singleBuild:
    type: choice
    description: image override
    required: false
    options:
      - all
      - rhel
      - ubuntu
      - windows
  platform:
    type: choice
    description: platform override
    required: false
    options:
      - all
      - vsphere
      - aws
      - azure

If triggering a workflow manually, it would look like this:

(Screenshot: the manual workflow_dispatch form with the image and platform override dropdowns.)

Build Steps

The first step runs on a GitHub-hosted runner and outputs which images and platforms to build.

If all is selected, RHEL, Ubuntu, and Windows images are built on AWS, Azure, and vSphere. You can override which image to build or which platform to build on, in any combination.

selectimages:
  runs-on: "ubuntu-latest"
  steps:
    - name: get-images
      id: get-images
      run: |
        # A single image override becomes a one-element JSON array;
        # otherwise fall back to the full default list.
        if [[ "${{ inputs.singleBuild }}" != "all" ]] && [[ -n "${{ github.event.inputs.singleBuild }}" ]]; then
          IMAGES=$(echo "${{ inputs.singleBuild }}" | jq -Rc '["\(.)"]')
        else
          IMAGES=$(echo "$DEFAULT_IMAGES" | jq -Rc 'split(", ")')
        fi
        echo "images_out=$IMAGES"
        echo "images_out=$IMAGES" >> "$GITHUB_OUTPUT"

        # Same pattern for the platform override.
        if [[ "${{ inputs.platform }}" != "all" ]] && [[ -n "${{ github.event.inputs.platform }}" ]]; then
          PLATFORMS=$(echo "${{ inputs.platform }}" | jq -Rc '["\(.)"]')
        else
          PLATFORMS=$(echo "$DEFAULT_PLATFORMS" | jq -Rc 'split(", ")')
        fi
        echo "platforms_out=$PLATFORMS"
        echo "platforms_out=$PLATFORMS" >> "$GITHUB_OUTPUT"
  outputs:
    images: ${{ steps.get-images.outputs.images_out }}
    platforms: ${{ steps.get-images.outputs.platforms_out }}
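The script falls back to $DEFAULT_IMAGES and $DEFAULT_PLATFORMS when nothing is overridden. Those variables aren't shown above; a sketch of how they could be set at the workflow level (exact values assumed):

env:
  DEFAULT_IMAGES: "rhel, ubuntu, windows"       # split on ", " by the jq call above
  DEFAULT_PLATFORMS: "vsphere, aws, azure"      # split on ", " by the jq call above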

The next job takes the outputs from the first step and uses a matrix strategy, with image + platform as the combinations.

I had never tried this before, but was surprised that runs-on: ["${{matrix.platform}}"] worked. This ensures my AWS builds run on runners in my private AWS networks, and the same applies to Azure and vSphere.

build:
  runs-on: ["${{matrix.platform}}"]
  needs: [selectimages]
  env:
    HCP_PACKER_BUCKET: ${{matrix.platform}}-${{matrix.image}}
    PLATFORM: ${{matrix.platform}}
  strategy:
    fail-fast: false
    matrix:
      image: ${{fromJson(needs.selectimages.outputs.images)}}
      platform: ${{fromJson(needs.selectimages.outputs.platforms)}}
  steps:
    - uses: FraBle/clean-after-action@v1
    - name: Setup `packer`
      uses: hashicorp/setup-packer@main
      id: setup
      with:
        version: latest
    - name: Checkout code
      uses: actions/checkout@v4.1.1
    - name: Image Builder
      run: |
        packer init -force -var-file=${BUILDPATH}/${LAYER}/${{matrix.image}}/variables/example.pkrvars.hcl ${BUILDPATH}/${LAYER}/${{matrix.image}}/.
        packer build -force -var-file=${BUILDPATH}/${LAYER}/${{matrix.image}}/variables/example.pkrvars.hcl ${BUILDPATH}/${LAYER}/${{matrix.image}}/.

Further Automation


As the process transitions towards production, image building can be triggered on a schedule. While monthly builds might have sufficed in the past, the frequency should ideally increase to weekly. This ensures that security patches and updates are consistently applied. Since the process is automated and runs in the background, this cadence is both practical and efficient.
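Switching to a weekly cadence is a small change to the trigger; a minimal sketch (the day and time are arbitrary):

on:
  workflow_dispatch:      # existing manual inputs omitted here for brevity
  schedule:
    - cron: "0 6 * * 1"   # every Monday at 06:00 UTC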

For testing my images, I’ve utilized the HCP Packer + Terraform integration, which allows me to automatically provision and validate my images. Although I haven’t fully implemented an automated testing pipeline yet, I’ve explored tools like cnspec by Mondoo. This tool integrates with Packer to scan images for compliance during the build process, offering a promising solution for ensuring quality and adherence to security standards.
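A rough sketch of what an automated test job could look like once the Terraform side is in place, standing an instance up from the newest image and tearing it down again (the directory layout and wiring are assumptions, not my finished pipeline):

test:
  needs: [build]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4.1.1
    - uses: hashicorp/setup-terraform@v3
    - name: Provision and validate the latest image
      run: |
        # The Terraform config (not shown) would look up the image via the HCP Packer data sources;
        # test/aws is an assumed directory name for illustration.
        terraform -chdir=test/aws init
        terraform -chdir=test/aws apply -auto-approve
        terraform -chdir=test/aws destroy -auto-approve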
