Introduction
In this post I want to share how running jobs in parallel with GitHub Actions can help optimize our CI/CD pipelines. Parallel jobs let independent pieces of work run at the same time, which can save a lot of precious time in our workflows. This is especially useful for larger projects: overall build time goes down, and debugging gets easier because each job is isolated from the others.
Stop talking, show me the code!
```yaml
name: (Compiler) Rust

on:
  push:
    branches: ["main"]

jobs:
  test: # Job 'test' starts at the same time as Job 'lint'
    name: Rust Test (${{ matrix.target.os }})
    strategy:
      matrix: # Parallelize jobs across different OS environments
        target:
          - target: ubuntu-latest
            os: ubuntu-latest
          - target: macos-latest
            os: macos-latest
          - target: windows-latest
            os: windows-latest
    runs-on: ${{ matrix.target.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: Swatinem/rust-cache@v2 # Cache Rust dependencies
      - name: cargo test
        run: cargo test

  lint: # Job 'lint' starts at the same time as Job 'test'
    name: Rust Lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly-2023-08-01
          override: true
          components: rustfmt, clippy
      - uses: Swatinem/rust-cache@v2
      - name: rustfmt
        run: grep -r --include "*.rs" --files-without-match "@generated" crates | xargs rustup run nightly-2023-08-01 rustfmt --check --config="skip_children=true"
```
What’s going on here?
- Matrix Strategy: This is where things get interesting. The matrix section lets us run the same job on different platforms or configurations, which in our case means Ubuntu, macOS, and Windows. (A smaller standalone sketch follows this list.)
- Parallel Jobs: Note above that the test and lint jobs run in parallel, reducing overall workflow runtime.
- Caching: This is done by leveraging the Rust cache action, which speeds up subsequent builds by skipping superfluous downloads and recompilations.
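To make the matrix idea more concrete on its own, here is a minimal sketch. It is not taken from the workflow above: the workflow name, the matrix-test job, and the extra toolchain axis are illustrative assumptions. It fans the same test job out over three operating systems and two Rust toolchains, reusing the same cache action in every combination.

```yaml
name: Matrix sketch # illustrative name, not from the original workflow
on: [push]

jobs:
  matrix-test:
    name: Test (${{ matrix.os }} / ${{ matrix.toolchain }})
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        toolchain: [stable, beta] # 3 OSes x 2 toolchains = 6 parallel jobs
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions-rs/toolchain@v1 # same toolchain action as the workflow above
        with:
          toolchain: ${{ matrix.toolchain }}
          override: true
      - uses: Swatinem/rust-cache@v2 # cache key includes the toolchain, so entries don't collide
      - run: cargo test
```

Each cell of the matrix becomes its own job, so a failure on, say, Windows with the beta toolchain doesn't block the other five combinations.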
Dependent GitHub Actions Workflows
By declaring dependencies between jobs, we avoid wasting time and resources on work that only matters if earlier jobs succeed. When a critical job fails, its dependents are skipped automatically, so failures surface earlier and debugging stays focused on the job that actually broke.
```yaml
name: (Compiler) Rust

on:
  push:
    branches: ["main"]

jobs:
  test:
    name: Rust Test (${{ matrix.target.os }})
    strategy:
      matrix:
        target:
          - target: ubuntu-latest
            os: ubuntu-latest
          - target: macos-latest
            os: macos-latest
          - target: windows-latest
            os: windows-latest
    runs-on: ${{ matrix.target.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: Swatinem/rust-cache@v2
      - name: cargo test
        run: cargo test

  lint:
    name: Rust Lint
    runs-on: ubuntu-latest
    needs: [test] # Job 'lint' depends on Job 'test'
    steps:
      - uses: actions/checkout@v4
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly-2023-08-01
          override: true
          components: rustfmt, clippy
      - uses: Swatinem/rust-cache@v2 # Reuse cache from Job #1
      - name: rustfmt
        run: grep -r --include "*.rs" --files-without-match "@generated" crates | xargs rustup run nightly-2023-08-01 rustfmt --check --config="skip_children=true"
```
The important thing here is the jobs.<job_id>.needs setting, which defines dependencies between jobs. It guarantees the order of job execution and avoids spending resources on jobs whose results would be useless if a critical earlier task fails.
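needs also works in the other direction: several parallel jobs can fan in to a single gatekeeper job. The sketch below is illustrative rather than part of the workflow above (the workflow name, the clippy command, and the release job are assumptions), but it shows the pattern: release only runs once both test and lint have succeeded, and is skipped automatically if either of them fails.

```yaml
name: Fan-in sketch # illustrative name
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo test
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo clippy -- -D warnings
  release:
    needs: [test, lint] # waits for both jobs; skipped if either fails
    runs-on: ubuntu-latest
    steps:
      - run: echo "test and lint both passed"
```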
Conclusion
This post covered parallel jobs in GitHub Actions, which improve efficiency by reducing build times and making debugging easier. A matrix strategy takes the optimization further by running the same job across multiple platforms or configurations. Finally, dependencies between jobs can be introduced via the jobs.<job_id>.needs setting, which saves resources by skipping downstream jobs whenever a critical upstream job fails.