In this post, I'll share my experience implementing a comprehensive CI/CD pipeline using GitHub Actions, writing tests for an existing codebase, and the valuable lessons learned along the way.
Setting Up GitHub Actions: From Zero to Production-Ready
The Challenge:
When I first approached this project, I was working with repo-contextr, a Python CLI tool that analyzes git repositories and packages their content for sharing with Large Language Models (LLMs). The project already had a solid foundation but lacked automated testing and continuous integration.
Designing the CI Workflow
In designing the workflow, I opted for a robust approach that enforces the following:
- Multi-platform compatibility (Linux, Windows, macOS)
- Code quality through linting and formatting
- Type safety with static analysis
- Comprehensive test coverage
- Successful package building
Here's the GitHub Actions workflow I implemented:
```yaml
name: CI

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        python-version: ['3.12']
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install "ruff>=0.8.0" "mypy>=1.17.1" "pytest>=8.4.2" "pytest-cov>=6.0.0"
          pip install --editable .
      - name: Verify installation
        run: |
          python -c "import contextr; print('Package imported successfully')"
          python -c "from contextr.cli import app; print('CLI imported successfully')"
      - name: Run ruff linting
        run: |
          ruff check src tests
      - name: Run ruff formatting check
        run: |
          ruff format --check src tests
      - name: Run mypy type checking
        run: |
          mypy src
        continue-on-error: true
      - name: Run tests with coverage
        run: |
          pytest --cov=src --cov-report=xml --cov-report=term

  build:
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - name: Install build dependencies
        run: |
          python -m pip install --upgrade pip
          pip install build
      - name: Build package
        run: python -m build
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: dist-packages
          path: dist/
```

Writing Tests for Someone Else's Code: A Different Perspective
Understanding the codebase:
repo-contextr already had a modular architecture, as shown below:
```
src/contextr/
├── commands/     # CLI command implementations
├── config/       # Configuration management
├── discovery/    # File discovery logic
├── git/          # Git operations
├── processing/   # File reading and processing
└── statistics/   # Token counting and stats
```

My Approach:
- File Discovery Module: The file discovery system needed comprehensive testing for various scenarios:
```python
# import path assumed from the module layout above
from contextr.discovery import discover_files

def test_discover_files_with_pattern(self, temp_dir):
    """Test discovering files with include pattern."""
    (temp_dir / "file1.py").write_text("# python")
    (temp_dir / "file2.js").write_text("// javascript")
    (temp_dir / "file3.py").write_text("# python")

    result = discover_files([temp_dir], include_pattern="*.py")

    assert len(result) == 2
    assert all(f.suffix == ".py" for f in result)
```
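The test above relies on a `temp_dir` fixture that I haven't shown. A minimal sketch of what such a fixture could look like, assuming it simply wraps pytest's built-in `tmp_path`:

```python
import pytest

@pytest.fixture
def temp_dir(tmp_path):
    # tmp_path is pytest's built-in per-test temporary directory;
    # wrapping it keeps the suite's fixture naming consistent.
    return tmp_path
```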
- Git Operations: Git integration required careful mocking to test various scenarios:

```python
# import path assumed from the module layout above
from contextr.git import get_git_info

def test_get_git_info_valid_repo(self, sample_git_repo):
    """Test getting git info from valid repository."""
    result = get_git_info(sample_git_repo)

    assert result is not None
    assert isinstance(result, dict)
    assert "commit" in result
    assert "branch" in result
    assert "author" in result
    assert "date" in result
```
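Here, `sample_git_repo` is another fixture I haven't shown. One way to build it, sketched under the assumption that a throwaway repository with a single commit is enough for the metadata checks above:

```python
import subprocess
import pytest

@pytest.fixture
def sample_git_repo(tmp_path):
    # Initialize a disposable repository with one commit so that
    # commit, branch, author, and date metadata all exist.
    subprocess.run(["git", "init"], cwd=tmp_path, check=True)
    (tmp_path / "README.md").write_text("# sample")
    subprocess.run(["git", "add", "."], cwd=tmp_path, check=True)
    subprocess.run(
        ["git", "-c", "user.name=Test", "-c", "user.email=test@example.com",
         "commit", "-m", "initial commit"],
        cwd=tmp_path, check=True,
    )
    return tmp_path
```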
- CLI Interface: Testing the CLI required understanding the Typer framework and proper mocking:

```python
from unittest.mock import Mock

def test_basic_execution(self, mock_config, mock_package):
    """Test basic CLI execution without errors."""
    # Mock configuration
    mock_config_obj = Mock()
    mock_config_obj.paths = ["."]
    mock_config_obj.include = None
    mock_config_obj.recent = False
    mock_config_obj.output = None
```
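The snippet above only shows the mock setup, not the invocation itself. For reference, this is roughly how a Typer app is exercised in tests via `typer.testing.CliRunner`; the `--help` smoke test here is my own illustration, not the project's actual test:

```python
from typer.testing import CliRunner
from contextr.cli import app  # the same app the CI workflow imports

runner = CliRunner()

def test_cli_help():
    # Invoke the CLI in-process and check it exits cleanly.
    result = runner.invoke(app, ["--help"])
    assert result.exit_code == 0
```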
Outcome:

- Test count: Increased from 108 to 160 tests (a 48% increase)
- Code coverage: Improved from 35.72% to 78.03% (a gain of 42.31 percentage points)
- Module coverage: Several modules went from 0% to 95%+ coverage.
Conclusion:
Implementing CI/CD isn't just about automation; it's about adopting a mindset of continuous improvement and quality assurance. The process taught me several valuable lessons:
- Quality gates matter: linting, type checking, and tests catch problems before they reach main.
- Fast feedback loops: every push and pull request is verified automatically across three operating systems.
- Documentation through code: well-named tests describe how each module is expected to behave.
- Confidence in changes: higher coverage makes refactoring and extending the codebase far less risky.