Hi — I'm Nikolay Advolodkin, Chief Engineer at Ultimate QA. In this guide I'll walk you step-by-step through how I used GitHub Spark AI and GitHub Copilot agents to build, test, and deploy a Next.js e-commerce web application with CI/CD and automated tests. This is a practical, hands-on walkthrough that captures the exact workflow I use with AI-powered tools, plus best practices and exercises so you can reproduce the same results in your own projects.
If you're interested in more JavaScript testing tips, click here to subscribe to our newsletter.
Table of Contents
- 🚀 What You'll Build and Why It Matters
- 🧭 Overview: Tools and Tech Stack
- 🛠️ Getting Started with GitHub Spark
- 💡 UI Mode, Code Mode, and Assets
- 🔧 Working Locally: Visual Studio Code & GitHub Copilot
- 🗂️ Project Setup & Running the App Locally
- 🧾 Copilot Instructions & Context Engineering
- 🧪 Automated Tests with Playwright (and How AI Helps)
- 🔁 Branches, Pull Requests, and AI-Powered Code Reviews
- ⚙️ Implementing CI/CD with GitHub Actions
- 🧭 Best Practices & Guidance for AI-Assisted Development
- 🎯 Exercises & Challenges (Hands-on Learning)
- ❓ FAQ
- 🔚 Final Thoughts and Next Steps
- 📣 Want Help or More Resources?
🚀 What You'll Build and Why It Matters
By the end of this tutorial you'll be able to:
- Use GitHub Spark to generate a full-stack web app using natural language prompts.
- Run the project locally with Visual Studio Code and GitHub Codespaces.
- Set up GitHub Copilot and Copilot coding agents to author code, write tests, and review pull requests.
- Create automated Playwright tests (both end-to-end and API tests) and have Copilot generate and run them.
- Implement a CI/CD pipeline with GitHub Actions that runs tests and deploys your app.
- Orchestrate agentic workflows so AI performs many routine tasks while you remain the context engineer and reviewer.
This workflow dramatically increases development velocity and helps teams shift left on quality — but it requires clear context, review discipline, and configuration of the right AI settings.
🧭 Overview: Tools and Tech Stack
The core technologies and services I use in this workflow are:
- GitHub Spark — AI-driven, no-code/low-code app generator inside GitHub for rapid prototyping and app generation.
- GitHub Copilot (and Copilot Chat / Copilot coding agent) — AI pair programmer and agent to author code, create branches, and implement issues.
- Visual Studio Code — The IDE where you install Copilot extensions and manage local development.
- GitHub Codespaces — Containerized cloud development environments so "works on my machine" disappears.
- Next.js — React-based framework with server-side rendering, used for the e-commerce demo app.
- Playwright — End-to-end and API testing framework used for automated tests.
- GitHub Actions — CI/CD pipelines for test execution and deployments.
🛠️ Getting Started with GitHub Spark
Getting started with GitHub Spark is straightforward if you follow the necessary account steps:
- Create a GitHub account (if you don't already have one).
- Subscribe to GitHub Pro Plus (or an equivalent plan that grants access to GitHub Spark and Copilot). The Pro Plus plan includes Spark message credits (e.g., 375 messages per month), multiple active sessions, and unlimited manual edits.
- Navigate to github.com/spark to begin composing natural language prompts and generating applications.
When you open Spark, you'll see a chat-like prompt interface. Type a clear, specific description of the app you want — the clearer your requirements, the more deterministic the output. For my e-commerce example I supplied a detailed prompt describing:
- Technology stack (Next.js, TypeScript, simple REST API endpoints)
- Key UI components (product cards, cart badge, checkout placeholders)
- Testing and CI expectations (Playwright tests and a GitHub Actions workflow)
Spark then generates a working app that you can preview, re-theme, or open in Codespaces for deeper edits.
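To make that concrete, here's the kind of prompt that works well. This is a paraphrase of the idea rather than my exact wording, so treat the details as a template to adapt:

```
Build a simple e-commerce storefront using Next.js and TypeScript.
- Product listing page with product cards (image, name, price, "Add to cart" button)
- Cart badge in the header showing the current number of items
- Checkout page placeholders (no real payment processing)
- Simple REST API endpoints for products
- Include Playwright tests (one browser test, one API test) and a GitHub Actions
  workflow that installs dependencies, builds the app, and runs the tests
```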
💡 UI Mode, Code Mode, and Assets
GitHub Spark provides multiple views that accelerate iteration:
- UI mode — Click elements on the rendered app to edit text and trigger Copilot to change the UI (theme swaps are fast and visually immediate).
- Code mode — A VS Code-like file explorer and editor inside Spark for direct code edits.
- Assets — Upload logos, images, and audio files, and manage the application's data and API backend directly from the Spark UI.
These modes let you alternate between high-level composition and low-level code changes without context switching between tools.
🔧 Working Locally: Visual Studio Code & GitHub Copilot
After Spark generates an app, clone the repository and open it in Visual Studio Code (VS Code). I recommend the following setup steps:
- Install VS Code from code.visualstudio.com.
- Install the GitHub Copilot extension and the Copilot Chat extension (Command/Control+Shift+X → search for "Copilot").
- Authenticate Copilot with your GitHub account when prompted.
- Open the Copilot chat with Command/Control+Shift+I to paste errors, ask for fixes, and run agentic commands.
When you first run into build or runtime errors (which is likely across diverse developer machines), copy the error messages into Copilot Chat and switch the chat into agent mode so it can suggest or even run remediation steps. Use high-quality models for software tasks (GPT-5 or Claude Sonnet 4+ where available) to reduce hallucination and iteration cycles.
🗂️ Project Setup & Running the App Locally
A typical setup sequence after cloning the repo:
- Open Terminal in the repo root.
- Run npm install to fetch dependencies.
- Run npm run build to build the project.
- Run npm run dev to start the development server (Next.js will typically serve on localhost:3000).
If the app loads but displays a blank page, open the browser console, copy the error output, and paste it into Copilot Chat to get targeted fixes. This is where Copilot shines — iterative troubleshooting that would otherwise take a lot longer to debug manually.
🧾 Copilot Instructions & Context Engineering
To make Copilot a reliable, consistent collaborator, you must provide it context. The single most important artifact here is a Copilot instructions file placed inside the .github folder of your repo. These instructions define how Copilot should behave across the project and act as the project's AI standard operating procedures.
Key ideas to include in your Copilot instructions:
- Repository preferences (coding style, TypeScript rules, naming conventions)
- Testing directives (when to create browser tests vs API tests; example commands for running tests)
- CI behavior and commands to run in background terminals
- Security and data sensitivity constraints
- How to handle TypeScript typing and linter warnings
This process is often referred to as context engineering: curating the context the AI needs to make correct choices and avoid costly assumptions. You can generate a starter Copilot instructions file by creating an issue in GitHub and inviting Copilot as your AI pair programmer — Copilot can propose and even implement a recommended configuration automatically.
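As a starting point, here's a minimal sketch of what .github/copilot-instructions.md might contain for a project like this (the specific commands and rules are placeholders to adapt, not the exact file I use):

```markdown
# Copilot Instructions

## Stack and conventions
- Next.js + TypeScript with strict typing; no `any` without an inline justification.
- Follow the repo's ESLint/Prettier config; resolve lint and type warnings before committing.

## Testing
- Use Playwright for all automated tests.
- Browser (e2e) tests only for user-visible flows; API tests must use direct HTTP requests
  (Playwright's request context), never a browser.
- Run tests with `npx playwright test`; in CI, use a non-interactive reporter.
- Prefer role/test-id selectors and keep test timeouts around 10 seconds.

## CI and security
- CI lives in `.github/workflows/` and must install, build, and run all Playwright tests.
- Never commit secrets, API keys, or customer data; read sensitive values from environment variables.
```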
🧪 Automated Tests with Playwright (and How AI Helps)
Automated testing is vital but often disliked because it is repetitive. With agentic AI we can delegate much of the heavy lifting, but we still must guide, review, and correct the output.
I recommend this approach when creating tests with Copilot:
- Create a small, explicit test prompt that lists the required tests. For the workshop I limited tests to two initial Playwright specs (one browser e2e and one API test).
- Remove existing test files and Playwright config so Copilot generates the entire testing stack from scratch. This often leads to a better, more coherent test suite.
- Feed your Copilot instructions plus the Playwright prompt into Copilot Chat in agent mode and ask it to create tests, update configuration, and run them.
Important testing caveats:
- AI tends to over-generate tests for browser automation and may produce nonsensical or brittle tests. Keep the test set small and iterate.
- For API tests, do not use the browser — instruct the AI explicitly to use direct HTTP requests (fetch or Playwright's API testing features).
- Ask Copilot to run browser tests with non-interactive reporters in CI so that report viewers or prompts don't block an unattended run.
- Always review tests manually and tune timeouts (I prefer a shorter timeout such as 10,000 ms instead of Playwright's 30,000 ms default).
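To make the last two caveats concrete, here's a minimal playwright.config.ts sketch showing the reporter and timeout tuning I aim for. The port and the npm start command are assumptions about your repo, so adjust them as needed:

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Shorter per-test timeout than Playwright's 30s default.
  timeout: 10_000,
  expect: { timeout: 5_000 },
  // Non-interactive reporter in CI; HTML report for local debugging.
  reporter: process.env.CI ? 'list' : 'html',
  use: {
    baseURL: 'http://localhost:3000', // assumed Next.js port
    trace: 'on-first-retry',
  },
  // Start the app before browser tests run (assumes an `npm run start` script).
  webServer: {
    command: 'npm run start',
    url: 'http://localhost:3000',
    reuseExistingServer: !process.env.CI,
  },
});
```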
Example of generated tests I reviewed:
- Browser spec: verifies the home page displays featured products and that adding an item increments the cart badge.
- API spec: verifies the GET /products endpoint returns the full product list and required properties.
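Sketched roughly, those two specs look like this (the selectors, route, and product fields are illustrative, not the exact generated code):

```typescript
// tests/home.spec.ts — browser (e2e) test
import { test, expect } from '@playwright/test';

test('home page shows featured products and cart badge increments', async ({ page }) => {
  await page.goto('/');
  // At least one product card should be rendered on the home page.
  await expect(page.getByTestId('product-card').first()).toBeVisible();
  // Adding an item should bump the cart badge to 1.
  await page.getByRole('button', { name: /add to cart/i }).first().click();
  await expect(page.getByTestId('cart-badge')).toHaveText('1');
});
```

```typescript
// tests/products-api.spec.ts — API test (no browser involved)
import { test, expect } from '@playwright/test';

test('GET /products returns the product list with required fields', async ({ request }) => {
  const response = await request.get('/products');
  expect(response.ok()).toBeTruthy();

  const products = await response.json();
  expect(Array.isArray(products)).toBeTruthy();
  expect(products.length).toBeGreaterThan(0);
  for (const product of products) {
    expect(product).toHaveProperty('id');
    expect(product).toHaveProperty('name');
    expect(product).toHaveProperty('price');
  }
});
```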
Once tests are generated and validated locally, commit them into a branch and push the changes so CI can pick them up.
🔁 Branches, Pull Requests, and AI-Powered Code Reviews
When Copilot or you push a change, create a pull request as usual. GitHub Copilot can be configured to automatically review pull requests and provide suggested improvements.
Best practices for AI-assisted code reviews:
- Treat AI suggestions as recommendations, not commandments. Validate each suggestion against project goals.
- Use the review comments to teach Copilot via the repository's instruction file — over time Copilot's suggestions will better align with your patterns.
- Accept only high-confidence changes; leave nitty-gritty style or architectural changes to human reviewers when necessary.
- Use an additional third-party tool like CodeRabbit if you want another perspective, but ensure you are not automerging without human approval.
A real example: Copilot suggested replacing a text cart indicator with a cart badge. That suggestion was generally sensible, but in my specific test the expectation relied on the text, so I ignored the suggestion and explained why in the PR conversation. This is how you guide the AI and keep it aligned with project logic.
⚙️ Implementing CI/CD with GitHub Actions
Once checks and tests are validated locally, the next step is CI. I recommend recreating the CI workflow with Copilot rather than copying an existing one so the actions reflect your repo layout and test commands.
High-level CI flow I use:
- Checkout code and set Node version.
- Install dependencies and build the app.
- Start a web server to serve the built app (for browser tests).
- Run Playwright API tests (non-browser) and browser tests with non-interactive reporters in CI.
- Upload test artifacts or coverage reports as needed.
- Deploy on successful test completion (optional; can be gated behind manual approvals).
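For orientation, a workflow along these lines covers that flow. The Node version, script names, and artifact path are assumptions that Copilot will tailor to your repo:

```yaml
# .github/workflows/ci.yml — sketch, adapt to your project
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      - run: npx playwright install --with-deps
      # The Playwright webServer config starts the app; API and browser tests run against it.
      - run: npx playwright test
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
```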
To validate your CI pipeline, delete the old workflow, create a new one via Copilot prompts, and let Copilot iterate until tests pass. You will know you’ve succeeded when GitHub Actions reports a passing run for all checks on your branch.
🧭 Best Practices & Guidance for AI-Assisted Development
Here are concise, practical rules I follow when I integrate AI into my software development lifecycle:
- Provide clear context: descriptive comments, project standards, and constraints reduce hallucination.
- Validate suggestions: always review generated code and run tests locally and in CI.
- Keep AI as collaborator: treat Copilot as a junior-to-mid-level developer to offload routine tasks; you remain lead architect and reviewer.
- Maintain code quality: refactor AI-generated code as needed and enforce linters and type checks.
- Design Copilot instructions: the better your instructions file, the more reliable Copilot behavior becomes.
- Prefer API tests for backend logic: instruct Copilot to use direct network requests for API verification.
🎯 Exercises & Challenges (Hands-on Learning)
Practice is how you internalize this workflow. Here are incremental exercises I recommend:
- Use GitHub Spark to generate a simple app using a prompt I provide (or your own): clone it, run npm install, npm run dev, and confirm it renders.
- Create a Copilot instructions file in .github/copilot-instructions.md setting project standards and test commands.
- Use Copilot Chat in agent mode to create two Playwright tests: one browser spec and one API spec. Review and fix them.
- Create an issue in your repo and add Copilot as an AI pair programmer. Have Copilot implement the issue on a new branch and review the PR.
- Delete any existing workflow files in .github/workflows and ask Copilot to implement a CI pipeline that runs your tests. Iterate until CI passes.
These exercises reflect the exact steps I use in production workshops and client implementations. Repeat them until you feel comfortable orchestrating multiple agents across tasks.
❓ FAQ
How do I get access to GitHub Spark?
Sign up for a GitHub account and subscribe to a plan that includes Spark (for example Pro Plus). Then visit github.com/spark to begin building apps using natural language prompts.
Which Copilot model should I use for software development?
Use the highest-accuracy models available to you for complex software tasks: GPT-5 if accessible, or Claude Sonnet 4+ for focused code generation and debugging. These models reduce hallucination and iteration cycles.
Can Copilot create tests reliably?
Copilot can generate functional tests quickly, but you must verify and refine them. It is stronger at API tests than complex browser end-to-end suites. Use small, specific prompts and incrementally validate tests.
Should I trust Copilot's PR review suggestions?
Consider Copilot's suggestions as a second pair of eyes. Accept suggestions that align with your project's logic and style. Use the PR conversation to correct or reject suggestions and refine Copilot instructions.
How do I avoid flaky or nonsensical tests generated by AI?
Provide explicit instructions (e.g., "Use direct network requests for API tests" or "Do not rely on visual timing delays"), keep tests focused on essential behavior, and enforce timeouts and stable selectors. Regularly refactor tests to remove brittleness.
Is this approach suitable for enterprise teams?
Yes — with governance. For teams, codify Copilot instructions, add test and security gating in CI, and keep human reviewers in the loop before merging to main. Use Codespaces to ensure consistent developer environments.
🔚 Final Thoughts and Next Steps
AI tools like GitHub Spark and Copilot are game changers for web development. They help you rapidly prototype apps, automate repetitive tasks, and scale testing and CI/CD. But AI is best used as a partner — not a replacement. Your role as a context engineer, reviewer, and orchestrator remains essential.
Short‑term action items I recommend:
- Sign up for GitHub Pro Plus and explore GitHub Spark for a weekend prototype.
- Install Copilot and Copilot Chat in VS Code and run the exercises from the previous section.
- Create a robust Copilot instructions file tuned to your project and invite Copilot to implement a small issue.
- Practice converting manual tests into Playwright API tests and run them in CI with GitHub Actions.
Longer term, aim to become the team’s AI integrator: curate instructions, maintain test hygiene, and lead the cultural shift to AI-augmented engineering. This skillset will place you at the forefront of modern software development.
📣 Want Help or More Resources?
If you found this useful, follow me on social media for more tutorials, sign up for any newsletters I share, and consider reaching out if your organization needs help implementing automated testing, CI/CD, or AI‑driven workflows. I consult and train teams on robust, practical AI integration across JavaScript, TypeScript, C#, and Java ecosystems.
If you try the exercises, please provide feedback on what worked and what didn't — it's the best way I can refine further tutorials and help you even more.
Peace 🕊️ — and happy building!