
Ashley Childress

Demystifying Coding Agent: Prompts That Always Work ✨

🦄 If you’re still hanging out with me—thank you. First time here? Don't worry—everything’s linked from the beginning. We’ve already talked through the magical, autonomous bits and started pulling out repeatable patterns. Now it's time to get real about prompts: what Coding Agent can reliably handle for you in almost every scenario.

If you have a personal project, simple utility, or other non-critical non-production system? Feel free to experiment and push it further. No reason you can't find out what it can really do under pressure. However, if you prefer to stay in the safe lane a little longer or if your repo is system-critical? All the prompts listed here are 100% safe to hand over without fear of random glitter surprises (not often, anyway!).

Human-Crafted, AI-Edited Badge


It Works, But Now What? 🤔

In previous posts, we covered exactly how GitHub Coding Agent works—the safety, review, approve, merge—so what can you actually do with it? Hopefully you’ve taken it for a spin already. If you’re still unsure where to start, here’s what consistently works for me. Just for fun, I've mixed in some of the various ways you can access Coding Agent currently.

Whether you prompt from VS Code, GitHub.com, or the GitHub mobile app, it always does the same thing: it costs one premium request plus GitHub Actions (GHA) minutes, starts a new copilot branch, executes the .github/copilot-setup-steps.yml workflow (if one exists), and opens a new PR for the prompter (and CODEOWNERS) to review and merge.
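If you haven't created one yet, copilot-setup-steps.yml is an ordinary Actions-style workflow file whose single job must be named copilot-setup-steps; the agent runs it to prepare its sandbox before touching your code. A minimal sketch, assuming a Node project (the versions and install command are illustrative, not requirements):

```yaml
# .github/copilot-setup-steps.yml
name: "Copilot Setup Steps"
on: workflow_dispatch

jobs:
  # The job name must be exactly this for Coding Agent to pick it up
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Swap in whatever your project needs (pip install, mvn, etc.)
      - run: npm ci
```

The `workflow_dispatch` trigger also lets you run it manually from the Actions tab to verify the setup works before the agent relies on it.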

💡 ProTip: Every example here uses existing copilot-instructions—except the ones where I used a chat mode with a very clear role and explicit boundaries. If your results don’t look similar to these, start troubleshooting there.


Feature Documentation 📚

Stereotype or not, us devs are terrible documenters. It's not even that we can't write good documentation, it's just that on the list of all the random things to do for a new feature—docs usually fall right at the bottom.

Coding Agent is on standby as soon as I'm code-complete and pushed to GitHub. I hand it all off as a separate task in VS Code: update the README, pull the Jira story, do a gap scan, call out missed scenarios and edge cases, and sanity-check whether tests actually cover the error paths (not just line counts). While it's busy writing the documentation (that I no doubt would have glossed over in under 30 seconds), I'm doing everything else on the tidying-up pre-review prep list.

In this case, the Delegate to Coding Agent button in Copilot Chat comes in pretty handy. Results are accessible in the GitHub Pull Requests extension if you want to keep your focus in the IDE, too.

Screenshot prompt Coding Agent via VS Code Delegate to Coding Agent feature in Copilot Chat

Here it is again for your copying needs:

Go review the readme and other relevant documentation for this project and ensure it matches the current implementation. Remove anything no longer relevant and update other docs as needed. Make sure all relevant user and tech guides are updated in the /docs directory as well as systems diagrams that accurately reflect the current implementation. 

💡 ProTip: The Pull Requests extension does provide the same functionality as the web UI, but it's really not the most intuitive solution either. Plan to spend a little time up front figuring out the quirky ways to navigate. Once you have it down though, it's a decent alternative to pulling up a browser.


Why Not All the Documentation? 📚📚

There are some cases when the only doc in sight is a lonely README that's been collecting dust by itself in the corner since birth. That's exactly why I wrote the HLBPA (High-Level Big-Picture Architect) chat mode and an XML version designed especially for Coding Agent.

You can copy the raw XML from my awesome-github-copilot repo. Just paste that directly into Coding Agent from the GitHub UI and click Convert to file—no other edits (yet).

Screenshot GitHub.com prompt Coding Agent with

Once the chat mode is defined and in context, you can add your own prompt as usual. Here's one that I wrote explicitly to test the limits of the HLBPA chat mode. It worked so well that I saved it and reuse it often. It's smart enough to update anything that already exists, as long as you tell it where to look.

@high_level_architecture.xml this is a complicated app that I need to be brought up to speed on quickly. Your goal is to generate a comprehensive set of docs in the `/docs` folder that covers all major flows in the codebase, broken down into sensible sections per flow. First, this app is a part of a distributed infrastructure. Include a high level overview of where this app fits into the systems architecture, but also drill down to the flows from the time the app is first triggered until completion. Generate this information at both a sequence and flow level. It's also important to understand the data relationships that are used in this app and how that's different between input from other sources. Use ER diagrams to highlight this app’s primary purpose from a data standpoint in addition to the systems information. Next, provide a comprehensive analysis of the current state of testing for this app with a focus on any unit or integration tests. Include performance or other specialty tests, if they exist. Identify any areas of concern in the testing setup along with recommendations for improvement, if applicable. Fourth, provide a detailed analysis of the current state of this app versus desired best case scenarios. It should highlight both the things this app does well and include gaps in logic or design that may need attention now or could be enhanced later to provide significant benefits in the future. List these in order by impact and timeline of estimated amount of work. For any suggested improvement, include a T-shirt size amount of effort (XS, S, M, L, XLG, etc.). Finally the last report is a comprehensive high-level overview of all recent changes, deployments, versions/releases. Use git as needed, but only include items that have already been merged to `main` or commits explicitly included directly or squashed in a release version. Any other branches or dev work should be explicitly ignored. 
If there are any other recommendations for reports that may highlight specific edge cases not covered here then please also include them along with your analysis. 

I had a hard time finding a shareable repo that didn't already have documentation until I found a random tab still open with GitHub's new MCP Registry. Perfect! So this screenshot is a slimmed-down version of the above prompt. Expect to get even more than this if you execute the full thing at once.

You may want to break it down into smaller pieces though, unless you've got a solid hour to spend reviewing documentation. On the other hand, if you find yourself treading water in the deep end of the "ginormous app pool" and need answers now? Copy and paste after loading the HLBPA chat mode XML and then go refill your coffee. You're going to need it! ☕️

Screenshot example of generated documentation for this repo

💡 ProTip: You can adapt this prompt for just about anything. Research tasks are a breeze if you're given a Jira story with a small focus. Start up the MCP and tell it exactly what you're wanting and in which format. Coding Agent can handle most everything Mermaid lists in their documentation, even if you have to tweak it occasionally yourself.
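For a sense of what "handles most of Mermaid" means in practice, a sequence diagram request like the one in the Time Saver prompt below tends to come back as standard Mermaid along these lines (the participants and messages here are hypothetical, not from an actual run):

```mermaid
sequenceDiagram
    participant Client
    participant App as /controller/endpoint
    participant DB as Database
    Client->>App: POST request with payload
    App->>DB: validate and persist record
    DB-->>App: saved row + id
    App-->>Client: 201 Created with resource id
```

Anything that renders on GitHub.com renders in the generated docs too, so reviewing the diagrams straight from the PR diff works fine.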


Time Saver Version ⏱️

Having literally every single doc you can think of in a single round is nice if you already have a good idea of what you're getting into. For that mystery app that nobody remembers even sneaking into the party? You might want to be a bit more direct. The same HLBPA chat mode handles this scenario for me, too, just with a much more targeted prompt. Results are mostly the same, only smaller and much easier to manage when time is a huge factor.

Your task is to research functionality related to the endpoint accessible at `/controller/endpoint`, including how this may potentially interact with other systems. Identify any potential influencers to SLAs or places in the code that could have a direct impact if modified. Start with a generalized flowchart that explains what the system is doing. Also include a sequence diagram that clearly outlines the flow of data from input to database. Include anything else you determine to be immediately relevant in thoroughly explaining this functionality and use case. 

Behavior-Driven Test Specs 🧪

This is a relatively new prompt that I've adapted (with much more planned for later). The first part is plenty to get you started with a BDD setup, so you're not having to define individual use cases by hand. I borrowed the new GitHub MCP Registry for this test, too.

Perform a thorough search of this codebase acting as SDET tasked with both identifying and documenting a set of feature-driven Gherkin use cases for this repo. Future work will include automating integration tests using these `.feature` documents. These files should exist in a format easily digestible by any testing framework set up by this repo or complementary to if one does not already exist. It should exist in the /docs/feature folder for now. Also, include a summary report that notates any scenarios already covered by unit or integration tests, as well as opportunities for improvement using automated testing. 

Results were solid—5 pages of .feature files plus a summary of overlaps with existing tests and the biggest opportunities to improve.
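For context, the generated files follow standard Gherkin structure, so any framework that reads `.feature` files can pick them up later. A hypothetical sketch of the shape (the feature and steps are invented for illustration, not copied from the actual output):

```gherkin
Feature: Server registration
  As a registry user
  I want to register an MCP server
  So that other users can discover it

  Scenario: Successful registration with a valid manifest
    Given a server manifest that passes schema validation
    When I submit the registration request
    Then the server appears in the registry listing
    And the response includes the assigned server id

  Scenario: Rejected registration with a duplicate name
    Given a server already registered under the same name
    When I submit the registration request
    Then the request is rejected with a conflict error
```

Because each scenario is self-contained, it's easy to cherry-pick the ones worth automating first and leave the rest as living documentation.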

Screenshot GitHub.com PR review of Gherkin-style feature documents generated with Coding Agent

🦄 It did roast the coverage notes a bit, so I'm not sure if I'd want the entire report hanging out long-term in my OS repo. 10 out of 10 on the "motivation" points, though!


User Guides (with Screenshots) 🖼️

🦄 I should probably start with a disclaimer: On the scale of "justifiable fun" that makes up the full collection of Ashley-hack-time-projects, UI falls somewhere between "great excuse to play with Leonardo" and "meh—AI can totally handle that without me!" While I am acutely aware of the various failures in this system, I've yet to invent a better one that I can actually live with for more than a few days. 🤷‍♀️🤣

GitHub bundles the Playwright MCP for you and it's automatically accessible to Coding Agent in your UI repo. If you don't already have Playwright integration tests set up, it will add a config file or two to make that work. From there it can crawl your UI and draft user-facing guides with on-demand screenshots.

Prompt I used:

Write user documentation for the UI functionality using Playwright to take screenshots as needed. Store everything related to user-specific content in a repo `/docs` directory with appropriate sub-folders and a how-to guide using stubbed data where appropriate. 

It produced twelve pages of user docs with six different screenshots woven in, including one mobile perspective. A massive time-saver over trying to do this manually!

Screenshot GitHub.com results Coding Agent with user guides plus screenshots

💡 One gotcha: The diff preview can look broken because main is the baseline and the screenshots don’t exist there yet. Open the file view to confirm the links are fine. As long as the images appear on their own and the relative links are accurate, you're probably fine to merge.


Fix a Small Bug From a Screenshot 🐞

You know those bugs that make absolutely zero sense when someone tries to explain them, but then you see one and suddenly the whole situation is crystal clear? Coding Agent can handle every drop of that same energy. Sometimes a screenshot and a "fix this" prompt beat trying to describe a problem quickly and with any degree of accuracy. Use a GitHub Issue as your prompt, drop in a screenshot showing the exact problem, and assign it to Copilot.

Screenshot GitHub.com using Issue with a screenshot as a Coding Agent prompt

Wait for the eyes emoji 👀 to pop up at the bottom—that means Coding Agent picked up the task and started work.

In this case, I was having trouble getting Copilot to recognize the exact errors I was referring to without a direct copy-paste. This way is easier, with zero formatting nightmares.

Fix these security findings for this repo. Do not overengineer any solution, your goal is to correct the finding with the simplest, minimal change possible. 

🦄 Of course I don't have a great example of a UI fix, even though I distinctly remember saving one somewhere at some point. It has since disappeared, so I recreated it with the backend equivalent. 😉


Still Want More? 🔭

Between the time I first started this post and when I hit "Publish", GitHub added two more potential Coding Agent portals to the list. So here are all the places you might not think to access Coding Agent from (I haven't tested all of them, but I've given most a spin at least once):

  1. Using the GitHub MCP server's #create_pull_request_with_copilot tool
  2. The GitHub Pull Requests extension in VS Code defines a #copilotCodingAgent tool
  3. In the IDE use the Delegate to Coding Agent button directly in the Copilot Chat interface
  4. The GitHub Copilot Raycast extension adds the option to your Mac toolbar
  5. The GitHub App for MS Teams can now give you direct access to Coding Agent
  6. Respond to PRs on the go or assign issues directly with the GitHub mobile app
  7. If you work with anything Azure, Azure Boards are now integrated with Coding Agent
  8. Test out the GitHub Copilot API to create an issue and assign it to Copilot programmatically
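To sketch that last option: the standard REST endpoint for opening an issue is `POST /repos/{owner}/{repo}/issues`. Assigning the result to Copilot is the part that varies by setup; the assignee login below is an assumption (GitHub's docs point at the GraphQL API for the actual Copilot assignment), so treat this as a starting point rather than a recipe:

```python
import json
from urllib import request


def build_issue_request(owner: str, repo: str, title: str, body: str, token: str):
    """Build (but don't send) a GitHub REST request that opens an issue.

    Note: actually assigning the issue to Copilot may require the GraphQL
    API (suggestedActors / replaceActorsForAssignable); the REST assignee
    shown here is a hedged assumption, not a confirmed login.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    payload = {
        "title": title,
        "body": body,
        # Hypothetical assignee — verify against your org's Copilot setup
        "assignees": ["copilot-swe-agent"],
    }
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    return req, payload


# Build the request without sending it (send with request.urlopen(req) yourself)
req, payload = build_issue_request(
    "me", "my-repo", "Fix flaky login test", "See attached screenshot", "ghp_XXXX"
)
```

Keeping the build step separate from the send step makes it easy to log or dry-run the payload before spending a premium request.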

Considering this is Dohmke's last GitHub Universe as CEO, I don't expect this cadence to slow any time soon. 🫟

🦄 Think you have a scenario Coding Agent should be able to handle, but for one reason or another the results are a little off? Maybe a use case you haven’t been able to wire up yet? Post a comment below—I'm happy to take a stab at it! Feeling a little shy? DM my LinkedIn under this same username.


🛡️ This piece was drafted by me, nudged by AI, and

Yes—the running joke about developers and documentation was intentional (and ChatGPT wants you to know that). The real magic is in turning those stereotypes into working, reusable prompts. True automation isn't far away for small, limited contexts, either!

Top comments (4)

Cyber Safety Zone

Fantastic write-up — this really demystifies how to structure prompts in a way that production-level Coding Agents can reliably act on. ✨

A few things that stood out to me:

  • The emphasis on "clear role definitions" and "explicit boundaries" is spot on. Agents perform best when given guardrails, not vague wishes.
  • I love how you layer in real-world tasks (e.g. “review documentation,” “update README”) rather than just toy examples — it makes the techniques feel immediately usable.
  • The modular prompt structure (i.e. breaking big tasks into smaller, well-scoped chunks) is something I’ll definitely adopt in my own workflows.
  • Your “ProTips” sprinkled throughout are gold — little details like context anchoring or fallback instructions often make or break prompt reliability.

Thanks for sharing this. I’d be curious: have you ever run into a scenario where a well-constructed prompt still failed because of domain knowledge or context gaps? How do you debug those cases?

Ashley Childress

Thank you 🙏 To answer your question—yes. All the time! I have a chat saved somewhere from when I'd spent weeks debugging JVM-layer database threads that had turned into permanent specters in a legacy app that wanted a sacrifice of some sort before you could safely touch it anyway. I had finally figured out a way to stop those guys from spawning like they're playing Squid Game's Red Light, Green Light! every single time I blinked, and then out of nowhere Copilot decided that it could "improve" my connection pool settings. 😑

My response was something along the lines of, "Consider everything that was just modified and then take into account the SLAs for this application..." and continued to detail estimated trigger times, load volume, SLO/SLA, system interactions, known bottlenecks, and anything else relevant I could think to throw at it. The response was something like, "That changes everything! This won't work at all given this new information!" Yes, Copilot, that's better... 🙄🤣

Your repo instructions absolutely need all of that information if you want to successfully use Copilot (or any other LLM) to create solutions at scale. The MS version is a great starting point, but it's only half the story. It's not even that enterprise code is any more novel than the public data Copilot was trained on, but once you push enterprise loads into a system the semantics of the whole setup may change drastically. If those types of definitions are missing in the default instructions, you've already left out half the context of every solution you ask Copilot to build.

Just add in all the information you'd give to a new senior dev who's taking over as your on-call prod support (without backup of any kind). What will they need to know when things start screaming? That's exactly what your instructions are missing.

I have an Instructionalist chat mode I designed specifically to quiz the senior+ devs who literally know everything documented nowhere, but also don't even think about those things until somebody asks the question. I'm guilty of it all the time! It could definitely use some more testing and there's still a couple things I want to add to it, but you're welcome to what's there if you want to try it out. Docs are in the repo, but I have another copilot post that covers those instructions in detail, too.

Roshan Sharma

Nice breakdown! The HLBPA prompt is a game-changer for generating docs. I’ve been using it to auto-generate architecture overviews and it’s spot on every time. Definitely saves a ton of time on those long docs.

Ashley Childress

Thanks! Glad it works for more than just me 😀 Last I checked, the Git history could use a bit more hand-holding, but the systems side it has down pretty solid at this point. If you think of anything else to add, let me know!