1) The Problem: My Code Was Slowly Rotting
After months of using general AIs, I kept seeing the same frustrating pattern: duplicated logic everywhere, alternating libraries for the same job, inconsistent patterns that made no sense, weak tests, and a structure that looked like it was designed by committee (a very confused committee).
2) What I Tried (And Why It Didn't Work)
- Spec-first tools like Kiro: They shipped multiple stories beautifully, but when I needed to refactor or fix bugs or change direction? Total disaster. It was like trying to steer a cruise ship with a bicycle.
- Mixing tools: Kiro for greenfield + Cursor for edits + Claude for cleaner code helped, but it was like juggling chainsaws. Stability was still terrible.
3) The Breakthrough: Specialist Agents + Organized Docs
Then I had a lightbulb moment: what if I gave AI the same structure as a real development team? I created sub-agents with clear roles under `.claude/agents`, and anchored each to specific docs:
- architect → owns `docs/arc42/*` (the system design stuff)
- tech-lead → owns `docs/developer-guide/*` (standards + templates)
- developer → follows `CLAUDE.md` + dev guide (the actual coding)
- react-ui-designer → Next.js/shadcn; `src/app/*`, `src/components/*`
- manual-tester → test plans in `docs/product/*`
- product-owner → epics/stories in `docs/product/*`
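To make the mapping concrete, here's a sketch of how this might look on disk. The directories are the ones named above; the individual agent file names are illustrative, not prescribed:

```
.claude/
  agents/
    architect.md
    tech-lead.md
    developer.md
    react-ui-designer.md
    manual-tester.md
    product-owner.md
docs/
  arc42/
  developer-guide/
    templates/
  product/
src/
  app/
  components/
CLAUDE.md
```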
4) How It Actually Works (With Code Examples)
Here's how I define each agent:
```markdown
---
name: architect
description: "Use this agent when you need to create, update, or maintain architectural documentation and decisions for the project."
model: sonnet
color: purple
---

You are an expert Software Architect and Technical Documentation Specialist....
```

And here's what the generated code looks like - consistent service patterns:
```typescript
import { z } from 'zod';
import { httpClient } from '@/src/lib/http-client';

export class TopicService {
  private readonly path = '/api/topics';

  async list() {
    const response = await httpClient.get(this.path);
    return z.array(z.object({ id: z.string(), name: z.string() })).parse(response.data);
  }
}
```

ViewModel + schema validation harmony:
```typescript
import { z } from 'zod';

export const TopicSchema = z.object({ id: z.string(), name: z.string().min(1) });
export type Topic = z.infer<typeof TopicSchema>;

export class TopicViewModel {
  private state: Topic[] = [];

  setTopics(input: unknown) {
    this.state = z.array(TopicSchema).parse(input);
  }

  get topics() {
    return this.state;
  }
}
```

...

5) My Documentation Strategy (The Key to Success)
- Keep docs short and focused: arc42/dev-guide stays concise and references code; put runnable examples in `docs/developer-guide/templates/*`
- Break up big docs: Shard large markdowns into small files for better agent navigation and recall
- Separate concerns: Keep agent definitions reusable and separate from working docs
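As a sketch of what a runnable example under `docs/developer-guide/templates/*` could look like (the file name and contents here are illustrative, not my exact templates), a service template might pair a one-line rule with the code it standardizes:

```
<!-- docs/developer-guide/templates/service.md (illustrative) -->
# Service template

One class per resource; zod validation at the boundary;
httpClient for transport.

    import { z } from 'zod';
    import { httpClient } from '@/src/lib/http-client';

    const ExampleSchema = z.object({ id: z.string() });

    export class ExampleService {
      private readonly path = '/api/examples';

      async list() {
        const response = await httpClient.get(this.path);
        return z.array(ExampleSchema).parse(response.data);
      }
    }
```

Keeping the template next to the prose rule means an agent that reads the dev guide always sees a concrete, copyable instance of the pattern, not just a description of it.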
6) The Results (Spoiler: It Actually Works)
- Quality compounds instead of decays: My codebase gets better over time, not worse
- Refactors and bug fixes get cheaper: Changes are easier because patterns are consistent
- I still review/fix, but my effort drops over time: As agents align with the repo, they make fewer mistakes
I'll publish the exact documentation method per agent in a follow-up, plus the specific design patterns I followed. If your AI projects are degrading after a few prompts (like mine were), this approach might be worth trying.
Top comments (1)
This is exactly the help I am in great need of! I have been using Cursor, Warp, Kiro, and CoPilot in hopes of successfully building my dream. It started out great, very promising! But fast forward three months, and my project is a complete mess! Not to mention having to switch deployment platforms after running out of free resources on one and migrating to another... Not a good idea now that I'm actually learning what I'm doing instead of just trusting the AI to handle it all. Looking forward to the follow-up!