You've seen those fancy AI bots that answer GitHub issues. Most of them require subscriptions, SaaS accounts, or complex setups.
What if you could build one yourself with just your LLM API key?
With Perstack, you can. Copy two files, add one secret, done.
See it in action: This issue was answered by the bot reading the actual codebase.
## What You Get
An AI bot that:
- 👀 Reacts to show it's processing
- 🔍 Actually reads your codebase (not just guessing)
- 💬 Posts answers with activity logs
- 💰 Costs only what you use (your LLM API)
Activity log example:
```
💭 I need to understand how the runtime state machine works...
📁 Listing: packages/runtime/src
📖 Reading: runtime-state-machine.ts
💭 Found the state transitions...
✅ Done
```

## Setup (5 Minutes)
### Step 1: Copy Files
Copy from the example repo:
```
your-repo/
├── .github/workflows/issue-bot.yml   ← workflow
└── scripts/checkpoint-filter.ts      ← formats output
```

The workflow runs on:
- A new issue being opened
- A comment containing `@perstack-issue-bot`
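For orientation, a minimal `issue-bot.yml` might look something like the sketch below. The trigger events and secret name come from this post; the job steps, the mention check, and the exact `perstack` invocation are assumptions — copy the real file from the example repo rather than using this verbatim.

```yaml
# Hypothetical sketch of .github/workflows/issue-bot.yml — check the example repo
# for the real file. Triggers and secret name follow this post; steps are assumptions.
name: issue-bot

on:
  issues:
    types: [opened]
  issue_comment:
    types: [created]

jobs:
  answer:
    # Run on new issues, or on comments that mention the bot
    if: github.event_name == 'issues' || contains(github.event.comment.body, '@perstack-issue-bot')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx perstack run my-issue-bot "Answer issue #${{ github.event.issue.number }}"
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPO: ${{ github.repository }}
          ISSUE_NUMBER: ${{ github.event.issue.number }}
```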
### Step 2: Add Secret
Settings → Secrets → Actions → `ANTHROPIC_API_KEY`
That's it. Open an issue and watch it work.
## Customize It
Want custom behavior? Here's the agent-first development workflow:
### 1. Create `perstack.toml`
```toml
model = "claude-sonnet-4-5"

[provider]
providerName = "anthropic"

[experts."my-issue-bot"]
description = "Custom issue bot for my project"
instruction = """
You are an issue bot for the Acme project.

## Rules
- Always check docs/ directory first
- Add labels: use `gh issue edit --add-label`
  - "bug" for bug reports
  - "feature" for feature requests
  - "question" for questions
- If the issue is about authentication, mention @security-team
- Keep answers under 500 words
"""

[experts."my-issue-bot".skills."@perstack/base"]
type = "mcpStdioSkill"
command = "npx"
packageName = "@perstack/base"
requiredEnv = ["GH_TOKEN", "GITHUB_REPO", "ISSUE_NUMBER"]
```

### 2. Test Locally with `perstack start`
Make sure you have the GitHub CLI (`gh`) installed and authenticated.
```shell
export ANTHROPIC_API_KEY=your-key
export GH_TOKEN=$(gh auth token)
export GITHUB_REPO=owner/repo
export ISSUE_NUMBER=123

npx perstack start my-issue-bot "Answer issue #$ISSUE_NUMBER"
```

This opens an interactive TUI where you can watch the bot think, read files, and generate answers in real time. Tweak the instructions, run again, iterate.
### 3. Push When Happy
Update the workflow to use your config:
```yaml
run: npx perstack run --config ./path/to/perstack.toml my-issue-bot "Answer issue #$ISSUE_NUMBER"
```

Push to your branch, and your customized bot is live.
This is agent-first development — define behavior in text, test interactively, deploy when ready. No code changes, just prompts.
## Under the Hood
### Why Agent-First Works
The magic is in Perstack's runtime. Your Expert definition — just text in a TOML file — gets executed by the runtime, not compiled into code.
This means:
- Same behavior everywhere: Local machine, CI, production — the Expert runs identically
- Iterate without rebuilding: Change the prompt, run again, no compile step
- Portable: Push to a branch, it just works
The runtime handles everything: connecting to LLMs, managing tool calls, streaming events. You just define what the Expert should do.
### Skills & MCP
Experts interact with the world through Skills — MCP servers that expose tools.
`@perstack/base` provides file ops (`readTextFile`, `listDirectory`), command execution (`exec`), and more. The runtime spins up the MCP server, passes environment variables, and routes tool calls automatically.
```toml
[experts."my-bot".skills."@perstack/base"]
requiredEnv = ["GH_TOKEN"]  # Runtime passes this to the MCP server
```

### Event Stream
Everything is observable. The runtime emits JSON events to stdout:
```json
{"type":"callTool","toolCall":{"toolName":"readTextFile","args":{"path":"src/index.ts"}}}
{"type":"completeRun","text":"Here's the answer..."}
```

Pipe this to a script for real-time UIs, logging, or integration with your systems. See `checkpoint-filter.ts` for an example.
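As a rough sketch of what such a script can do, here is a small event formatter. The event shapes are inferred from the two sample lines above; the function name and the output formatting are my own, not the actual `checkpoint-filter.ts`:

```typescript
// Hypothetical event filter in the spirit of checkpoint-filter.ts.
// Event shapes are assumptions based on the sample events shown above.
interface RuntimeEvent {
  type: string;
  toolCall?: { toolName: string; args: Record<string, unknown> };
  text?: string;
}

// Turn one raw JSON event line into a readable log line, or null to skip it.
function formatEvent(line: string): string | null {
  let event: RuntimeEvent;
  try {
    event = JSON.parse(line);
  } catch {
    return null; // not JSON — ignore stray output
  }
  switch (event.type) {
    case "callTool":
      return `🔧 ${event.toolCall?.toolName}(${JSON.stringify(event.toolCall?.args)})`;
    case "completeRun":
      return `✅ ${event.text}`;
    default:
      return null; // drop event types we don't render
  }
}

const sample =
  '{"type":"callTool","toolCall":{"toolName":"readTextFile","args":{"path":"src/index.ts"}}}';
console.log(formatEvent(sample)); // 🔧 readTextFile({"path":"src/index.ts"})
```

Reading stdin line by line and feeding each line through a function like this is enough for a live activity log in CI.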
## What's Perstack?
Think npm for AI agents. Define modular agents, publish to a registry, compose them like packages.
No vendor lock-in. No subscriptions. Just your code and your API key.
I'm building this — feedback welcome!