Alexsander Hamir

Prof: A Structured Way to Manage and Compare Go Profiles

Go’s philosophy emphasizes simplicity and readability, lowering the barrier for newcomers to understand and contribute to a codebase compared to many other languages. While pprof is already a powerful and user-friendly tool, effective profiling still depends on good habits that take experience to build: organizing previous runs, documenting performance changes as you go, and keeping track of what was improved and when.

Without that experience, it’s easy to end up with a clutter of files and no clear history, forcing you to dig through old commits just to recall what you did minutes earlier.

That’s why I built Prof — a tool designed to bring structure, clarity, and speed to Go performance workflows, making life easier for both beginners and experienced engineers alike.

The Common Way

The commands below leave organization and documentation entirely up to the developer. And to be fair, these tools already do a lot — but still, why not encourage a more structured approach? Why not simplify the profiling workflow so it doesn’t require running a chain of commands back and forth, or having each team build their own custom scripts around it?

go test -bench=BenchmarkMyFunc -cpuprofile=cpu.out
go tool pprof -top cpu.out > results.txt
go tool pprof -list=MyFunc cpu.out
# Make changes, repeat...
# Hours later: "Wait, was that the baseline or the optimized version?"
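For readers newer to Go profiling, the commands above assume a benchmark function already exists in a _test.go file. Here is a minimal sketch of what that might look like; BenchmarkMyFunc and MyFunc are hypothetical names used only for illustration, not code from Prof.

// myfunc_test.go: a minimal, hypothetical benchmark that the
// `go test -bench=BenchmarkMyFunc -cpuprofile=cpu.out` call above would profile.
package mypkg

import (
    "strings"
    "testing"
)

// MyFunc stands in for whatever code you are actually optimizing.
func MyFunc() string {
    return strings.Repeat("x", 1024)
}

func BenchmarkMyFunc(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = MyFunc()
    }
}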

The New Way

Prof solves this with a simple idea: treat profiling sessions like a well-structured codebase — organized and easy to navigate.

Instead of wrestling with scattered files, run one command:

prof auto \
  --benchmarks "BenchmarkGenPool" \
  --profiles "cpu,memory,mutex,block" \
  --count 10 \
  --tag "baseline"
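One detail worth noting: the mutex and block profiles requested above only contain data when the benchmark actually creates contention. The sketch below is a hypothetical stand-in for BenchmarkGenPool (the real benchmark lives in the author's project and will look different); it uses a deliberately naive mutex-guarded pool under parallel load so that all four profile types have something to report.

// genpool_bench_test.go: a hypothetical stand-in for BenchmarkGenPool,
// shaped so that cpu, memory, mutex, and block profiles all collect data.
package pool

import (
    "sync"
    "testing"
)

// mutexPool is a deliberately naive pool guarded by a single mutex,
// which guarantees some lock contention under parallel load.
type mutexPool struct {
    mu   sync.Mutex
    free [][]byte
}

func (p *mutexPool) Get() []byte {
    p.mu.Lock()
    defer p.mu.Unlock()
    if n := len(p.free); n > 0 {
        buf := p.free[n-1]
        p.free = p.free[:n-1]
        return buf
    }
    return make([]byte, 4096) // allocation shows up in the memory profile
}

func (p *mutexPool) Put(buf []byte) {
    p.mu.Lock()
    p.free = append(p.free, buf)
    p.mu.Unlock()
}

func BenchmarkGenPool(b *testing.B) {
    p := &mutexPool{}
    b.ReportAllocs()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            buf := p.Get()
            buf[0] = 1
            p.Put(buf)
        }
    })
}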

That single prof auto command replaces dozens of manual steps, creating a neatly organized dataset:

bench/baseline/
├── description.txt               # Your notes for this run
├── bin/BenchmarkGenPool/         # Binary profile files (e.g., .pprof)
├── text/BenchmarkGenPool/        # Human-readable reports (e.g., top, list, disasm)
│   ├── cpu_functions/            # ┐
│   │   ├── <func1>.txt           # │
│   │   ├── <func2>.txt           # │ Function-level CPU performance data
│   │   └── ...                   # ┘
│   └── memory_functions/
│       ├── <func1>.txt           # Function-level memory performance data
│       ├── <func2>.txt
│       └── ...

Now, instead of rerunning commands just to inspect a function’s performance, you can simply open the relevant file or search by its name — everything is structured and ready to explore.

Prof also offers an option to skip wrapping go test, giving users the flexibility to run benchmarks however they prefer while still benefiting from Prof’s organization and analysis.

Profiling Diffs at the Function Level

Thanks to Prof’s structured approach, you no longer need to manually track performance changes between optimizations. Simply pass the tags you want to compare, and Prof will generate the diffs for you — available in HTML, JSON, or terminal output.

prof track auto \
  --base "baseline" \
  --current "optimized" \
  --profile-type "cpu" \
  --bench-name "BenchmarkGenPool" \
  --output-format "summary-html"

Get clear, actionable insights:

⚠️ Top Regressions:

  • internal/cache.getShard: +200.0% (0.030s → 0.090s)
  • sync.Pool.Get: +100.0% (0.010s → 0.020s)

✅ Top Improvements:

  • encoding/json.Unmarshal: -95.0% (0.100s → 0.005s)
  • pool/isFull: -85.0% (0.020s → 0.003s)

This gives you a more organized and automated way of doing performance work.

Contributions Welcome

Instead of each team building their own scripts, we can come together to create a tool that helps developers handle performance work more easily — whether under pressure or as part of everyday optimization.

Prof aims to be that shared foundation, making profiling more accessible, consistent, and reliable across teams.

🔗 Prof Repository
🔗 LinkedIn
