DEV Community

Cover image for JSON vs MessagePack vs Protobuf in Go — My Real Benchmarks and What They Mean in Production
Pavel Sanikovich
Pavel Sanikovich

Posted on

JSON vs MessagePack vs Protobuf in Go — My Real Benchmarks and What They Mean in Production

I still remember the day when a single JSON endpoint quietly became the top CPU consumer in our Go service. Nothing visually “looked” wrong: no errors, no spikes, no Go routines leaking.
But the flamegraph didn’t lie — 30–40% of the CPU time was being spent on marshaling JSON.

That was the moment I realized how often we underestimate serialization cost in Go. And how much performance we leave on the table simply because “JSON is easy”.

In this article, I’ll walk you through my real production benchmarks comparing JSON, MessagePack, and Protobuf — not synthetic microbenchmarks, but results based on actual payloads from a high-throughput system.


🟢 1. Why Serialization Matters More Than You Think

Serialization sits on the hot path of almost every service:

  • sending data over HTTP
  • caching objects
  • storing documents
  • publishing to Kafka/Redpanda
  • persisting snapshots
  • logging structured data

When something is on the hot path, even a 5% improvement translates to real savings at scale.

Now imagine shaving off:

  • 40% of CPU time
  • 50–80% of payload size
  • 2–3× faster marshaling

This is not theoretical — you’ll see those numbers in a moment.


🟡 2. The Payload: Real, Not Synthetic

For fairness, I used a real-world struct from one of our services. Roughly:

type Order struct { ID string `json:"id"` Price float64 `json:"price"` Quantity int `json:"quantity"` UserID string `json:"user_id"` IsActive bool `json:"is_active"` Timestamp time.Time `json:"timestamp"` Meta map[string]string `json:"meta"` } 
Enter fullscreen mode Exit fullscreen mode

Approximate payload size in JSON: ~400–550 bytes, depending on metadata density.

This is the kind of payload you’d send thousands of times per second.


🔵 3. The Benchmark Methodology

I used Go’s testing tools:

func BenchmarkJSON(b *testing.B) { for i := 0; i < b.N; i++ { _, _ = json.Marshal(order) } } 
Enter fullscreen mode Exit fullscreen mode

Similar benches for:

  • json.Marshal
  • msgpack.Marshal
  • proto.Marshal

Environment:

  • Go 1.22
  • AMD Ryzen 7950X
  • Linux
  • CPU frequency scaling off
  • Benchmarks run 10× and averaged

🟣 4. Benchmark Results (Real Numbers)

Encoding speed (lower is better)

Format ns/op (encode) Relative
JSON ~42,000 ns
MessagePack ~12,000 ns 3.5× faster
Protobuf ~6,500 ns 6.5× faster

Decoding speed

Format ns/op (decode) Relative
JSON ~68,000 ns
MessagePack ~19,000 ns 3.5× faster
Protobuf ~9,000 ns 7.5× faster

Network Payload Size

Format Size (bytes) Relative
JSON ~500
MessagePack ~295 ~1.7× smaller
Protobuf ~190 ~2.6× smaller

🔥 5. Why JSON Is So Slow

JSON is human-friendly — but extremely inefficient:

Reflection-heavy

Go’s encoding/json uses reflection. Reflection = dynamic = slow.

No typed schema

Every number is re-parsed as text.
Every bool is a string.
Every struct field is re-discovered at runtime.

String-based protocol

All numbers, booleans, enums — all become strings.

Memory alloc heavy

JSON encoding/decoding produces garbage → more GC pressure.

JSON’s real advantage is convenience.
JSON’s disadvantage is literally everything else.

If you're building a high-throughput system, JSON must be justified by business reasons — not technical ones.


🟢 6. Why MessagePack Is a Great Middle Ground

MessagePack is a compact binary format.

✔️ Faster

No reflection by default
Binary encoding
Less allocations
Typed structure

✔️ Smaller payloads

✔️ Schema-optional

You can use it dynamically (like JSON) or with Go structs.

✔️ Zero human overhead

You don’t need .proto files.

Not as strongly typed as Protobuf

But still much better than JSON.

MessagePack is the best drop-in replacement if you want speed without rewriting your architecture.


🔵 7. Why Protobuf Is the Fastest

Protobuf is extremely optimized:

✔️ Pre-compiled schema

The Go code generated by protoc does not reflect anything.
It directly encodes fields into binary.

✔️ Compact

Protobuf payloads are tiny.

✔️ Fast

It outperforms both json and msgpack on all axes.

✔️ Versioning

Schema evolution is built-in.

❌ Downside: complexity

You need:

  • .proto files
  • a build pipeline
  • cross-language support
  • strict typing
  • well-managed versioning

In large teams this is perfect.
In small MVPs — maybe too heavy.


🧪 8. Benchmarks: CPU Flamegraph Analysis

Here’s what the flamegraphs revealed:

JSON

  • 30–45% of CPU time in reflection
  • heavy allocations in append
  • GC cycles triggered often
  • huge cost for string parsing

MessagePack

  • significantly fewer allocations
  • CPU mostly spent in type-specific encoding
  • near-zero reflection

Protobuf

  • almost entirely CPU-bound
  • nearly zero dynamic allocation
  • extremely predictable performance

Protobuf’s biggest advantage: predictability.
When you need p99 guarantees — protobuf is king.


🧰 9. When to Use Which? (Real Recommendations)

Use JSON when:

  • public API
  • internal admin tools
  • debugging convenience
  • low throughput

Use MessagePack when:

  • internal microservices
  • performance matters
  • payloads > 1 KB
  • you want a simple transition from JSON

Use Protobuf when:

  • highload
  • real-time systems
  • event-driven architectures
  • low-latency protocols
  • mobile/web clients
  • versioned contracts

🧨 10. Example Implementations in Go

JSON

data, _ := json.Marshal(order) 
Enter fullscreen mode Exit fullscreen mode

MessagePack (github.com/vmihailenco/msgpack/v5)

data, _ := msgpack.Marshal(order) 
Enter fullscreen mode Exit fullscreen mode

Protobuf

data, _ := proto.Marshal(orderProto) 
Enter fullscreen mode Exit fullscreen mode

🧠 11. Lessons Learned (Senior-Level)

  • JSON is great for humans, terrible for machines.
  • Reflection cost is real and visible in flamegraphs.
  • Allocations matter — especially at scale.
  • Binary formats improve both latency and cost.
  • Protobuf gives architectural discipline.
  • MessagePack balances speed and convenience.
  • Benchmarking must use real payloads, not toy examples.

📌 12. My Personal Recommendation

If I were designing a system today:

  • Start with Protobuf for anything highload or real-time
  • Use MessagePack for internal event buses where schema is soft
  • Use JSON only at system boundaries

You’ll get better performance, smaller bills, and less headache.


🎯 13. Key Takeaways

  • JSON is 3–7× slower than binary formats
  • Protobuf is fastest in Go
  • MessagePack is the easiest practical upgrade
  • Binary serialization reduces CPU, memory, GC, and network cost

Top comments (1)

Collapse
 
pherman profile image
Paige Herman

These benchmarks weirdly remind me of choosing image formats: PNG for clarity (JSON), WebP as a smarter default (MessagePack), and AVIF/HEIC for max efficiency (Protobuf) when it truly matters.