
# Benchmarks

Reproducible performance benchmarks for key backend engineering decisions. Each benchmark isolates one variable and measures its impact.

## Available Benchmarks

| # | Benchmark | What It Measures | Key Finding |
|---|-----------|------------------|-------------|
| 01 | Thread vs Async vs Event Loop | Memory and CPU cost per concurrent task | |
| 02 | TCP vs HTTP Overhead | Protocol overhead per request | |
| 03 | JSON vs Protobuf | Serialization speed and wire size | |
| 04 | DB Indexing Impact | Query time with/without index | |
| 05 | N+1 vs Batching | Query count impact on latency | |
| 06 | Cache vs No Cache | Cache hit rate vs effective latency | |

## Running Benchmarks

Each benchmark directory contains a README.md with complete, runnable code. Requirements: Python 3.8+ (standard library only) unless otherwise specified.
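The benchmark READMEs contain the authoritative code; as a rough sketch, a stdlib-only timing harness might look like this. The `bench` helper and the JSON workload are illustrative stand-ins, not the project's actual harness:

```python
import json
import statistics
import time

def bench(fn, iterations=10_000):
    """Call fn() repeatedly and return per-call latencies in milliseconds."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

# Example workload: a JSON round-trip, standing in for any benchmark body.
payload = {"id": 1, "tags": ["a", "b"], "nested": {"x": 1.5}}
samples = bench(lambda: json.loads(json.dumps(payload)))
print(f"p50={statistics.median(samples):.4f} ms over {len(samples)} calls")
```

`time.perf_counter()` is used rather than `time.time()` because it is monotonic and has the highest available resolution for interval timing.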

## Interpreting Results

All benchmarks report:

- p50 (median): the typical case
- p99: tail latency (the slowest 1% of requests)
- Memory usage: peak heap during the benchmark
- Throughput: requests per second at steady state

Run each benchmark 3 times and take the median. Results vary by hardware; focus on relative comparisons, not absolute values.
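The percentile summary and median-of-three-runs procedure can be sketched with the stdlib `statistics` module (available since Python 3.8, matching the stated requirement). The `summarize` helper and the sample data below are hypothetical:

```python
import statistics

def summarize(latencies_ms):
    """Reduce a list of latency samples to (p50, p99)."""
    p50 = statistics.median(latencies_ms)
    # quantiles(n=100) returns 99 cut points; index 98 is the 99th percentile.
    p99 = statistics.quantiles(latencies_ms, n=100)[98]
    return p50, p99

# Median of three runs, as recommended above (made-up sample data).
runs = [[1.0, 1.2, 9.0], [1.1, 1.3, 8.0], [0.9, 1.4, 10.0]]
p50s = [summarize(run)[0] for run in runs]
print("median p50 across runs:", statistics.median(p50s))
```

Taking the median across runs, rather than the mean, keeps one outlier run (a GC pause, a background process) from skewing the reported number.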

## 📚 Related Topics