# Thundra

High-fidelity HTTP benchmarking for engineers who care about real numbers.

- Maintainer: codewithevilxd
- Portfolio: nishantdev.space
- Email: codewithevilxd@gmail.com
## Why Thundra

Thundra is a Rust-native HTTP benchmarking toolkit (CLI + library) designed for practical load testing:

- sustained concurrency with low runtime overhead
- precise latency percentiles (p50, p90, p95, p99)
- dynamic request generation and rate shaping
- hook system for custom retry/circuit-breaker behavior
- human-readable or machine-readable JSON output
Use it in two modes:
- as a CLI for fast terminal-driven benchmarks
- as a library in integration tests and performance pipelines
## Demo

### Quick Usage Preview (GIF)

### Full Demo Video (MP4)

Download or play the full terminal demo video.
## Table of Contents
- Install
- Quick Start
- CLI Deep Dive
- Library Deep Dive
- Rate Control Patterns
- Hooks and Retry Control
- Result Model
- Performance Workflow
- Examples
- Development
- Roadmap
## Install

### CLI

```sh
# assumes the crate is published to crates.io under this name
cargo install thundra
```

### Library

```toml
[dependencies]
thundra = "1"
tokio = { version = "1", features = ["full"] }
```

### Build from source

```sh
cargo build --release
```

The binary will be available at:

```
target/release/thundra
```
## Quick Start

### 1) Fast CLI benchmark

```sh
# 1,000 requests with 50 concurrent workers (URL is illustrative)
thundra https://example.com -n 1000 -c 50
```

### 2) JSON output for automation

```sh
thundra https://example.com -d 30s -o json > results.json
```

### 3) Minimal library usage

```rust
// API shape inferred from this README; argument types are illustrative.
use std::time::Duration;
use thundra::Benchmark;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let bench = Benchmark::builder()
        .url("https://example.com")
        .requests(100)
        .concurrency(10)
        .timeout(Duration::from_secs(10))
        .build()?;

    let results = bench.run().await?;
    println!("{results:#?}");
    Ok(())
}
```
## CLI Deep Dive

### Core syntax

```sh
thundra [OPTIONS] <URL>
```

### High-value commands

```sh
# URLs below are illustrative

# fixed request budget
thundra https://api.example.com -n 1000 -c 50

# duration-driven run
thundra https://api.example.com -d 1m -c 100

# POST workload
thundra https://api.example.com/users -m POST \
  -H "Content-Type: application/json" \
  -b '{"name":"test"}'

# insecure TLS for internal environments only
thundra https://internal.example.com -k
```
### Flags reference

| Flag | Meaning | Default |
|---|---|---|
| `-c, --concurrency` | concurrent workers | `10` |
| `-n, --requests` | total-requests stop condition | none |
| `-d, --duration` | duration stop condition (`10s`, `1m`) | none |
| `-r, --rate` | fixed request rate (req/s) | none |
| `-m, --method` | HTTP method | `GET` |
| `-H, --header` | repeatable headers | none |
| `-b, --body` | request body | none |
| `-t, --timeout` | per-request timeout (seconds) | `30` |
| `-k, --insecure` | skip TLS verification | `false` |
| `-o, --output` | `text` or `json` | `text` |
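As a composition sketch, the dry run below prints a single invocation combining several of the flags above (the URL and values are illustrative; remove `echo` to actually execute it):

```shell
# Print a combined thundra command without running it.
echo thundra https://api.example.com \
  -c 100 -d 1m -t 10 \
  -m POST -H "Content-Type: application/json" \
  -b '{"ping":true}' \
  -o json
```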
### Shell completions

Completion scripts can be generated for `bash`, `zsh`, `fish`, PowerShell, and `elvish`.
## Library Deep Dive

### Builder model

The `Benchmark::builder()` API supports:

- stop by request count, duration, or run-until-interrupt
- static request config (`url`, `method`, `header`, `body`)
- dynamic request generation via `request_fn`
- fixed rate via `rate`, or dynamic rate via `rate_fn`
- before/after hooks with retry control
### Dynamic request generation

```rust
// The closure signature and `Request` type below are inferred from this
// README and are illustrative, not a guaranteed API.
use std::collections::HashMap;
use thundra::{Benchmark, Request};

let bench = Benchmark::builder()
    .request_fn(|i: u64| {
        // build a unique request per call
        let mut headers = HashMap::new();
        headers.insert("X-Request-Id".to_string(), i.to_string());
        Request {
            url: format!("https://api.example.com/items/{i}"),
            headers,
            ..Request::default()
        }
    })
    .concurrency(20)
    .requests(500)
    .build()?;
```
### Production-like headers/body

```rust
// Argument shapes are illustrative.
use thundra::Benchmark;

let bench = Benchmark::builder()
    .url("https://api.example.com/orders")
    .method("POST")
    .header("Content-Type", "application/json")
    .header("Authorization", "Bearer <token>")
    .body(r#"{"sku":"A-100","qty":1}"#)
    .requests(1000)
    .concurrency(50)
    .build()?;
```
## Rate Control Patterns

### Fixed rate (stable load)

```rust
use std::time::Duration;
use thundra::Benchmark;

// Hold a steady 200 req/s for one minute (values illustrative).
let bench = Benchmark::builder()
    .url("https://api.example.com/health")
    .rate(200)
    .duration(Duration::from_secs(60))
    .build()?;
```
### Dynamic ramp (warm-up + peak)

```rust
// The `rate_fn` closure signature is illustrative: elapsed time in,
// target req/s out.
use std::time::Duration;
use thundra::Benchmark;

let bench = Benchmark::builder()
    .url("https://api.example.com")
    .rate_fn(|elapsed: Duration| {
        if elapsed < Duration::from_secs(30) {
            // warm-up: ramp from 50 to 200 req/s over 30s
            50 + elapsed.as_secs() * 5
        } else {
            200 // hold at peak
        }
    })
    .duration(Duration::from_secs(120))
    .build()?;
```
## Hooks and Retry Control

### Before-request hook (circuit-breaker style)

```rust
// Hook signature illustrative: return whether the request should be sent.
use std::sync::atomic::{AtomicU64, Ordering};
use thundra::Benchmark;

static CONSECUTIVE_FAILURES: AtomicU64 = AtomicU64::new(0);

let bench = Benchmark::builder()
    .url("https://api.example.com")
    .before_request(|| {
        // trip the breaker after 100 consecutive failures
        CONSECUTIVE_FAILURES.load(Ordering::Relaxed) < 100
    })
    .build()?;
```
### After-request hook (retry on 5xx)

```rust
// Hook signature illustrative: inspect the result, ask for a retry.
use thundra::Benchmark;

let bench = Benchmark::builder()
    .url("https://api.example.com")
    .after_request(|result| {
        // retry server errors, accept everything else
        result.status() >= 500
    })
    .max_retries(3)
    .build()?;
```
## Result Model

Thundra returns rich `BenchmarkResults` with:

- total/success/failed request counts
- throughput (`req/s`)
- latency stats (`min`, `max`, `mean`, `p50`, `p90`, `p95`, `p99`)
- status code distribution
- total transferred bytes
Sample JSON:
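The original sample was not preserved; the sketch below shows plausible fields matching the stats listed above (field names and nesting are assumptions, not the actual schema):

```json
{
  "total_requests": 10000,
  "successful": 9987,
  "failed": 13,
  "throughput_rps": 982.4,
  "latency_ms": {
    "min": 3.1, "max": 412.7, "mean": 18.6,
    "p50": 15.2, "p90": 31.8, "p95": 44.0, "p99": 97.3
  },
  "status_codes": { "200": 9987, "503": 13 },
  "bytes_transferred": 5242880
}
```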
## Performance Workflow

A recommended practical flow:

- run baseline with moderate concurrency (`-c 20`) and fixed duration
- increase concurrency in steps (`20 -> 50 -> 100 -> 200`)
- track p99 and failure rate, not just throughput
- capture JSON output in CI for trend regression
- apply dynamic rate ramps to simulate real traffic profiles
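The stepped-concurrency runs above can be scripted. The dry run below only prints each command (URL and duration are illustrative; remove `echo` to execute):

```shell
# Step concurrency 20 -> 50 -> 100 -> 200, capturing JSON per run.
for c in 20 50 100 200; do
  echo thundra https://api.example.com -c "$c" -d 30s -o json ">" "run-c$c.json"
done
```

Keeping one JSON file per concurrency level makes it easy to diff p99 and failure rate across runs in CI.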
## Examples

Built-in examples live in `examples/`:

- `basic_benchmark.rs`
- `custom_requests.rs`
- `rate_ramping.rs`
- `hooks_metrics.rs`
- `test_server.rs`

Run:

```sh
cargo run --example basic_benchmark
```
## Development

```sh
# format
cargo fmt

# lint
cargo clippy --all-targets

# tests
cargo test
```

On some Windows environments, application-control policies may block generated test binaries. If that happens, run with a trusted target directory (e.g. via `CARGO_TARGET_DIR`).
## Roadmap
- coordinated omission correction
- HDR histogram export
- HTTP/2 support
- HTTP/3 support
- latency breakdown (DNS, TCP, TLS, TTFB)
- warm-up and cool-down phases
- multi-step scenario support
If you build something cool with Thundra, share it with me at codewithevilxd@gmail.com.