# Lightbench

A lightweight benchmarking framework for measuring latency, throughput, and reliability metrics.
## Features

- Three Benchmark Patterns: Request, Producer/Consumer, and Async Task (submit + poll)
- Benchmark Runner: High-level builder with automatic rate distribution across workers
- Rate Control: Per-worker token bucket (`RateController`) and shared lock-free pool (`SharedRateController`)
- CSV Export: Write snapshots to file with the `.csv(path)` option
- Progress Display: User-friendly live progress or raw CSV output
- HDR Histogram Metrics: High-precision latency tracking with percentile reporting (p25, p50, p75, p95, p99)
- Sequence Tracking: Duplicate and gap detection for reliability measurement
- Error Bucketing: `ErrorCounter` groups errors by reason string for summary reporting
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
lightbench = "0.1"
tokio = { version = "1", features = ["full"] }
```
### Benchmark Pattern (Request/Response)

```rust
use lightbench::patterns::Benchmark;

#[tokio::main]
async fn main() {
    let results = Benchmark::new()
        .rate(1000.0)      // shared pool: 1000 req/s total
        .workers(4)
        .duration_secs(10)
        .work(|| async {
            // Issue one request; Ok(()) on success, Err(reason) on failure.
            // Closure shape is illustrative; see the worker lifecycle notes below.
            Ok(())
        })
        .run()
        .await;

    results.print_summary();
}
```
Worker lifecycle: `init()` is called once per worker to create `State`. Put shared,
`Clone`-friendly resources (URLs, config, `Arc<Pool>`) in the struct. Put resources that
must not be shared across workers (HTTP clients, dedicated connections) in `State`.

Rate Modes:

- `.rate(1000.0)` — Shared rate pool (workers compete for 1000 total req/s)
- `.rate_per_worker(250.0)` — Each worker gets 250 req/s independently
- `.rate(0.0)` or `.rate(-1.0)` — Unlimited (maximum throughput)
### Producer/Consumer Pattern

```rust
use lightbench::patterns::ProducerConsumerBenchmark;
use std::collections::VecDeque;
use std::sync::Arc;
use tokio::sync::Mutex;

// Producers push send timestamps into a shared queue; consumers pop
// them and report the observed latency (full closures elided here).
```
Closure contracts:

- `produce`: returns `Ok(())` on success or `Err(reason)` on failure
- `consume`: returns `Some(latency_ns)` when an item was consumed, `None` when the queue is empty (the worker yields briefly)
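The contract above can be exercised without the framework. Below is a minimal std-only sketch (threads instead of async, a queue of send timestamps); the `produce`/`consume` names mirror the closures but are not part of the crate's API:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Instant;

// produce: push a send timestamp; Ok(()) on success, Err(reason) on failure.
fn produce(queue: &Mutex<VecDeque<Instant>>) -> Result<(), String> {
    queue.lock().unwrap().push_back(Instant::now());
    Ok(())
}

// consume: pop one item and report its latency in ns, or None when empty.
fn consume(queue: &Mutex<VecDeque<Instant>>) -> Option<u64> {
    queue
        .lock()
        .unwrap()
        .pop_front()
        .map(|sent| sent.elapsed().as_nanos() as u64)
}

fn main() {
    let queue = Arc::new(Mutex::new(VecDeque::new()));

    // One producer thread; items drained on the main thread.
    let producer = {
        let queue = Arc::clone(&queue);
        thread::spawn(move || {
            for _ in 0..100 {
                produce(&queue).unwrap();
            }
        })
    };
    producer.join().unwrap();

    let mut latencies = Vec::new();
    while let Some(ns) = consume(&queue) {
        latencies.push(ns);
    }
    println!("consumed {} items", latencies.len()); // prints: consumed 100 items
}
```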
### Async Task Pattern (Submit + Poll)

```rust
use lightbench::patterns::AsyncTaskBenchmark;

// Submit workers enqueue tasks (rate-controlled); poll workers check
// for completion via `PollResult` (full closures elided here).
```
## Examples

### Noop (framework overhead baseline)

Options: `--rate <N>`, `--rate-per-worker <N>`, `--workers <N>`, `--duration <S>`, `--csv <FILE>`, `--no-progress`
### HTTP GET Benchmark

Options:

- `--rate <N>` — Total requests/sec (shared pool, use `<= 0` for unlimited)
- `--rate-per-worker <N>` — Requests/sec per worker (independent limiters)
- `--duration <S>` — Test duration in seconds (default: 10)
- `--workers <N>` — Worker count (default: 4)
- `--csv <FILE>` — Write snapshots to CSV
- `--no-progress` — Disable the progress display and output CSV rows to stdout
### Producer/Consumer Benchmark

Options: `--producers <N>`, `--consumers <N>`, `--rate <N>`, `--duration <S>`, `--csv <FILE>`, `--no-progress`
### Async Task Benchmark

Options: `--submit-workers <N>`, `--poll-workers <N>`, `--rate <N>`, `--duration <S>`, `--processing-delay <MS>`, `--csv <FILE>`, `--no-progress`
## Modules

### patterns

Three benchmark patterns, each a builder plus a results type.

`Benchmark` (request pattern):

```rust
use lightbench::patterns::Benchmark;

let results = Benchmark::new()
    .rate(1000.0)       // Shared rate pool (not split per-worker)
    .workers(4)         // Workers compete for tokens
    .duration_secs(10)
    .work(|| async { Ok(()) }) // Closure shape is illustrative
    .run()
    .await;

results.print_summary(); // Formatted output
// Individual fields on `results` can also be read and printed directly.
```
`ProducerConsumerBenchmark`:

- `.produce(fn)` — rate-controlled, returns `Ok(())` or `Err(reason)`
- `.consume(fn)` — free-running, returns `Some(latency_ns)` or `None` (empty)

`AsyncTaskBenchmark`:

- `.submit(fn)` — rate-controlled, returns `Some(task_id: u64)` or `None`
- `.poll(fn)` — free-running, returns `PollResult::{Completed { latency_ns }, Pending, Error(reason)}`
### metrics

Statistics collection with an HDR histogram for latency tracking.

```rust
use lightbench::metrics::Stats;

let stats = Stats::new();
stats.record_sent().await;
stats.record_received(latency_ns).await;       // Argument shapes are illustrative
stats.record_received_batch(&latencies).await; // Efficient batch recording
let snapshot = stats.snapshot().await;
// Snapshot fields (counts, throughput, latency percentiles) can be printed directly.
```
`SequenceTracker` — per-consumer duplicate/gap detection:

```rust
use lightbench::metrics::SequenceTracker;

let mut tracker = SequenceTracker::new();
tracker.record(seq);       // returns false if duplicate
tracker.duplicate_count();
tracker.gap_count();       // gaps within the min..=max range
tracker.head_loss();       // min_seq (sequences lost before the first received)
```
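The duplicate/gap/head-loss semantics can be reproduced in a few lines of plain Rust. A std-only sketch (not the crate's implementation, which likely avoids a per-sequence `HashSet`):

```rust
use std::collections::HashSet;

/// Minimal duplicate/gap/head-loss bookkeeping over a sequence stream.
struct SeqStats {
    seen: HashSet<u64>,
    duplicates: u64,
    min: Option<u64>,
    max: Option<u64>,
}

impl SeqStats {
    fn new() -> Self {
        Self { seen: HashSet::new(), duplicates: 0, min: None, max: None }
    }

    /// Returns false if `seq` was already recorded (a duplicate).
    fn record(&mut self, seq: u64) -> bool {
        if !self.seen.insert(seq) {
            self.duplicates += 1;
            return false;
        }
        self.min = Some(self.min.map_or(seq, |m| m.min(seq)));
        self.max = Some(self.max.map_or(seq, |m| m.max(seq)));
        true
    }

    /// Sequences missing within the observed min..=max range.
    fn gap_count(&self) -> u64 {
        match (self.min, self.max) {
            (Some(lo), Some(hi)) => (hi - lo + 1) - self.seen.len() as u64,
            _ => 0,
        }
    }

    /// Sequences lost before the first one received, assuming streams start at 0.
    fn head_loss(&self) -> u64 {
        self.min.unwrap_or(0)
    }
}

fn main() {
    let mut t = SeqStats::new();
    for s in [2u64, 3, 5, 3] {
        t.record(s);
    }
    // 3 is a duplicate, 4 is a gap, 0 and 1 were lost before the head.
    println!("dups={} gaps={} head_loss={}", t.duplicates, t.gap_count(), t.head_loss());
    // prints: dups=1 gaps=1 head_loss=2
}
```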
`ErrorCounter` — thread-safe error bucketing:

```rust
use lightbench::metrics::ErrorCounter;

let counter = ErrorCounter::new();
counter.record("timeout").await;   // Reason strings are illustrative
counter.record("timeout").await;
let errors = counter.take().await; // HashMap<String, u64>
print_summary(&errors);
```
### rate

Token bucket rate limiters for controlled benchmarks.

`RateController` — per-worker:

```rust
use lightbench::rate::RateController;

let mut rate = RateController::new(1000.0); // 1000 msg/s for this worker
loop {
    // acquire a token from `rate` before each send (loop body elided)
}
```

`SharedRateController` — lock-free, shared across workers:

```rust
use lightbench::rate::SharedRateController;
use std::sync::Arc;

let rate = Arc::new(SharedRateController::new(1000.0)); // 1000 msg/s total
for _ in 0..4 {
    let rate = Arc::clone(&rate);
    // spawn a worker that draws tokens from the shared pool (body elided)
}
```
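The limiter semantics boil down to a token bucket: tokens accrue at the target rate up to a burst cap, and each send spends one. A compact std-only sketch of the idea (synchronous and single-threaded for clarity; the crate's controllers are async, and the shared variant is lock-free):

```rust
use std::time::Instant;

/// Token bucket: `rate` tokens accrue per second, capped at `burst`.
struct TokenBucket {
    rate: f64,
    burst: f64,
    tokens: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        Self { rate, burst, tokens: burst, last: Instant::now() }
    }

    /// Refill based on elapsed time, then try to spend one token.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let dt = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        self.tokens = (self.tokens + dt * self.rate).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(1000.0, 2.0); // 1000/s, burst of 2
    // Back-to-back requests exhaust the burst; later ones wait for refill.
    let granted = (0..10).filter(|_| bucket.try_acquire()).count();
    println!("granted {granted} of 10 immediate requests");
}
```

A worker loop would sleep (or yield) when `try_acquire` returns false; the async controllers fold that wait into an awaitable call.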
### time_sync

Fast timestamp utilities avoiding syscall overhead.

```rust
use lightbench::time_sync::{latency_ns, now_unix_ns_estimate};

let start = now_unix_ns_estimate();
// ... do work ...
let elapsed = latency_ns(start);
```
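A common way to build such an estimate is to take one wall-clock reading at startup and derive later timestamps from a cheap monotonic offset. A std-only sketch of that idea (the crate's actual mechanism may differ):

```rust
use std::time::{Instant, SystemTime, UNIX_EPOCH};

/// Calibrated clock: one `SystemTime` read up front, monotonic offsets after.
struct EstClock {
    base_unix_ns: u128,
    base: Instant,
}

impl EstClock {
    fn new() -> Self {
        Self {
            base_unix_ns: SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .unwrap()
                .as_nanos(),
            base: Instant::now(),
        }
    }

    /// Estimated Unix time in ns without re-reading the wall clock.
    fn now_unix_ns_estimate(&self) -> u128 {
        self.base_unix_ns + self.base.elapsed().as_nanos()
    }
}

fn main() {
    let clock = EstClock::new();
    let start = clock.now_unix_ns_estimate();
    // ... do work ...
    let elapsed_ns = clock.now_unix_ns_estimate() - start;
    println!("elapsed about {elapsed_ns} ns");
}
```

The estimate is monotone and Unix-anchored but will drift from the wall clock over long runs, which is acceptable for latency deltas.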
### logging

Tracing initialization:

```rust
use lightbench::logging;

logging::init("info").ok();   // env-filter string (value is illustrative)
logging::init_default().ok(); // info level
```
### output

Async CSV and stdout writers:

```rust
use lightbench::output::OutputWriter;

let mut writer = OutputWriter::new_csv("results.csv").await?; // Path is illustrative
writer.write_snapshot(&snapshot).await?;
writer.flush().await?;
```
## CSV Output Format

Snapshots are written as 19-column CSV rows:

```
timestamp,sent_count,received_count,error_count,total_throughput,interval_throughput,
latency_ns_p25,latency_ns_p50,latency_ns_p75,latency_ns_p95,latency_ns_p99,
latency_ns_min,latency_ns_max,latency_ns_mean,latency_ns_stddev,latency_sample_count,
duplicate_count,gap_count,head_loss
```

Quality columns (`duplicate_count`, `gap_count`, `head_loss`) are 0 unless a `SequenceTracker` is in use.
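For post-processing scripts, the layout can be pinned down as a header constant. This is a downstream convenience, not something the crate itself necessarily emits:

```rust
/// The 19 snapshot columns, in emitted order.
const COLUMNS: [&str; 19] = [
    "timestamp", "sent_count", "received_count", "error_count",
    "total_throughput", "interval_throughput",
    "latency_ns_p25", "latency_ns_p50", "latency_ns_p75",
    "latency_ns_p95", "latency_ns_p99",
    "latency_ns_min", "latency_ns_max", "latency_ns_mean",
    "latency_ns_stddev", "latency_sample_count",
    "duplicate_count", "gap_count", "head_loss",
];

fn main() {
    // Emit a header line to prepend to a snapshot CSV.
    println!("{}", COLUMNS.join(","));
}
```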