Percentile-focused benchmarking for Rust.
pbench reports precise tail-latency statistics (p50, p95, p99, p99.9, and p99.99) instead of just min/max/median/mean. This makes it easy to spot outliers and understand the full latency distribution of your code.
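To illustrate what these percentiles mean, here is a generic nearest-rank percentile sketch over a set of latency samples. This is not pbench's internal algorithm (see the stats module for that), just a minimal illustration:

```rust
// Nearest-rank percentile: the smallest sample such that at least
// p% of all samples are less than or equal to it.
fn percentile(sorted: &[u64], p: f64) -> u64 {
    let rank = ((p / 100.0) * sorted.len() as f64).ceil().max(1.0) as usize;
    sorted[rank - 1]
}

fn main() {
    // Pretend these are per-iteration latencies in nanoseconds.
    let mut samples: Vec<u64> = (1..=100).collect();
    samples.sort_unstable();
    println!("p50={}", percentile(&samples, 50.0));
    println!("p99={}", percentile(&samples, 99.0));
    println!("p99.9={}", percentile(&samples, 99.9));
}
```

With 100 samples, p99.9 rounds up to the worst observation, which is why high percentiles are so sensitive to outliers.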
§Quick Start
Add pbench to your Cargo.toml:
```toml
[dev-dependencies]
pbench = "0.1"
```

Create a benchmark file (e.g. benches/my_bench.rs):
```rust
use pbench::Bencher;

#[pbench::bench]
fn my_function(b: &Bencher<'_>) {
    b.bench_refs(|| {
        // code to benchmark
    });
}

fn main() {
    pbench::main();
}
```

Add to your Cargo.toml:
```toml
[[bench]]
name = "my_bench"
harness = false
```

Run with cargo bench.
§Benchmark Patterns
§Simple closure
```rust
#[pbench::bench]
fn addition(b: &Bencher<'_>) {
    b.bench_refs(|| std::hint::black_box(1u64 + 2));
}
```

§Deferred drop (bench_values)
When the benchmarked code returns a value that needs dropping (e.g. a Vec),
use bench_values to defer the drop outside the measurement window:
```rust
#[pbench::bench]
fn alloc(b: &Bencher<'_>) {
    b.bench_values(|| Vec::<u8>::with_capacity(1024));
}
```

§Throughput counters
Attach a counter for items/s or bytes/s reporting:
```rust
use pbench::{Bencher, ItemsCount};

#[pbench::bench]
fn process(b: &Bencher<'_>) {
    b.counter(ItemsCount::new(1000));
    b.bench_refs(|| { /* process 1000 items */ });
}
```

§Parameterised benchmarks (args)
```rust
#[pbench::bench(args = [10, 100, 1000])]
fn sized(b: &Bencher<'_>, n: &str) {
    let size: usize = n.parse().unwrap();
    b.bench_values(|| Vec::<u8>::with_capacity(size));
}
```

§Feature Flags
| Feature | Description |
|---|---|
| json | Enables JSON output (--output json) and baseline save/load (--save-baseline, --baseline). Adds serde + serde_json dependencies. |
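Enabling the json feature in Cargo.toml would look like this (version number taken from the Quick Start above):

```toml
[dev-dependencies]
pbench = { version = "0.1", features = ["json"] }
```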
§CLI Flags
| Flag | Description |
|---|---|
| --filter <pattern> | Only run benchmarks matching pattern |
| --skip <pattern> | Skip benchmarks matching pattern (repeatable, takes priority over --filter) |
| --output table\|json\|csv | Output format (default: table) |
| --sort name\|p50\|p99\|mean | Sort order (default: name) |
| --sample-count <n> | Number of samples to collect |
| --sample-size <n> | Iterations per sample |
| --list | List benchmark names |
| --list --format terse | One benchmark per line (nextest-compatible) |
| --test | Verify benchmarks compile and run (no timing) |
| --ignored | Run only ignored benchmarks |
| --include-ignored | Run all benchmarks including ignored |
| --bytes-format binary\|decimal | Byte display format (default: decimal) |
| --save-baseline <name> | Save results (requires json feature) |
| --baseline <name> | Compare against saved baseline (requires json feature) |
| --threshold <pct> | Regression threshold percentage (default: 5.0) |
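The flags above are passed to the pbench harness after the `--` separator in a cargo bench invocation. A typical (hypothetical) workflow, using only flags from the table:

```sh
# Run only benchmarks matching "alloc", sorted by p99, as JSON (requires the json feature)
cargo bench -- --filter alloc --sort p99 --output json

# Save a baseline, then compare a later run against it with a 2% regression threshold
cargo bench -- --save-baseline main
cargo bench -- --baseline main --threshold 2.0
```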
Re-exports§
- pub use bencher::Bencher;
- pub use bencher::BencherWithInput;
- pub use config::BenchOptions;
- pub use counter::BytesCount;
- pub use counter::CharsCount;
- pub use counter::Counter;
- pub use counter::CyclesCount;
- pub use counter::ItemsCount;
Modules§
- bencher
- Core benchmark measurement engine.
- config
- Benchmark configuration with two-layer resolution.
- counter
- Throughput counter system for benchmark measurements.
- entry
- Benchmark entry registration system.
- stats
- Core percentile computation engine.
- time
- Timing infrastructure.
Macros§
- dispatch_timer
- Dispatch a timer-dependent operation across all timer backends.
Functions§
- main
- Entry point for benchmark binaries.
Attribute Macros§
- bench
- Mark a function as a benchmark.
- bench_group
- Mark a module as a benchmark group.