# dev-bench

## What it does
Measures performance and produces verdicts the same way the rest of the
dev-* suite does: machine-readable, stored as `dev-report` results.
Designed for AI agents and CI gating, not for interactive profiling.
For interactive profiling, use `criterion` or `divan`.
## Quick start

```toml
[dependencies]
dev-bench = "0.1"
```

```rust
// Reconstructed from the surrounding docs (`CheckResult`,
// `compare_against_baseline`, `regression_pct`); exact item names and
// signatures may differ from the published API.
use dev_bench::Bench;

let mut b = Bench::new("my_benchmark");
for _ in 0..1000 {
    b.measure(|| work_under_test());
}
let result = b.finish();

let threshold = Threshold::regression_pct(5.0);
// If you have a stored baseline mean from a previous run, pass it in.
// Returns a CheckResult that can be pushed into a dev_report::Report.
let check = result.compare_against_baseline(baseline_mean, threshold);
```
## Design choices

- Output is a `CheckResult`, not a stdout dump. Agents can read it.
- Regression-first, not absolute-perf-first. The interesting question is "did this change make it slower?", not "is this the fastest in the world?"
- Configurable thresholds: percent-based or absolute-nanosecond.
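The percent-based check above can be sketched in a few lines. This is an illustrative model of the comparison logic, not the crate's actual implementation; the function name `regressed` is made up for the example.

```rust
/// Sketch of a percent-based regression check: the candidate fails if its
/// mean exceeds the baseline mean by more than `threshold_pct` percent.
fn regressed(baseline_ns: f64, candidate_ns: f64, threshold_pct: f64) -> bool {
    let pct_change = (candidate_ns - baseline_ns) / baseline_ns * 100.0;
    pct_change > threshold_pct
}

fn main() {
    // 1000 ns -> 1040 ns is a 4% slowdown: within a 5% threshold.
    assert!(!regressed(1000.0, 1040.0, 5.0));
    // 1000 ns -> 1100 ns is a 10% slowdown: flagged as a regression.
    assert!(regressed(1000.0, 1100.0, 5.0));
}
```

An absolute-nanosecond threshold would replace the percent computation with a plain `candidate_ns - baseline_ns > threshold_ns` comparison, which is less noisy for very fast benchmarks where a few nanoseconds is a large percentage.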
## What's planned

- Throughput measurements (ops/sec).
- Allocation tracking via `dhat` integration (feature-gated).
- Baseline storage formats (JSON files keyed by git SHA).
- Comparison helpers that read baselines from disk.
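A baseline file keyed by git SHA might look something like the following. This is a hypothetical shape for illustration; the actual schema is not yet defined.

```json
{
  "git_sha": "abc1234",
  "benchmarks": {
    "my_benchmark": { "mean_ns": 1520.4, "samples": 1000 }
  }
}
```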
## The dev-* suite

`dev-bench` is one of the producer crates. See `dev-tools` for the umbrella
crate and `dev-report` for the schema all results conform to.
## License
Apache-2.0. See LICENSE.