```rust
pub fn run_closure<F>(spec: BenchSpec, f: F) -> Result<BenchReport, BenchError>
```
Runs a benchmark by executing a closure repeatedly.
This is the core benchmarking function. It:
- Executes the closure `spec.warmup` times without recording
- Executes the closure `spec.iterations` times, recording each duration
- Returns a `BenchReport` with all samples
§Arguments
- `spec` - Benchmark configuration specifying iterations and warmup
- `f` - Closure to benchmark; must return `Result<(), BenchError>`
§Returns
A BenchReport containing all timing samples, or a BenchError if
the benchmark fails.
§Example
```rust
use mobench_runner::{BenchSpec, run_closure, BenchError};

let spec = BenchSpec::new("sum_benchmark", 100, 10)?;
let report = run_closure(spec, || {
    let sum: u64 = (0..1000).sum();
    std::hint::black_box(sum);
    Ok(())
})?;

assert_eq!(report.samples.len(), 100);

// Calculate mean duration
let total_ns: u64 = report.samples.iter().map(|s| s.duration_ns).sum();
let mean_ns = total_ns / report.samples.len() as u64;
println!("Mean: {} ns", mean_ns);
```
§Error Handling
If the closure returns an error, the benchmark stops immediately:
```rust
use mobench_runner::{BenchSpec, run_closure, BenchError};

let spec = BenchSpec::new("failing_bench", 100, 0)?;
let result = run_closure(spec, || {
    Err(BenchError::Execution("simulated failure".into()))
});
assert!(result.is_err());
```
§Timing Precision
Uses std::time::Instant for timing, which provides monotonic,
nanosecond-resolution measurements on most platforms.