Function run_closure

pub fn run_closure<F>(spec: BenchSpec, f: F) -> Result<BenchReport, BenchError>
where
    F: FnMut() -> Result<(), BenchError>,

Runs a benchmark by executing a closure repeatedly.

This is the core benchmarking function. It:

  1. Executes the closure spec.warmup times without recording
  2. Executes the closure spec.iterations times, recording each duration
  3. Returns a BenchReport with all samples
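The steps above can be sketched in a self-contained form. Note that the Spec struct and the run helper below are simplified stand-ins for illustration, not the crate's actual types; the real function additionally propagates the closure's Result and wraps samples in a BenchReport.

```rust
use std::time::Instant;

// Simplified stand-in for BenchSpec (an assumption, not the real type).
struct Spec {
    warmup: u32,
    iterations: u32,
}

// Sketch of the warmup-then-measure loop.
fn run<F: FnMut()>(spec: &Spec, mut f: F) -> Vec<u64> {
    // 1. Execute the closure `warmup` times without recording.
    for _ in 0..spec.warmup {
        f();
    }
    // 2. Execute `iterations` times, recording each duration in nanoseconds.
    let mut samples = Vec::with_capacity(spec.iterations as usize);
    for _ in 0..spec.iterations {
        let start = Instant::now();
        f();
        samples.push(start.elapsed().as_nanos() as u64);
    }
    // 3. Return all samples.
    samples
}

fn main() {
    let spec = Spec { warmup: 5, iterations: 20 };
    let samples = run(&spec, || {
        let sum: u64 = (0..1000).sum();
        std::hint::black_box(sum);
    });
    assert_eq!(samples.len(), 20);
    println!("recorded {} samples", samples.len());
}
```

Timing each iteration individually (rather than dividing one total by the iteration count) is what allows the report to expose the full sample distribution, at the cost of one Instant::now() call per iteration.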

§Arguments

  • spec - Benchmark configuration specifying iterations and warmup
  • f - Closure to benchmark; must return Result<(), BenchError>

§Returns

A BenchReport containing all timing samples, or a BenchError if the benchmark fails.

§Example

use mobench_runner::{BenchSpec, run_closure, BenchError};

let spec = BenchSpec::new("sum_benchmark", 100, 10)?;

let report = run_closure(spec, || {
    let sum: u64 = (0..1000).sum();
    std::hint::black_box(sum);
    Ok(())
})?;

assert_eq!(report.samples.len(), 100);

// Calculate mean duration
let total_ns: u64 = report.samples.iter().map(|s| s.duration_ns).sum();
let mean_ns = total_ns / report.samples.len() as u64;
println!("Mean: {} ns", mean_ns);
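Beyond the mean, the per-sample durations support any statistic you need. As one sketch, the helper below (a local function, not part of mobench_runner) computes the mean and population standard deviation from a slice of nanosecond samples, as you would obtain by collecting s.duration_ns from report.samples:

```rust
// Compute mean and population standard deviation of nanosecond samples.
fn mean_and_stddev(samples: &[u64]) -> (f64, f64) {
    let n = samples.len() as f64;
    let mean = samples.iter().sum::<u64>() as f64 / n;
    let variance = samples
        .iter()
        .map(|&s| {
            let d = s as f64 - mean;
            d * d
        })
        .sum::<f64>()
        / n;
    (mean, variance.sqrt())
}

fn main() {
    let (mean, sd) = mean_and_stddev(&[100, 200, 300]);
    println!("mean = {} ns, stddev = {:.1} ns", mean, sd);
}
```

Doing the arithmetic in f64 avoids the truncation that integer division introduces in the mean calculation above.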

§Error Handling

If the closure returns an error, the benchmark stops immediately:

use mobench_runner::{BenchSpec, run_closure, BenchError};

let spec = BenchSpec::new("failing_bench", 100, 0)?;

let result = run_closure(spec, || {
    Err(BenchError::Execution("simulated failure".into()))
});

assert!(result.is_err());

§Timing Precision

Uses std::time::Instant for timing, which provides monotonic, nanosecond-resolution measurements on most platforms.
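For reference, the Instant-based measurement pattern, independent of this crate, looks like this; black_box keeps the optimizer from eliding the computation under measurement:

```rust
use std::time::{Duration, Instant};

fn main() {
    let start = Instant::now();
    // Work under measurement.
    let sum: u64 = (0..1_000_000).sum();
    std::hint::black_box(sum);
    // Instant is monotonic: elapsed() never goes backwards,
    // even if the system clock is adjusted.
    let elapsed: Duration = start.elapsed();
    println!("took {} ns", elapsed.as_nanos());
}
```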