pub struct Bencher<'a, M: Measurement = WallTime> { /* private fields */ }

Timer struct used to iterate a benchmarked function and measure the runtime.

This struct provides different timing loops as methods. Each timing loop provides a different way to time a routine and each has advantages and disadvantages.

  • If you want to do the iteration and measurement yourself (e.g., passing the iteration count to a separate process), use iter_custom.
  • If your routine requires no per-iteration setup and returns a value with an expensive drop method, use iter_with_large_drop.
  • If your routine requires some per-iteration setup that shouldn’t be timed, use iter_batched or iter_batched_ref. See BatchSize for a discussion of batch sizes. If the setup value implements Drop and you don’t want to include the drop time in the measurement, use iter_batched_ref, otherwise use iter_batched. These methods are also suitable for benchmarking routines which return a value with an expensive drop method, but are more complex than iter_with_large_drop.
  • Otherwise, use iter.

Implementations

iter

Times a routine by executing it many times and timing the total elapsed time.

Prefer this timing loop when routine returns a value that doesn’t have a destructor.

Timing model

Note that the Bencher also measures the time required to destroy the output of routine(). Therefore, prefer this timing loop when the runtime of mem::drop(O) is negligible compared to the runtime of the routine.

elapsed = Instant::now + iters * (routine + mem::drop(O) + Range::next)
Example
#[macro_use] extern crate criterion;

use criterion::*;

// The function to benchmark
fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.iter(|| foo())
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

iter_custom

Times a routine by executing it many times and relying on routine to measure its own execution time.

Prefer this timing loop in cases where routine has to do its own measurements to get accurate timing information (for example, in multi-threaded scenarios where you spawn and coordinate with multiple threads); a threaded sketch follows the example below.

Timing model

Custom; the timing model is whatever Duration is returned by routine.

Example
#[macro_use] extern crate criterion;
use criterion::*;
use criterion::black_box;
use std::time::Instant;

fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.iter_custom(|iters| {
            let start = Instant::now();
            for _i in 0..iters {
                black_box(foo());
            }
            start.elapsed()
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
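For the multi-threaded case mentioned above, a sketch along the following lines is possible. The worker function, the thread count of 4, and the choice to have every thread run the full iteration count are illustrative assumptions, not part of the Criterion API; all that matters to iter_custom is the Duration returned from the closure.

#[macro_use] extern crate criterion;

use criterion::*;
use std::time::Instant;

// Hypothetical per-thread workload.
fn worker(n: u64) {
    for _ in 0..n {
        // ...
    }
}

fn bench_threads(c: &mut Criterion) {
    c.bench_function("iter_custom_threads", move |b| {
        b.iter_custom(|iters| {
            // Start the clock just before spawning, so the reported time
            // covers thread startup, the parallel work, and the joins,
            // but not the benchmark harness itself.
            let start = Instant::now();
            let handles: Vec<_> = (0..4)
                .map(|_| std::thread::spawn(move || worker(iters)))
                .collect();
            for handle in handles {
                handle.join().unwrap();
            }
            // In this sketch every thread runs the full `iters` count;
            // how the work is divided is entirely up to the routine.
            start.elapsed()
        })
    });
}

criterion_group!(threaded_benches, bench_threads);
criterion_main!(threaded_benches);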

iter_with_large_drop

Times a routine by collecting its output on each iteration. This avoids timing the destructor of the value returned by routine.

WARNING: This requires O(iters * mem::size_of::<O>()) of memory, and iters is not under the control of the caller. If this causes out-of-memory errors, use iter_batched instead.

Timing model
elapsed = Instant::now + iters * (routine) + Iterator::collect::<Vec<_>>
Example
#[macro_use] extern crate criterion;

use criterion::*;

fn create_vector() -> Vec<u64> {
    // ... (placeholder body so the example compiles)
    vec![0; 1024]
}

fn bench(c: &mut Criterion) {
    c.bench_function("with_drop", move |b| {
        // This will avoid timing the Vec::drop.
        b.iter_with_large_drop(|| create_vector())
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

iter_batched

Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine must consume its input.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model
elapsed = (Instant::now * num_batches) + (iters * (routine + O::drop)) + Vec::extend
Example
#[macro_use] extern crate criterion;

use criterion::*;

fn create_scrambled_data() -> Vec<u64> {
    // ... (placeholder body so the example compiles)
    (0..1_000u64).rev().collect()
}

// The sorting algorithm to test
fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the clone call.
        b.iter_batched(|| data.clone(), |mut data| sort(&mut data), BatchSize::SmallInput)
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

iter_batched_ref

Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine should accept the input by mutable reference.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model
elapsed = (Instant::now * num_batches) + (iters * routine) + Vec::extend
Example
#[macro_use] extern crate criterion;

use criterion::*;

fn create_scrambled_data() -> Vec<u64> {
    // ... (placeholder body so the example compiles)
    (0..1_000u64).rev().collect()
}

// The sorting algorithm to test
fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the clone call.
        b.iter_batched_ref(|| data.clone(), |data| sort(data), BatchSize::SmallInput)
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

to_async

Convert this bencher into an AsyncBencher, which enables async/await support.
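
A minimal sketch of the async path, assuming Criterion is built with the async_futures feature so that criterion::async_executor::FuturesExecutor is available (other executors sit behind their own feature flags); the do_async_work routine is illustrative.

#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

// Hypothetical async routine to benchmark.
async fn do_async_work() -> u64 {
    // ...
    0
}

fn bench(c: &mut Criterion) {
    c.bench_function("async_iter", move |b| {
        // to_async wraps the Bencher; the usual timing loops then accept
        // a closure returning a Future, which is run on the chosen executor.
        b.to_async(FuturesExecutor).iter(|| do_async_work());
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);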
