Struct criterion::Bencher

pub struct Bencher<'a, M: Measurement = WallTime> { /* fields omitted */ }

Timer struct used to iterate a benchmarked function and measure the runtime.

This struct provides different timing loops as methods. Each timing loop provides a different way to time a routine and each has advantages and disadvantages.

  • If you want to do the iteration and measurement yourself (e.g. passing the iteration count to a separate process), use iter_custom.
  • If your routine requires no per-iteration setup and returns a value with an expensive drop method, use iter_with_large_drop.
  • If your routine requires some per-iteration setup that shouldn't be timed, use iter_batched or iter_batched_ref. See BatchSize for a discussion of batch sizes. If the setup value implements Drop and you don't want to include the drop time in the measurement, use iter_batched_ref, otherwise use iter_batched. These methods are also suitable for benchmarking routines which return a value with an expensive drop method, but are more complex than iter_with_large_drop.
  • Otherwise, use iter.

Implementations

impl<'a, M: Measurement> Bencher<'a, M>[src]

pub fn iter<O, R>(&mut self, routine: R) where
    R: FnMut() -> O, 
[src]

Times a routine by executing it many times and timing the total elapsed time.

Prefer this timing loop when routine returns a value that doesn't have a destructor.

Timing model

Note that the Bencher also measures the time required to drop the output of routine(). Therefore, prefer this timing loop when the runtime of mem::drop(O) is negligible compared to the runtime of routine itself.

elapsed = Instant::now + iters * (routine + mem::drop(O) + Range::next)

Example

#[macro_use] extern crate criterion;

use criterion::*;

// The function to benchmark
fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.iter(|| foo())
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
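If the routine's return value is otherwise unused, the compiler may optimize the call away entirely, skewing the measurement. Criterion re-exports a black_box function for this purpose; the standard library's std::hint::black_box behaves similarly. The loop below is a standalone, illustrative sketch of the idea (it is not Criterion's actual timing loop) and runs with std alone:

```rust
use std::hint::black_box;
use std::time::Instant;

// A routine whose result the optimizer must treat as used.
fn foo() -> u64 {
    (0..100).sum()
}

fn main() {
    let iters = 1_000;
    let start = Instant::now();
    for _ in 0..iters {
        // black_box prevents the call and its result from being
        // optimized away, so the loop actually does the work.
        black_box(foo());
    }
    println!("{} iterations in {:?}", iters, start.elapsed());
}
```

Inside a Criterion benchmark you would instead write `b.iter(|| black_box(foo()))`, or rely on iter's own handling of the returned value.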

pub fn iter_custom<R>(&mut self, routine: R) where
    R: FnMut(u64) -> M::Value
[src]

Times a routine by executing it many times and relying on routine to measure its own execution time.

Prefer this timing loop in cases where routine has to do its own measurements to get accurate timing information (for example in multi-threaded scenarios where you spawn and coordinate with multiple threads).

Timing model

Custom: elapsed is whatever value routine returns as its own measurement (a Duration when using the default WallTime measurement).

Example

#[macro_use] extern crate criterion;
use criterion::*;
use criterion::black_box;
use std::time::Instant;

fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.iter_custom(|iters| {
            let start = Instant::now();
            for _i in 0..iters {
                black_box(foo());
            }
            start.elapsed()
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
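In the multi-threaded case mentioned above, the closure passed to iter_custom would typically spawn its own threads and time only the coordinated work. The function below is a hedged, standalone sketch of such a measurement body, shaped like the closure iter_custom expects (it takes an iteration count and returns the elapsed time it measured itself); the thread count and per-thread workload are illustrative assumptions, not part of Criterion's API:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical measurement body: starts the clock, fans the iterations
// out across worker threads, joins them, and reports the elapsed time.
fn measure_parallel(iters: u64) -> Duration {
    const THREADS: u64 = 4; // illustrative choice

    let start = Instant::now();
    let handles: Vec<_> = (0..THREADS)
        .map(|_| {
            thread::spawn(move || {
                // Each thread runs its share of the iterations.
                let mut acc = 0u64;
                for i in 0..iters / THREADS {
                    acc = acc.wrapping_add(i);
                }
                acc
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    start.elapsed()
}

fn main() {
    println!("measured {:?}", measure_parallel(1_000));
}
```

In a benchmark, this body would be wrapped as `b.iter_custom(|iters| measure_parallel(iters))`, so Criterion controls the iteration count while the closure controls the measurement.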

pub fn iter_with_large_drop<O, R>(&mut self, routine: R) where
    R: FnMut() -> O, 
[src]

Times a routine by collecting its output on each iteration. This avoids timing the destructor of the value returned by routine.

WARNING: This requires O(iters * mem::size_of::<O>()) of memory, and iters is not under the control of the caller. If this causes out-of-memory errors, use iter_batched instead.

Timing model

elapsed = Instant::now + iters * (routine) + Iterator::collect::<Vec<_>>

Example

#[macro_use] extern crate criterion;

use criterion::*;

fn create_vector() -> Vec<u64> {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("with_drop", move |b| {
        // This will avoid timing the Vec::drop.
        b.iter_with_large_drop(|| create_vector())
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

pub fn iter_batched<I, O, S, R>(
    &mut self,
    setup: S,
    routine: R,
    size: BatchSize
) where
    S: FnMut() -> I,
    R: FnMut(I) -> O, 
[src]

Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine must consume its input.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model

elapsed = (Instant::now * num_batches) + (iters * (routine + O::drop)) + Vec::extend

Example

#[macro_use] extern crate criterion;

use criterion::*;

fn create_scrambled_data() -> Vec<u64> {
    // ...
}

// The sorting algorithm to test
fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the clone call.
        b.iter_batched(|| data.clone(), |mut data| sort(&mut data), BatchSize::SmallInput)
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

pub fn iter_batched_ref<I, O, S, R>(
    &mut self,
    setup: S,
    routine: R,
    size: BatchSize
) where
    S: FnMut() -> I,
    R: FnMut(&mut I) -> O, 
[src]

Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine should accept the input by mutable reference.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model

elapsed = (Instant::now * num_batches) + (iters * routine) + Vec::extend

Example

#[macro_use] extern crate criterion;

use criterion::*;

fn create_scrambled_data() -> Vec<u64> {
    // ...
}

// The sorting algorithm to test
fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the clone call.
        b.iter_batched_ref(|| data.clone(), |data| sort(data), BatchSize::SmallInput)
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

Auto Trait Implementations

impl<'a, M> RefUnwindSafe for Bencher<'a, M> where
    M: RefUnwindSafe,
    <M as Measurement>::Value: RefUnwindSafe

impl<'a, M> Send for Bencher<'a, M> where
    M: Sync,
    <M as Measurement>::Value: Send

impl<'a, M> Sync for Bencher<'a, M> where
    M: Sync,
    <M as Measurement>::Value: Sync

impl<'a, M> Unpin for Bencher<'a, M> where
    <M as Measurement>::Value: Unpin

impl<'a, M> UnwindSafe for Bencher<'a, M> where
    M: RefUnwindSafe,
    <M as Measurement>::Value: UnwindSafe

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.