pub struct AsyncBencher<'a, 'b, A: AsyncExecutor, M: Measurement = WallTime> { /* private fields */ }

Async/await variant of the Bencher struct.

Implementations

iter

Times a routine by executing it many times and timing the total elapsed time.

Prefer this timing loop when the routine returns a value that doesn’t have a destructor.

Timing model

Note that the AsyncBencher also times the time required to destroy the output of routine(). Therefore prefer this timing loop when the runtime of mem::drop(O) is negligible compared to the runtime of the routine.

elapsed = Instant::now + iters * (routine + mem::drop(O) + Range::next)
Example
#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

// The function to benchmark
async fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.to_async(FuturesExecutor).iter(|| async { foo().await } )
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

iter_custom

Times a routine by executing it many times and relying on the routine to measure its own execution time.

Prefer this timing loop in cases where the routine has to do its own measurements to get accurate timing information (for example, in multi-threaded scenarios where you spawn and coordinate with multiple threads).

Timing model

Custom: the elapsed time is whatever Duration the routine itself returns.

Example
#[macro_use] extern crate criterion;
use criterion::*;
use criterion::black_box;
use criterion::async_executor::FuturesExecutor;
use std::time::Instant;

async fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.to_async(FuturesExecutor).iter_custom(|iters| {
            async move {
                let start = Instant::now();
                for _i in 0..iters {
                    black_box(foo().await);
                }
                start.elapsed()
            }
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

iter_with_large_drop

Times a routine by collecting its output on each iteration. This avoids timing the destructor of the value returned by the routine.

WARNING: This requires O(iters * mem::size_of::<O>()) of memory, and iters is not under the control of the caller. If this causes out-of-memory errors, use iter_batched instead.

Timing model
elapsed = Instant::now + iters * (routine) + Iterator::collect::<Vec<_>>
Example
#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

async fn create_vector() -> Vec<u64> {
    // ... (construct and return the vector)
    Vec::new()
}

fn bench(c: &mut Criterion) {
    c.bench_function("with_drop", move |b| {
        // This will avoid timing the Vec::drop.
        b.to_async(FuturesExecutor).iter_with_large_drop(|| async { create_vector().await })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

iter_batched

Times a routine that requires some input by generating a batch of inputs, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine must consume its input.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model
elapsed = (Instant::now * num_batches) + (iters * (routine + O::drop)) + Vec::extend
Example
#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

fn create_scrambled_data() -> Vec<u64> {
    // ... (produce unsorted data)
    Vec::new()
}

// The sorting algorithm to test
async fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the clone call.
        b.to_async(FuturesExecutor).iter_batched(
            || data.clone(),
            |mut data| async move { sort(&mut data).await },
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

iter_batched_ref

Times a routine that requires some input by generating a batch of inputs, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine should accept the input by mutable reference.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model
elapsed = (Instant::now * num_batches) + (iters * routine) + Vec::extend
Example
#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

fn create_scrambled_data() -> Vec<u64> {
    // ... (produce unsorted data)
    Vec::new()
}

// The sorting algorithm to test
async fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the clone call.
        b.to_async(FuturesExecutor).iter_batched_ref(
            || data.clone(),
            |data| async move { sort(data).await },
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
