Struct criterion::AsyncBencher

pub struct AsyncBencher<'a, 'b, A: AsyncExecutor, M: Measurement = WallTime> { /* private fields */ }

Async/await variant of the Bencher struct.
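
An AsyncBencher is not constructed directly; it is obtained from a Bencher by calling to_async with a type that implements AsyncExecutor. The sketch below is a minimal example assuming criterion is built with the async_tokio feature, so that a tokio::runtime::Runtime can serve as the executor; the FuturesExecutor used in the method examples further down works the same way.

#[macro_use] extern crate criterion;

use criterion::*;
use tokio::runtime::Runtime;

async fn fetch() -> u64 {
    // ...
    42
}

fn bench(c: &mut Criterion) {
    // Assumes the `async_tokio` feature, which lets a Tokio Runtime
    // (or a reference to one) be passed to `to_async`.
    let rt = Runtime::new().unwrap();
    c.bench_function("fetch", |b| {
        // `to_async` turns the Bencher into an AsyncBencher backed by `rt`.
        b.to_async(&rt).iter(|| async { fetch().await })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);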

Implementations

impl<'a, 'b, A: AsyncExecutor, M: Measurement> AsyncBencher<'a, 'b, A, M>

pub fn iter<O, R, F>(&mut self, routine: R) where R: FnMut() -> F, F: Future<Output = O>

Times a routine by executing it many times and timing the total elapsed time.

Prefer this timing loop when routine returns a value that doesn’t have a destructor.

Timing model

Note that the AsyncBencher also times the time required to destroy the output of routine(). Therefore prefer this timing loop when the runtime of mem::drop(O) is negligible compared to the runtime of the routine.

elapsed = Instant::now + iters * (routine + mem::drop(O) + Range::next)
Example
#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

// The function to benchmark
async fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.to_async(FuturesExecutor).iter(|| async { foo().await } )
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
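
If the routine computes a value whose only use is to be returned, the optimizer may remove the computation entirely; wrapping the result in black_box (re-exported by criterion and used in the iter_custom example below) prevents that, for instance:

b.to_async(FuturesExecutor).iter(|| async { black_box(foo().await) })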

pub fn iter_custom<R, F>(&mut self, routine: R) where R: FnMut(u64) -> F, F: Future<Output = M::Value>

Times a routine by executing it many times and relying on routine to measure its own execution time.

Prefer this timing loop in cases where routine has to do its own measurements to get accurate timing information (for example in multi-threaded scenarios where you spawn and coordinate with multiple threads).

Timing model

Custom: the timing model is whatever Duration the routine itself returns.

Example
#[macro_use] extern crate criterion;
use criterion::*;
use criterion::black_box;
use criterion::async_executor::FuturesExecutor;
use std::time::Instant;

async fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.to_async(FuturesExecutor).iter_custom(|iters| {
            async move {
                let start = Instant::now();
                for _i in 0..iters {
                    black_box(foo().await);
                }
                start.elapsed()
            }
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
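
iter_custom is also the loop to reach for when only part of each iteration should be counted. The sketch below is a hypothetical variation (make_input and process stand in for your own code) that performs untimed per-iteration setup and accumulates only the time spent in the measured call:

#[macro_use] extern crate criterion;
use criterion::*;
use criterion::black_box;
use criterion::async_executor::FuturesExecutor;
use std::time::{Duration, Instant};

// Hypothetical helper: builds the input for one iteration (not timed).
fn make_input() -> Vec<u64> {
    // ...
    Vec::new()
}

// Hypothetical helper: the code under test.
async fn process(data: Vec<u64>) -> u64 {
    // ...
    data.len() as u64
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter_custom_partial", move |b| {
        b.to_async(FuturesExecutor).iter_custom(|iters| {
            async move {
                let mut total = Duration::ZERO;
                for _i in 0..iters {
                    let input = make_input();          // excluded from the measurement
                    let start = Instant::now();
                    black_box(process(input).await);   // only this call is timed
                    total += start.elapsed();
                }
                total
            }
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);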

pub fn iter_with_large_drop<O, R, F>(&mut self, routine: R) where R: FnMut() -> F, F: Future<Output = O>

Times a routine by collecting its output on each iteration. This avoids timing the destructor of the value returned by routine.

WARNING: This requires O(iters * mem::size_of::<O>()) of memory, and iters is not under the control of the caller. If this causes out-of-memory errors, use iter_batched instead.

Timing model
elapsed = Instant::now + iters * (routine) + Iterator::collect::<Vec<_>>
Example
#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

async fn create_vector() -> Vec<u64> {
    // ...
    Vec::new()
}

fn bench(c: &mut Criterion) {
    c.bench_function("with_drop", move |b| {
        // This will avoid timing the Vec::drop.
        b.to_async(FuturesExecutor).iter_with_large_drop(|| async { create_vector().await })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

pub fn iter_batched<I, O, S, R, F>(&mut self, setup: S, routine: R, size: BatchSize) where S: FnMut() -> I, R: FnMut(I) -> F, F: Future<Output = O>

Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine must consume its input.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model
elapsed = (Instant::now * num_batches) + (iters * (routine + O::drop)) + Vec::extend
Example
#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

fn create_scrambled_data() -> Vec<u64> {
    // ...
    Vec::new()
}

// The sorting algorithm to test
async fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This avoids timing the clone done in the setup closure.
        b.to_async(FuturesExecutor).iter_batched(|| data.clone(), |mut data| async move { sort(&mut data).await }, BatchSize::SmallInput)
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
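
The choice of BatchSize mainly depends on how costly it is to keep many setup outputs in memory at once. The sketch below is a hypothetical variant of the example above: if create_scrambled_data produced very large vectors, BatchSize::LargeInput (or BatchSize::PerIteration for inputs that must be regenerated before every single call) would be the safer choice.

#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

fn create_scrambled_data() -> Vec<u64> {
    // ... assume this produces a very large vector ...
    Vec::new()
}

async fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup_large", move |b| {
        b.to_async(FuturesExecutor).iter_batched(
            || data.clone(),
            |mut data| async move { sort(&mut data).await },
            // LargeInput keeps fewer cloned inputs alive per batch than
            // SmallInput; PerIteration regenerates the input before every call.
            BatchSize::LargeInput,
        )
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);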

pub fn iter_batched_ref<I, O, S, R, F>(&mut self, setup: S, routine: R, size: BatchSize) where S: FnMut() -> I, R: FnMut(&mut I) -> F, F: Future<Output = O>

Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine should accept the input by mutable reference.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model
elapsed = (Instant::now * num_batches) + (iters * routine) + Vec::extend
Example
#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

fn create_scrambled_data() -> Vec<u64> {
    // ...
    Vec::new()
}

// The sorting algorithm to test
async fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This avoids timing the clone done in the setup closure.
        b.to_async(FuturesExecutor).iter_batched(|| data.clone(), |mut data| async move { sort(&mut data).await }, BatchSize::SmallInput)
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

Auto Trait Implementations

impl<'a, 'b, A, M> RefUnwindSafe for AsyncBencher<'a, 'b, A, M> where A: RefUnwindSafe, M: RefUnwindSafe, <M as Measurement>::Value: RefUnwindSafe

impl<'a, 'b, A, M> Send for AsyncBencher<'a, 'b, A, M> where A: Send, M: Sync, <M as Measurement>::Value: Send

impl<'a, 'b, A, M> Sync for AsyncBencher<'a, 'b, A, M> where A: Sync, M: Sync, <M as Measurement>::Value: Sync

impl<'a, 'b, A, M> Unpin for AsyncBencher<'a, 'b, A, M> where A: Unpin

impl<'a, 'b, A, M = WallTime> !UnwindSafe for AsyncBencher<'a, 'b, A, M>

Blanket Implementations

impl<T> Any for T where T: 'static + ?Sized

    fn type_id(&self) -> TypeId
    Gets the TypeId of self.

impl<T> Borrow<T> for T where T: ?Sized

    fn borrow(&self) -> &T
    Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T where T: ?Sized

    fn borrow_mut(&mut self) -> &mut T
    Mutably borrows from an owned value.

impl<T> From<T> for T

    fn from(t: T) -> T
    Returns the argument unchanged.

impl<T, U> Into<U> for T where U: From<T>

    fn into(self) -> U
    Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> Pointable for T

    const ALIGN: usize = mem::align_of::<T>()
    The alignment of pointer.

    type Init = T
    The type for initializers.

    unsafe fn init(init: <T as Pointable>::Init) -> usize
    Initializes a with the given initializer.

    unsafe fn deref<'a>(ptr: usize) -> &'a T
    Dereferences the given pointer.

    unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T
    Mutably dereferences the given pointer.

    unsafe fn drop(ptr: usize)
    Drops the object pointed to by the given pointer.

impl<T, U> TryFrom<U> for T where U: Into<T>

    type Error = Infallible
    The type returned in the event of a conversion error.

    fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
    Performs the conversion.

impl<T, U> TryInto<U> for T where U: TryFrom<T>

    type Error = <U as TryFrom<T>>::Error
    The type returned in the event of a conversion error.

    fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
    Performs the conversion.