Struct criterion::BenchmarkGroup

pub struct BenchmarkGroup<'a, M: Measurement> { /* fields omitted */ }

Structure used to group together a set of related benchmarks, along with custom configuration settings that apply to the whole group. All benchmarks performed using a benchmark group will be grouped together in the final report.

Examples:

#[macro_use] extern crate criterion;
use self::criterion::*;
use std::time::Duration;

fn bench_simple(c: &mut Criterion) {
    let mut group = c.benchmark_group("My Group");

    // Now we can perform benchmarks with this group
    group.bench_function("Bench 1", |b| b.iter(|| 1 ));
    group.bench_function("Bench 2", |b| b.iter(|| 2 ));
    
    // It's recommended to call group.finish() explicitly at the end, but if you don't it will
    // be called automatically when the group is dropped.
    group.finish();
}

fn bench_nested(c: &mut Criterion) {
    let mut group = c.benchmark_group("My Second Group");
    // We can override the configuration on a per-group level
    group.measurement_time(Duration::from_secs(1));

    // We can also use loops to define multiple benchmarks, even over multiple dimensions.
    for x in 0..3 {
        for y in 0..3 {
            let point = (x, y);
            let parameter_string = format!("{} * {}", x, y);
            group.bench_with_input(BenchmarkId::new("Multiply", parameter_string), &point,
                |b, (p_x, p_y)| b.iter(|| p_x * p_y));
        }
    }
    
    group.finish();
}

fn bench_throughput(c: &mut Criterion) {
    let mut group = c.benchmark_group("Summation");
     
    for size in [1024, 2048, 4096].iter() {
        // Generate input of an appropriate size...
        let input = vec![1u64; *size];

        // We can use the throughput function to tell Criterion.rs how large the input is
        // so it can calculate the overall throughput of the function. If we wanted, we could
        // even change the benchmark configuration for different inputs (e.g. to reduce the
        // number of samples for extremely large and slow inputs) or even different functions.
        group.throughput(Throughput::Elements(*size as u64));

        group.bench_with_input(BenchmarkId::new("sum", *size), &input,
            |b, i| b.iter(|| i.iter().sum::<u64>()));
        group.bench_with_input(BenchmarkId::new("fold", *size), &input,
            |b, i| b.iter(|| i.iter().fold(0u64, |a, b| a + b)));
    }

    group.finish();
}

criterion_group!(benches, bench_simple, bench_nested, bench_throughput);
criterion_main!(benches);

Methods

impl<'a, M: Measurement> BenchmarkGroup<'a, M>

pub fn sample_size(&mut self, n: usize) -> &mut Self

Changes the size of the sample for this benchmark group.

A bigger sample should yield more accurate results if paired with a sufficiently large measurement time.

Sample size must be at least 10.

Panics

Panics if n < 10.
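
For example, a benchmark whose iterations are expensive can use a smaller sample to keep the total run time down. A minimal sketch, following the examples above (my_slow_function is a hypothetical stand-in for the code under test):

fn bench_small_sample(c: &mut Criterion) {
    let mut group = c.benchmark_group("Small Sample");
    // Collect 10 samples per benchmark instead of the default 100.
    group.sample_size(10);
    group.bench_function("slow", |b| b.iter(|| my_slow_function()));
    group.finish();
}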

pub fn warm_up_time(&mut self, dur: Duration) -> &mut Self

Changes the warm-up time for this benchmark group.

Panics

Panics if the input duration is zero.
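
For example, a sketch that shortens the warm-up period from the default of three seconds:

fn bench_short_warm_up(c: &mut Criterion) {
    let mut group = c.benchmark_group("Short Warm Up");
    // Spend one second warming up instead of the default three.
    group.warm_up_time(Duration::from_secs(1));
    group.bench_function("warm", |b| b.iter(|| 1 + 1));
    group.finish();
}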

pub fn measurement_time(&mut self, dur: Duration) -> &mut Self

Changes the target measurement time for this benchmark group.

Criterion will attempt to spend approximately this amount of time measuring each benchmark on a best-effort basis. If it is not possible to perform the measurement in the requested time (e.g. because each iteration of the benchmark is long), then Criterion will spend as long as is needed to collect the desired number of samples. With a longer time, the measurement will become more resilient to interference from other programs.

Panics

Panics if the input duration is zero.

pub fn nresamples(&mut self, n: usize) -> &mut Self

Changes the number of resamples to use for the bootstrap analysis in this benchmark group.

A larger number of resamples reduces the random sampling errors which are inherent to the bootstrap method, but also increases the analysis time.

Panics

Panics if the number of resamples is set to zero.
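
For example, a sketch that trades some precision in the bootstrap estimates for a faster analysis phase:

fn bench_fewer_resamples(c: &mut Criterion) {
    let mut group = c.benchmark_group("Fewer Resamples");
    // Use 10,000 bootstrap resamples instead of the default 100,000.
    group.nresamples(10_000);
    group.bench_function("quick analysis", |b| b.iter(|| 2 * 2));
    group.finish();
}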

pub fn noise_threshold(&mut self, threshold: f64) -> &mut Self

Changes the noise threshold for benchmarks in this group. The noise threshold is used to filter out small changes in performance from one run to the next, even if they are statistically significant. Sometimes benchmarking the same code twice will result in small but statistically significant differences solely because of noise. This provides a way to filter out some of these false positives at the cost of making it harder to detect small changes to the true performance of the benchmark.

The default is 0.01, meaning that changes smaller than 1% will be ignored.

Panics

Panics if the threshold is set to a negative value.
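
For example, a sketch that treats any run-to-run change smaller than 5% as noise:

fn bench_noisy(c: &mut Criterion) {
    let mut group = c.benchmark_group("Noisy");
    // Ignore changes smaller than 5% (0.05) instead of the default 1%.
    group.noise_threshold(0.05);
    group.bench_function("noisy", |b| b.iter(|| 3 * 3));
    group.finish();
}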

pub fn confidence_level(&mut self, cl: f64) -> &mut Self

Changes the confidence level for benchmarks in this group. The confidence level is the desired probability that the true runtime lies within the estimated confidence interval. The default is 0.95, meaning that the confidence interval should capture the true value 95% of the time.

Panics

Panics if the confidence level is set to a value outside the (0, 1) range.
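
For example, a sketch requesting more conservative confidence intervals:

fn bench_high_confidence(c: &mut Criterion) {
    let mut group = c.benchmark_group("High Confidence");
    // Report 99% confidence intervals instead of the default 95%.
    group.confidence_level(0.99);
    group.bench_function("confident", |b| b.iter(|| 4 * 4));
    group.finish();
}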

pub fn significance_level(&mut self, sl: f64) -> &mut Self

Changes the significance level for benchmarks in this group. This is used to perform a hypothesis test to see if the measurements from this run are different from the measured performance of the last run. The significance level is the desired probability that two measurements of identical code will be considered 'different' due to noise in the measurements. The default value is 0.05, meaning that approximately 5% of identical benchmarks will register as different due to noise.

This presents a trade-off. By setting the significance level closer to 0.0, you can increase the statistical robustness against noise, but it also weakens Criterion.rs' ability to detect small but real changes in performance. By setting the significance level closer to 1.0, Criterion.rs will be better able to detect small true changes, but will also report more spurious differences.

See also the noise threshold setting.

Panics

Panics if the significance level is set to a value outside the (0, 1) range.
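
For example, a sketch that requires stronger evidence before a change is reported:

fn bench_strict(c: &mut Criterion) {
    let mut group = c.benchmark_group("Strict");
    // Roughly 1% of identical benchmarks will register as changed, down from the default 5%.
    group.significance_level(0.01);
    group.bench_function("strict", |b| b.iter(|| 5 * 5));
    group.finish();
}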

pub fn plot_config(&mut self, new_config: PlotConfiguration) -> &mut Self

Changes the plot configuration for this benchmark group.
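
For example, benchmarks whose inputs span several orders of magnitude are often easier to read on a logarithmic axis. A sketch using the PlotConfiguration and AxisScale types from this crate:

fn bench_log_scale(c: &mut Criterion) {
    let mut group = c.benchmark_group("Log Scale");
    // Plot the summary chart on a logarithmic axis rather than a linear one.
    let plot_config = PlotConfiguration::default().summary_scale(AxisScale::Logarithmic);
    group.plot_config(plot_config);
    group.bench_function("log", |b| b.iter(|| 6 * 6));
    group.finish();
}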

pub fn throughput(&mut self, throughput: Throughput) -> &mut Self

Sets the input size for this benchmark group. This is used when reporting throughput.
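
The Summation example above reports throughput in elements; Throughput::Bytes works the same way for byte-oriented benchmarks. A sketch:

fn bench_bytes(c: &mut Criterion) {
    let mut group = c.benchmark_group("Checksum");
    let data = vec![0u8; 16384];
    // Report throughput as bytes processed per second.
    group.throughput(Throughput::Bytes(data.len() as u64));
    group.bench_with_input(BenchmarkId::new("sum bytes", data.len()), &data,
        |b, d| b.iter(|| d.iter().map(|&x| x as u64).sum::<u64>()));
    group.finish();
}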

pub fn bench_function<ID: IntoBenchmarkId, F>(
    &mut self,
    id: ID,
    f: F
) -> &mut Self where
    F: FnMut(&mut Bencher<M>), 

Benchmark the given parameterless function inside this benchmark group.

pub fn bench_with_input<ID: IntoBenchmarkId, F, I>(
    &mut self,
    id: ID,
    input: &I,
    f: F
) -> &mut Self where
    F: FnMut(&mut Bencher<M>, &I),
    I: ?Sized

Benchmark the given parameterized function inside this benchmark group.

pub fn finish(self)

Consume the benchmark group and generate the summary reports for the group.

It is recommended to call this explicitly, but if you forget, it will be called automatically when the group is dropped.

Trait Implementations

impl<'a, M: Measurement> Drop for BenchmarkGroup<'a, M>

Auto Trait Implementations

impl<'a, M> !RefUnwindSafe for BenchmarkGroup<'a, M>

impl<'a, M> !Send for BenchmarkGroup<'a, M>

impl<'a, M> !Sync for BenchmarkGroup<'a, M>

impl<'a, M> Unpin for BenchmarkGroup<'a, M>

impl<'a, M> !UnwindSafe for BenchmarkGroup<'a, M>

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> From<T> for T

impl<T, U> Into<U> for T where
    U: From<T>, 

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.