Struct criterion::Criterion

pub struct Criterion<M: Measurement = WallTime> { /* fields omitted */ }

The benchmark manager

Criterion lets you configure and execute benchmarks.

Each benchmark consists of four phases:

  • Warm-up: The routine is repeatedly executed, to let the CPU/OS/JIT/interpreter adapt to the new load
  • Measurement: The routine is repeatedly executed, and timing information is collected into a sample
  • Analysis: The sample is analyzed and distilled into meaningful statistics that get reported to stdout, stored in files, and plotted
  • Comparison: The current sample is compared with the sample obtained in the previous benchmark.

Methods

impl<M: Measurement> Criterion<M>[src]

pub fn with_measurement<M2: Measurement>(self, m: M2) -> Criterion<M2>[src]

Changes the measurement for the benchmarks run with this runner. See the Measurement trait for more details.

pub fn with_profiler<P: Profiler + 'static>(self, p: P) -> Criterion<M>[src]

Changes the internal profiler for benchmarks run with this runner. See the Profiler trait for more details.

pub fn sample_size(self, n: usize) -> Criterion<M>[src]

Changes the default size of the sample for benchmarks run with this runner.

A bigger sample should yield more accurate results if paired with a sufficiently large measurement time.

Sample size must be at least 10.

Panics

Panics if n < 10

pub fn warm_up_time(self, dur: Duration) -> Criterion<M>[src]

Changes the default warm-up time for benchmarks run with this runner.

Panics

Panics if the input duration is zero

pub fn measurement_time(self, dur: Duration) -> Criterion<M>[src]

Changes the default measurement time for benchmarks run with this runner.

With a longer time, the measurement will become more resilient to transitory peak loads caused by external programs.

Note: If the measurement time is too low, Criterion will automatically increase it.

Panics

Panics if the input duration is zero

pub fn nresamples(self, n: usize) -> Criterion<M>[src]

Changes the default number of resamples for benchmarks run with this runner.

This is the number of resamples used for the bootstrap.

A larger number of resamples reduces the random sampling errors inherent to the bootstrap method, but also increases the analysis time.

Panics

Panics if the number of resamples is set to zero

pub fn noise_threshold(self, threshold: f64) -> Criterion<M>[src]

Changes the default noise threshold for benchmarks run with this runner.

This threshold is used to decide if an increase of X% in the execution time is considered significant or should be flagged as noise

Note: A value of 0.02 is equivalent to 2%

Panics

Panics if the threshold is set to a negative value

pub fn confidence_level(self, cl: f64) -> Criterion<M>[src]

Changes the default confidence level for benchmarks run with this runner

The confidence level is used to calculate the confidence intervals of the estimated statistics

Panics

Panics if the confidence level is set to a value outside the (0, 1) range

pub fn significance_level(self, sl: f64) -> Criterion<M>[src]

Changes the default significance level for benchmarks run with this runner

The significance level is used for hypothesis testing

Panics

Panics if the significance level is set to a value outside the (0, 1) range

pub fn with_plots(self) -> Criterion<M>[src]

Enables plotting

pub fn without_plots(self) -> Criterion<M>[src]

Disables plotting

pub fn can_plot(&self) -> bool[src]

Returns true if generation of the plots is possible.

pub fn save_baseline(self, baseline: String) -> Criterion<M>[src]

Names an explicit baseline and enables overwriting the previous results.

pub fn retain_baseline(self, baseline: String) -> Criterion<M>[src]

Names an explicit baseline and disables overwriting the previous results.
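In practice these two methods are usually driven from the command line rather than called directly; assuming the standard Criterion CLI flags (`--save-baseline` and `--baseline`), a typical compare-against-baseline workflow might look like:

```shell
# Run the benchmarks and save (overwriting) results under the baseline name "before"
cargo bench -- --save-baseline before

# ...make your code changes...

# Run again, comparing against the saved baseline without overwriting it
cargo bench -- --baseline before
```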

pub fn with_filter<S: Into<String>>(self, filter: S) -> Criterion<M>[src]

Filters the benchmarks. Only benchmarks with names that contain the given string will be executed.

pub fn configure_from_args(self) -> Criterion<M>[src]

Configures this Criterion struct based on the command-line arguments passed to this process.
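The configuration methods above are designed to be chained, and a common pattern is to build a custom Criterion in the `config` slot of `criterion_group!`. A sketch, using the document's example style (the specific settings and the `fibonacci` helper are illustrative, not recommendations):

```rust
#[macro_use] extern crate criterion;
use self::criterion::*;
use std::time::Duration;

// Build a Criterion with custom defaults. Calling configure_from_args
// last still lets command-line flags override these settings.
fn custom_criterion() -> Criterion {
    Criterion::default()
        .sample_size(50)                           // must be at least 10, or this panics
        .warm_up_time(Duration::from_secs(1))
        .measurement_time(Duration::from_secs(10))
        .noise_threshold(0.05)                     // changes under 5% are treated as noise
        .configure_from_args()
}

// Illustrative iterative Fibonacci used as the benchmarked routine.
fn fibonacci(n: u64) -> u64 {
    (1..=n).fold((0u64, 1u64), |(a, b), _| (b, a + b)).0
}

fn bench(c: &mut Criterion) {
    c.bench_function("fib 10", |b| b.iter(|| fibonacci(black_box(10))));
}

criterion_group! {
    name = benches;
    config = custom_criterion();
    targets = bench
}
criterion_main!(benches);
```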

pub fn benchmark_group<S: Into<String>>(
    &mut self,
    group_name: S
) -> BenchmarkGroup<M>
[src]

Returns a benchmark group. All benchmarks performed using a benchmark group will be grouped together in the final report.

Examples:

#[macro_use] extern crate criterion;
use self::criterion::*;

fn bench_simple(c: &mut Criterion) {
    let mut group = c.benchmark_group("My Group");

    // Now we can perform benchmarks with this group
    group.bench_function("Bench 1", |b| b.iter(|| 1 ));
    group.bench_function("Bench 2", |b| b.iter(|| 2 ));
    
    group.finish();
}
criterion_group!(benches, bench_simple);
criterion_main!(benches);

impl<M> Criterion<M> where
    M: Measurement + 'static, 
[src]

pub fn bench_function<F>(&mut self, id: &str, f: F) -> &mut Criterion<M> where
    F: FnMut(&mut Bencher<M>), 
[src]

Benchmarks a function. For comparing multiple functions, see benchmark_group.

Example

#[macro_use] extern crate criterion;
use self::criterion::*;

fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    c.bench_function(
        "function_name",
        |b| b.iter(|| {
            // Code to benchmark goes here
        }),
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);

pub fn bench_with_input<F, I>(
    &mut self,
    id: BenchmarkId,
    input: &I,
    f: F
) -> &mut Criterion<M> where
    F: FnMut(&mut Bencher<M>, &I), 
[src]

Benchmarks a function with an input. For comparing multiple functions or multiple inputs, see benchmark_group.

Example

#[macro_use] extern crate criterion;
use self::criterion::*;

fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    let input = 5u64;
    c.bench_with_input(
        BenchmarkId::new("function_name", input), &input,
        |b, i| b.iter(|| {
            // Code to benchmark using input `i` goes here
        }),
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);

Trait Implementations

impl Default for Criterion[src]

fn default() -> Criterion[src]

Creates a benchmark manager with the following default settings:

  • Sample size: 100 measurements
  • Warm-up time: 3 s
  • Measurement time: 5 s
  • Bootstrap size: 100 000 resamples
  • Noise threshold: 0.01 (1%)
  • Confidence level: 0.95
  • Significance level: 0.05
  • Plotting: enabled (if gnuplot is available)
  • No filter

Auto Trait Implementations

impl<M = WallTime> !Send for Criterion<M>

impl<M> Unpin for Criterion<M> where
    M: Unpin

impl<M = WallTime> !Sync for Criterion<M>

impl<M = WallTime> !UnwindSafe for Criterion<M>

impl<M = WallTime> !RefUnwindSafe for Criterion<M>

Blanket Implementations

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> Any for T where
    T: 'static + ?Sized
[src]