Struct criterion::Criterion

pub struct Criterion { /* fields omitted */ }

The benchmark manager

Criterion lets you configure and execute benchmarks

Each benchmark consists of four phases:

  • Warm-up: The routine is repeatedly executed, to let the CPU/OS/JIT/interpreter adapt to the new load
  • Measurement: The routine is repeatedly executed, and timing information is collected into a sample
  • Analysis: The sample is analyzed and distilled into meaningful statistics that get reported to stdout, stored in files, and plotted
  • Comparison: The current sample is compared with the sample obtained in the previous benchmark.

Methods

impl Criterion

pub fn sample_size(self, n: usize) -> Criterion

Changes the default size of the sample for benchmarks run with this runner.

A bigger sample should yield more accurate results if paired with a "sufficiently" large measurement time; on the other hand, it also increases the analysis time

Panics

Panics if set to zero
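
For example, a minimal sketch of raising the sample size through the builder-style API (the value 200 is arbitrary):

use criterion::Criterion;

let criterion = Criterion::default().sample_size(200);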

pub fn warm_up_time(self, dur: Duration) -> Criterion

Changes the default warm up time for benchmarks run with this runner.

Panics

Panics if the input duration is zero

pub fn measurement_time(self, dur: Duration) -> Criterion

Changes the default measurement time for benchmarks run with this runner.

With a longer time, the measurement will become more resilient to transitory peak loads caused by external programs

Note: If the measurement time is too "low", Criterion will automatically increase it

Panics

Panics if the input duration is zero
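
As a sketch, both warm_up_time and measurement_time take a std::time::Duration; the values below are arbitrary:

use std::time::Duration;
use criterion::Criterion;

let criterion = Criterion::default()
    .warm_up_time(Duration::from_secs(1))
    .measurement_time(Duration::from_secs(10));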

pub fn nresamples(self, n: usize) -> Criterion

Changes the default number of resamples for benchmarks run with this runner.

This is the number of resamples to use for the bootstrap procedure.

A larger number of resamples reduces the random sampling errors, which are inherent to the bootstrap method, but also increases the analysis time

Panics

Panics if the number of resamples is set to zero

pub fn noise_threshold(self, threshold: f64) -> Criterion

Changes the default noise threshold for benchmarks run with this runner.

This threshold is used to decide if an increase of X% in the execution time is considered significant or should be flagged as noise

Note: A value of 0.02 is equivalent to 2%

Panics

Panics if the threshold is set to a negative value

pub fn confidence_level(self, cl: f64) -> Criterion

Changes the default confidence level for benchmarks run with this runner

The confidence level is used to calculate the confidence intervals of the estimated statistics

Panics

Panics if the confidence level is set to a value outside the (0, 1) range

pub fn significance_level(self, sl: f64) -> Criterion

Changes the default significance level for benchmarks run with this runner

The significance level is used for hypothesis testing

Panics

Panics if the significance level is set to a value outside the (0, 1) range
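
A sketch that chains the statistical settings described above; the values are illustrative, not recommendations:

use criterion::Criterion;

let criterion = Criterion::default()
    .nresamples(200_000)
    .noise_threshold(0.02) // changes below 2% are treated as noise
    .confidence_level(0.99)
    .significance_level(0.01);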

pub fn with_plots(self) -> Criterion

Enables plotting

pub fn without_plots(self) -> Criterion

Disables plotting

pub fn can_plot(&self) -> bool

Checks if plotting is possible
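
For example, a sketch that keeps plotting enabled only when it is actually possible on this system:

use criterion::Criterion;

let criterion = Criterion::default();
let criterion = if criterion.can_plot() {
    criterion.with_plots()
} else {
    criterion.without_plots()
};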

pub fn with_filter<S: Into<String>>(self, filter: S) -> Criterion

Filters the benchmarks. Only benchmarks with names that contain the given string will be executed.
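
A sketch, assuming hypothetical benchmark names like "fib 10" and "hash"; only the names containing "fib" would be executed:

use criterion::Criterion;

let criterion = Criterion::default().with_filter("fib");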

pub fn configure_from_args(self) -> Criterion

Configures this Criterion struct based on the command-line arguments passed to this process.
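
For example, a sketch of a harness that applies the command-line configuration before running a benchmark (the "noop" routine is a placeholder):

use criterion::{Bencher, Criterion};

Criterion::default()
    .configure_from_args()
    .bench_function("noop", |b: &mut Bencher| b.iter(|| ()));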

pub fn bench_function<F>(&mut self, id: &str, f: F) -> &mut Criterion where F: FnMut(&mut Bencher) + 'static

Benchmarks a function

The function under test must follow the setup - bench - teardown pattern:

use criterion::{Bencher, Criterion};

fn routine(b: &mut Bencher) {
    // Setup (construct data, allocate memory, etc)

    b.iter(|| {
        // Code to benchmark goes here
    });

    // Teardown (free resources)
}

Criterion::default().bench_function("routine", routine);

pub fn bench_functions<I>(&mut self, id: &str, funs: Vec<Fun<I>>, input: I) -> &mut Criterion where I: Debug + 'static

Benchmarks multiple functions

All functions get the same input and are compared with the other implementations. This works similarly to bench_function, but with multiple functions.


use criterion::{Bencher, Criterion, Fun};

// `seq_fib` and `par_fib` are the two implementations being compared (defined elsewhere)
fn bench_seq_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        seq_fib(i);
    });
}

fn bench_par_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        par_fib(i);
    });
}

let sequential_fib = Fun::new("Sequential", bench_seq_fib);
let parallel_fib = Fun::new("Parallel", bench_par_fib);
let funs = vec![sequential_fib, parallel_fib];

Criterion::default().bench_functions("Fibonacci", funs, 14);

pub fn bench_function_over_inputs<I, F>(&mut self, id: &str, f: F, inputs: I) -> &mut Criterion
    where I: IntoIterator, I::Item: Debug + 'static, F: FnMut(&mut Bencher, &I::Item) + 'static

Benchmarks a function under various inputs

This is a convenience method to execute several related benchmarks. Each benchmark will receive the id: ${id}/${input}.

use criterion::{Bencher, Criterion};

Criterion::default()
    .bench_function_over_inputs("from_elem", |b: &mut Bencher, size: &usize| {
        b.iter(|| vec![0u8; *size]);
    }, vec![1024, 2048, 4096]);

pub fn bench_program(&mut self, id: &str, program: Command) -> &mut Criterion

Benchmarks an external program

The external program must conform to the following specification:


use std::io::{self, BufRead};
use std::time::Instant;

fn main() {
    let stdin = io::stdin();
    let stdin = stdin.lock();

    // For each line in stdin
    for line in stdin.lines() {
        // Parse line as the number of iterations
        let iters: u64 = line.unwrap().trim().parse().unwrap();

        // Setup

        // Benchmark
        let start = Instant::now();
        // Execute the routine "iters" times
        for _ in 0..iters {
            // Code to benchmark goes here
        }
        let elapsed = start.elapsed();

        // Teardown

        // Report elapsed time in nanoseconds to stdout
        let nanos = elapsed.as_secs() * 1_000_000_000 + u64::from(elapsed.subsec_nanos());
        println!("{}", nanos);
    }
}
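
On the harness side, a sketch of the call, assuming bench_program accepts a std::process::Command (the binary path is a placeholder):

use std::process::Command;
use criterion::Criterion;

Criterion::default().bench_program(
    "external_routine",
    Command::new("path/to/benchmark-binary"), // hypothetical path
);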

pub fn bench_program_over_inputs<I, F>(&mut self, id: &str, program: F, inputs: I) -> &mut Criterion
    where F: FnMut() -> Command + 'static, I: IntoIterator, I::Item: Debug + 'static

Benchmarks an external program under various inputs

This is a convenience method to execute several related benchmarks. Each benchmark will receive the id: ${id}/${input}.
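
Under the same assumptions, a sketch where a closure builds a fresh Command for each run and the inputs come from a vector (the path and values are placeholders):

use std::process::Command;
use criterion::Criterion;

Criterion::default().bench_program_over_inputs(
    "external_fib",
    || Command::new("path/to/fib-binary"), // hypothetical path
    vec![10u32, 20, 30],
);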

pub fn bench<B: BenchmarkDefinition>(&mut self, group_id: &str, benchmark: B) -> &mut Criterion

Executes the given benchmark. Use this variant to execute benchmarks with complex configuration.

use criterion::{Bencher, Criterion, Benchmark};

fn routine(b: &mut Bencher) {
    // Setup (construct data, allocate memory, etc)

    b.iter(|| {
        // Code to benchmark goes here
    });

    // Teardown (free resources)
}

Criterion::default()
    .bench("routine", Benchmark::new("routine", routine)
        .sample_size(50));

Trait Implementations

impl Default for Criterion

fn default() -> Criterion

Creates a benchmark manager with the following default settings:

  • Sample size: 100 measurements
  • Warm-up time: 3 s
  • Measurement time: 5 s
  • Bootstrap size: 100 000 resamples
  • Noise threshold: 0.01 (1%)
  • Confidence level: 0.95
  • Significance level: 0.05
  • Plotting: enabled (if gnuplot is available)
  • No filter

Auto Trait Implementations

impl !Send for Criterion

impl !Sync for Criterion