Struct criterion::Criterion

pub struct Criterion { /* fields omitted */ }

The benchmark manager

Criterion lets you configure and execute benchmarks

Each benchmark consists of four phases:

  • Warm-up: The routine is repeatedly executed, to let the CPU/OS/JIT/interpreter adapt to the new load
  • Measurement: The routine is repeatedly executed, and timing information is collected into a sample
  • Analysis: The sample is analyzed and distilled into meaningful statistics that get reported to stdout, stored in files, and plotted
  • Comparison: The current sample is compared with the sample obtained in the previous benchmark.

Methods

impl Criterion

pub fn sample_size(self, n: usize) -> Criterion

Changes the default size of the sample for benchmarks run with this runner.

A bigger sample should yield more accurate results if paired with a sufficiently large measurement time.

Sample size must be at least 2.

Panics

Panics if set to zero or one
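
For instance, a minimal sketch (assuming the name/config/targets form of the criterion_group! macro; the benchmark body is a placeholder) that raises the sample size for a whole group:

use criterion::{criterion_group, criterion_main, Criterion};

fn bench(c: &mut Criterion) {
    c.bench_function("sum", |b| b.iter(|| (0..100u32).sum::<u32>()));
}

criterion_group! {
    name = benches;
    // 200 measurements instead of the default 100
    config = Criterion::default().sample_size(200);
    targets = bench
}
criterion_main!(benches);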

pub fn warm_up_time(self, dur: Duration) -> Criterion

Changes the default warm up time for benchmarks run with this runner.

Panics

Panics if the input duration is zero

pub fn measurement_time(self, dur: Duration) -> Criterion

Changes the default measurement time for benchmarks run with this runner.

With a longer time, the measurement will become more resilient to transitory peak loads caused by external programs

Note: If the measurement time is too short, Criterion will automatically increase it

Panics

Panics if the input duration is zero
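
The warm-up and measurement durations are often adjusted together. A minimal sketch of a configuration for a noisy machine (the values are only illustrative):

use criterion::Criterion;
use std::time::Duration;

fn custom_criterion() -> Criterion {
    Criterion::default()
        // Give the CPU/OS more time to adapt before measuring
        .warm_up_time(Duration::from_secs(5))
        // Collect timings over a longer window to smooth out load spikes
        .measurement_time(Duration::from_secs(10))
}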

pub fn nresamples(self, n: usize) -> Criterion

Changes the default number of resamples for benchmarks run with this runner.

This is the number of resamples used for the bootstrap.

A larger number of resamples reduces the random sampling errors, which are inherent to the bootstrap method, but also increases the analysis time

Panics

Panics if the number of resamples is set to zero

pub fn noise_threshold(self, threshold: f64) -> Criterion

Changes the default noise threshold for benchmarks run with this runner.

This threshold is used to decide if an increase of X% in the execution time is considered significant or should be flagged as noise

Note: A value of 0.02 is equivalent to 2%

Panics

Panics if the threshold is set to a negative value

pub fn confidence_level(self, cl: f64) -> Criterion

Changes the default confidence level for benchmarks run with this runner

The confidence level is used to calculate the confidence intervals of the estimated statistics

Panics

Panics if the confidence level is set to a value outside the (0, 1) range

pub fn significance_level(self, sl: f64) -> Criterion

Changes the default significance level for benchmarks run with this runner

The significance level is used for hypothesis testing

Panics

Panics if the significance level is set to a value outside the (0, 1) range
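
These four statistical settings are commonly tuned together. A sketch of a more conservative analysis configuration (the values are only illustrative):

use criterion::Criterion;

fn conservative_criterion() -> Criterion {
    Criterion::default()
        // More bootstrap resamples: less sampling error, slower analysis
        .nresamples(200_000)
        // Treat changes smaller than 5% as noise
        .noise_threshold(0.05)
        // Wider, more conservative confidence intervals
        .confidence_level(0.99)
        // Require stronger evidence before reporting a change
        .significance_level(0.01)
}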

pub fn with_plots(self) -> Criterion

Enables plotting

pub fn without_plots(self) -> Criterion

Disables plotting

pub fn can_plot(&self) -> bool

Returns true if generation of the plots is possible.
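
A sketch of choosing the plotting behavior at runtime based on can_plot:

use criterion::Criterion;

fn plotting_criterion() -> Criterion {
    let c = Criterion::default();
    if c.can_plot() {
        // Plot generation is possible (gnuplot was found): enable it
        c.with_plots()
    } else {
        // Plot generation is not possible: skip it
        c.without_plots()
    }
}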

pub fn save_baseline(self, baseline: String) -> Criterion

Names an explicit baseline and enables overwriting the previous results.

pub fn retain_baseline(self, baseline: String) -> Criterion

Names an explicit baseline and disables overwriting the previous results.
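
A sketch of how the two baseline modes are typically used across runs (the baseline name is arbitrary):

use criterion::Criterion;

// First run: record results under a named baseline, overwriting any old one.
fn record_run() -> Criterion {
    Criterion::default().save_baseline("master".to_owned())
}

// Later runs: compare against that baseline without overwriting it.
fn compare_run() -> Criterion {
    Criterion::default().retain_baseline("master".to_owned())
}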

pub fn with_filter<S: Into<String>>(self, filter: S) -> Criterion

Filters the benchmarks. Only benchmarks with names that contain the given string will be executed.

pub fn configure_from_args(self) -> Criterion

Configure this criterion struct based on the command-line arguments to this process.
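
A sketch combining the two, assuming configure_from_args is applied last so that command-line arguments take precedence (the filter string is illustrative):

use criterion::Criterion;

fn filtered_criterion() -> Criterion {
    Criterion::default()
        // Run only benchmarks whose names contain "fib"
        .with_filter("fib")
        // Apply any settings passed on the command line
        .configure_from_args()
}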

pub fn bench_function<F>(&mut self, id: &str, f: F) -> &mut Criterion where
    F: FnMut(&mut Bencher) + 'static,

Benchmarks a function

Example


use criterion::{criterion_group, criterion_main, Criterion};

fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    c.bench_function(
        "function_name",
        |b| b.iter(|| {
            // Code to benchmark goes here
        }),
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);

pub fn bench_functions<I>(
    &mut self,
    id: &str,
    funs: Vec<Fun<I>>,
    input: I
) -> &mut Criterion where
    I: Debug + 'static,

Benchmarks multiple functions

All functions get the same input and are compared with the other implementations. This works similarly to bench_function, but with multiple functions.

Example


use criterion::{criterion_group, criterion_main, Bencher, Criterion, Fun};

// `seq_fib` and `par_fib` stand in for user-provided implementations.
fn bench_seq_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        seq_fib(i);
    });
}

fn bench_par_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        par_fib(i);
    });
}

fn bench(c: &mut Criterion) {
    let sequential_fib = Fun::new("Sequential", bench_seq_fib);
    let parallel_fib = Fun::new("Parallel", bench_par_fib);
    let funs = vec![sequential_fib, parallel_fib];

    c.bench_functions("Fibonacci", funs, 14);
}

criterion_group!(benches, bench);
criterion_main!(benches);

pub fn bench_function_over_inputs<I, F>(
    &mut self,
    id: &str,
    f: F,
    inputs: I
) -> &mut Criterion where
    I: IntoIterator,
    I::Item: Debug + 'static,
    F: FnMut(&mut Bencher, &I::Item) + 'static,

Benchmarks a function under various inputs

This is a convenience method to execute several related benchmarks. Each benchmark will receive the id: ${id}/${input}.

Example


use criterion::{criterion_group, criterion_main, Bencher, Criterion};

fn bench(c: &mut Criterion) {
    c.bench_function_over_inputs("from_elem",
        |b: &mut Bencher, size: &usize| {
            b.iter(|| vec![0u8; *size]);
        },
        vec![1024, 2048, 4096]
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);

pub fn bench_program(&mut self, id: &str, program: Command) -> &mut Criterion

Deprecated since 0.2.6:

External program benchmarks were rarely used and are awkward to maintain, so they are scheduled for deletion in 0.3.0

Benchmarks an external program

The external program must:

  • Read the number of iterations from stdin
  • Execute the routine to benchmark that many times
  • Print the elapsed time (in nanoseconds) to stdout

An example of an external program that implements this protocol:

use std::io::{self, BufRead};
use std::time::Instant;

fn main() {
    let stdin = io::stdin();
    let stdin = stdin.lock();

    // For each line in stdin
    for line in stdin.lines() {
        // Parse the line as the number of iterations
        let iters: u64 = line.unwrap().trim().parse().unwrap();

        // Setup

        // Benchmark
        let start = Instant::now();
        // Execute the routine "iters" times
        for _ in 0..iters {
            // Code to benchmark goes here
        }
        let elapsed = start.elapsed();

        // Teardown

        // Report the elapsed time in nanoseconds on stdout
        println!("{}", elapsed.as_nanos());
    }
}

pub fn bench_program_over_inputs<I, F>(
    &mut self,
    id: &str,
    program: F,
    inputs: I
) -> &mut Criterion where
    F: FnMut() -> Command + 'static,
    I: IntoIterator,
    I::Item: Debug + 'static,

Deprecated since 0.2.6:

External program benchmarks were rarely used and are awkward to maintain, so they are scheduled for deletion in 0.3.0

Benchmarks an external program under various inputs

This is a convenience method to execute several related benchmarks. Each benchmark will receive the id: ${id}/${input}.

pub fn bench<B: BenchmarkDefinition>(
    &mut self,
    group_id: &str,
    benchmark: B
) -> &mut Criterion

Executes the given benchmark. Use this variant to execute benchmarks with complex configuration. This can be used to compare multiple functions, execute benchmarks with custom configuration settings, and more. See the Benchmark and ParameterizedBenchmark structs for more information.

Example

use criterion::{criterion_group, criterion_main, Benchmark, Criterion};

// `routine_1` and `routine_2` stand in for user-provided functions.
fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    c.bench(
        "routines",
        Benchmark::new("routine_1", |b| b.iter(|| routine_1()))
            .with_function("routine_2", |b| b.iter(|| routine_2()))
            .sample_size(50)
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);

Trait Implementations

impl Default for Criterion

fn default() -> Criterion

Creates a benchmark manager with the following default settings:

  • Sample size: 100 measurements
  • Warm-up time: 3 s
  • Measurement time: 5 s
  • Bootstrap size: 100 000 resamples
  • Noise threshold: 0.01 (1%)
  • Confidence level: 0.95
  • Significance level: 0.05
  • Plotting: enabled (if gnuplot is available)
  • No filter
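
Equivalently, a sketch spelling out these defaults through the builder methods documented above (plotting and filtering are left at their defaults):

use criterion::Criterion;
use std::time::Duration;

fn explicit_defaults() -> Criterion {
    Criterion::default()
        .sample_size(100)
        .warm_up_time(Duration::from_secs(3))
        .measurement_time(Duration::from_secs(5))
        .nresamples(100_000)
        .noise_threshold(0.01)
        .confidence_level(0.95)
        .significance_level(0.05)
}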

Auto Trait Implementations

impl !Send for Criterion

impl !Sync for Criterion

Blanket Implementations

impl<T> From<T> for T

impl<T, U> Into<U> for T where
    U: From<T>,

impl<T, U> TryFrom<U> for T where
    U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

impl<T> Borrow<T> for T where
    T: ?Sized,

impl<T> BorrowMut<T> for T where
    T: ?Sized,

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<T> Any for T where
    T: 'static + ?Sized,