Struct criterion::Criterion
pub struct Criterion { /* fields omitted */ }
The benchmark manager

Criterion lets you configure and execute benchmarks.
Each benchmark consists of four phases:
- Warm-up: The routine is repeatedly executed, to let the CPU/OS/JIT/interpreter adapt to the new load
- Measurement: The routine is repeatedly executed, and timing information is collected into a sample
- Analysis: The sample is analyzed and distilled into meaningful statistics that get reported to stdout, stored in files, and plotted
- Comparison: The current sample is compared with the sample obtained in the previous benchmark. If a significant regression in performance is spotted, Criterion will trigger a task panic
Methods
impl Criterion
fn sample_size(&mut self, n: usize) -> &mut Criterion

Changes the size of the sample

A bigger sample should yield more accurate results if paired with a "sufficiently" large measurement time; on the other hand, it also increases the analysis time.

Panics

Panics if set to zero
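A minimal sketch of how this might be used, assuming the crate is in scope as `criterion` (the value 200 is purely illustrative):

```rust
use criterion::Criterion;

// A larger sample gives more accurate estimates but lengthens analysis.
Criterion::default()
    .sample_size(200); // must be non-zero, or this call panics
```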
fn warm_up_time(&mut self, dur: Duration) -> &mut Criterion

Changes the warm-up time
fn measurement_time(&mut self, dur: Duration) -> &mut Criterion

Changes the measurement time

With a longer time, the measurement will become more resilient to transitory peak loads caused by external programs

Note: If the measurement time is too "low", Criterion will automatically increase it

Panics

Panics if the input duration is zero
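For instance, trading a longer run for more stable numbers might look like this (a sketch; the durations are arbitrary illustrative values):

```rust
use std::time::Duration;
use criterion::Criterion;

Criterion::default()
    .warm_up_time(Duration::from_secs(2))      // let caches and CPU frequency scaling settle
    .measurement_time(Duration::from_secs(5)); // a longer window resists transient load spikes
```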
fn nresamples(&mut self, n: usize) -> &mut Criterion

Changes the number of resamples to use for the bootstrap

A larger number of resamples reduces the random sampling errors, which are inherent to the bootstrap method, but also increases the analysis time

Panics

Panics if the number of resamples is set to zero
fn noise_threshold(&mut self, threshold: f64) -> &mut Criterion

Changes the noise threshold

This threshold is used to decide whether an increase of X% in the execution time is considered significant or should be flagged as noise

Note: A value of 0.02 is equivalent to 2%

Panics

Panics if the threshold is set to a negative value
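As a sketch, raising the threshold to tolerate noisier environments (the 0.05 is an arbitrary illustrative value):

```rust
use criterion::Criterion;

// Treat execution-time changes smaller than 5% as noise
// (0.05 = 5%, just as 0.02 would mean 2%).
Criterion::default().noise_threshold(0.05);
```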
fn confidence_level(&mut self, cl: f64) -> &mut Criterion

Changes the confidence level

The confidence level is used to calculate the confidence intervals of the estimated statistics

Panics

Panics if the confidence level is set to a value outside the (0, 1) range
fn significance_level(&mut self, sl: f64) -> &mut Criterion

Changes the significance level

The significance level is used for hypothesis testing

Panics

Panics if the significance level is set to a value outside the (0, 1) range
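A sketch of tightening both statistical settings at once (0.99 and 0.01 are illustrative; both must lie strictly between 0 and 1):

```rust
use criterion::Criterion;

Criterion::default()
    .confidence_level(0.99)    // wider, more conservative confidence intervals
    .significance_level(0.01); // stricter hypothesis test before flagging a regression
```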
fn with_plots(&mut self) -> &mut Criterion
Enables plotting
fn without_plots(&mut self) -> &mut Criterion

Disables plotting
fn can_plot(&self) -> bool
Checks if plotting is possible
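One way these three methods might fit together, as a sketch (per the defaults below, plotting requires gnuplot, which is what can_plot checks):

```rust
use criterion::Criterion;

let mut c = Criterion::default();
if c.can_plot() {
    c.with_plots();    // gnuplot available: generate plots
} else {
    c.without_plots(); // skip plotting rather than fail
}
```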
fn bench_function<F>(&mut self, id: &str, f: F) -> &mut Criterion where
    F: FnMut(&mut Bencher),

Benchmarks a function

The function under test must follow the setup - bench - teardown pattern:

```rust
use self::criterion::{Bencher, Criterion};

fn routine(b: &mut Bencher) {
    // Setup (construct data, allocate memory, etc)

    b.iter(|| {
        // Code to benchmark goes here
    })

    // Teardown (free resources)
}

Criterion::default().bench_function("routine", routine);
```
fn bench_functions<I>(
    &mut self,
    id: &str,
    funs: Vec<Fun<I>>,
    input: &I
) -> &mut Criterion where
    I: Display,

Benchmarks multiple functions

All functions get the same input and are compared with the other implementations. Works similarly to bench_function, but with multiple functions.

```rust
fn bench_seq_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        seq_fib(i);
    });
}

fn bench_par_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        par_fib(i);
    });
}

let sequential_fib = Fun::new("Sequential", bench_seq_fib);
let parallel_fib = Fun::new("Parallel", bench_par_fib);
let funs = vec![sequential_fib, parallel_fib];

Criterion::default().bench_functions("Fibonacci", funs, &14);
```
fn bench_function_over_inputs<I, F>(
    &mut self,
    id: &str,
    f: F,
    inputs: I
) -> &mut Criterion where
    I: IntoIterator,
    I::Item: Display,
    F: FnMut(&mut Bencher, &I::Item),

Benchmarks a function under various inputs

This is a convenience method to execute several related benchmarks. Each benchmark will receive the id: ${id}/${input}.

```rust
use self::criterion::{Bencher, Criterion};

Criterion::default()
    .bench_function_over_inputs("from_elem", |b: &mut Bencher, &&size: &&usize| {
        b.iter(|| vec![0u8; size]);
    }, &[1024, 2048, 4096]);
```
fn bench_program(&mut self, id: &str, program: Command) -> &mut Criterion

Benchmarks an external program

The external program must conform to the following specification:

```rust
use std::io::{self, BufRead};
use std::time::Instant;

fn main() {
    let stdin = io::stdin();
    let ref mut stdin = stdin.lock();

    // For each line in stdin
    for line in stdin.lines() {
        // Parse line as the number of iterations
        let iters: u64 = line.unwrap().trim().parse().unwrap();

        // Setup

        // Benchmark
        let start = Instant::now();
        // Execute the routine "iters" times
        for _ in 0..iters {
            // Code to benchmark goes here
        }
        let elapsed = start.elapsed();

        // Teardown

        // Report elapsed time in nanoseconds to stdout
        let nanos = elapsed.as_secs() * 1_000_000_000 + u64::from(elapsed.subsec_nanos());
        println!("{}", nanos);
    }
}
```
fn bench_program_over_inputs<I, F>(
    &mut self,
    id: &str,
    program: F,
    inputs: I
) -> &mut Criterion where
    F: FnMut() -> Command,
    I: IntoIterator,
    I::Item: Display,

Benchmarks an external program under various inputs

This is a convenience method to execute several related benchmarks. Each benchmark will receive the id: ${id}/${input}.
fn summarize(&mut self, id: &str) -> &mut Criterion

Summarize the results stored under the .criterion/${id} folder

Note: The bench_function_over_inputs and bench_program_over_inputs methods internally call the summarize method
Trait Implementations
impl Default for Criterion
fn default() -> Criterion
Creates a benchmark manager with the following default settings:
- Sample size: 100 measurements
- Warm-up time: 1 s
- Measurement time: 1 s
- Bootstrap size: 100 000 resamples
- Noise threshold: 0.01 (1%)
- Confidence level: 0.95
- Significance level: 0.05
- Plotting: enabled (if gnuplot is available)