Struct criterion::Criterion
pub struct Criterion { /* fields omitted */ }
The benchmark manager

Criterion lets you configure and execute benchmarks.

Each benchmark consists of four phases:
- Warm-up: The routine is repeatedly executed, to let the CPU/OS/JIT/interpreter adapt to the new load
- Measurement: The routine is repeatedly executed, and timing information is collected into a sample
- Analysis: The sample is analyzed and distilled into meaningful statistics that get reported to stdout, stored in files, and plotted
- Comparison: The current sample is compared with the sample obtained in the previous benchmark.
Methods
impl Criterion
pub fn sample_size(self, n: usize) -> Criterion
Changes the default size of the sample for benchmarks run with this runner.
A bigger sample should yield more accurate results if paired with a sufficiently large measurement time.
Sample size must be at least 2.
Panics
Panics if set to zero or one
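As an illustrative sketch, a larger sample size can be applied through the name/config/targets form of the criterion_group! macro; the `fib` routine and the benchmark name are placeholders:

```rust
#[macro_use]
extern crate criterion;

use criterion::Criterion;

fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

fn bench(c: &mut Criterion) {
    c.bench_function("fib 20", |b| b.iter(|| fib(20)));
}

// This form of criterion_group! accepts a custom Criterion;
// here the sample size is raised above the default of 100.
criterion_group!{
    name = benches;
    config = Criterion::default().sample_size(200);
    targets = bench
}
criterion_main!(benches);
```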
pub fn warm_up_time(self, dur: Duration) -> Criterion
Changes the default warm-up time for benchmarks run with this runner.
Panics
Panics if the input duration is zero
pub fn measurement_time(self, dur: Duration) -> Criterion
Changes the default measurement time for benchmarks run with this runner.
With a longer time, the measurement will become more resilient to transitory peak loads caused by external programs.
Note: If the measurement time is too low, Criterion will automatically increase it.
Panics
Panics if the input duration is zero
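For example, both time windows can be lengthened when results are noisy (the values below are illustrative, not recommendations):

```rust
use std::time::Duration;

use criterion::Criterion;

// Longer warm-up and measurement windows; both durations must be
// non-zero or the corresponding setter panics.
fn slow_and_steady() -> Criterion {
    Criterion::default()
        .warm_up_time(Duration::from_secs(5))      // default: 3 s
        .measurement_time(Duration::from_secs(10)) // default: 5 s
}
```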
pub fn nresamples(self, n: usize) -> Criterion
Changes the default number of resamples for benchmarks run with this runner.
Number of resamples to use for the bootstrap.
A larger number of resamples reduces the random sampling errors, which are inherent to the bootstrap method, but also increases the analysis time.
Panics
Panics if the number of resamples is set to zero
pub fn noise_threshold(self, threshold: f64) -> Criterion
Changes the default noise threshold for benchmarks run with this runner.
This threshold is used to decide if an increase of X% in the execution time is considered significant or should be flagged as noise.
Note: A value of 0.02 is equivalent to 2%.
Panics
Panics if the threshold is set to a negative value
pub fn confidence_level(self, cl: f64) -> Criterion
Changes the default confidence level for benchmarks run with this runner
The confidence level is used to calculate the confidence intervals of the estimated statistics
Panics
Panics if the confidence level is set to a value outside the (0, 1) range
pub fn significance_level(self, sl: f64) -> Criterion
Changes the default significance level for benchmarks run with this runner
The significance level is used for hypothesis testing
Panics
Panics if the significance level is set to a value outside the (0, 1) range
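A sketch combining the statistical settings above; the specific values are illustrative only:

```rust
use criterion::Criterion;

// Stricter statistical analysis at the cost of longer post-processing.
fn strict_statistics() -> Criterion {
    Criterion::default()
        .nresamples(200_000)      // default: 100 000; must be > 0
        .noise_threshold(0.05)    // flag changes below 5% as noise; must be >= 0
        .confidence_level(0.99)   // must lie in (0, 1)
        .significance_level(0.01) // must lie in (0, 1)
}
```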
pub fn with_plots(self) -> Criterion
Enables plotting
pub fn without_plots(self) -> Criterion
Disables plotting
pub fn can_plot(&self) -> bool
Returns true if generation of the plots is possible.
pub fn save_baseline(self, baseline: String) -> Criterion
Names an explicit baseline and enables overwriting the previous results.
pub fn retain_baseline(self, baseline: String) -> Criterion
Names an explicit baseline and disables overwriting the previous results.
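As a sketch, one run can record a named baseline and a later run can compare against it without overwriting it; the baseline name "master" is a placeholder:

```rust
use criterion::Criterion;

// Record the current run's results under the name "master".
fn record_baseline() -> Criterion {
    Criterion::default().save_baseline("master".to_owned())
}

// Compare against the stored "master" results without replacing them.
fn compare_to_baseline() -> Criterion {
    Criterion::default().retain_baseline("master".to_owned())
}
```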
pub fn with_filter<S: Into<String>>(self, filter: S) -> Criterion
Filters the benchmarks. Only benchmarks with names that contain the given string will be executed.
pub fn configure_from_args(self) -> Criterion
Configure this criterion struct based on the command-line arguments to this process.
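For instance, in a hand-written main (instead of criterion_main!), a default filter can be combined with command-line configuration; the benchmark routine below is a placeholder:

```rust
extern crate criterion;

use criterion::Criterion;

fn bench(c: &mut Criterion) {
    c.bench_function("fast_path", |b| b.iter(|| (0..100u64).sum::<u64>()));
}

fn main() {
    let mut criterion = Criterion::default()
        .with_filter("fast")    // run only benchmarks whose name contains "fast"
        .configure_from_args(); // then let CLI arguments adjust the configuration
    bench(&mut criterion);
}
```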
pub fn bench_function<F>(&mut self, id: &str, f: F) -> &mut Criterion where
F: FnMut(&mut Bencher) + 'static,
Benchmarks a function
Example
```rust
fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    c.bench_function(
        "function_name",
        |b| b.iter(|| {
            // Code to benchmark goes here
        }),
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);
```
pub fn bench_functions<I>(
&mut self,
id: &str,
funs: Vec<Fun<I>>,
input: I
) -> &mut Criterion where
I: Debug + 'static,
Benchmarks multiple functions
All functions get the same input and are compared against the other implementations.
Works similarly to bench_function, but with multiple functions.
Example
```rust
fn bench_seq_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        seq_fib(i);
    });
}

fn bench_par_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        par_fib(i);
    });
}

fn bench(c: &mut Criterion) {
    let sequential_fib = Fun::new("Sequential", bench_seq_fib);
    let parallel_fib = Fun::new("Parallel", bench_par_fib);
    let funs = vec![sequential_fib, parallel_fib];
    c.bench_functions("Fibonacci", funs, 14);
}

criterion_group!(benches, bench);
criterion_main!(benches);
```
pub fn bench_function_over_inputs<I, F>(
&mut self,
id: &str,
f: F,
inputs: I
) -> &mut Criterion where
I: IntoIterator,
I::Item: Debug + 'static,
F: FnMut(&mut Bencher, &I::Item) + 'static,
Benchmarks a function under various inputs
This is a convenience method to execute several related benchmarks. Each benchmark will receive the id ${id}/${input}.
Example
```rust
fn bench(c: &mut Criterion) {
    c.bench_function_over_inputs(
        "from_elem",
        |b: &mut Bencher, size: &usize| {
            b.iter(|| vec![0u8; *size]);
        },
        vec![1024, 2048, 4096],
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);
```
pub fn bench_program(&mut self, id: &str, program: Command) -> &mut Criterion
Benchmarks an external program
The external program must:
- Read the number of iterations from stdin
- Execute the routine to benchmark that many times
- Print the elapsed time (in nanoseconds) to stdout
Example

```rust
// Example of an external program that implements this protocol
fn main() {
    let stdin = io::stdin();
    let ref mut stdin = stdin.lock();

    // For each line in stdin
    for line in stdin.lines() {
        // Parse line as the number of iterations
        let iters: u64 = line.unwrap().trim().parse().unwrap();

        // Setup

        // Benchmark
        let start = Instant::now();
        // Execute the routine "iters" times
        for _ in 0..iters {
            // Code to benchmark goes here
        }
        let elapsed = start.elapsed();

        // Teardown

        // Report elapsed time in nanoseconds to stdout
        println!("{}", elapsed.to_nanos());
    }
}
```
pub fn bench_program_over_inputs<I, F>(
&mut self,
id: &str,
program: F,
inputs: I
) -> &mut Criterion where
F: FnMut() -> Command + 'static,
I: IntoIterator,
I::Item: Debug + 'static,
Benchmarks an external program under various inputs
This is a convenience method to execute several related benchmarks. Each benchmark will receive the id ${id}/${input}.
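No example is given here; as a sketch, a call matching the signature above might look like the following, where my_bench is a hypothetical external binary implementing the same stdin/stdout protocol described for bench_program:

```rust
use std::process::Command;

use criterion::Criterion;

fn bench(c: &mut Criterion) {
    // One benchmark per element of the inputs vector, each named
    // "external_fib/<input>".
    c.bench_program_over_inputs(
        "external_fib",
        || Command::new("my_bench"), // hypothetical binary on PATH
        vec![10u32, 20, 30],
    );
}
```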
pub fn bench<B: BenchmarkDefinition>(
&mut self,
group_id: &str,
benchmark: B
) -> &mut Criterion
Executes the given benchmark. Use this variant to execute benchmarks with complex configuration. This can be used to compare multiple functions, execute benchmarks with custom configuration settings and more. See the Benchmark and ParameterizedBenchmark structs for more information.
Example

```rust
fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    c.bench(
        "routines",
        Benchmark::new("routine_1", |b| b.iter(|| routine_1()))
            .with_function("routine_2", |b| b.iter(|| routine_2()))
            .sample_size(50),
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);
```
Trait Implementations
impl Default for Criterion
fn default() -> Criterion
Creates a benchmark manager with the following default settings:
- Sample size: 100 measurements
- Warm-up time: 3 s
- Measurement time: 5 s
- Bootstrap size: 100 000 resamples
- Noise threshold: 0.01 (1%)
- Confidence level: 0.95
- Significance level: 0.05
- Plotting: enabled (if gnuplot is available)
- No filter