criterion 0.1.1

Statistics-driven micro-benchmarking library
Criterion helps you write fast code by detecting and measuring performance improvements or regressions, even small ones, quickly and accurately. You can optimize with confidence, knowing how each change affects the performance of your code.

Features

  • Statistics: Statistical analysis detects if, and by how much, performance has changed since the last benchmark run.
  • Charts: Uses gnuplot to generate detailed graphs of benchmark results.
  • Benchmark external programs written in any language.

Quickstart

Criterion currently requires a nightly version of Rust. Additionally, in order to generate plots, you must have gnuplot installed. See the gnuplot website for installation instructions.

To start with, add the following to your Cargo.toml file:

    [dev-dependencies]
    criterion = "0.1.1"

    [[bench]]
    name = "my_benchmark"
    harness = false

Next, define a benchmark by creating a file at $PROJECT/benches/ with the following contents.

#[macro_use]
extern crate criterion;

use criterion::Criterion;

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n-1) + fibonacci(n-2),
    }
}

fn criterion_benchmark(c: &mut Criterion) {
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(20)));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);

Finally, run this benchmark with cargo bench. You should see output similar to the following:

     Running target\release\deps\criterion_example-c6a3683ae7e18b5a.exe

running 1 test
Gnuplot not found, disabling plotting
Benchmarking fib 20
> Warming up for 3.0000 s
> Collecting 100 samples in estimated 5.0726 s
> Found 11 outliers among 99 measurements (11.11%)
  > 2 (2.02%) high mild
  > 9 (9.09%) high severe
> Performing linear regression
  >  slope [26.778 us 27.139 us]
  >    R^2  0.8382863 0.8358049
> Estimating the statistics of the sample
  >   mean [26.913 us 27.481 us]
  > median [26.706 us 26.910 us]
  >    MAD [276.37 ns 423.53 ns]
  >     SD [729.17 ns 2.0625 us]

test criterion_benchmark ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
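As a rough illustration of the point statistics in the report above (mean, median, MAD, and SD), the following standalone sketch computes them for an invented sample of iteration times. This is not Criterion's actual implementation; Criterion additionally bootstraps a confidence interval for each statistic, which is what the bracketed ranges represent.

```rust
// Toy computation of the point statistics shown in the report above.
// NOTE: illustrative sketch only, not Criterion's code; the sample
// values are made up.

/// Median of an already-sorted slice.
fn median(sorted: &[f64]) -> f64 {
    let n = sorted.len();
    if n % 2 == 1 {
        sorted[n / 2]
    } else {
        (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0
    }
}

/// Median absolute deviation: median of |x - median(sample)|.
/// More robust to outliers (like the ones flagged above) than the SD.
fn mad(sorted: &[f64]) -> f64 {
    let m = median(sorted);
    let mut devs: Vec<f64> = sorted.iter().map(|x| (x - m).abs()).collect();
    devs.sort_by(|a, b| a.partial_cmp(b).unwrap());
    median(&devs)
}

/// Sample standard deviation (n - 1 denominator).
fn sd(sample: &[f64]) -> f64 {
    let n = sample.len() as f64;
    let mean = sample.iter().sum::<f64>() / n;
    (sample.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / (n - 1.0)).sqrt()
}

fn main() {
    // Invented iteration times in nanoseconds, sorted ascending;
    // the last value plays the role of a "high severe" outlier.
    let sample = [26_700.0, 26_800.0, 26_900.0, 27_000.0, 31_000.0];

    let mean = sample.iter().sum::<f64>() / sample.len() as f64;
    println!("  mean {:.1} ns", mean);
    println!("median {:.1} ns", median(&sample));
    println!("   MAD {:.1} ns", mad(&sample));
    println!("    SD {:.1} ns", sd(&sample));
}
```

Note how the outlier inflates the SD far more than the MAD, which is why a robust spread estimate is reported alongside the classical one.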

See the Getting Started guide for more details.


Goals

The primary goal of Criterion is to provide a powerful and statistically rigorous tool for measuring the performance of code, preventing performance regressions and accurately measuring optimizations. Additionally, it should be as programmer-friendly as possible and make it easy to create reliable, useful benchmarks, even for programmers without an advanced background in statistics.

The statistical analysis is mostly solid already; the next few releases will focus mostly on improving ease of use.


Contributing

First, thank you for contributing.

One great way to contribute to Criterion is to use it for your own benchmarking needs and report your experiences, file and comment on issues, etc.

Code or documentation improvements in the form of pull requests are also welcome.

If your issues or pull requests have no response after a few days, feel free to ping me (@bheisler).

For more details, see the file

Maintenance

Criterion was originally created by Jorge Aparicio (@japaric) and is currently being maintained by Brook Heisler (@bheisler).

License

Criterion is dual licensed under the Apache 2.0 license and the MIT license.

Related Projects

  • bencher - A port of the libtest benchmark runner to stable Rust
  • criterion - The Haskell microbenchmarking library that inspired this library
  • cargo-benchcmp - Cargo subcommand to compare the output of two libtest or bencher benchmark runs