criterion 0.2.1

Statistics-driven micro-benchmarking library
Criterion.rs helps you write fast code by detecting and measuring performance improvements or regressions, even small ones, quickly and accurately. You can optimize with confidence, knowing how each change affects the performance of your code.

Features


  • Statistics: Statistical analysis detects if, and by how much, performance has changed since the last benchmark run.
  • Charts: Uses gnuplot to generate detailed graphs of benchmark results.
  • Stable-compatible: Benchmark your code without installing nightly Rust.
  • Benchmark external programs written in any language.


In order to generate plots, you must have gnuplot installed. See the gnuplot website for installation instructions. Criterion.rs also currently requires Rust 1.23 or later.

To start with, add the following to your Cargo.toml file:

    [dev-dependencies]
    criterion = "0.2"

    [[bench]]
    name = "my_benchmark"
    harness = false

Next, define a benchmark by creating a file at $PROJECT/benches/ with the following contents.

#[macro_use]
extern crate criterion;

use criterion::Criterion;

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn criterion_benchmark(c: &mut Criterion) {
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(20)));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
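Conceptually, `b.iter` takes a closure, runs it in a loop, and measures how long the iterations take. The standard-library sketch below imitates that loop; it is a simplified illustration only, not Criterion's actual implementation, which also warms up the benchmark, collects many samples, and analyzes them statistically:

```rust
use std::time::Instant;

// The fibonacci function from the example above.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    // Roughly what b.iter(|| fibonacci(20)) does: run the closure in a
    // loop and divide the total elapsed time by the iteration count.
    let iters: u32 = 1_000;
    let start = Instant::now();
    for _ in 0..iters {
        // In optimized builds the compiler may elide a call whose result
        // is unused; Criterion provides black_box to prevent that.
        let _ = fibonacci(20);
    }
    println!("~{:?} per iteration", start.elapsed() / iters);
}
```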

Finally, run this benchmark with cargo bench. You should see output similar to the following:

     Running target/release/deps/example-423eedc43b2b3a93
fib 20                  time:   [26.029 us 26.251 us 26.505 us]
Found 11 outliers among 99 measurements (11.11%)
  6 (6.06%) high mild
  5 (5.05%) high severe
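In that output, the three values on the `time` line are the lower bound, point estimate, and upper bound for the time per iteration, and the outlier report flags samples that lie unusually far from the quartiles. The sketch below classifies samples with Tukey's fences (beyond 1.5×IQR is "mild", beyond 3×IQR is "severe"); this is in the spirit of the report above, though Criterion's exact analysis may differ:

```rust
// Classify samples as mild or severe outliers using Tukey's fences.
// An illustrative sketch, not Criterion's actual analysis code.
fn classify_outliers(samples: &mut Vec<f64>) -> (usize, usize) {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Nearest-rank quartiles; good enough for an illustration.
    let q = |p: f64| samples[(p * (samples.len() - 1) as f64) as usize];
    let (q1, q3) = (q(0.25), q(0.75));
    let iqr = q3 - q1;
    let (mild_lo, mild_hi) = (q1 - 1.5 * iqr, q3 + 1.5 * iqr);
    let (severe_lo, severe_hi) = (q1 - 3.0 * iqr, q3 + 3.0 * iqr);
    let severe = samples
        .iter()
        .filter(|&&x| x < severe_lo || x > severe_hi)
        .count();
    let mild = samples
        .iter()
        .filter(|&&x| (x < mild_lo || x > mild_hi) && x >= severe_lo && x <= severe_hi)
        .count();
    (mild, severe)
}

fn main() {
    // 96 well-behaved measurements (in microseconds) plus a few stragglers.
    let mut samples: Vec<f64> = (0..96).map(|i| 26.0 + (i % 5) as f64 * 0.1).collect();
    samples.extend_from_slice(&[26.7, 26.8, 29.0, 30.0]);
    let (mild, severe) = classify_outliers(&mut samples);
    println!("{} mild, {} severe", mild, severe);
}
```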

See the Getting Started guide for more details.


Goals

The primary goal of Criterion.rs is to provide a powerful and statistically rigorous tool for measuring the performance of code, preventing performance regressions and accurately measuring optimizations. Additionally, it should be as programmer-friendly as possible and make it easy to create reliable, useful benchmarks, even for programmers without an advanced background in statistics.

The statistical analysis is mostly solid already; the next few releases will focus mostly on improving ease of use.


Contributing

First, thank you for contributing.

One great way to contribute to Criterion.rs is to use it for your own benchmarking needs and report your experiences, file and comment on issues, etc.

Code or documentation improvements in the form of pull requests are also welcome.

If your issues or pull requests have no response after a few days, feel free to ping me (@bheisler).

For more details, see the file

Maintenance

Criterion.rs was originally created by Jorge Aparicio (@japaric) and is currently being maintained by Brook Heisler (@bheisler).

License

Criterion.rs is dual licensed under the Apache 2.0 license and the MIT license.

Related Projects

  • bencher - A port of the libtest benchmark runner to stable Rust
  • criterion - The Haskell microbenchmarking library that inspired Criterion.rs
  • cargo-benchcmp - Cargo subcommand to compare the output of two libtest or bencher benchmark runs