Crate easybench

A lightweight benchmarking library which:

  • uses linear regression to screen off sources of constant error;
  • handles benchmarks which must mutate some state;
  • has a very simple API!

It's inspired by criterion, but doesn't do as much sophisticated analysis. Perhaps some day it will!

use easybench::{bench,bench_env};

// Simple benchmarks are performed with `bench`.
// (`fib` is assumed to be some function defined elsewhere.)
println!("fib 200: {}", bench(|| fib(200) ));
println!("fib 500: {}", bench(|| fib(500) ));

// If a function needs to mutate some state, use `bench_env`.
println!("reverse: {}", bench_env(vec![1,2,3], |xs| xs.reverse()));
println!("sort:    {}", bench_env(vec![1,2,3], |xs| xs.sort()));

Running the above yields the following results (make sure you compile with --release):

fib 200:         38 ns   (R²=1.000, 26053498 iterations in 155 samples)
fib 500:        109 ns   (R²=1.000, 9131585 iterations in 144 samples)
reverse:          3 ns   (R²=0.998, 23684997 iterations in 154 samples)
sort:             3 ns   (R²=0.999, 23684997 iterations in 154 samples)

Caveat: pure functions

Benchmarking pure functions involves a nasty gotcha which users should be aware of. Consider the following benchmarks:

let fib_1 = bench(|| fib(500) );                     // fine
let fib_2 = bench(|| { fib(500); } );                // spoiler: NOT fine
let fib_3 = bench_env(0, |x| { *x = fib(500); } );   // also fine, but ugly

The results are a little surprising:

fib_1:        110 ns   (R²=1.000, 9131585 iterations in 144 samples)
fib_2:          0 ns   (R²=1.000, 413289203 iterations in 184 samples)
fib_3:        109 ns   (R²=1.000, 9131585 iterations in 144 samples)

Oh, fib_2, why do you lie? The answer is: because fib(500) is pure, and its return value is immediately thrown away, so the optimiser replaces the call with a no-op (which clocks in at 0 ns).

What about the other two? fib_1 works because the closure passed to bench returns the result of the fib(500) call. Easybench takes whatever your code returns and tricks the optimiser into thinking it's going to do something with it. In fib_3, we actually do use the return value - we store it in the benchmark's private mutable state. This works fine but looks a bit weird.

The moral of the story: when benchmarking pure functions, make sure to return enough information to prevent the optimiser from eliminating code from your benchmark!
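
As a hypothetical illustration of the same point, with a simple pure computation standing in for fib:

use easybench::bench;

// NOT fine: the sum is computed and then discarded inside the closure,
// so the optimiser is free to delete the whole computation.
println!("sum (discarded): {}", bench(|| { (0..1000u64).sum::<u64>(); }));

// Fine: the sum is returned from the closure, so easybench keeps the
// result alive and the work can't be optimised away.
println!("sum (returned):  {}", bench(|| (0..1000u64).sum::<u64>()));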

The benchmarking algorithm

First, let me define "sample" and "iteration". An iteration is a single execution of your code. A sample is a measurement, during which your code may be run many times. That is: taking a sample means performing some number of iterations and measuring the total time.

We begin by taking a sample and throwing away the results, in an effort to warm up some caches.

Now we start collecting data. The first sample performs only 1 iteration, but as we continue taking samples we increase the number of iterations exponentially. We stop when a time limit is reached (currently 1 second).
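
As a rough sketch of that sampling loop (illustrative only, not easybench's actual code; the growth factor and the warm-up step are simplified here):

use std::time::{Duration, Instant};

// Collect (iterations, total time) pairs for a no-argument benchmark closure.
// The time limit and growth factor are illustrative constants.
fn collect_samples<F: Fn()>(f: F) -> Vec<(usize, Duration)> {
    const TIME_LIMIT: Duration = Duration::from_secs(1);
    let start = Instant::now();

    // Warm-up: run the code once and discard the timing.
    f();

    let mut samples = Vec::new();
    let mut iters: usize = 1;
    while start.elapsed() < TIME_LIMIT {
        // One sample: perform `iters` iterations and measure the total time.
        let t0 = Instant::now();
        for _ in 0..iters {
            f();
        }
        samples.push((iters, t0.elapsed()));

        // Increase the iteration count exponentially for the next sample.
        iters = iters + iters / 10 + 1;
    }
    samples
}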

Next, we perform an ordinary least-squares (OLS) regression on the resulting data. The gradient of the regression line is our measure of the time it takes to perform a single iteration of the benchmark, and the intercept absorbs any constant per-sample overhead (the "sources of constant error" mentioned above). The R² value indicates how well the line fits the data, and therefore how noisy the measurements are.
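
Continuing the sketch above, the slope and R² can be computed with a textbook OLS fit (again, illustrative rather than easybench's exact code):

use std::time::Duration;

// Fit total time (ns) against iteration count; the slope estimates
// nanoseconds per iteration, and R² is the squared correlation coefficient.
fn ols_fit(samples: &[(usize, Duration)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let xs: Vec<f64> = samples.iter().map(|&(i, _)| i as f64).collect();
    let ys: Vec<f64> = samples.iter().map(|&(_, t)| t.as_nanos() as f64).collect();

    let x_mean = xs.iter().sum::<f64>() / n;
    let y_mean = ys.iter().sum::<f64>() / n;

    let sxy: f64 = xs.iter().zip(&ys).map(|(x, y)| (x - x_mean) * (y - y_mean)).sum();
    let sxx: f64 = xs.iter().map(|x| (x - x_mean).powi(2)).sum();
    let syy: f64 = ys.iter().map(|y| (y - y_mean).powi(2)).sum();

    let ns_per_iter = sxy / sxx;                 // gradient of the regression line
    let r_squared = (sxy * sxy) / (sxx * syy);   // 1.0 means a perfect fit
    // The intercept, y_mean - ns_per_iter * x_mean, absorbs constant overhead.
    (ns_per_iter, r_squared)
}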

Structs

  Stats        Statistics for a benchmark run.

Functions

  bench        Run a benchmark.
  bench_env    Run a benchmark with an environment.
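
A minimal usage sketch tying these together (assuming, as the examples above suggest, that both functions return a Stats value and that Stats implements Display):

use easybench::{bench, bench_env};

// The returned Stats can be stored and printed later.
let product_stats = bench(|| (1..=20u64).product::<u64>());
println!("product 1..=20: {}", product_stats);

// The bench_env closure receives a mutable reference to the environment.
let sort_stats = bench_env(vec![3, 1, 2], |xs| xs.sort());
println!("sort: {}", sort_stats);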