
A lightweight micro-benchmarking library which:

- uses linear regression to screen off constant error;
- handles benchmarks which mutate state;
- is very easy to use!

Easybench is designed for benchmarks with a running time in the range
`1 ns < x < 1 ms`: results may be unreliable if benchmarks are very quick or
very slow. It's inspired by criterion, but doesn't do as much sophisticated
analysis (no outlier detection, no HTML output).

```
use easybench::{bench,bench_env};
// Simple benchmarks are performed with `bench`.
println!("fib 200: {}", bench(|| fib(200) ));
println!("fib 500: {}", bench(|| fib(500) ));
// If a function needs to mutate some state, use `bench_env`.
println!("reverse: {}", bench_env(vec![0;100], |xs| xs.reverse() ));
println!("sort: {}", bench_env(vec![0;100], |xs| xs.sort() ));
```

Running the above yields the following results:

```
fib 200: 38 ns (R²=1.000, 26053497 iterations in 154 samples)
fib 500: 110 ns (R²=1.000, 9131584 iterations in 143 samples)
reverse: 54 ns (R²=0.999, 5669992 iterations in 138 samples)
sort: 93 ns (R²=1.000, 4685942 iterations in 136 samples)
```

Easy! However, please read the caveats below before using.

## Benchmarking algorithm

An *iteration* is a single execution of your code. A *sample* is a measurement,
during which your code may be run many times. In other words: taking a sample
means performing some number of iterations and measuring the total time.

The first sample we take performs only 1 iteration, but as we continue taking samples we increase the number of iterations exponentially. We stop when a global time limit is reached (currently 1 second).
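The schedule described above might be sketched like this (a simplified model, not easybench's actual code: the growth factor is an assumption, and real easybench stops on a wall-clock budget rather than the deterministic iteration budget used here):

```rust
// Hypothetical sketch: iteration counts grow geometrically from 1.
// `budget` caps the total number of iterations so the output is
// deterministic; easybench instead stops after ~1 second of wall time.
fn sample_sizes(growth: f64, budget: u64) -> Vec<u64> {
    let mut sizes = Vec::new();
    let mut next = 1.0_f64;
    let mut spent = 0u64;
    while spent + next as u64 <= budget {
        sizes.push(next as u64);
        spent += next as u64;
        next *= growth;
    }
    sizes
}
```

With `growth = 2.0`, each sample performs twice as many iterations as the previous one, so early samples are cheap and later samples dominate the time budget.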

If a benchmark must mutate some state while running, then before taking a
sample, `n` copies of the initial state are prepared, where `n` is the number
of iterations in that sample.
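That preparation step can be sketched as follows (hypothetical helper name; easybench's internals differ):

```rust
// Clone the initial environment n times *before* the clock starts, so
// the cost of cloning is not counted in the sample's measured time.
fn prepare_envs<T: Clone>(init: &T, n: usize) -> Vec<T> {
    (0..n).map(|_| init.clone()).collect()
}
```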

Once we have the data, we perform OLS linear regression to find out how the sample time varies with the number of iterations in the sample. The gradient of the regression line tells us how long it takes to perform a single iteration of the benchmark. The R² value is a measure of how much noise there is in the data.
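The regression step can be sketched with a generic OLS fit (this is an illustration, not easybench's actual implementation): with iteration counts as `xs` and sample times as `ys`, the slope estimates the per-iteration time, and the intercept absorbs the constant per-sample overhead.

```rust
// Ordinary least squares over (x, y) pairs.
// Returns (slope, intercept, r_squared).
fn ols(xs: &[f64], ys: &[f64]) -> (f64, f64, f64) {
    let n = xs.len() as f64;
    let mx = xs.iter().sum::<f64>() / n;
    let my = ys.iter().sum::<f64>() / n;
    // Slope = covariance(x, y) / variance(x).
    let sxy: f64 = xs.iter().zip(ys).map(|(x, y)| (x - mx) * (y - my)).sum();
    let sxx: f64 = xs.iter().map(|x| (x - mx).powi(2)).sum();
    let slope = sxy / sxx;
    let intercept = my - slope * mx;
    // R² = 1 - (residual sum of squares / total sum of squares).
    let ss_res: f64 = xs
        .iter()
        .zip(ys)
        .map(|(x, y)| (y - (intercept + slope * x)).powi(2))
        .sum();
    let ss_tot: f64 = ys.iter().map(|y| (y - my).powi(2)).sum();
    (slope, intercept, 1.0 - ss_res / ss_tot)
}
```

Because the intercept soaks up any fixed cost per sample, once-per-sample work (like preparing environments) doesn't bias the per-iteration estimate.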

## Caveats

### Caveat 1: Harness overhead

**TL;DR: Compile with --release; the overhead is likely to be within the
noise of your benchmark.**

Any work which easybench does once-per-sample is ignored (this is the purpose of the linear
regression technique described above). However, work which is done once-per-iteration *will* be
counted in the final times.

- In the case of `bench`, this amounts to incrementing the loop counter.
- In the case of `bench_env`, we also do a lookup into a big vector in order
to get the environment for that iteration.
- If you compile your program unoptimised, there may be additional overhead.

The cost of the above operations depends on the details of your benchmark; namely: (1) how large is the return value? and (2) does the benchmark evict the environment vector from the CPU cache? In practice, these criteria are only satisfied by longer-running benchmarks, making these effects hard to measure.

If you have concerns about the results you're seeing, please take a look at
the inner loop of `bench_env`. The whole library `cloc`s in at under 100
lines of code, so it's pretty easy to read.

### Caveat 2: Sufficient data

**TL;DR: Measurements are unreliable when code takes too long (> 1 ms) to run.**

Each benchmark collects data for 1 second. This means that in order to collect a statistically significant amount of data, your code should run much faster than this.

When inspecting the results, make sure things look statistically significant. In particular:

- Make sure the number of samples is big enough. More than 100 is probably OK.
- Make sure the R² isn’t suspiciously low. It’s easy to achieve a high R² value when the number of samples is small, so unfortunately the definition of “suspiciously low” depends on how many samples were taken. As a rule of thumb, expect values greater than 0.99.

### Caveat 3: Pure functions

**TL;DR: Return enough information to prevent the optimiser from eliminating
code from your benchmark.**

Benchmarking pure functions involves a nasty gotcha which users should be aware of. Consider the following benchmarks:

```
let fib_1 = bench(|| fib(500) ); // fine
let fib_2 = bench(|| { fib(500); } ); // spoiler: NOT fine
let fib_3 = bench_env(0, |x| { *x = fib(500); } ); // also fine, but ugly
```

The results are a little surprising:

```
fib_1: 110 ns (R²=1.000, 9131585 iterations in 144 samples)
fib_2: 0 ns (R²=1.000, 413289203 iterations in 184 samples)
fib_3: 109 ns (R²=1.000, 9131585 iterations in 144 samples)
```

Oh, `fib_2`, why do you lie? The answer is: `fib(500)` is pure, and its
return value is immediately thrown away, so the optimiser replaces the call
with a no-op (which clocks in at 0 ns).

What about the other two? `fib_1` looks very similar, with one exception:
the closure which we're benchmarking returns the result of the `fib(500)`
call. When it runs your code, easybench takes the return value and tricks the
optimiser into thinking it's going to use it for something, before throwing
it away. This is why `fib_1` is safe from having code accidentally eliminated.

In the case of `fib_3`, we actually *do* use the return value: each
iteration we take the result of `fib(500)` and store it in the iteration's
environment. This has the desired effect, but looks a bit weird.
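If you'd rather make the barrier explicit in your own code, the standard library's `std::hint::black_box` (stable since Rust 1.66) achieves the same effect as the return-value trick. A small sketch (the `fib` definition here is an assumption, since the examples above don't show one; easybench's own mechanism may differ from `black_box`):

```rust
use std::hint::black_box;

// An iterative Fibonacci with wrapping arithmetic, standing in for
// the `fib` used in the examples above.
fn fib(n: u64) -> u64 {
    (0..n).fold((0u64, 1u64), |(a, b), _| (b, a.wrapping_add(b))).0
}

// `black_box` tells the optimiser the value may be observed, so the
// call cannot be deleted even though its result is discarded.
fn keep_alive() {
    black_box(fib(black_box(500)));
}
```

Wrapping the *input* in `black_box` as well prevents the compiler from constant-folding the whole call at compile time.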

## Structs

- Statistics for a benchmark run.

## Functions

- Run a benchmark.
- Run a benchmark with an environment.
- Run a benchmark with a generated environment.