# Crate scaling

A lightweight micro-benchmarking library which:

- uses linear regression to screen off constant error;
- handles benchmarks which mutate state;
- can measure simple polynomial or exponential scaling behavior;
- is very easy to use!

`scaling` is designed to work with either slow or fast functions.
It’s forked from easybench, which is itself inspired by criterion,
but doesn’t do as much sophisticated analysis (no outlier detection,
no HTML output).

```rust
use scaling::{bench, bench_env, bench_scaling};

// Simple benchmarks are performed with `bench` or `bench_scaling`.
println!("fib 200: {}", bench(|| fib(200) ));
println!("fib 500: {}", bench(|| fib(500) ));
println!("fib scaling: {}", bench_scaling(|n| fib(n), 0));

// If a function needs to mutate some state, use `bench_env`.
println!("reverse: {}", bench_env(vec![0;100], |xs| xs.reverse() ));
println!("sort: {}", bench_env(vec![0;100], |xs| xs.sort() ));
```

Running the above yields the following results:

```
fib 200: 50ns (R²=0.995, 20435 iterations in 68 samples)
fib 500: 144ns (R²=0.999, 7235 iterations in 57 samples)
fib scaling: 0.30ns/N (R²=0.999, 8645 iterations in 59 samples)
reverse: 46ns (R²=0.990, 30550 iterations in 72 samples)
sort: 137ns (R²=0.991, 187129 iterations in 91 samples)
```

Easy! However, please read the caveats below before using.

# Benchmarking algorithm

An *iteration* is a single execution of your code. A *sample* is a measurement,
during which your code may be run many times. In other words: taking a sample
means performing some number of iterations and measuring the total time.

The first sample we take performs only 1 iteration, but as we continue taking samples we increase the number of iterations with increasing rapidity. We stop either when a global time limit is reached (currently 10 seconds), or when we have collected sufficient statistics (but have run for at least a millisecond).
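The loop below is only a sketch of this idea; the growth factor, sample
budget, and stopping rule are illustrative stand-ins, not the crate's
actual constants:

```rust
use std::time::{Duration, Instant};

/// Illustrative only: take samples with a growing iteration count
/// until a global time budget or a statistics budget is exhausted.
fn take_samples(mut run_once: impl FnMut()) -> Vec<(usize, Duration)> {
    let mut samples = Vec::new();
    let mut iters = 1usize;
    let start = Instant::now();
    loop {
        let t0 = Instant::now();
        for _ in 0..iters {
            run_once();
        }
        samples.push((iters, t0.elapsed()));
        // Stand-ins for the real stopping rule: a global time limit,
        // plus "enough samples after at least a millisecond".
        let elapsed = start.elapsed();
        if elapsed > Duration::from_secs(10)
            || (samples.len() >= 60 && elapsed > Duration::from_millis(1))
        {
            return samples;
        }
        // Grow the iteration count; a fixed factor here, whereas the
        // real schedule accelerates.
        iters += iters / 2 + 1;
    }
}
```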

If a benchmark requires some state to run, `n` copies of the initial
state are prepared before the sample is taken.
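When cloning a single initial value isn’t appropriate (for example, each
iteration needs freshly generated input), `bench_gen_env` builds each
copy from a generator closure instead. A hedged sketch of its use,
assuming a `FnMut() -> I` generator as in easybench; check the crate
docs for the exact signature:

```rust
use scaling::bench_gen_env;

// Each iteration gets its own freshly generated environment.
println!("sort: {}", bench_gen_env(|| vec![5u64, 3, 1, 4, 2], |xs| xs.sort()));
```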

Once we have the data, we perform OLS linear regression to find out how the sample time varies with the number of iterations in the sample. The gradient of the regression line tells us how long it takes to perform a single iteration of the benchmark. The R² value is a measure of how much noise there is in the data.
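In other words, we fit total sample time t against iteration count n as
t ≈ a + b·n: the intercept a absorbs the once-per-sample overhead, and
the gradient b estimates the per-iteration cost. A minimal sketch of
such a fit (not the crate's actual code):

```rust
/// Illustrative OLS fit of total sample time (seconds) against
/// iteration count. Returns (gradient in seconds/iteration, R²).
/// The intercept, not returned here, absorbs per-sample overhead.
fn ols(samples: &[(usize, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let mean_x = samples.iter().map(|&(x, _)| x as f64).sum::<f64>() / n;
    let mean_y = samples.iter().map(|&(_, y)| y).sum::<f64>() / n;
    let sxx: f64 = samples.iter()
        .map(|&(x, _)| (x as f64 - mean_x).powi(2)).sum();
    let syy: f64 = samples.iter()
        .map(|&(_, y)| (y - mean_y).powi(2)).sum();
    let sxy: f64 = samples.iter()
        .map(|&(x, y)| (x as f64 - mean_x) * (y - mean_y)).sum();
    let gradient = sxy / sxx;
    let r_squared = (sxy * sxy) / (sxx * syy);
    (gradient, r_squared)
}
```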

If the function is too slow (a single run takes on the order of 5 or 10
seconds), the linear regression is skipped and a simple average of the
timings is used instead. For such slow functions, any overhead will be
negligible.
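A sketch of that fallback, assuming we simply pool all iterations (the
real cutoff and bookkeeping may differ):

```rust
use std::time::Duration;

/// Illustrative fallback: mean time per iteration across all samples.
/// (Assumes the total iteration count fits in a u32.)
fn mean_per_iteration(samples: &[(usize, Duration)]) -> Duration {
    let total_iters: u32 = samples.iter().map(|&(n, _)| n as u32).sum();
    let total_time: Duration = samples.iter().map(|&(_, t)| t).sum();
    total_time / total_iters
}
```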

# Caveats

## Caveat 1: Harness overhead

**TL;DR: Compile with `--release`; the overhead is likely to be within
the noise of your benchmark.**

Any work which `scaling` does once-per-sample is ignored (this is the
purpose of the linear-regression technique described above). However,
work which is done once-per-iteration *will* be counted in the final
times.

- In the case of `bench()`, this amounts to incrementing the loop
  counter and copying the return value.
- In the case of `bench_env` and `bench_gen_env`, we also do a lookup
  into a big vector in order to get the environment for that iteration.
- If you compile your program unoptimised, there may be additional
  overhead.

The cost of the above operations depends on the details of your benchmark; namely: (1) how large is the return value? and (2) does the benchmark evict the environment vector from the CPU cache? In practice, these criteria are only satisfied by longer-running benchmarks, making these effects hard to measure.

## Caveat 2: Pure functions

**TL;DR: Return enough information to prevent the optimiser from eliminating
code from your benchmark.**

Benchmarking pure functions involves a nasty gotcha which users should be aware of. Consider the following benchmarks:

```rust
let fib_1 = bench(|| fib(500) );                   // fine
let fib_2 = bench(|| { fib(500); } );              // spoiler: NOT fine
let fib_3 = bench_env(0, |x| { *x = fib(500); } ); // also fine, but ugly
```

The results are a little surprising:

```
fib_1: 110 ns (R²=1.000, 9131585 iterations in 144 samples)
fib_2: 0 ns (R²=1.000, 413289203 iterations in 184 samples)
fib_3: 109 ns (R²=1.000, 9131585 iterations in 144 samples)
```

Oh, `fib_2`, why do you lie? The answer is: `fib(500)` is pure, and its
return value is immediately thrown away, so the optimiser replaces the
call with a no-op (which clocks in at 0 ns).

What about the other two? `fib_1` looks very similar, with one
exception: the closure which we’re benchmarking returns the result of
the `fib(500)` call. When it runs your code, `scaling` takes the return
value and tricks the optimiser into thinking it’s going to use it for
something, before throwing it away. This is why `fib_1` is safe from
having code accidentally eliminated.

In the case of `fib_3`, we actually *do* use the return value: each
iteration we take the result of `fib(500)` and store it in the
iteration’s environment. This has the desired effect, but looks a bit
weird.

## Bonus caveat: Black box

The function which `scaling` uses to trick the optimiser (`black_box`)
is stolen from bencher, which states:

> NOTE: We don’t have a proper black box in stable Rust. This is a
> workaround implementation, that may have a too big performance
> overhead, depending on operation, or it may fail to properly avoid
> having code optimized out. It is good enough that it is used by
> default.
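For reference, the workaround in question is essentially a volatile
read, which convinces the optimiser the value is observed. This is a
sketch of the bencher-style technique; `scaling`'s vendored copy may
differ in detail:

```rust
/// Pretend to use a value so the optimiser can't delete the code that
/// produced it. Sketch of the bencher-style workaround.
fn black_box<T>(dummy: T) -> T {
    unsafe {
        // A volatile read is treated as an observable side effect.
        let ret = std::ptr::read_volatile(&dummy);
        std::mem::forget(dummy);
        ret
    }
}
```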

## Structs

| Struct | Description |
|--------|-------------|
| `Scaling` | The timing and scaling results (without statistics) for a benchmark. |
| `ScalingStats` | Statistics for a benchmark run determining the scaling of a function. |
| `Stats` | Statistics for a benchmark run. |

## Functions

| Function | Description |
|----------|-------------|
| `bench` | Run a benchmark. |
| `bench_env` | Run a benchmark with an environment. |
| `bench_gen_env` | Run a benchmark with a generated environment. |
| `bench_scaling` | Benchmark the power-law scaling of the function. |
| `bench_scaling_gen` | Benchmark the power-law scaling of the function with generated input. |