Crate iai_callgrind
Iai-Callgrind is a benchmarking framework/harness which uses Valgrind’s Callgrind to provide extremely accurate and consistent measurements of Rust code, making it perfectly suited to run in CI environments.
Features
- Precision: High-precision measurements allow you to reliably detect very small optimizations of your code
- Consistency: Iai-Callgrind can take accurate measurements even in virtualized CI environments
- Performance: Since Iai-Callgrind only executes a benchmark once, it is typically a lot faster to run than benchmarks measuring execution and wall-clock time
- Regression: Iai-Callgrind reports the difference between benchmark runs to make it easy to spot detailed performance regressions and improvements.
- Profiling: Iai-Callgrind generates a Callgrind profile of your code while benchmarking, so you can use Callgrind-compatible tools like callgrind_annotate or the visualizer kcachegrind to analyze the results in detail
- Stable-compatible: Benchmark your code without installing nightly Rust
Benchmarking
Benchmarking with iai-callgrind can be divided into two categories: benchmarking a library and
its public functions, and benchmarking a crate’s binary.
Library Benchmarks
Use this scheme of the main macro if you want to benchmark functions of your
crate’s library.
Important default behavior
The environment variables are cleared before running a library benchmark. See also the Configuration section below if you need to change that behavior.
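For example, to keep the environment intact for all library benchmarks, a minimal sketch (it assumes the config = ...; argument of the main! macro described in the Configuration section, and that LibraryBenchmarkConfig::env_clear(false) disables the clearing; my_group is a placeholder for a group defined with library_benchmark_group!):

use iai_callgrind::{main, LibraryBenchmarkConfig};

main!(
    // Assumption: env_clear(false) keeps the environment variables intact
    config = LibraryBenchmarkConfig::default().env_clear(false);
    library_benchmark_groups = my_group
);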
Quickstart
use iai_callgrind::{black_box, library_benchmark, library_benchmark_group, main};

// The function we want to test. Just assume this is a public function in your
// library.
fn bubble_sort(mut array: Vec<i32>) -> Vec<i32> {
    for i in 0..array.len() {
        for j in 0..array.len() - i - 1 {
            if array[j + 1] < array[j] {
                array.swap(j, j + 1);
            }
        }
    }
    array
}

// This function is used to create a worst-case array we want to sort with our
// implementation of bubble sort
fn setup_worst_case_array(start: i32) -> Vec<i32> {
    if start.is_negative() {
        (start..0).rev().collect()
    } else {
        (0..start).rev().collect()
    }
}

// The #[library_benchmark] attribute lets you define a benchmark function which
// you can later use in the `library_benchmark_group!` macro.
#[library_benchmark]
fn bench_bubble_sort_empty() -> Vec<i32> {
    // The `black_box` is needed to tell the compiler not to optimize what's inside
    // the black_box or else the benchmarks might return inaccurate results.
    black_box(bubble_sort(black_box(vec![])))
}

// This benchmark uses the `bench` attribute to set up benchmarks with different
// setups. The big advantage is that the setup costs and event counts aren't
// attributed to the benchmark (and, as opposed to the old API, we don't have to
// deal with callgrind arguments, toggles, ...)
#[library_benchmark]
#[bench::empty(vec![])]
#[bench::worst_case_6(vec![6, 5, 4, 3, 2, 1])]
// Function calls are fine too
#[bench::worst_case_4000(setup_worst_case_array(4000))]
// The argument of the benchmark function defines the type of the argument from
// the `bench` cases.
fn bench_bubble_sort(array: Vec<i32>) -> Vec<i32> {
    // Note `array` is not put in a `black_box` because that's already done for you.
    black_box(bubble_sort(array))
}

// A group in which we can put all our benchmark functions
library_benchmark_group!(
    name = bubble_sort_group;
    benchmarks = bench_bubble_sort_empty, bench_bubble_sort
);

// Finally, the mandatory main! macro which collects all `library_benchmark_group`s.
// The main! macro creates a benchmarking harness and runs all the benchmarks
// defined in the groups and benches.
main!(library_benchmark_groups = bubble_sort_group);

Note that it is important to annotate the benchmark functions with
#[library_benchmark].
Configuration
It’s possible to configure some of the behavior of iai-callgrind. See the docs of
crate::LibraryBenchmarkConfig for more details. Configure library benchmarks at
top-level with the crate::main macro, at group level within the
crate::library_benchmark_group, at crate::library_benchmark level
and at bench level:
#[library_benchmark]
#[bench::some_id(args = (1, 2), config = LibraryBenchmarkConfig::default())]
// ...

The config at bench level overwrites the config at library_benchmark level. The config at
library_benchmark level overwrites the config at group level and so on. Note that
configuration values like envs are additive and don’t overwrite configuration values of higher
levels.
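A sketch of the higher levels, assuming both macros accept the same config = ...; argument:

library_benchmark_group!(
    name = bubble_sort_group;
    // group-level config: overwritten by library_benchmark- and bench-level configs
    config = LibraryBenchmarkConfig::default();
    benchmarks = bench_bubble_sort_empty, bench_bubble_sort
);

main!(
    // top-level config: the base which all lower levels overwrite or add to
    config = LibraryBenchmarkConfig::default();
    library_benchmark_groups = bubble_sort_group
);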
See also the docs of crate::library_benchmark_group. The
README of this crate includes more explanations,
common recipes and some examples.
Binary Benchmarks
Use this scheme of the main macro to benchmark one or more binaries of your crate. If you
really want to, it’s possible to benchmark any executable file in the PATH or any executable
specified with an absolute path. The documentation for setting up binary benchmarks with the
binary_benchmark_group macro can be found in the docs of crate::binary_benchmark_group.
Temporary Workspace and other important default behavior
By default, all binary benchmarks and the before, after, setup and teardown functions
are executed in a temporary directory. See crate::BinaryBenchmarkConfig::sandbox for a
deeper explanation and how to control and change this behavior. Also, the environment variables
of benchmarked binaries are cleared before the benchmark is run. See also
crate::BinaryBenchmarkConfig::env_clear for how to change this behavior.
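For example (a sketch; it assumes sandbox takes a bool and the top-level config = ...; argument of the main! macro, with my_exe_group as in the quickstart below):

use iai_callgrind::{main, BinaryBenchmarkConfig};

main!(
    // Assumptions: sandbox(false) disables the temporary directory and
    // env_clear(false) keeps the benchmarked binary's environment variables
    config = BinaryBenchmarkConfig::default()
        .sandbox(false)
        .env_clear(false);
    binary_benchmark_groups = my_exe_group
);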
Quickstart
Suppose your crate’s binary is named my-exe and you have a fixtures directory in
benches/fixtures with a file test1.txt in it:
use iai_callgrind::{
    main, binary_benchmark_group, BinaryBenchmarkConfig, BinaryBenchmarkGroup,
    Run, Arg, Fixtures
};

fn my_setup() {
    println!("We can put code in here which will be run before each benchmark run");
}

// We specify a cmd `"my-exe"` for the whole group which is a binary of our crate.
// This eliminates the need to specify a `cmd` for each `Run` later on and we can
// use the auto-discovery of a crate's binary at group level. We'll also use the
// `setup` argument to run a function before each of the benchmark runs.
binary_benchmark_group!(
    name = my_exe_group;
    setup = my_setup;
    // This directory will be copied into the root of the sandbox (as `fixtures`)
    config = BinaryBenchmarkConfig::default().fixtures(Fixtures::new("benches/fixtures"));
    benchmark = |"my-exe", group: &mut BinaryBenchmarkGroup| setup_my_exe_group(group)
);

// Working within a macro can be tedious sometimes, so we moved the setup code
// into this function
fn setup_my_exe_group(group: &mut BinaryBenchmarkGroup) {
    group
        // Set up our first run doing something with our fixture `test1.txt`. The
        // id (here `do foo with test1`) of an `Arg` has to be unique within the
        // same group
        .bench(Run::with_arg(Arg::new(
            "do foo with test1",
            ["--foo=fixtures/test1.txt"],
        )))
        // Set up our second run with two positional arguments
        .bench(Run::with_arg(Arg::new(
            "positional arguments",
            ["foo", "foo bar"],
        )))
        // Our last run doesn't take an argument at all.
        .bench(Run::with_arg(Arg::empty("no argument")));
}

// As the last step, specify all groups we want to benchmark in the main! macro
// argument `binary_benchmark_groups`. The main macro is always needed and finally
// expands to a benchmarking harness
main!(binary_benchmark_groups = my_exe_group);

Configuration
Much like the configuration of library benchmarks (see above), it’s possible to configure binary
benchmarks at top-level in the main! macro and at group level in the
binary_benchmark_group! macro with the config = ...; argument. In contrast to library benchmarks,
binary benchmarks can additionally be configured at a lower and final level: within a Run directly.
For further details see the section about binary benchmarks in the crate::main docs, the docs
of crate::binary_benchmark_group and Run. Also, the
README of this crate includes some introductory
documentation with additional examples.
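A sketch of such a Run-level setting, using Run::flamegraph from the Flamegraphs section below (the builder method and FlamegraphConfig are described there; the group setup mirrors the quickstart above):

use iai_callgrind::{Arg, BinaryBenchmarkGroup, FlamegraphConfig, Run};

fn setup_my_exe_group(group: &mut BinaryBenchmarkGroup) {
    group.bench(
        // A Run-level setting applies to this run only and overwrites the
        // group- and top-level configuration
        Run::with_arg(Arg::empty("no argument"))
            .flamegraph(FlamegraphConfig::default()),
    );
}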
Flamegraphs
Flamegraphs are opt-in and can be created if you pass a FlamegraphConfig to
BinaryBenchmarkConfig::flamegraph, Run::flamegraph or
LibraryBenchmarkConfig::flamegraph. Callgrind flamegraphs are meant as a complement to
Valgrind’s visualization tools callgrind_annotate and kcachegrind.
Callgrind flamegraphs show the inclusive costs for functions and a specific event type, much
like callgrind_annotate does, but in a nicer (and clickable) way. In particular, differential
flamegraphs facilitate a deeper understanding of code sections which cause a bottleneck or a
performance regression.
The produced flamegraph svg files are located next to the respective callgrind output file in
the target/iai directory.
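A minimal sketch which enables flamegraphs for all library benchmarks, assuming the top-level config = ...; argument from the Configuration section above:

use iai_callgrind::{main, FlamegraphConfig, LibraryBenchmarkConfig};

main!(
    // Opt in to flamegraph generation with the default settings
    config = LibraryBenchmarkConfig::default()
        .flamegraph(FlamegraphConfig::default());
    library_benchmark_groups = bubble_sort_group
);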
Re-exports
pub use bincode;
Macros
- Macro used to define a group of binary benchmarks
- Macro used to define a group of library benchmarks
- The iai_callgrind::main macro expands to a main function which runs all of the benchmarks.

Structs
- The arguments needed for Run which are passed to the benchmarked binary
- An id for an Arg which can be used to produce unique ids from parameters
- The main configuration of a binary benchmark.
- The BinaryBenchmarkGroup lets you configure binary benchmark Runs
- A builder of Fixtures to specify the fixtures directory which will be copied into the sandbox
- The FlamegraphConfig which allows the customization of the created flamegraphs
- The main configuration of a library benchmark.
- Run lets you set up and configure a benchmark run of a binary

Enums
- The Direction in which the flamegraph should grow.
- All EventKinds callgrind produces and additionally some derived events
- Set the expected exit status of a binary benchmark
- The kind of Flamegraph which is going to be constructed

Functions
- A function that is opaque to the optimizer, used to prevent the compiler from optimizing away computations in a benchmark.

Attribute Macros
- The #[library_benchmark] attribute lets you define a benchmark function which you can later use in the library_benchmark_group! macro.