A statistics-driven micro-benchmarking library written in Rust.
This crate is a micro-benchmarking library which aims to provide strong statistical confidence in detecting and estimating the size of performance improvements and regressions, while also being easy to use.
See the user guide for examples, as well as details on the measurement and analysis process and the output produced.
- Collects detailed statistics, providing strong confidence that changes to performance are real, not measurement noise.
- Produces detailed charts, providing thorough understanding of your code's performance behavior.
This module defines a set of traits that can be used to plug different measurements (e.g. Unix's processor time, CPU or GPU performance counters, etc.) into Criterion.rs. It also includes the WallTime struct, which defines the default wall-clock time measurement.
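To illustrate the idea, here is a simplified sketch of such a measurement interface and a wall-clock implementation built on std::time::Instant. The trait shown here is a reduced stand-in, not the exact Criterion.rs trait (the real one has additional methods for accumulating and formatting values):

```rust
use std::time::{Duration, Instant};

// Simplified sketch of a measurement plug-in interface.
// The real Criterion.rs trait has more methods (zero, add, to_f64, formatter).
trait Measurement {
    type Intermediate;
    type Value;
    /// Called before the benchmarked code runs.
    fn start(&self) -> Self::Intermediate;
    /// Called after the benchmarked code runs; produces the measured value.
    fn end(&self, i: Self::Intermediate) -> Self::Value;
}

/// Wall-clock time measurement, analogous to the default WallTime.
struct WallTime;

impl Measurement for WallTime {
    type Intermediate = Instant;
    type Value = Duration;
    fn start(&self) -> Instant {
        Instant::now()
    }
    fn end(&self, i: Instant) -> Duration {
        i.elapsed()
    }
}

fn main() {
    let m = WallTime;
    let t = m.start();
    let v: u64 = (0..1_000).sum();
    let elapsed = m.end(t);
    println!("sum={} took {:?}", v, elapsed);
}
```

A custom measurement (e.g. a CPU cycle counter) would implement the same start/end pair with a different Value type.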
This module provides an extension trait which allows in-process profilers to be hooked into the benchmarking process.
Macro used to define a function group for the benchmark harness; see the criterion_main! macro for more detail.
Macro which expands to a benchmark harness.
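The two macros are typically used together in a benches/ file. A minimal sketch, assuming criterion is listed as a dev-dependency (the fibonacci function and benchmark name are illustrative):

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

// Each benchmark function receives the benchmark manager.
fn bench_fib(c: &mut Criterion) {
    c.bench_function("fib 20", |b| {
        b.iter(|| fibonacci(std::hint::black_box(20)))
    });
}

// Define the function group, then expand the harness's main function.
criterion_group!(benches, bench_fib);
criterion_main!(benches);
```

Because the macro supplies its own main, the corresponding [[bench]] target needs harness = false in Cargo.toml.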
Timer struct used to iterate a benchmarked function and measure the runtime.
Structure used to group together a set of related benchmarks, along with custom configuration settings that apply to the whole group. All benchmarks performed using a benchmark group will be grouped together in the final report.
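A benchmark group is useful for comparing one operation across several inputs. A hedged sketch, again assuming the criterion dev-dependency (the group name "sum" and the input sizes are illustrative):

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_sums(c: &mut Criterion) {
    // Benchmarks created through this group share configuration
    // and appear together in the final report.
    let mut group = c.benchmark_group("sum");
    for n in [100u64, 1_000, 10_000] {
        group.bench_function(format!("iter {}", n), |b| {
            b.iter(|| (0..std::hint::black_box(n)).sum::<u64>())
        });
    }
    group.finish();
}

criterion_group!(benches, bench_sums);
criterion_main!(benches);
```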
Simple structure representing an ID for a benchmark. The ID must be unique within a benchmark group.
The benchmark manager.
Contains the configuration options for the plots generated by a particular benchmark or benchmark group.
Axis scaling type.
Baseline describes how the baseline_directory is handled.
Enum used to select the plotting backend.
This enum allows the user to control how Criterion.rs chooses the iteration count when sampling. The default is Auto, which will choose a method automatically based on the iteration time during the warm-up phase.
Enum representing different ways of measuring the throughput of benchmarked code. If the throughput setting is configured for a benchmark then the estimated throughput will be reported as well as the time per iteration.
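The underlying arithmetic is simple: throughput is the amount of data processed per iteration divided by the measured time per iteration. A small stdlib-only sketch of that calculation (the helper name is hypothetical, not part of the crate's API):

```rust
// Hypothetical helper mirroring the throughput idea: given the bytes
// processed per iteration and the measured seconds per iteration,
// report bytes per second.
fn bytes_per_second(bytes_per_iter: u64, secs_per_iter: f64) -> f64 {
    bytes_per_iter as f64 / secs_per_iter
}

fn main() {
    // 1 KiB processed in 1 µs is roughly 1.024 GB/s.
    println!("{}", bytes_per_second(1024, 1e-6));
}
```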
A function that is opaque to the optimizer, used to prevent the compiler from optimizing away computations in a benchmark.
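The same optimization barrier is available in the standard library as std::hint::black_box, which this stdlib-only sketch uses to keep a computation from being constant-folded away:

```rust
use std::hint::black_box;

fn sum_to(n: u64) -> u64 {
    (1..=n).sum()
}

fn main() {
    // Without black_box the compiler may fold the whole computation
    // into a constant; black_box forces the input to be treated as
    // opaque and the result to be treated as used.
    let n = black_box(1000u64);
    let total = sum_to(n);
    println!("{}", black_box(total)); // prints 500500
}
```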