Crate glassbench
A micro-benchmarking crate with memory.
See usage and examples in the README.
In a standard setup you'll only use the following items, combined in the sketch after this list:
- the glassbench! macro, which lets you title your bench and add the functions defining its tasks
- the Bench struct, received as argument by your global bench functions, with its Bench::task function to define a task
- the TaskBench struct that you receive as argument when defining a task; you'll call TaskBench::iter with the callback to benchmark
- pretend_used as an opaque sinkhole, which can receive the values you produce in your tasks and prevents the optimizer from removing their construction
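A minimal benchmark file combining these items might look like this, loosely following the crate's README; the fibo function and its argument are a placeholder workload, not part of glassbench:

```rust
use glassbench::*;

// Placeholder workload; replace with the code you want to measure.
fn fibo(n: u64) -> u64 {
    if n < 2 { n } else { fibo(n - 1) + fibo(n - 2) }
}

fn bench_fibo(bench: &mut Bench) {
    bench.task("fibo 20", |task| {
        task.iter(|| {
            // sink the produced value so the optimizer can't drop the call
            pretend_used(fibo(20));
        });
    });
}

glassbench!(
    "Fibonacci",
    bench_fibo,
);
```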
Macros
glassbench | Generates a benchmark with a consistent id (using the benchmark file name), calling the benchmarking functions given as arguments. |
Structs
Bench | A whole benchmark |
Command | What the user asked for on the command line |
Db | Storage interface for Glassbench, wrapping a SQLite connection |
GitInfo | Git related information regarding the execution context |
HistoryGraph | A temporary structure for graphing a history |
HistoryTbl | A temporary structure for printing a history as a table on standard output |
Printer | A small helper to print using markdown templates |
Report | A temporary structure to print the result of a benchmark to the console |
TaskBench | Benching of one task |
TaskBenchDiff | Printable difference between two task measures |
TaskHistory | The history of the measures of a task as defined by the bench name and task name |
TaskMeasure | The result of the measure of a task: number of iterations and total duration |
TaskRecord | A measure of a task, with time, commit and tag |
Enums
GlassBenchError | The glassbench error type |
Constants
ESTIMATE_ITERATIONS | Number of iterations to run, after warmup, in order to estimate the total number of iterations needed |
MINIMAL_ITERATIONS | The absolute minimal number of iterations we don’t want to go below for benchmarking (to minimize random dispersion) |
OPTIMAL_DURATION_NS | How long we'd like the measuring of a task to last. This duration is divided by the task's duration observed in the estimate phase to decide how many iterations to run for the measure |
VERSION | Version of the schema |
WARMUP_ITERATIONS | Number of iterations to do before everything else |
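Read together, these constants describe the measuring protocol: warm up, time a few iterations to estimate the per-iteration cost, then derive the real iteration count from the target duration. Below is a minimal sketch of that derivation, with illustrative constant values and a hypothetical plan_iterations helper (glassbench's real values and internals may differ):

```rust
use std::time::Instant;

// Illustrative values only; glassbench defines its own.
const WARMUP_ITERATIONS: usize = 100;
const ESTIMATE_ITERATIONS: usize = 100;
const MINIMAL_ITERATIONS: u128 = 100;
const OPTIMAL_DURATION_NS: u128 = 1_000_000_000;

// Hypothetical helper: run the warmup and estimate phases, then
// return the iteration count to use for the real measure.
fn plan_iterations<F: FnMut()>(mut task: F) -> u128 {
    // warmup phase: results discarded
    for _ in 0..WARMUP_ITERATIONS {
        task();
    }
    // estimate phase: time a fixed number of iterations
    let start = Instant::now();
    for _ in 0..ESTIMATE_ITERATIONS {
        task();
    }
    let per_iter_ns = (start.elapsed().as_nanos() / ESTIMATE_ITERATIONS as u128).max(1);
    // the target duration divided by the estimated per-iteration cost
    // gives the count, floored at the minimum to limit random dispersion
    (OPTIMAL_DURATION_NS / per_iter_ns).max(MINIMAL_ITERATIONS)
}
```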
Functions
after_bench | Print the tabular report for the executed benchmark, then graph, list history, and/or save according to the command |
create_bench | Create a bench with a user-defined name (instead of the file name) and command (instead of the one read from arguments) |
pretend_used | Tell the compiler not to optimize away the given argument (which is expected to be the function call you want to benchmark) |
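To illustrate why the sinkhole matters, here is a hedged sketch contrasting a task whose result is dropped with one whose result goes through pretend_used (sum_of_squares is a hypothetical workload, not part of the crate):

```rust
use glassbench::*;

// Hypothetical workload whose result must be kept observable.
fn sum_of_squares(n: u64) -> u64 {
    (1..=n).map(|i| i * i).sum()
}

fn bench_sum(bench: &mut Bench) {
    bench.task("dropped result", |task| {
        task.iter(|| {
            // Result unused: the optimizer may remove the whole
            // computation, making the timing meaningless.
            let _ = sum_of_squares(1_000);
        });
    });
    bench.task("sunk result", |task| {
        task.iter(|| {
            // The opaque sinkhole keeps the computation observable,
            // so the work is really measured.
            pretend_used(sum_of_squares(1_000));
        });
    });
}

glassbench!(
    "Sum of squares",
    bench_sum,
);
```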