A micro-benchmarking crate with memory.

See usage and examples in the README.

In a standard setup you’ll only use the following (a minimal usage sketch follows the list):

  • the glassbench! macro, which lets you title your bench and add the functions defining its tasks
  • the Bench struct, passed as an argument to your global bench function, whose Bench::task function defines a task
  • the TaskBench struct, which you receive as an argument when defining a task; you call TaskBench::iter with the callback to benchmark
  • pretend_used, an opaque sinkhole which can receive the values you produce in your tests and prevents the optimizer from removing their construction
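
For orientation, a bench file following this setup might look like the sketch below (the fibo function, the bench title, and the task name are illustrative; see the README for a complete, authoritative example):

```rust
use glassbench::*;

// Illustrative function to benchmark (not part of glassbench)
fn fibo(n: u64) -> u64 {
    if n < 2 { n } else { fibo(n - 1) + fibo(n - 2) }
}

// A global bench function: receives a &mut Bench and defines tasks
fn bench_fibo(bench: &mut Bench) {
    bench.task("fibo 20", |task| {
        // task is the TaskBench; iter receives the callback to measure
        task.iter(|| {
            // pretend_used is an opaque sinkhole preventing the optimizer
            // from removing the computation it receives
            pretend_used(fibo(20));
        });
    });
}

// The glassbench! macro titles the bench and registers the bench functions
glassbench!(
    "Fibonacci",
    bench_fibo,
);
```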

Macros

Generates a benchmark with a consistent id (derived from the benchmark file title), calling the benchmarking functions given as arguments.

Structs

A whole benchmark

What the user asked for on the CLI

Storage interface for Glassbench, wrapping a SQLite connection

Git-related information about the execution context

A temporary structure for graphing a history

A temporary structure for printing a history as a table on standard output

The builder of the standalone HTML viewer

A small helper to print using markdown templates

A temporary structure to print the result of a benchmark to the console

Benching of one task

Printable difference between two task measures

The history of the measures of a task, as defined by the bench name and task name

The result of the measure of a task: number of iterations and total duration

A measure of a task, with time, commit and tag

Enums

glassbench error type

Constants

Number of iterations to run, after warmup, in order to estimate the total number of iterations needed

The absolute minimum number of iterations for benchmarking (to minimize random dispersion)

How long we’d like the measurement of a task to run. This duration is divided by the task’s duration from the estimate phase to decide how many iterations to run for the measures

Version of the database schema

Number of warmup iterations to run before everything else
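
Taken together, these constants describe how the number of measured iterations is chosen. A rough sketch of that computation (the constant names and values below are hypothetical, not the crate’s actual identifiers):

```rust
use std::time::Duration;

// Hypothetical values; the crate's real constants and names differ.
const MIN_BENCH_ITERATIONS: usize = 100;
const TARGET_MEASURE_DURATION: Duration = Duration::from_millis(500);

// After the warmup and estimate phases, the number of measured iterations
// is roughly the target duration divided by the estimated task duration,
// clamped so it never goes below the minimum.
fn planned_iterations(estimated_task_duration: Duration) -> usize {
    let by_duration = (TARGET_MEASURE_DURATION.as_nanos()
        / estimated_task_duration.as_nanos().max(1)) as usize;
    by_duration.max(MIN_BENCH_ITERATIONS)
}
```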

Functions

Print the tabular report for the executed benchmark, then graph, list history, and/or save according to the command

Create a bench with a user-defined name (instead of the file name) and command (instead of the one read from the arguments)

Tell the compiler not to optimize away the given argument (which is expected to be the result of the function call you want to benchmark).