Crate glassbench

A micro-benchmarking crate with memory.

See usage and examples in the README.

In a standard setup you’ll only use:

  • the glassbench! macro, which lets you title your bench and register the functions defining its tasks
  • the Bench struct, passed as argument to your global bench function, whose Bench::task method defines a task
  • the TaskBench struct that you receive as argument when defining a task; you’ll call TaskBench::iter with the callback to benchmark
  • pretend_used as an opaque sinkhole, which can receive the values you produce in your tasks and prevents the optimizer from removing their construction (see the sketch after this list)
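
A minimal benchmark file following that pattern could look like the sketch below, modeled on the README’s example; the computation and all names here (sum_of_squares, bench_sums, "Sums") are illustrative, not part of the crate:

    use glassbench::*;

    // An arbitrary computation to measure (illustrative only).
    fn sum_of_squares() -> u64 {
        (0..1_000u64).fold(0u64, |acc, i| acc.wrapping_add(i * i))
    }

    fn bench_sums(bench: &mut Bench) {
        bench.task("sum of squares", |task| {
            task.iter(|| {
                // Feed the produced value to pretend_used so the
                // optimizer can't remove the measured computation.
                pretend_used(sum_of_squares());
            });
        });
    }

    glassbench!(
        "Sums",     // the bench title
        bench_sums, // other benchmarking functions may be listed here
    );

As the README describes, such a file goes in benches/ and its [[bench]] entry in Cargo.toml must set harness = false, so that the main function generated by the glassbench! macro is used.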

Macros§

glassbench
Generates a benchmark with a consistent id (based on the benchmark file name), calling the benchmarking functions given as arguments.

Structs§

Bench
A whole benchmark
Command
What the user asked for on the command line
Db
Storage interface for Glassbench, wrapping a SQLite connection
GitInfo
Git-related information about the execution context
HistoryGraph
A temporary structure for graphing a history
HistoryTbl
A temporary structure for printing a history as a table on standard output
HtmlViewer
The builder of the HTML standalone viewer
Printer
A small helper to print using markdown templates
Report
A temporary structure to print the result of a benchmark to the console
TaskBench
Benching of one task
TaskBenchDiff
Printable difference between two task measures
TaskHistory
The history of the measures of a task, as identified by the bench name and task name
TaskMeasure
The result of the measure of a task: number of iterations and total duration
TaskRecord
A measure of a task, with time, commit and tag

Enums§

GlassBenchError
The glassbench error type

Constants§

DOLL_JS
ESTIMATE_ITERATIONS
Number of iterations to run, after warmup, in order to estimate the total number of iterations needed
MINIMAL_ITERATIONS
The minimal number of iterations we don’t want to go below when benchmarking (to limit random dispersion)
OPTIMAL_DURATION_NS
How long we’d like the total measure of a task to last. It’s divided by the task duration observed in the estimate phase to decide how many iterations to run for the measures (see the sketch after this list)
SQL_JS
SQL_WASM
VERSION
Version of the schema
VIEWER_CSS
VIEWER_JS
VIS_CSS
VIS_JS
WARMUP_ITERATIONS
Number of iterations to do before everything else
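
Together, WARMUP_ITERATIONS, ESTIMATE_ITERATIONS, MINIMAL_ITERATIONS and OPTIMAL_DURATION_NS suggest the measurement flow: warm up, estimate the task’s duration, then size the measure phase. The sketch below is a hypothetical illustration of that sizing step, not glassbench’s internal code, and the constant values are placeholders, not the crate’s real ones:

    // Placeholder values standing in for the crate's real constants.
    const MINIMAL_ITERATIONS: u64 = 100;
    const OPTIMAL_DURATION_NS: u64 = 1_000_000_000;

    // Divide the time budget by the estimated per-iteration duration,
    // never going below the minimum that limits random dispersion.
    fn planned_iterations(estimated_task_ns: u64) -> u64 {
        (OPTIMAL_DURATION_NS / estimated_task_ns.max(1)).max(MINIMAL_ITERATIONS)
    }

    fn main() {
        // A task estimated at 2_000 ns per iteration gets 500_000 iterations.
        println!("{}", planned_iterations(2_000));
    }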

Functions§

after_bench
Print the tabular report for the executed benchmark, then graph, list the history, and/or save, according to the command
create_bench
Create a bench with a user-defined name (instead of the file name) and command (instead of the one read from the arguments)
make_temp_file
pretend_used
Tell the compiler not to optimize away the given argument (typically the result of the function call you want to benchmark), as shown in the example above.
write_db