A micro-benchmarking crate with memory.
See usage and example in the README.
In a standard setup you’ll only use
- the glassbench! macro, which lets you title your bench and add functions defining tasks
- the Bench struct, passed as argument to your global bench function, with its Bench::task function to define a task
- the TaskBench struct that you receive as argument when defining a task. You’ll call TaskBench::iter with the callback to benchmark
- pretend_used as an opaque sinkhole, which can receive the values you produce in your tests and prevents the optimizer from removing their construction
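For orientation, here is a minimal sketch of a benchmark file wired together from these pieces; the `compute` helper and the task body are placeholders, not part of the crate:

```rust
use glassbench::*;

// Placeholder for the code you actually want to benchmark.
fn compute(n: u64) -> u64 {
    (0..n).sum()
}

// A bench function receives a &mut Bench and registers tasks on it.
fn bench_compute(bench: &mut Bench) {
    bench.task("sum 0..10_000", |task| {
        // TaskBench::iter runs the closure repeatedly and measures it.
        task.iter(|| {
            // pretend_used is an opaque sinkhole: it keeps the optimizer
            // from removing the construction of the benchmarked value.
            pretend_used(compute(10_000));
        });
    });
}

// glassbench! titles the bench and lists the bench functions to run.
glassbench!(
    "Compute",
    bench_compute,
);
```

As with other custom harnesses, such a file goes under benches/ with harness = false declared for it in Cargo.toml (see the README for the full setup).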
Macros§
- glassbench - Generates a benchmark with a consistent id (using the benchmark file title), calling the benchmarking functions given as arguments.
Structs§
- Bench - A whole benchmark
- Command - What the user asked at the CLI
- Db - Storage interface for Glassbench, wrapping a SQLite connection
- GitInfo - Git-related information regarding the execution context
- HistoryGraph - A temporary structure for graphing a history
- HistoryTbl - A temporary structure for printing a history as a table on standard output
- HtmlViewer - The builder of the HTML standalone viewer
- Printer - A small helper to print using markdown templates
- Report - A temporary structure to print the result of a benchmark to the console
- TaskBench - Benching of one task
- TaskBenchDiff - Printable difference between two task measures
- TaskHistory - The history of the measures of a task, as defined by the bench name and task name
- TaskMeasure - The result of the measure of a task: number of iterations and total duration
- TaskRecord - A measure of a task, with time, commit and tag
Enums§
- GlassBenchError - glassbench error type
Constants§
- DOLL_JS
- ESTIMATE_ITERATIONS - Number of iterations to do, after warmup, to estimate the total number of iterations to do
- MINIMAL_ITERATIONS - The absolute minimal number of iterations we don’t want to go below for benchmarking (to minimize random dispersion)
- OPTIMAL_DURATION_NS - How long we’d like the measures of a task to take. Will be divided by the duration of a task in the estimate phase to decide how many iterations we’ll do for measures
- SQL_JS
- SQL_WASM
- VERSION - Version of the schema
- VIEWER_CSS
- VIEWER_JS
- VIS_CSS
- VIS_JS
- WARMUP_ITERATIONS - Number of iterations to do before everything else
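Taken together, these constants describe the iteration-count heuristic: warm up, run ESTIMATE_ITERATIONS iterations to time the task, then size the real measurement so it lasts about OPTIMAL_DURATION_NS without dropping below MINIMAL_ITERATIONS. Below is a rough sketch of that arithmetic, based only on the descriptions above; the helper name and exact clamping are assumptions, not the crate’s actual code:

```rust
use glassbench::{ESTIMATE_ITERATIONS, MINIMAL_ITERATIONS, OPTIMAL_DURATION_NS};

// Hypothetical helper illustrating how the constants plausibly combine;
// glassbench's real estimation code may differ in its details.
fn planned_iterations(estimate_phase_total_ns: u64) -> u64 {
    // Mean duration of one iteration, observed during the estimate phase.
    let ns_per_iteration = (estimate_phase_total_ns / ESTIMATE_ITERATIONS as u64).max(1);
    // Aim for roughly OPTIMAL_DURATION_NS of total measurement time,
    // but never go below MINIMAL_ITERATIONS (to limit random dispersion).
    (OPTIMAL_DURATION_NS as u64 / ns_per_iteration).max(MINIMAL_ITERATIONS as u64)
}
```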
Functions§
- after_bench - Print the tabular report for the executed benchmark, then graph, list history, and/or save according to the command
- create_bench - Create a bench with a user-defined name (instead of the file name) and command (instead of the one read in arguments)
- make_temp_file
- pretend_used - Tell the compiler not to optimize away the given argument (which is expected to be the function call you want to benchmark)
- write_db