bma-benchmark
Benchmark for Rust and humans
What is this for
I like testing different libraries, crates and algorithms. I do benchmarks on prototypes almost every day and decided to make a simple dedicated crate for that. Here we go: https://crates.io/crates/bma-benchmark
The benchmark engine is very simple to launch and outputs all collected data in a pretty, colored, readable format.
How to use
Let us create a simple benchmark, using crate macros only:
#[macro_use]
extern crate bma_benchmark;
use std::sync::Mutex;
let n = 100_000_000;
let mutex = Mutex::new(0);
benchmark_start!();
for _ in 0..n {
    let _a = mutex.lock().unwrap();
}
benchmark_print!(n);
The same can also be done with a single "benchmark" macro:
#[macro_use]
extern crate bma_benchmark;
use std::sync::Mutex;
let mutex = Mutex::new(0);
benchmark!(100_000_000, {
    let _a = mutex.lock().unwrap();
});

Pretty cool, isn't it? Let us create a more complex staged benchmark and compare, e.g., Mutex vs RwLock. Staged benchmarks display a comparison table. If a reference stage is specified, the table also contains the speed difference for all other stages.
#[macro_use]
extern crate bma_benchmark;
use std::sync::{Mutex, RwLock};
let n = 10_000_000;
let mutex = Mutex::new(0);
let rwlock = RwLock::new(0);
staged_benchmark_start!("mutex");
for _ in 0..n {
    let _a = mutex.lock().unwrap();
}
staged_benchmark_finish_current!(n);
staged_benchmark_start!("rwlock-read");
for _ in 0..n {
    let _a = rwlock.read().unwrap();
}
staged_benchmark_finish_current!(n);
staged_benchmark_print_for!("mutex");
The same can also be done with a couple of staged_benchmark macros:
#[macro_use]
extern crate bma_benchmark;
use std::sync::{Mutex, RwLock};
let n = 10_000_000;
let mutex = Mutex::new(0);
let rwlock = RwLock::new(0);
staged_benchmark!("mutex", n, {
    let _a = mutex.lock().unwrap();
});
staged_benchmark!("rwlock-read", n, {
    let _a = rwlock.read().unwrap();
});
staged_benchmark_print_for!("mutex");
Or split into functions with benchmark_stage attributes:
#[macro_use]
extern crate bma_benchmark;
use std::sync::{Mutex, RwLock};

#[benchmark_stage(i = 10_000_000)]
fn benchmark_mutex(mutex: Mutex<u64>) {
    let _a = mutex.lock().unwrap();
}

#[benchmark_stage(i = 10_000_000, name = "rwlock-read")]
fn benchmark_rwlock(rwlock: RwLock<u64>) {
    let _a = rwlock.read().unwrap();
}

let mutex = Mutex::new(0);
let rwlock = RwLock::new(0);
benchmark_mutex(mutex);
benchmark_rwlock(rwlock);
staged_benchmark_print_for!("mutex");

Errors
The macros benchmark_print, staged_benchmark_finish and staged_benchmark_finish_current accept an error count as an additional parameter.
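A minimal sketch of that parameter in use: errors are counted manually in the loop and the total is passed as the second argument to benchmark_print. The error-counting logic here is purely illustrative (Mutex::lock only fails when the lock is poisoned):

```rust
#[macro_use]
extern crate bma_benchmark;
use std::sync::Mutex;

let n = 10_000_000;
let mutex = Mutex::new(0);
let mut errors = 0u64;
benchmark_start!();
for _ in 0..n {
    // count failed iterations instead of unwrapping
    if mutex.lock().is_err() {
        errors += 1;
    }
}
// second argument: number of failed iterations
benchmark_print!(n, errors);
```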
For code blocks, the macros benchmark_check and staged_benchmark_check can be used. In this case, the block MUST evaluate to true for normal execution and false for errors:
#[macro_use]
extern crate bma_benchmark;
use std::sync::Mutex;
let mutex = Mutex::new(0);
benchmark_check!(10_000_000, {
    mutex.lock().is_ok()
});
The benchmark_stage attribute has a check option, which behaves similarly. If used, the function body MUST end with (not return) a bool as well.
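A sketch of that option, assuming it is spelled check inside the attribute; the function body ends with a bool expression instead of returning one:

```rust
#[macro_use]
extern crate bma_benchmark;
use std::sync::Mutex;

#[benchmark_stage(i = 10_000_000, check)]
fn benchmark_mutex(mutex: Mutex<u64>) {
    // the last expression is the success flag: true = ok, false = error
    mutex.lock().is_ok()
}
```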
If any errors are reported, additional columns appear: success count, error count and error rate:

Latency benchmarks
use bma_benchmark::LatencyBenchmark;

let mut lb = LatencyBenchmark::new();
for _ in 0..1000 {
    lb.op_start();
    // do something to be measured
    lb.op_finish();
}
lb.print();
latency (μs) avg: 883, min: 701, max: 1_165
Performance measurements
use bma_benchmark::Perf;

let file_path = "largefile";
let mut perf = Perf::new();
for _ in 0..10 {
    perf.start();
    let content = std::fs::read(file_path).unwrap();
    perf.checkpoint("read");
    // process the data between checkpoints
    let _ = content.len();
    perf.checkpoint("process");
}
perf.print();

Need anything more complex? Check the crate docs and use structures manually.
Enjoy!