Crate fluxbench

§FluxBench

Benchmarking framework for Rust with crash isolation, statistical rigor, and CI integration.

FluxBench provides a next-generation benchmarking platform:

  • Process Isolation: Crash-resilient “Fail-Late” architecture; panicking benchmarks don’t crash the suite
  • Zero-Copy IPC: Efficient supervisor-worker communication using rkyv serialization
  • Statistical Rigor: Bootstrap resampling with BCa (bias-corrected and accelerated) confidence intervals (see the sketch after this list)
  • CI Integration: Severity levels (critical/warning/info), GitHub Actions summaries, baseline comparison
  • Algebraic Verification: Performance assertions directly in code with mathematical expressions
  • Synthetic Metrics: Compute derived metrics from benchmark results
  • Multi-Way Comparisons: Generate comparison tables and series charts
  • Allocation Tracking: TrackingAllocator measures heap usage per iteration
  • High-Precision Timing: RDTSC cycle counting on x86_64 with Instant fallback
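
Under the hood, a BCa interval starts from ordinary bootstrap resampling: draw many resamples of the timing data with replacement, compute the statistic (here, the mean) on each, and read confidence bounds off the resulting distribution. The following is a minimal, dependency-free sketch of that core loop only; the bias-correction and acceleration adjustments that compute_bootstrap applies on top are omitted, and the function name, the xorshift PRNG, and the timings_ns variable in the usage comment are illustrative rather than part of the FluxBench API.

// Illustrative only: plain percentile bootstrap for the mean, with a tiny
// xorshift PRNG so the sketch has no external dependencies. FluxBench's
// `compute_bootstrap` additionally applies BCa corrections to the endpoints.
//
// Usage (hypothetical): percentile_bootstrap(&timings_ns, 10_000, 0.05)
// returns an approximate 95% confidence interval for the mean.
fn percentile_bootstrap(samples: &[f64], resamples: usize, alpha: f64) -> (f64, f64) {
    let n = samples.len();
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15; // fixed seed for reproducibility
    let mut rand_index = |bound: usize| -> usize {
        // xorshift64* step, reduced to [0, bound)
        state ^= state >> 12;
        state ^= state << 25;
        state ^= state >> 27;
        (state.wrapping_mul(0x2545_F491_4F6C_DD1D) % bound as u64) as usize
    };

    // Mean of each resample drawn with replacement.
    let mut means: Vec<f64> = (0..resamples)
        .map(|_| {
            let sum: f64 = (0..n).map(|_| samples[rand_index(n)]).sum();
            sum / n as f64
        })
        .collect();
    means.sort_by(|a, b| a.partial_cmp(b).unwrap());

    // Percentile interval: cut off alpha/2 of the resampled means on each side.
    let lo = means[((alpha / 2.0) * resamples as f64) as usize];
    let hi_idx = (((1.0 - alpha / 2.0) * resamples as f64) as usize).min(resamples - 1);
    (lo, means[hi_idx])
}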

§Quick Start

use fluxbench::prelude::*;

#[flux::bench]
fn my_benchmark(b: &mut Bencher) {
    b.iter(|| {
        // Code to benchmark
        expensive_operation()
    });
}

// Placeholder for the code under test (not part of FluxBench).
fn expensive_operation() -> u64 {
    (0..1_000u64).sum()
}
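
Returning the value of expensive_operation() from the closure is the usual way to keep the optimizer from discarding the benchmarked work; std::hint::black_box can be used the same way for intermediate values that are not returned.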

§Async Benchmarks

use fluxbench::prelude::*;
use std::time::Duration;

#[flux::bench(runtime = "multi_thread", worker_threads = 4)]
async fn async_benchmark(b: &mut Bencher) {
    b.iter(|| async {
        tokio::time::sleep(Duration::from_millis(1)).await;
    });
}
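
The runtime and worker_threads attributes presumably correspond to Tokio's runtime configuration (a multi-threaded runtime with the given number of worker threads), with the async block passed to b.iter awaited on that runtime for each iteration.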

§Performance Assertions

#[flux::verify(expr = "(raw - overhead) < 50000000", severity = "critical")]
struct NetTimeCheck;
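
In this example, raw and overhead are presumably metric names (e.g. mean times in nanoseconds) taken from registered benchmarks via the MetricContext; the check passes while their difference stays below 50,000,000 ns (50 ms) and, at critical severity, is expected to fail the CI run when it does not.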

Modules§

flux
Attribute namespace for flux macros
prelude
Prelude for convenient imports

Structs§

Bencher
The Bencher provides iteration control for benchmarks.
BenchmarkDef
Benchmark definition registered via #[flux::bench]
BenchmarkResult
Result of a single benchmark run
BootstrapConfig
Bootstrap configuration
BootstrapResult
Result of bootstrap analysis
ChartDef
Chart definition for dashboard
CompareDef
Comparison group - groups multiple benchmarks for side-by-side comparison
GroupDef
Group definition for organizing benchmarks
MetricContext
Context holding benchmark metrics for expression evaluation
ReportDef
Report/dashboard definition
SummaryStatistics
Comprehensive summary statistics
SyntheticDef
Definition of a synthetic metric registered via #[flux::synthetic]
TrackingAllocator
Tracking allocator that wraps the system allocator
Verification
Verification definition
VerificationResult
Result of a verification check
VerifyDef
Definition of a verification rule registered via #[flux::verify]

Enums§

ChartType
Chart type for dashboard layout
IterationMode
Mode of iteration for the benchmark
Severity
Severity levels for CI integration
VerificationStatus
Verification execution status with explicit states for all outcomes.

Functions§

compute_bootstrap
Compute bootstrap confidence interval for the mean
compute_summary
Compute summary statistics with proper separation of cleaned vs raw data
current_allocation
Get current allocation statistics
reset_allocation_counter
Reset allocation counters (call before each iteration)
run
Run the FluxBench CLI harness.

Attribute Macros§

bench
Register a benchmark function
compare
Define a comparison group for multiple benchmarks
group
Define a benchmark group
report
Define a dashboard report
synthetic
Define a synthetic (computed) metric
verify
Define a performance verification