
Crate profiler


§Metrics-oriented profiler + bencher

This library provides a way to define metrics through the Metrics trait and gather them in places instrumented by the tracing crate.

§Bencher and profiler in one place

Imagine having a crate with a pipeline like this:

fn pipeline(data: &[u8]) -> Vec<u8> {
   serialize(process(parse(data)))
}

At some point the pipeline’s performance no longer suits your needs and you want to start optimizing, but first you need to know which part is slowing the pipeline down.

So you start integrating a bencher and a profiler.

The profiler shows which parts take the most time, while the bencher snapshots the current state of performance to catch future regressions.

With the classic toolset you end up with:

  1. A benchmark for each pipeline phase (with setup code and synthetic data per phase)
  2. An entrypoint with test data suitable for the profiler (it can be shared with the benches, but with care)
  3. [Optional] Some metrics in production, to allow gathering performance stats there

This approach involves a lot of duplication and boilerplate, and also forces you to expose some private API (the inputs/outputs of the pipeline phases).
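The manual-timing variant of that classic approach can be sketched with nothing but the standard library. The phase bodies below are the same toy implementations used in the instrumented example on this page; `timed_pipeline` is a hypothetical name for illustration, not part of this crate:

```rust
use std::time::Instant;

fn parse(data: &[u8]) -> Vec<u32> {
    data.chunks(4)
        .map(|c| u32::from_le_bytes(c.try_into().unwrap_or([0; 4])))
        .collect()
}

fn process(items: Vec<u32>) -> u64 {
    items.iter().map(|&x| x as u64).sum()
}

fn serialize(result: u64) -> Vec<u8> {
    result.to_le_bytes().to_vec()
}

// Hand-rolled profiling: time each phase separately to find the bottleneck.
// This is the boilerplate that `profiler` + `tracing` spans replace.
fn timed_pipeline(data: &[u8]) -> Vec<u8> {
    let t = Instant::now();
    let parsed = parse(data);
    println!("parse:     {:?}", t.elapsed());

    let t = Instant::now();
    let processed = process(parsed);
    println!("process:   {:?}", t.elapsed());

    let t = Instant::now();
    let out = serialize(processed);
    println!("serialize: {:?}", t.elapsed());
    out
}
```

Each timing line has to be written, maintained, and kept in sync with the pipeline by hand, which is exactly the duplication the next section avoids.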

Instead, you can use profiler and simplify the process:

// Instrument the functions you want to observe using `tracing::instrument`
#[tracing::instrument(skip_all)]
fn parse(data: &[u8]) -> Vec<u32> {
   data.chunks(4).map(|c| u32::from_le_bytes(c.try_into().unwrap_or([0; 4]))).collect()
}
fn process(items: Vec<u32>) -> u64 {
   // Or use the `tracing::span` API directly
   let _span = tracing::info_span!("process").entered();
   items.iter().map(|&x| x as u64).sum()
}
#[tracing::instrument(skip_all)]
fn serialize(result: u64) -> Vec<u8> {
   result.to_le_bytes().to_vec()
}
fn pipeline(data: &[u8]) -> Vec<u8> {
   serialize(process(parse(data)))
}

// -- And create a single entrypoint with custom setup.
fn bench_pipeline() {
   let data: Vec<u8> = (0..1024u16).flat_map(|x| x.to_le_bytes()).collect();
   pipeline(&data);
}
profiler::bench_main!(bench_pipeline);

Put this file somewhere under <CARGO_ROOT>/benches/bench_name.rs and add a bench section to Cargo.toml:

[[bench]]
name = "bench_name"
harness = false

Now you have a single entrypoint (run via cargo bench) where you can observe and debug performance regressions.

§Extend metrics

By default, profiler provides multiple metrics providers and implements the default bench::MetricsProvider used in benchmarking. You can decide what is important to you by deriving your own combination of the Metrics trait using #[derive(Metrics)], and use it in bench_main!.

If you need to track something unique to your application (bytes read, slab size, etc.), you can define your own provider using the SingleMetric trait.
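To illustrate the idea behind such a custom provider (without guessing at the crate’s actual SingleMetric signature, which is not shown here), a bytes-read metric typically boils down to a counter that is snapshotted before a measured region and diffed afterwards. The names below are hypothetical stand-ins, stdlib only:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical stand-in for a custom metric source: a global bytes-read
// counter. A real implementation would expose this through the crate's
// `SingleMetric` trait rather than free functions.
static BYTES_READ: AtomicU64 = AtomicU64::new(0);

/// Called from I/O code whenever bytes are consumed.
fn record_read(n: u64) {
    BYTES_READ.fetch_add(n, Ordering::Relaxed);
}

/// Snapshot the counter before entering a measured region...
fn snapshot() -> u64 {
    BYTES_READ.load(Ordering::Relaxed)
}

/// ...and report the delta on exit, the way a profiler attributes
/// a metric to a single span.
fn delta_since(start: u64) -> u64 {
    snapshot() - start
}
```

The snapshot/delta pattern is what lets a per-span metric stay correct even when the underlying counter is global and monotonically increasing.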

If you want to collect metrics outside of a benchmark, you can use the Collector API directly.

Re-exports§

pub use crate::metrics::InstantProvider;
pub use crate::metrics::Metrics;
pub use crate::metrics::PerfEventMetric;
pub use crate::metrics::RusageKind;
pub use crate::metrics::RusageMetric;
pub use crate::metrics::SingleMetric;
pub use crate::metrics::format_unit_helper;

Modules§

bench
metrics
Metrics extending the functionality of the profiler.

Macros§

bench_main
Generate main function for benchmark.

Structs§

Collector
Single collector: a tracing_subscriber::Layer that captures ProfileEntry events on span enter / exit into an internal buffer.

Enums§

ProfileEntry
Entry collected by Collector.

Functions§

black_box
An identity function that hints to the compiler to be maximally pessimistic about what black_box could do.

Derive Macros§

Metrics
Derive macro for the Metrics trait.