# dipstick
A fast and modular metrics toolkit for all Rust applications. Similar to popular logging frameworks, but with counters and timers.
- Does not bind application code to a single metrics implementation.
- Builds on stable Rust with minimal dependencies.
Metrics are recorded through a simple, portable API:

```rust
use dipstick::*;

// Send metrics to standard output (any other output works the same way).
let app_metrics = metrics(to_stdout());
app_metrics.counter("my_counter").count(3);
```
Metrics can be sent to multiple outputs at the same time:

```rust
// A tuple of outputs dispatches each metric to all of its members
// (the statsd address below is illustrative).
let app_metrics = metrics((
    to_stdout(),
    to_statsd("localhost:8125").expect("connect to statsd"),
));
```
Since instruments are decoupled from the backend, outputs can be swapped easily.
Metrics can be aggregated and sent periodically in the background:

```rust
use std::time::Duration;

// Accumulate metrics in memory, then publish a snapshot of all stats
// every three seconds (period and output are illustrative).
let (to_aggregate, from_aggregate) = aggregate();
publish_every(Duration::from_secs(3), from_aggregate, to_stdout(), all_stats);
let app_metrics = metrics(to_aggregate);
```
Use one of the predefined publishing strategies (`all_stats`, `summary` or `average`) or roll your own.
Metrics can be statistically sampled:

```rust
// Forward roughly one in a thousand metric values to the output and
// drop the rest (rate and address are illustrative).
let app_metrics = metrics(sample(0.001, to_statsd("server:8125").expect("connect to statsd")));
```
Metrics can be recorded asynchronously:

```rust
// Queue metric writes for a background thread instead of blocking the
// caller (the queue size is illustrative).
let app_metrics = metrics(async(48, to_stdout()));
```
Metric definitions can be cached to make using ad-hoc metrics faster:

```rust
// Keep up to 512 metric definitions; repeated names reuse the cached entry.
let app_metrics = metrics(cache(512, to_stdout()));
app_metrics.gauge("my_gauge").value(5);
```
Timers can be used in multiple ways:

```rust
let timer = app_metrics.timer("my_timer");

// Time a block with the time! macro...
time!(timer, { /* slow code here */ });
// ...or with a closure...
timer.time(|| { /* slow code here */ });
// ...or measure the interval manually...
let start = timer.start();
/* slow code here */
timer.stop(start);
// ...or report an externally measured duration, in microseconds.
timer.interval_us(123_456);
```
Related metrics can share a namespace:

```rust
// All metrics from db_metrics are prefixed with "database".
let db_metrics = app_metrics.with_prefix("database");
let db_timer = db_metrics.timer("db_timer");
let db_counter = db_metrics.counter("db_counter");
```
## Design
Dipstick's design goals are to:
- support as many metrics backends as possible while favoring none
- support all types of applications, from embedded to servers
- promote metrics conventions that facilitate app monitoring and maintenance
- stay out of the way in the code and at runtime (ergonomic, fast, resilient)
## Performance
Predefined timers use a bit more code but are generally faster because their initialization cost is paid only once. Ad-hoc timers are redefined "inline" on each use; they are more flexible, but have more overhead because their init cost is paid on each use. Defining a metric `cache()` reduces that cost for recurring ad-hoc metrics.
Run benchmarks with `cargo +nightly bench --features bench`.
## TODO

Although already usable, Dipstick is still under heavy development and makes no guarantees of any kind at this point. See the following list for potential caveats:
- META turn TODOs into GitHub issues
- generic publisher / sources
- dispatch scopes
- feature flags
- derive stats
- time measurement units in metric kind (us, ms, etc.) for naming & scaling
- heartbeat metric on publish
- logger templates
- configurable aggregation
- non-aggregating buffers
- framework glue (rocket, iron, gotham, indicatif, etc.)
- more tests & benchmarks
- complete doc / inline samples
- more example apps
- A cool logo
- method annotation processors (`#[timer("name")]`)
- fastsinks (M / &M) vs. safesinks (Arc)