metrics-lib 0.9.1

The fastest metrics library for Rust. Lock-free sub-nanosecond gauges (~0.4 ns) and low-nanosecond counters (~1.5 ns) in local benchmarks, plus timers, rate meters, async timing, adaptive sampling, and system health. Cross-platform with minimal dependencies.
Documentation

Latest local Criterion means (cargo bench --bench metrics_bench --features meter, Windows x86_64, Rust stable):

  • Counter increment: 1.48ns/op (676.36M ops/sec)
  • Gauge set: 0.40ns/op (2500.31M ops/sec)
  • Timer record: 3.17ns/op (314.99M ops/sec)
  • Memory: 64 bytes per metric (cache-aligned)

Features

Core Metrics

  • 🔢 Counters - Atomic increment/decrement with overflow protection
  • 📊 Gauges - IEEE 754 atomic floating-point with mathematical operations
  • ⏱️ Timers - Nanosecond precision with RAII guards and batch recording
  • 📈 Rate Meters - Tumbling window rates with burst detection and API limiting
  • 💾 System Health - Built-in CPU, memory, and process monitoring

Advanced Features

  • Lock-Free - Zero locks in hot paths, pure atomic operations
  • Async Native - First-class async/await support with zero-cost abstractions
  • Resilience - Circuit breakers, adaptive sampling, and backpressure control
  • Cross-Platform - Linux, macOS, Windows with optimized system integrations
  • Cache-Aligned - 64-byte alignment prevents false sharing

API Overview

For a complete reference with examples, see docs/API.md.

  • Counter — ultra-fast atomic counters with batch and conditional ops
  • Gauge — atomic f64 gauges with math ops, EMA, and min/max helpers
  • Timer — nanosecond timers, RAII guards, and closure/async timing
  • RateMeter — tumbling-window rate tracking and bursts
  • SystemHealth — CPU, memory, load, threads, FDs, health score
  • Async support — AsyncTimerExt, AsyncMetricBatch
  • Adaptive controls — sampling, circuit breaker, backpressure
  • Prelude — convenient re-exports

Error handling: try_ variants

All core metrics expose non-panicking try_ methods that validate inputs and return Result<_, MetricsError> instead of panicking:

  • Counter: try_inc, try_add, try_set, try_fetch_add, try_inc_and_get
  • Gauge: try_set, try_add, try_sub, try_set_max, try_set_min
  • Timer: try_record_ns, try_record, try_record_batch
  • RateMeter: try_tick, try_tick_n, try_tick_if_under_limit

Error semantics:

  • MetricsError::Overflow — arithmetic would overflow/underflow an internal counter.
  • MetricsError::InvalidValue { reason } — non-finite or otherwise invalid input (e.g., NaN for Gauge).
  • MetricsError::OverLimit — operation would exceed a configured limit (e.g., rate limiting helpers).

Example:

use metrics_lib::{init, metrics, MetricsError};

fn main() -> Result<(), MetricsError> {
    init();
    let c = metrics().counter("jobs");
    c.try_add(10)?;      // Result<(), MetricsError>
    let r = metrics().rate("qps");
    let _allowed = r.try_tick_if_under_limit(1000.0)?; // Result<bool, MetricsError>
    Ok(())
}

Panic guarantees: the plain methods (inc, add, set, tick, etc.) prioritize speed and may saturate or assume valid inputs. Prefer try_ variants when you need explicit error handling.

Installation

Add to your Cargo.toml:

[dependencies]
metrics-lib = "0.9.1"

# Optional features
metrics-lib = { version = "0.9.1", features = ["async"] }

# Full feature set (stable + async + serde)
metrics-lib = { version = "0.9.1", features = ["full"] }

Quick Start

use metrics_lib::{init, metrics};

// Initialize once at startup
init();

// Counters
metrics().counter("requests").inc();
metrics().counter("errors").add(5);

// Gauges
metrics().gauge("cpu_usage").set(87.3);
metrics().gauge("memory_gb").add(1.5);

// Timers - automatic RAII timing
{
    let _timer = metrics().timer("api_call").start();
    // Your code here - automatically timed on drop
}

// Or time a closure
let result = metrics().time("db_query", || {
    // Database operation
    "user_data"
});

// System health monitoring
let cpu = metrics().system().cpu_used();
let memory_gb = metrics().system().mem_used_gb();

// Rate metering
metrics().rate("api_calls").tick();

Observability Quick Start

  • Integration Examples: see docs/API.md#integration-examples
  • Grafana dashboard (ready to import): observability/grafana-dashboard.json
  • Prometheus recording rules: observability/recording-rules.yaml
  • Kubernetes Service: docs/k8s/service.yaml
  • Prometheus Operator ServiceMonitor: docs/k8s/servicemonitor.yaml
  • Secured ServiceMonitor (TLS/Bearer): docs/k8s/servicemonitor-secured.yaml

Commands

# Import Grafana dashboard via API
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <GRAFANA_API_TOKEN>" \
  http://<grafana-host>/api/dashboards/db \
  -d @observability/grafana-dashboard.json

# Validate Prometheus recording rules
promtool check rules observability/recording-rules.yaml

# Apply Kubernetes manifests
kubectl apply -f docs/k8s/service.yaml
kubectl apply -f docs/k8s/servicemonitor.yaml
# For secured endpoints
kubectl apply -f docs/k8s/servicemonitor-secured.yaml

Advanced Usage

Async Support

use std::time::Duration;
use metrics_lib::{metrics, AsyncMetricBatch, AsyncTimerExt};

// Async timing with zero overhead and typed result
let result: &str = metrics()
    .timer("async_work")
    .time_async(|| async {
        tokio::time::sleep(Duration::from_millis(10)).await;
        "completed"
    })
    .await;

// Batched async updates (flush takes &MetricsCore)
let mut batch = AsyncMetricBatch::new();
batch.counter_inc("requests", 1);
batch.gauge_set("cpu", 85.2);
batch.flush(metrics());

Examples

Run these self-contained examples to see the library in action:

  • Quick Start

    • File: examples/quick_start.rs
    • Run:
      cargo run --example quick_start --release
      
  • Streaming Rate Window

    • File: examples/streaming_rate_window.rs
    • Run:
      cargo run --example streaming_rate_window --release
      
  • Axum Registry Integration (minimal web service)

    • File: examples/axum_registry_integration.rs
    • Run:
      cargo run --example axum_registry_integration --release
      
    • Endpoints:
      • GET /health — liveness probe
      • GET /metrics-demo — updates metrics (counter/gauge/timer/rate)
      • GET /export — returns a JSON snapshot of selected metrics
  • Quick Tour

    • File: examples/quick_tour.rs
    • Run:
      cargo run --example quick_tour --release
      
  • Async Batch + Timing

    • File: examples/async_batch_timing.rs
    • Run:
      cargo run --example async_batch_timing --release
      
  • Token Bucket Rate Limiter

    • File: examples/token_bucket_limiter.rs
    • Run:
      cargo run --example token_bucket_limiter --release
      
  • Custom Exporter (OpenMetrics-like)

    • File: examples/custom_exporter_openmetrics.rs
    • Run:
      cargo run --example custom_exporter_openmetrics --release
      
  • Axum Middleware Metrics (minimal)

    • File: examples/axum_middleware_metrics.rs
    • Run:
      cargo run --example axum_middleware_metrics --release
      
  • Contention & Admission Demo

    • File: examples/contention_admission.rs
    • Run:
      cargo run --example contention_admission --release
      
  • CPU Stats Overview

    • File: examples/cpu_stats.rs
    • Run:
      cargo run --example cpu_stats --release
      
  • Memory Stats Overview

    • File: examples/memory_stats.rs
    • Run:
      cargo run --example memory_stats --release
      
  • Health Dashboard

    • File: examples/health_dashboard.rs
    • Run:
      cargo run --example health_dashboard --release
      
  • Cache Hit/Miss

    • File: examples/cache_hit_miss.rs
    • Run:
      cargo run --example cache_hit_miss --release
      
  • Broker Throughput

    • File: examples/broker_throughput.rs
    • Run:
      cargo run --example broker_throughput --release
      

More Real-World Examples (API Reference)

Running many examples quickly

For convenience, a helper script runs a curated set of non-blocking examples sequentially in release mode (it skips server examples such as the Axum middleware):

bash tools/run_examples.sh

You can also pass a custom comma-separated list via EXAMPLES:

EXAMPLES="quick_start,quick_tour,cpu_stats" bash tools/run_examples.sh

Resilience Features

use metrics_lib::{AdaptiveSampler, SamplingStrategy, MetricCircuitBreaker};

// Adaptive sampling under load
let sampler = AdaptiveSampler::new(SamplingStrategy::Dynamic {
    min_rate: 1,
    max_rate: 100,
    target_throughput: 10000,
});

let duration = std::time::Duration::from_micros(250); // e.g., a measured operation
if sampler.should_sample() {
    metrics().timer("expensive_op").record(duration);
}

// Circuit breaker protection
let breaker = MetricCircuitBreaker::new(Default::default());
if breaker.is_allowed() {
    // Perform operation
    breaker.record_success();
} else {
    // Circuit is open, skip operation
}

System Monitoring

let health = metrics().system();

println!("CPU: {:.1}%", health.cpu_used());
println!("Memory: {:.1} GB", health.mem_used_gb());
println!("Load: {:.2}", health.load_avg());
println!("Threads: {}", health.thread_count());

Benchmarks

Run the included benchmarks to see performance on your system:

# Basic performance comparison
cargo run --example benchmark_comparison --release

# Comprehensive benchmarks (Criterion)
cargo bench --bench metrics_bench --features meter

# Cross-platform system tests
cargo test --all-features

Interpreting Criterion Results

  • Criterion writes reports to target/criterion/ with per-benchmark statistics and comparisons.
  • Key numbers to watch: time: [low … mean … high] and outlier percentages.
  • Compare runs over time to detect regressions. Store artifacts from CI for historical comparison.
  • Benchmarks are microbenchmarks; validate with end-to-end measurements as needed.

CI Artifacts

  • Pull Requests: CI runs a fast smoke bench and uploads criterion-reports with target/criterion.
  • Nightly: The Benchmarks workflow runs full-duration benches on Linux/macOS/Windows and uploads artifacts as benchmark-results-<os>.
  • You can download these artifacts from the GitHub Actions run page to compare results across commits.

Latest CI Benchmarks

View the latest nightly results and artifacts here:

Latest CI Benchmarks (Benchmarks workflow)

Benchmark history (GitHub Pages):

Benchmark History (gh-pages)

Sample Results (latest local run; Windows x86_64, Rust stable):

Counter Increment: 1.48 ns/op (676.36 M ops/sec)
Gauge Set:         0.40 ns/op (2500.31 M ops/sec)
Timer Record:      3.17 ns/op (314.99 M ops/sec)
Mixed Operations:  151.58 ns/op (6.60 M ops/sec)

Notes: Latest numbers taken from local Criterion means under target/criterion/**/new/estimates.json. Actual throughput varies by CPU and environment; use the GitHub Pages benchmark history for trends.

Methodology

  • Tooling: Criterion with release builds.
  • Flags for stability on local runs: cargo bench --bench metrics_bench --features meter -- -w 3.0 -m 5.0 -n 100 (increase on dedicated runners).
  • Environment disclosure (example):
    • CPU: Apple M1 Pro (performance cores)
    • Rust: stable toolchain
    • Target: aarch64-apple-darwin
    • Governor: default (for CI use a performance governor where applicable)

See also: docs/zero-overhead-proof.md for assembly inspection and binary size analysis, and docs/performance-tuning.md for environment hardening.

Architecture

Lock-Free Design

  • Atomic Operations: All metrics use Relaxed ordering for maximum performance
  • Cache-Line Alignment: 64-byte alignment eliminates false sharing
  • Compare-and-Swap: Lock-free min/max tracking in timers
  • Thread-Local Storage: Fast random number generation for sampling

Memory Layout

#[repr(align(64))]
pub struct Counter {
    value: AtomicU64,           // 8 bytes
    // 56 bytes padding to cache line boundary
}

Zero-Cost Abstractions

  • RAII Timers: Compile-time guaranteed cleanup
  • Async Guards: Allocation-free futures for timing
  • Batch Operations: Vectorized updates for efficiency

Testing

Comprehensive automated coverage includes:

  • default features: 63 unit tests + 2 API smoke tests + 14 rustdoc tests
  • all features: 110 unit tests + 3 API smoke tests + 17 rustdoc tests

# Run all tests
cargo test

# Test with all features
cargo test --all-features

# Run only bench-gated tests (feature-flagged and ignored by default)
cargo test --features bench-tests -- --ignored

# Run benchmarks (Criterion)
cargo bench --bench metrics_bench --features meter

# Memory checks: build Linux test binaries, then run them under valgrind
cargo test --target x86_64-unknown-linux-gnu

Cross-Platform Support

Tier 1 Support:

  • ✅ Linux (x86_64, aarch64)
  • ✅ macOS (x86_64, Apple Silicon)
  • ✅ Windows (x86_64)

System Integration:

  • Linux: /proc filesystem, sysinfo APIs
  • macOS: mach system calls, sysctl APIs
  • Windows: Performance counters, WMI integration

Graceful Fallbacks:

  • Unsupported platforms default to portable implementations
  • Feature detection at runtime
  • No panics on missing system features

Comparison

Library                         Counter ns/op   Gauge ns/op   Timer ns/op   Memory/Metric   Features
metrics-lib (latest local run)  1.48            0.40          3.17          64B             ✅ Async, circuit breakers, system monitoring
metrics-rs                      N/A             N/A           N/A           N/A             External crate (not benchmarked in this repo)
prometheus                      N/A             N/A           N/A           N/A             External crate (not benchmarked in this repo)
statsd                          N/A             N/A           N/A           N/A             External crate (not benchmarked in this repo)

Configuration

Feature Flags

Feature    Description
count      Counter metric type
gauge      Gauge metric type
timer      Timer metric type
meter      Rate meter metric type
sample     Statistical sampling
histogram  Histogram support (requires sample)
async      Async/await support (requires Tokio)
serde      Serde serialization support
all        All stable features (excludes async and serde)
full       All features including async and serde
minimal    Smallest useful build (counter only)

# All stable features:
metrics-lib = { version = "0.9.1", features = ["all"] }

# Full build including async and serde:
metrics-lib = { version = "0.9.1", features = ["full"] }

# Minimal build (counter only):
metrics-lib = { version = "0.9.1", features = ["minimal"] }

Runtime Configuration

use metrics_lib::{init_with_config, Config};

let config = Config {
    max_metrics: 10000,
    update_interval_ms: 1000,
    enable_system_metrics: true,
};

init_with_config(config);

Contributing

We welcome contributions! Please see our Contributing Guide.

Development Setup

# Clone repository
git clone https://github.com/jamesgober/metrics-lib.git
cd metrics-lib

# Run tests
cargo test --all-features

# Run benchmarks
cargo bench --bench metrics_bench --features meter

# Check formatting and lints
cargo fmt --all -- --check
cargo clippy --all-features -- -D warnings

Links

Guides