Module metrics

Metrics with Prometheus and/or OpenTelemetry backends.

Provides production-ready metrics collection with support for:

  • metrics feature only: Prometheus scrape endpoint via /metrics
  • otel-metrics feature only: OTLP push to OTel-compatible backends
  • Both features: a fanout recorder exports metrics to both Prometheus and OTel
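
The feature combinations above are selected at build time via Cargo features. A Cargo.toml sketch enabling both backends plus the HTTP server might look like this (the version number is illustrative, not prescriptive):

```toml
[dependencies]
# Enabling both "metrics" and "otel-metrics" activates the fanout recorder,
# exporting to the Prometheus scrape endpoint and pushing OTLP simultaneously.
hyperi_rustlib = { version = "0.1", features = ["metrics", "otel-metrics", "http-server"] }
```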

§Features

  • Counter, Gauge, Histogram metric types
  • Automatic process metrics (CPU, memory, file descriptors)
  • Container metrics from cgroups (memory limit, CPU limit)
  • Built-in HTTP server for /metrics endpoint (Prometheus)
  • OTLP push to HyperDX, Jaeger, Grafana, etc. (OTel)
  • Readiness callback for /health/ready endpoints
  • Optional scaling pressure endpoint (/scaling/pressure)
  • Optional memory guard endpoint (/memory/pressure)
  • Custom route support via start_server_with_routes

§Basic Example

use hyperi_rustlib::metrics::{MetricsManager, MetricsConfig};

#[tokio::main]
async fn main() {
    let mut manager = MetricsManager::new("myapp");

    // Create metrics
    let requests = manager.counter("requests_total", "Total requests");
    let active = manager.gauge("active_connections", "Active connections");
    let latency = manager.histogram("request_duration_seconds", "Request latency");

    // Start metrics server (simple — built-in endpoints only)
    manager.start_server("0.0.0.0:9090").await.unwrap();

    // Record metrics
    requests.increment(1);
    active.set(42.0);
    latency.record(0.123);
}

§Advanced Example (with custom routes, scaling, memory)

Requires features: metrics, http-server, scaling, memory.

use std::sync::Arc;
use hyperi_rustlib::metrics::MetricsManager;
use hyperi_rustlib::scaling::{ScalingPressure, ScalingPressureConfig};
use hyperi_rustlib::memory::{MemoryGuard, MemoryGuardConfig};
use axum::{Router, routing::post};

#[tokio::main]
async fn main() {
    let mut mgr = MetricsManager::new("myapp");

    // Readiness callback
    mgr.set_readiness_check(|| true);

    // Attach scaling pressure (adds /scaling/pressure endpoint)
    let scaling = Arc::new(ScalingPressure::new(ScalingPressureConfig::default(), vec![]));
    mgr.set_scaling_pressure(scaling);

    // Attach memory guard (adds /memory/pressure endpoint)
    let guard = Arc::new(MemoryGuard::new(MemoryGuardConfig::default()));
    mgr.set_memory_guard(guard);

    // Service-specific routes
    let custom = Router::new()
        .route("/test", post(|| async { "ok" }));

    // Start with everything merged into one server
    mgr.start_server_with_routes("0.0.0.0:9090", custom).await.unwrap();
}

Re-exports§

pub use dfe::DfeMetrics;
pub use manifest::ManifestResponse;
pub use manifest::MetricDescriptor;
pub use manifest::MetricRegistry;
pub use manifest::MetricType;

Modules§

dfe
Standard DFE metrics.
manifest
Metric manifest types for the /metrics/manifest endpoint.

Structs§

ContainerMetrics
Container metrics collector.
MetricsConfig
Metrics configuration.
MetricsManager
Metrics manager handling Prometheus and/or OTel exposition.
ProcessMetrics
Process metrics collector.
RenderHandle
Cloneable handle for rendering Prometheus metrics text.

Enums§

MetricsError
Metrics errors.

Functions§

latency_buckets
Standard latency histogram buckets.
size_buckets
Standard size histogram buckets.

Type Aliases§

ReadinessFn
Readiness check callback type.