# rialo-telemetry
A comprehensive telemetry library for distributed tracing and metrics in Rialo applications. This crate provides a unified interface for setting up OpenTelemetry distributed tracing, Prometheus metrics, and console logging with minimal configuration.
## Features

- **OpenTelemetry Integration**: Full support for distributed tracing with OTLP HTTP exporters
- **OpenTelemetry Metrics**: Configuration support for OTLP metrics export (implementation pending)
- **Distributed Tracing**: Centralized utilities for HTTP trace context propagation
- **Baggage Support**: Complete baggage manipulation utilities for distributed metadata propagation
- **Prometheus Metrics**: Optional span latency metrics and custom registry support
- **Console Logging**: Configurable structured logging to console
- **Environment Variable Configuration**: Automatic configuration from standard OpenTelemetry environment variables
- **Flexible Configuration**: Builder pattern for programmatic configuration
- **Feature Gated**: Optional dependencies based on your needs
## Optional Features

- `axum-headers` - Enables HTTP server trace context extraction utilities for axum
- `distributed-tracing` - Enables OpenTelemetry distributed tracing support
- `env-context` - Enables environment variable-based trace context propagation for subprocess communication
- `prometheus` - Enables Prometheus metrics collection
- `reqwest-headers` - Enables HTTP client trace context injection utilities for reqwest
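These are additive Cargo features; enable the ones you need in your `Cargo.toml` (the version number below is illustrative):

```toml
[dependencies]
rialo-telemetry = { version = "0.1", features = [
    "distributed-tracing",
    "reqwest-headers",
    "prometheus",
] }
```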
## Quick Start

### Console-Only Logging

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new();
    let handle = init_telemetry(config).await?;

    tracing::info!("Application started");

    handle.shutdown()?;
    Ok(())
}
```
### OpenTelemetry with Environment Variables

Enable the `distributed-tracing` feature and set environment variables:

```bash
export OTEL_SERVICE_NAME="my-service"
export OTEL_SERVICE_VERSION="1.0.0"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=your-key"
```

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;

    tracing::info!("Application started with distributed tracing");

    handle.shutdown()?;
    Ok(())
}
```
### Programmatic Configuration

```rust
use rialo_telemetry::{TelemetryConfig, OtlpConfig, init_telemetry};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let otlp_config = OtlpConfig::new()
        .with_service_name("my-service")
        .with_service_version("1.0.0")
        .with_exporter_endpoint("https://api.honeycomb.io/v1/traces")
        .with_console_enabled(true);

    let config = TelemetryConfig::new()
        .with_otlp_config(otlp_config)
        .with_log_level("debug");

    let handle = init_telemetry(config).await?;
    handle.shutdown()?;
    Ok(())
}
```
## OpenTelemetry Metrics Configuration

The crate includes full configuration support for OpenTelemetry metrics export, though the actual metrics implementation is not yet active. All environment variables and configuration options are parsed and stored for future use:

```rust
use rialo_telemetry::{TelemetryConfig, OtlpConfig, init_telemetry};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let otlp_config = OtlpConfig::new()
        .with_service_name("my-service")
        .with_traces_endpoint("http://jaeger:4318/v1/traces")
        .with_exporter_endpoint("http://otel-collector:4318");

    let config = TelemetryConfig::new()
        .with_otlp_config(otlp_config);

    let handle = init_telemetry(config).await?;
    handle.shutdown()?;
    Ok(())
}
```

**Note**: While metrics configuration is fully supported, the actual metrics export implementation is planned for a future release. Currently, only tracing is actively exported via OTLP.
## Prometheus Metrics

Enable the `prometheus` feature:

```rust
use rialo_telemetry::{TelemetryConfig, PrometheusConfig, init_telemetry};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let registry = prometheus::Registry::new();
    let prometheus_config = PrometheusConfig::new(registry.clone())
        .with_span_latency_buckets(20)
        .with_span_latency_enabled(true);

    let config = TelemetryConfig::new()
        .with_prometheus_config(prometheus_config);

    let handle = init_telemetry(config).await?;
    handle.shutdown()?;
    Ok(())
}
```
## Distributed Tracing Context Propagation
The crate provides utilities for propagating trace context across HTTP requests, enabling distributed tracing across microservices.
### HTTP Client (reqwest) - Trace Context Injection

Enable the `reqwest-headers` feature to inject trace context into outgoing HTTP requests:

```rust
use rialo_telemetry::{
    TelemetryConfig, init_telemetry, inject_trace_headers, apply_trace_headers_to_reqwest,
};
use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;

    let client = Client::new();
    let span = tracing::info_span!("http_request", service = "api-call");
    let _guard = span.enter();

    // Capture the current trace context as HTTP headers and attach them.
    let trace_headers = inject_trace_headers();
    let request = client.post("https://api.example.com/data");
    let request = apply_trace_headers_to_reqwest(request, trace_headers);
    let _response = request.send().await?;

    handle.shutdown()?;
    Ok(())
}
```
### HTTP Server (axum) - Trace Context Extraction

Enable the `axum-headers` feature to extract trace context from incoming HTTP requests:

```rust
use rialo_telemetry::extract_and_set_trace_context_axum;
use axum::{http::HeaderMap, Json, response::Json as ResponseJson};

#[tracing::instrument]
async fn handler(
    headers: HeaderMap,
    Json(payload): Json<serde_json::Value>,
) -> ResponseJson<serde_json::Value> {
    // Continue the caller's trace by adopting its propagated context.
    extract_and_set_trace_context_axum(&headers);

    tracing::info!("Processing request with distributed trace context");
    ResponseJson(serde_json::json!({"status": "ok"}))
}
```
### End-to-End Distributed Tracing Example

Combining both client and server utilities for full distributed tracing.

Service A (client):

```rust
use rialo_telemetry::{
    TelemetryConfig, init_telemetry, inject_trace_headers, apply_trace_headers_to_reqwest,
};

async fn call_service_b() -> Result<(), Box<dyn std::error::Error>> {
    let span = tracing::info_span!("call_service_b");
    let _guard = span.enter();

    let client = reqwest::Client::new();
    let trace_headers = inject_trace_headers();
    let request = client.post("http://service-b:8080/api/process");
    let request = apply_trace_headers_to_reqwest(request, trace_headers);
    let _response = request.send().await?;

    tracing::info!("Received response from service B");
    Ok(())
}
```

Service B (server):

```rust
use rialo_telemetry::extract_and_set_trace_context_axum;
use axum::{http::HeaderMap, Json};

#[tracing::instrument]
async fn process_request(
    headers: HeaderMap,
    Json(data): Json<serde_json::Value>,
) -> Json<serde_json::Value> {
    extract_and_set_trace_context_axum(&headers);

    tracing::info!("Processing request in service B");
    let result = process_business_logic(data).await; // your application logic
    Json(result)
}
```

**Note**: Both utilities require the `distributed-tracing` feature to be enabled along with their respective feature flags (`reqwest-headers` or `axum-headers`).
## Environment Variable Context Propagation

Enable the `env-context` feature to propagate trace context across process boundaries using environment variables.
### Trace Context Inheritance

When the `env-context` feature is enabled, you can manually extract trace context from environment variables after initializing telemetry. This allows subprocesses to connect to their parent's trace:

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry, extract_and_set_trace_context_env};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;

    // Adopt the trace context inherited from the parent process, if any.
    extract_and_set_trace_context_env();

    tracing::info!("Child process started");
    handle.shutdown()?;
    Ok(())
}
```
### Ergonomic Command Helper

Use `inject_trace_env_to_cmd()` for convenient one-liner subprocess trace propagation:

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry, inject_trace_env_to_cmd};
use std::process::Command;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;

    let span = tracing::info_span!("subprocess_execution", command = "worker");
    let _guard = span.enter();

    // The helper copies the current trace context into the command's environment.
    let mut cmd = inject_trace_env_to_cmd(Command::new("./worker"));
    let output = cmd.arg("--task=process").output()?;

    tracing::info!("Subprocess completed with status: {}", output.status);
    handle.shutdown()?;
    Ok(())
}
```
### Manual Control (Advanced Usage)

For fine-grained control, you can still use the manual functions:

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry, inject_trace_env};
use std::process::Command;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;

    // Capture the current trace context as environment variable pairs
    // and apply them to the child process by hand.
    let trace_env = inject_trace_env();
    let mut cmd = Command::new("./child-process");
    for (key, value) in trace_env {
        cmd.env(key, value);
    }
    let _output = cmd.output()?;

    handle.shutdown()?;
    Ok(())
}
```
In the child process, initialize telemetry and extract trace context:

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry, extract_and_set_trace_context_env};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let rt = tokio::runtime::Runtime::new()?;
    rt.block_on(async {
        let config = TelemetryConfig::new().with_otlp();
        let handle = init_telemetry(config).await?;
        extract_and_set_trace_context_env();

        tracing::info!("Child process started with inherited trace context");
        do_work().await;
        tracing::info!("Child process completed");

        handle.shutdown()?;
        Ok(())
    })
}

async fn do_work() {
    let span = tracing::info_span!("child_work");
    let _guard = span.enter();
    tracing::info!("Performing work in child process");
}
```
You can also extract from a custom environment map instead of the current process environment:

```rust
use rialo_telemetry::extract_and_set_trace_context_from_env_map;
use std::collections::HashMap;

fn handle_custom_environment(custom_env: &HashMap<String, String>) {
    extract_and_set_trace_context_from_env_map(custom_env);
    tracing::info!("Working with custom trace context");
}
```
### Cross-Process Distributed Tracing Example

Complete example showing trace propagation from parent to child process.

Parent process:

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry, inject_trace_env_to_cmd};
use std::process::Command;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;

    let span = tracing::info_span!("batch_job", job_id = "12345");
    let _guard = span.enter();
    tracing::info!("Starting batch job with multiple workers");

    for worker_id in 1..=3 {
        let worker_span = tracing::info_span!("launch_worker", worker_id = worker_id);
        let _worker_guard = worker_span.enter();

        let mut cmd = inject_trace_env_to_cmd(Command::new("./worker"));
        cmd.arg(worker_id.to_string());

        tracing::info!("Launching worker {}", worker_id);
        cmd.spawn()?;
    }

    tracing::info!("All workers launched");
    handle.shutdown()?;
    Ok(())
}
```
Worker process:

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry, extract_and_set_trace_context_env};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let worker_id = std::env::args().nth(1).unwrap_or_else(|| "unknown".to_string());

    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;
    extract_and_set_trace_context_env();

    let worker_span = tracing::info_span!("worker_process", worker_id = %worker_id);
    let _guard = worker_span.enter();

    tracing::info!("Worker {} started with inherited trace", worker_id);
    process_batch_items().await;
    tracing::info!("Worker {} completed", worker_id);

    handle.shutdown()?;
    Ok(())
}

async fn process_batch_items() {
    for item in 1..=10 {
        let item_span = tracing::info_span!("process_item", item_id = item);
        let _guard = item_span.enter();
        tracing::info!("Processing item {}", item);
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    }
}
```
**Note**: Environment variable context propagation requires the `distributed-tracing` feature to be enabled along with the `env-context` feature flag.
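The propagated trace context follows the W3C `traceparent` format (`version-traceid-spanid-flags`). The exact environment variable names are internal to the crate's helpers, but as a stand-alone sketch of what such a value contains (illustrative only; the crate's extraction functions handle this, plus validation, for you):

```rust
/// Split a W3C `traceparent` value ("00-<trace-id>-<span-id>-<flags>")
/// into its trace ID and span ID parts.
fn parse_traceparent(value: &str) -> Option<(&str, &str)> {
    let mut parts = value.split('-');
    let _version = parts.next()?; // "00"
    let trace_id = parts.next()?; // 32 hex chars
    let span_id = parts.next()?;  // 16 hex chars
    let _flags = parts.next()?;   // "01" = sampled
    Some((trace_id, span_id))
}

fn main() {
    let tp = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01";
    if let Some((trace_id, span_id)) = parse_traceparent(tp) {
        println!("trace_id={trace_id} span_id={span_id}");
    }
}
```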
## Baggage Support

Baggage provides a way to propagate key-value metadata across distributed systems alongside trace context. It's useful for passing cross-cutting concerns like user IDs, feature flags, request priorities, or any other data that should be available throughout a distributed trace.

The crate provides comprehensive baggage manipulation utilities when the `distributed-tracing` feature is enabled.
### Basic Baggage Operations

```rust
use rialo_telemetry::{get_baggage, get_all_baggage};
use rialo_telemetry::{TelemetryConfig, init_telemetry};
use opentelemetry::{baggage::{Baggage, BaggageExt, BaggageMetadata}, Context, Key, Value};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;

    let span = tracing::info_span!("my_operation");
    let _guard = span.enter();

    {
        let mut baggage = Baggage::new();
        baggage.insert_with_metadata(
            Key::new("user_id".to_string()),
            Value::from("12345".to_string()),
            BaggageMetadata::default(),
        );
        let context = Context::current().with_baggage(baggage);
        let _baggage_guard = context.attach();

        let user_id = get_baggage("user_id");
        let request_id = get_baggage("request_id"); // not set above, so this is None

        let all_baggage = get_all_baggage();
        println!("user_id: {:?}, request_id: {:?}", user_id, request_id);
        println!("Current baggage: {:?}", all_baggage);
    }

    handle.shutdown()?;
    Ok(())
}
```
### Distributed Baggage Propagation

Baggage automatically propagates across distributed systems through the same mechanisms as trace context.

Service A (client):

```rust
use rialo_telemetry::{
    TelemetryConfig, init_telemetry, inject_trace_headers, apply_trace_headers_to_reqwest,
};
use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;

    let span = tracing::info_span!("call_service_b");
    let _guard = span.enter();

    let client = Client::new();
    // Baggage travels in the same injected headers as the trace context.
    let trace_headers = inject_trace_headers();
    let request = client.post("http://service-b:8080/process");
    let request = apply_trace_headers_to_reqwest(request, trace_headers);
    let _response = request.send().await?;

    handle.shutdown()?;
    Ok(())
}
```

Service B (server):

```rust
use rialo_telemetry::{extract_and_set_trace_context_axum, get_baggage};
use axum::{http::HeaderMap, Json};

#[tracing::instrument]
async fn process_request(
    headers: HeaderMap,
    Json(data): Json<serde_json::Value>,
) -> Json<serde_json::Value> {
    extract_and_set_trace_context_axum(&headers);

    let user_id = get_baggage("user_id");
    let tenant_id = get_baggage("tenant_id");
    let priority = get_baggage("request_priority");

    tracing::info!(
        "Processing request for user {:?} in tenant {:?} with priority {:?}",
        user_id, tenant_id, priority
    );

    if priority.as_deref() == Some("high") {
        process_with_high_priority(data).await
    } else {
        process_normally(data).await
    }
}
```
### Cross-Process Baggage Propagation

Baggage also propagates across process boundaries when using environment variable context propagation.

Parent process:

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry, inject_trace_env_to_cmd};
use std::process::Command;
use opentelemetry::baggage::BaggageExt;
use opentelemetry::Context;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;

    let baggage = opentelemetry::baggage::Baggage::builder()
        .with_entry("batch_id", "batch-2024-001")
        .with_entry("processing_mode", "parallel")
        .build();
    let cx = Context::current().with_baggage(baggage);
    let _guard = cx.attach();

    let span = tracing::info_span!("launch_worker");
    let _span_guard = span.enter();

    let mut cmd = inject_trace_env_to_cmd(Command::new("./worker"));
    let _output = cmd.arg("--task=process").output()?;

    handle.shutdown()?;
    Ok(())
}
```

Worker process:

```rust
use rialo_telemetry::{TelemetryConfig, init_telemetry, extract_and_set_trace_context_env, get_baggage};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = TelemetryConfig::new().with_otlp();
    let handle = init_telemetry(config).await?;
    extract_and_set_trace_context_env();

    let batch_id = get_baggage("batch_id");
    let mode = get_baggage("processing_mode");
    tracing::info!("Worker started for batch {:?} in mode {:?}", batch_id, mode);

    if mode.as_deref() == Some("parallel") {
        process_in_parallel().await;
    } else {
        process_sequentially().await;
    }

    handle.shutdown()?;
    Ok(())
}
```
### Baggage Best Practices

**Use Cases:**
- User identification across services
- Feature flags and A/B testing
- Request prioritization and routing
- Tenant or organization context
- Debug flags and trace sampling decisions
**Performance Considerations:**
- Keep baggage small (recommended: < 1KB total)
- Use short, meaningful keys
- Create new contexts with updated baggage when values need to change
- Drop context guards when exiting scopes to restore previous context
**Security Notes:**
- Don't put sensitive data in baggage (it's propagated in headers)
- Baggage is visible to all services in the trace
- Consider baggage as public metadata within your distributed system
### Baggage Configuration

Baggage propagation is enabled by default when OpenTelemetry is configured. The propagators include both trace context and baggage:

```rust
std::env::set_var("OTEL_PROPAGATORS", "tracecontext,baggage,b3");
```

**Note**: Baggage utilities require the `distributed-tracing` feature to be enabled.
## Environment Variables
The crate supports all standard OpenTelemetry environment variables:
### Service Configuration

- `OTEL_SERVICE_NAME` - Service name (default: `"rialo"`)
- `OTEL_SERVICE_VERSION` - Service version (default: `"unknown"`)
- `OTEL_RESOURCE_ATTRIBUTES` - Additional resource attributes
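Resource attributes use the standard OpenTelemetry comma-separated `key=value` format (attribute values here are illustrative):

```bash
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=production,service.namespace=rialo"
```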
### Endpoint Configuration

- `OTEL_EXPORTER_OTLP_ENDPOINT` - General OTLP endpoint (default: `"http://localhost:4318"`)
- `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` - Traces-specific endpoint (overrides general)
- `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` - Metrics-specific endpoint (overrides general)
- `OTEL_EXPORTER_OTLP_INSECURE` - Use insecure connection for general endpoint (default: `false`)
- `OTEL_EXPORTER_OTLP_TRACES_INSECURE` - Use insecure connection for traces (default: `false`)
- `OTEL_EXPORTER_OTLP_METRICS_INSECURE` - Use insecure connection for metrics (default: `false`)
### Headers

- `OTEL_EXPORTER_OTLP_HEADERS` - General headers for authentication
- `OTEL_EXPORTER_OTLP_TRACES_HEADERS` - Traces-specific headers (merged with general)
- `OTEL_EXPORTER_OTLP_METRICS_HEADERS` - Metrics-specific headers (merged with general)
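The headers variables take a comma-separated list of `key=value` pairs, as in `"x-api-key=secret,x-dataset=prod"`. A stand-alone sketch of how such a list splits into pairs (illustrative only, not the crate's actual parser):

```rust
/// Split an OTLP-style header list ("k1=v1,k2=v2") into key/value pairs,
/// skipping malformed entries without an '='.
fn parse_otlp_headers(raw: &str) -> Vec<(String, String)> {
    raw.split(',')
        .filter_map(|entry| {
            let (key, value) = entry.split_once('=')?;
            Some((key.trim().to_string(), value.trim().to_string()))
        })
        .collect()
}

fn main() {
    for (k, v) in parse_otlp_headers("x-api-key=secret, x-dataset=prod") {
        println!("{k}: {v}");
    }
}
```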
### Protocol Configuration

- `OTEL_EXPORTER_OTLP_PROTOCOL` - General export protocol: `"grpc"`, `"http/protobuf"`, `"http/json"` (default: `"http/protobuf"`)
- `OTEL_EXPORTER_OTLP_TRACES_PROTOCOL` - Traces-specific protocol (overrides general)
- `OTEL_EXPORTER_OTLP_METRICS_PROTOCOL` - Metrics-specific protocol (overrides general)
### Feature Toggles

- `OTEL_TRACES_ENABLED` - Enable/disable traces (default: `true`)
- `OTEL_METRICS_ENABLED` - Enable/disable metrics (default: `true`)
- `OTEL_LOG_LEVEL` - Log level (default: `"info"`)
### Metrics Configuration

- `OTEL_EXPORTER_OTLP_METRICS_PERIOD` - Metrics reporting interval (default: `"30s"`)
### Propagation

- `OTEL_PROPAGATORS` - Trace context propagators, comma-separated (default: `"tracecontext,baggage"`)
## Local Jaeger Setup
For local development and testing, you can easily connect to a local Jaeger instance to visualize your traces.
### Running Jaeger

Jaeger can be run either as an all-in-one Docker container or built locally. A Nix recipe is included for convenience in the rialo-nix-toolchain.
#### Docker (Recommended for Quick Testing)

Run the all-in-one Jaeger container (use a recent image; old releases predate OTLP support, which this crate's exporter relies on):

```bash
docker run -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 14268:14268 \
  -p 4317:4317 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```

This exposes the following ports:

- `16686`: Jaeger UI (http://localhost:16686)
- `14268`: Jaeger collector (HTTP)
- `4317`: OTLP gRPC endpoint
- `4318`: OTLP HTTP endpoint (used by this crate's exporter)
#### Nix (For Development Environment)

If you're using the rialo-nix-toolchain, you can run Jaeger with:

```bash
nix run .#jaeger
```
### Connecting Your Application

Once Jaeger is running, configure your application to send traces to it using environment variables:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4318/v1/traces
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
```

Or create a `.env` file:

```bash
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4318/v1/traces
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
```
### Viewing Traces

1. Start your application with the environment variables set
2. Generate some traces by using your application
3. Open http://localhost:16686 in your browser
4. Select your service from the dropdown
5. Click "Find Traces" to see your traces
## Configuration Precedence

Configuration values are resolved in this order (highest to lowest precedence):

1. Programmatic configuration via builder methods
2. Environment variables
3. Default values

For endpoints and headers, signal-specific settings override general settings:

- `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` overrides `OTEL_EXPORTER_OTLP_ENDPOINT` for traces
- `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` overrides `OTEL_EXPORTER_OTLP_ENDPOINT` for metrics
- Signal-specific headers are merged with general headers (signal-specific takes precedence on conflicts)
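The precedence rules amount to a chain of fallbacks. A minimal stand-alone sketch of that resolution order (illustrative only; the `resolve` helper below is not part of the crate's API):

```rust
/// Resolve one setting: an explicit programmatic value wins, then an
/// environment variable if set, then the built-in default.
fn resolve(programmatic: Option<&str>, env_var: &str, default: &str) -> String {
    programmatic
        .map(str::to_string)
        .or_else(|| std::env::var(env_var).ok())
        .unwrap_or_else(|| default.to_string())
}

fn main() {
    // With no programmatic value and no env var set, the default wins.
    let endpoint = resolve(None, "OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318");
    println!("endpoint = {endpoint}");
}
```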
## Examples

### Honeycomb.io Integration

```bash
export OTEL_SERVICE_NAME="my-service"
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://api.honeycomb.io/v1/traces"
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=your-api-key,x-honeycomb-dataset=my-dataset"
```

### Jaeger Integration

```bash
export OTEL_SERVICE_NAME="my-service"
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://localhost:4318/v1/traces"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```
### Separate Traces and Metrics Endpoints

```bash
export OTEL_SERVICE_NAME="my-service"
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://jaeger:4318/v1/traces"
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="http://prometheus:9090/api/v1/otlp/v1/metrics"
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="authorization=Bearer traces-token"
export OTEL_EXPORTER_OTLP_METRICS_HEADERS="authorization=Bearer metrics-token"
```
### Development Setup

```bash
export RUST_LOG="debug"
```
## Error Handling
The library handles common error scenarios gracefully:
- Invalid endpoints: Empty or invalid endpoints disable OpenTelemetry export
- Network failures: Export failures don't crash the application
- Configuration errors: Invalid environment variables fall back to defaults
- Global subscriber conflicts: Handles multiple initialization attempts gracefully
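"Invalid environment variables fall back to defaults" means parsing is lenient rather than fail-fast. A stand-alone sketch of that pattern (illustrative only; `parse_bool_flag` is not the crate's actual code):

```rust
/// Parse a boolean-ish setting, falling back to a default on anything
/// unset or unrecognized, instead of returning an error.
fn parse_bool_flag(raw: Option<&str>, default: bool) -> bool {
    match raw.map(|s| s.trim().to_ascii_lowercase()).as_deref() {
        Some("true") | Some("1") => true,
        Some("false") | Some("0") => false,
        _ => default, // unset or invalid: keep the default
    }
}

fn main() {
    println!("TRUE        -> {}", parse_bool_flag(Some("TRUE"), false));
    println!("not-a-bool  -> {}", parse_bool_flag(Some("not-a-bool"), false));
}
```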
## Performance Considerations
- Batched Export: Uses OpenTelemetry's batched span processor for efficient export
- Conditional Compilation: Feature gates ensure zero overhead when features are disabled
- Efficient Headers: Headers are parsed once and reused
- Resource Detection: Uses OpenTelemetry's resource detection for optimal metadata
## Testing

```bash
cargo nextest run -p rialo-telemetry
cargo nextest run -p rialo-telemetry --features prometheus
```
## License
Licensed under the Apache License, Version 2.0.
## Contributing
This crate is part of the Rialo project. See the main repository for contribution guidelines.