§init-tracing-opentelemetry
A set of helpers to initialize (and more) tracing + opentelemetry (compose your own setup or use an opinionated preset).
```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Simple preset
    let _guard = init_tracing_opentelemetry::TracingConfig::production().init_subscriber()?;
    //...
    Ok(())
}
```

```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Custom configuration
    let _guard = init_tracing_opentelemetry::TracingConfig::default()
        .with_json_format()
        .with_stderr()
        .with_log_directives("debug")
        .init_subscriber()?;
    //...
    Ok(())
}
```

The init_subscriber() function returns an OtelGuard instance. Following the guard pattern, this struct exposes no methods but, when dropped, ensures that any pending traces/metrics are sent before the application exits. Binding it as let _guard is suggested so that Rust does not drop the guard until the application exits.
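For instance, binding the result to `_` instead of `_guard` would drop the guard immediately and stop exporting; a minimal sketch of the difference (same API as above):

```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Kept alive until the end of `main`: pending traces/metrics are flushed on drop.
    let _guard = init_tracing_opentelemetry::TracingConfig::production().init_subscriber()?;

    // By contrast, `let _ = ...` would drop the guard immediately,
    // shutting exporters down before the application does any work.

    //...
    Ok(())
}
```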
§Configuration Options
§Presets
- TracingConfig::development() - Pretty format, stderr, with debug info
- TracingConfig::production() - JSON format, stdout, minimal metadata
- TracingConfig::debug() - Full verbosity with all span events
- TracingConfig::minimal() - Compact format, no OpenTelemetry
- TracingConfig::testing() - Minimal output for tests
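For illustration, a hedged sketch of picking a preset per build profile (the cfg(debug_assertions) switch is just one possible policy; only the preset constructors listed above are assumed):

```rust
use init_tracing_opentelemetry::TracingConfig;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Verbose, human-readable output for debug builds; JSON on stdout otherwise.
    let config = if cfg!(debug_assertions) {
        TracingConfig::development()
    } else {
        TracingConfig::production()
    };
    let _guard = config.init_subscriber()?;
    //...
    Ok(())
}
```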
§Custom Configuration
```rust
use init_tracing_opentelemetry::TracingConfig;

TracingConfig::default()
    .with_pretty_format()          // or .with_json_format(), .with_compact_format()
    .with_stderr()                 // or .with_stdout(), .with_file(path)
    .with_log_directives("debug")  // Custom log levels
    .with_line_numbers(true)       // Include line numbers
    .with_thread_names(true)       // Include thread names
    .with_otel(true)               // Enable OpenTelemetry
    .init_subscriber()
    .expect("valid tracing configuration");
```

§Add custom layer, modify subscriber
Use init_subscriber_ext(|subscriber| { ... }) to transform the subscriber (registry) before the configuration is applied.
```rust
use init_tracing_opentelemetry::TracingConfig;
use tokio_blocked::TokioBlockedLayer;
use tracing::info;
use tracing_subscriber::layer::SubscriberExt;

#[tokio::main]
async fn main() {
    let blocked = TokioBlockedLayer::new()
        .with_warn_busy_single_poll(Some(std::time::Duration::from_micros(150)));
    let _guard = TracingConfig::default()
        .with_log_directives("info,tokio::task=trace,tokio::task::waker=warn")
        .with_span_events(tracing_subscriber::fmt::format::FmtSpan::NONE)
        .init_subscriber_ext(|subscriber| subscriber.with(blocked))
        .unwrap();

    tokio::task::spawn(async {
        // BAD!
        // This produces a warning log message.
        info!("blocking!");
        std::thread::sleep(std::time::Duration::from_secs(1));
    })
    .await
    .unwrap();

    tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
}
```

§Legacy API (deprecated)
For backward compatibility, the old API is still available:
```rust
pub fn build_loglevel_filter_layer() -> tracing_subscriber::filter::EnvFilter {
    // filter what is output on log (fmt)
    // std::env::set_var("RUST_LOG", "warn,axum_tracing_opentelemetry=info,otel=debug");
    std::env::set_var(
        "RUST_LOG",
        format!(
            // `otel::tracing` should be set to level `trace` to emit opentelemetry trace & span
            // `otel::setup` should be set to `debug` to log detected resources, configuration read and inferred
            "{},otel::tracing=trace,otel=debug",
            std::env::var("RUST_LOG")
                .or_else(|_| std::env::var("OTEL_LOG_LEVEL"))
                .unwrap_or_else(|_| "info".to_string())
        ),
    );
    EnvFilter::from_default_env()
}

pub fn build_otel_layer<S>() -> Result<OpenTelemetryLayer<S, Tracer>, BoxError>
where
    S: Subscriber + for<'a> LookupSpan<'a>,
{
    use crate::{
        init_propagator, //stdio,
        otlp,
        resource::DetectResource,
    };
    let otel_rsrc = DetectResource::default()
        //.with_fallback_service_name(env!("CARGO_PKG_NAME"))
        //.with_fallback_service_version(env!("CARGO_PKG_VERSION"))
        .build();
    let otel_tracer = otlp::init_tracer(otel_rsrc, otlp::identity)?;
    // to not send traces anywhere, but continue to create and propagate them,
    // send them to `axum_tracing_opentelemetry::stdio::WriteNoWhere::default()`
    // or to `std::io::stdout()` to print them
    //
    // let otel_tracer =
    //     stdio::init_tracer(otel_rsrc, stdio::identity, stdio::WriteNoWhere::default())?;
    init_propagator()?;
    Ok(tracing_opentelemetry::layer().with_tracer(otel_tracer))
}
```

To retrieve the current trace_id (eg to add it into an error message, as a header or attribute):
```rust
let trace_id = tracing_opentelemetry_instrumentation_sdk::find_current_trace_id();
//json!({ "error" : "xxxxxx", "trace_id": trace_id})
```
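For example, the trace_id can be attached to an error payload; a minimal sketch using serde_json (the payload shape and the error_body helper are illustrative, only find_current_trace_id comes from the crate):

```rust
use serde_json::json;

fn error_body(message: &str) -> serde_json::Value {
    // Returns `None` when there is no active / sampled trace context.
    let trace_id = tracing_opentelemetry_instrumentation_sdk::find_current_trace_id();
    json!({ "error": message, "trace_id": trace_id })
}
```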
§Configuration based on the environment variables
To ease setup and compliance with the OpenTelemetry SDK configuration, the configuration can be done via the following environment variables (see the samples above):
- `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` fallback to `OTEL_EXPORTER_OTLP_ENDPOINT` for the url of the exporter / collector
- `OTEL_EXPORTER_OTLP_TRACES_PROTOCOL` fallback to `OTEL_EXPORTER_OTLP_PROTOCOL`, fallback to auto-detection based on the ENDPOINT port
- `OTEL_SERVICE_NAME` for the name of the service
- `OTEL_PROPAGATORS` for the configuration of the propagators
- `OTEL_TRACES_SAMPLER` & `OTEL_TRACES_SAMPLER_ARG` for the configuration of the sampler
A few other environment variables can also be used to configure the OTLP exporter (eg to configure headers, authentication, etc…):
```sh
# For GRPC:
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="grpc"
export OTEL_TRACES_SAMPLER="always_on"

# For HTTP:
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://127.0.0.1:4318/v1/traces"
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="http/protobuf"
export OTEL_TRACES_SAMPLER="always_on"
```

In the context of kubernetes, some of the above environment variables can be injected by the OpenTelemetry operator (via inject-sdk):
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # to inject environment variables only by opentelemetry-operator
        instrumentation.opentelemetry.io/inject-sdk: "opentelemetry-operator/instrumentation"
        instrumentation.opentelemetry.io/container-names: "app"
    spec:
      containers:
        - name: app
```

Or, if you don't set up inject-sdk, you can manually set the environment variables, eg:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: OTEL_SERVICE_NAME
              value: "app"
            - name: OTEL_EXPORTER_OTLP_PROTOCOL
              value: "grpc"
            # for otel collector in `deployment` mode, use the name of the service
            # - name: OTEL_EXPORTER_OTLP_ENDPOINT
            #   value: "http://opentelemetry-collector.opentelemetry-collector:4317"
            # for otel collector in sidecar mode (imply to deploy a sidecar CR per namespace)
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://localhost:4317"
            # for `daemonset` mode: need to use the local daemonset (value interpolated by k8s: `$(...)`)
            # - name: OTEL_EXPORTER_OTLP_ENDPOINT
            #   value: "http://$(HOST_IP):4317"
            # - name: HOST_IP
            #   valueFrom:
            #     fieldRef:
            #       fieldPath: status.hostIP
```

§Troubleshoot: why no trace?
- check you only have a single version of opentelemetry (could be part of your CI/build), use cargo-deny or cargo tree:
  ```sh
  # Check that only one version of opentelemetry is used,
  # else issue with setup of the global (static variable)
  # check_single_version_opentelemetry:
  cargo tree -i opentelemetry
  ```
- check the code of your exporter and the integration with tracing (as a subscriber's layer)
- check the environment variables of opentelemetry: OTEL_EXPORTER... and OTEL_TRACES_SAMPLER (values are logged on target otel::setup)
- check that the log target otel::tracing enables log level trace (or info if you use the tracing_level_info feature) to generate spans to send to the opentelemetry collector; see the sketch after this list.
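As a quick check, these targets can be raised explicitly through the log directives; a hedged sketch reusing the TracingConfig API shown earlier:

```rust
// `otel::setup=debug` logs the detected resources and configuration,
// `otel::tracing=trace` is required for spans to reach the OpenTelemetry layer.
let _guard = init_tracing_opentelemetry::TracingConfig::default()
    .with_log_directives("info,otel::setup=debug,otel::tracing=trace")
    .init_subscriber()?;
```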
§Metrics
To configure opentelemetry metrics, enable the metrics feature. This will initialize a SdkMeterProvider, set it globally, and add a MetricsLayer so that tracing events can be used to produce metrics (see the sketch after the list below).
The opentelemetry_sdk can still be used to produce metrics as well: since the SdkMeterProvider is configured globally, any Axum/Tonic middleware that does not use tracing but calls opentelemetry::metrics directly will work.
Set the following environment variables to configure the metrics exporter (on top of those described above):
- `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` to override `OTEL_EXPORTER_OTLP_ENDPOINT` for the url of the exporter / collector
- `OTEL_EXPORTER_OTLP_METRICS_PROTOCOL` to override `OTEL_EXPORTER_OTLP_PROTOCOL`, fallback to auto-detection based on the ENDPOINT port
- `OTEL_EXPORTER_OTLP_METRICS_TIMEOUT` to set the timeout for the connection to the exporter
- `OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE` to set the temporality preference for the exporter
- `OTEL_METRIC_EXPORT_INTERVAL` to set the frequency of metrics export in milliseconds, defaults to 60s
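As an illustration of the tracing-events path, a hedged sketch (the field-name prefixes such as monotonic_counter. and histogram. follow the tracing-opentelemetry MetricsLayer convention; the metric names are made up):

```rust
use tracing::info;

fn record_request(latency_ms: f64) {
    // With the `metrics` feature enabled, the installed MetricsLayer turns
    // specially named event fields into OpenTelemetry instruments.
    info!(monotonic_counter.http_requests_total = 1_u64);
    info!(histogram.http_request_duration_ms = latency_ms);
}
```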
§Changelog - History
§Re-exports
- pub use config::FeatureSet; (feature tracing_subscriber_ext)
- pub use config::Guard; (feature tracing_subscriber_ext)
- pub use config::LevelConfig; (feature tracing_subscriber_ext)
- pub use config::LogFormat; (feature tracing_subscriber_ext)
- pub use config::LogTimer; (feature tracing_subscriber_ext)
- pub use config::OtelConfig; (feature tracing_subscriber_ext)
- pub use config::TracingConfig; (feature tracing_subscriber_ext)
- pub use config::WriterConfig; (feature tracing_subscriber_ext)
- pub use formats::CompactLayerBuilder; (feature tracing_subscriber_ext)
- pub use formats::FullLayerBuilder; (feature tracing_subscriber_ext)
- pub use formats::JsonLayerBuilder; (feature tracing_subscriber_ext)
- pub use formats::LayerBuilder; (feature tracing_subscriber_ext)
- pub use formats::PrettyLayerBuilder; (feature tracing_subscriber_ext)
- pub use formats::LogfmtLayerBuilder; (features tracing_subscriber_ext and logfmt)
- pub use opentelemetry_sdk;
- pub use tracing_opentelemetry;
§Modules
- config (feature tracing_subscriber_ext) - Flexible tracing configuration with builder pattern.
- formats (feature tracing_subscriber_ext) - Format-specific layer builders for tracing output.
- otlp (feature otlp)
- resource (feature tracer)
- stdio (feature stdout)
- tracing_subscriber_ext (feature tracing_subscriber_ext)
§Enums
§Functions
- init_propagator - Configure the global propagator based on the content of the env variable OTEL_PROPAGATORS, which specifies the Propagators to be used in a comma-separated list. Default value: "tracecontext,baggage". Example: export OTEL_PROPAGATORS="b3". Accepted values for OTEL_PROPAGATORS are: