# opentelemetry-configuration
Opinionated OpenTelemetry SDK configuration for Rust applications.
This crate wires together the OpenTelemetry SDK, OTLP exporters, and the `tracing` crate ecosystem into a cohesive configuration system. It handles initialisation, flushing, and shutdown of all signal providers (traces, metrics, logs).
## Features

- **Layered configuration** - Combine defaults, config files, environment variables, and programmatic overrides using `figment`
- **Sensible defaults** - Protocol-specific endpoints (`localhost:4318` for HTTP, `localhost:4317` for gRPC)
- **Drop-based lifecycle** - Automatic flush and shutdown when the guard goes out of scope
- **Tracing integration** - Automatic setup of the `tracing-opentelemetry` and `opentelemetry-appender-tracing` layers
## Quick Start

```rust
use opentelemetry_configuration::OtelSdkBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialise traces, metrics, and logs with the default OTLP settings;
    // the returned guard flushes and shuts everything down when dropped.
    let _guard = OtelSdkBuilder::new()
        .service_name("my-service")
        .build()?;

    tracing::info!("telemetry initialised");
    Ok(())
}
```
## Configuration

### Programmatic

```rust
use opentelemetry_configuration::{OtelSdkBuilder, Protocol};

let _guard = OtelSdkBuilder::new()
    .endpoint("http://collector:4318")
    .protocol(Protocol::HttpBinary) // `Protocol` enum name inferred from the variants under "Protocol Support"
    .service_name("my-service")
    .service_version("1.0.0")
    .deployment_environment("production")
    .build()?;
```
### From Environment Variables

```rust
use opentelemetry_configuration::OtelSdkBuilder;

let _guard = OtelSdkBuilder::new()
    .with_standard_env() // Reads OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_SERVICE_NAME, etc.
    .build()?;
```
### From Config File

```rust
use opentelemetry_configuration::OtelSdkBuilder;

let _guard = OtelSdkBuilder::new()
    .with_file("otel.toml") // illustrative path to your TOML config
    .build()?;
```
## TOML Configuration Format

```toml
# Table and key names follow the builder methods; names not documented
# elsewhere in this README are reconstructed and may differ.
[exporter]
endpoint = "http://collector:4318"
protocol = "httpbinary" # or "grpc", "httpjson"
timeout = "10s"

[exporter.headers]
authorization = "Bearer token"

[service]
name = "my-service"
version = "1.0.0"
deployment_environment = "production"

[traces]
enabled = true

[batch]
max_queue_size = 2048
max_export_batch_size = 512
scheduled_delay = "5s"

[metrics]
enabled = true

[logs]
enabled = true
```
## Batch Configuration

The batch processor settings control how telemetry data is batched before export:

| Setting | Default | Description |
|---|---|---|
| `max_queue_size` | 2048 | Maximum spans/logs buffered before dropping |
| `max_export_batch_size` | 512 | Maximum items per export batch |
| `scheduled_delay` | 5s | Interval between export attempts |
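With these defaults the exporter ships at most 512 items every 5 seconds (roughly 100 items/s sustained); telemetry produced faster than it can be exported accumulates in the queue, and anything beyond 2048 buffered items is dropped.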
## Protocol Support

| Protocol | Default Port | Content-Type |
|---|---|---|
| `HttpBinary` (default) | 4318 | `application/x-protobuf` |
| `HttpJson` | 4318 | `application/json` |
| `Grpc` | 4317 | gRPC |
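For example, switching to gRPC also moves the default endpoint to port 4317 (a sketch; the `Protocol` enum name is inferred from the variants above):

```rust
let _guard = OtelSdkBuilder::new()
    .service_name("my-service")
    .protocol(Protocol::Grpc) // default endpoint becomes localhost:4317
    .build()?;
```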
## Lifecycle Management

The `OtelGuard` returned by `build()` manages the lifecycle of all providers:

```rust
let guard = OtelSdkBuilder::new()
    .service_name("my-service")
    .build()?;

// Manual flush if needed
guard.flush()?;

// Explicit shutdown (consumes the guard)
guard.shutdown()?;

// Or let drop handle it automatically
```
## Disabling Signals

```rust
let _guard = OtelSdkBuilder::new()
    .service_name("my-service")
    .traces(true)   // enabled (the default)
    .metrics(false) // disable metrics
    .logs(false)    // disable logs
    .build()?;
```
## Custom Resource Attributes

```rust
let _guard = OtelSdkBuilder::new()
    .service_name("my-service")
    // Illustrative key/value pairs
    .resource_attribute("team", "platform")
    .resource_attribute("region", "eu-west-1")
    .build()?;
```
## Instrumentation Scope Name

By default, the instrumentation scope name (`otel.library.name`) is set to the service name. You can override it explicitly:

```rust
let _guard = OtelSdkBuilder::new()
    .service_name("my-service")
    .instrumentation_scope_name("my-scope") // illustrative scope name
    .build()?;
```
## Compute Environment Detection

Resource attributes are automatically detected based on the compute environment. By default (`Auto`), generic detectors run and the environment is probed:

```rust
use opentelemetry_configuration::{ComputeEnvironment, OtelSdkBuilder};

// Explicit Lambda environment
let _guard = OtelSdkBuilder::new()
    .service_name("my-service")
    .compute_environment(ComputeEnvironment::Lambda)
    .build()?;

// Kubernetes environment
let _guard = OtelSdkBuilder::new()
    .service_name("my-service")
    .compute_environment(ComputeEnvironment::Kubernetes)
    .build()?;

// No automatic detection
let _guard = OtelSdkBuilder::new()
    .service_name("my-service")
    .compute_environment(ComputeEnvironment::None)
    .build()?;
```
Available environments:

- `Auto` (default): Runs host/OS/process/Rust detectors, probes for Lambda and K8s
- `Lambda`: Generic detectors + Rust detector + Lambda-specific attributes (`faas.*`, `cloud.*`)
- `Kubernetes`: Generic detectors + Rust detector + K8s detector
- `None`: No automatic detection
## Rust Build Information

### Runtime Detection (Automatic)

All compute environments (except `None`) automatically detect Rust-specific attributes:

- `process.runtime.name` = `"rust"`
- `rust.target_os`, `rust.target_arch`, `rust.target_family`
- `rust.debug` (true for debug builds)
- `process.executable.size` (binary size in bytes)
### Compile-Time Information (Optional)

To capture the rustc version and channel, add to your `build.rs`:
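A minimal sketch of such a `build.rs`; the `RUSTC_VERSION`/`RUSTC_CHANNEL` environment variable names are assumptions here, as the exact names the crate expects are not shown in this excerpt:

```rust
// build.rs (sketch; the env var names RUSTC_VERSION/RUSTC_CHANNEL are assumed)
use std::process::Command;

fn main() {
    // Capture the full rustc version string, e.g. "rustc 1.84.0 ...".
    let out = Command::new("rustc")
        .arg("--version")
        .output()
        .expect("failed to run rustc");
    let version = String::from_utf8_lossy(&out.stdout).trim().to_string();

    // Derive the release channel from the version string.
    let channel = if version.contains("nightly") {
        "nightly"
    } else if version.contains("beta") {
        "beta"
    } else {
        "stable"
    };

    // Bake both values into the binary as compile-time env vars.
    println!("cargo:rustc-env=RUSTC_VERSION={version}");
    println!("cargo:rustc-env=RUSTC_CHANNEL={channel}");
}
```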
Then in your application:
```rust
use opentelemetry_configuration::OtelSdkBuilder;

let _guard = OtelSdkBuilder::new()
    .service_name("my-service")
    .with_rust_build_info()
    .build()?;
```
This adds:

- `process.runtime.version` (e.g., `"1.84.0"`)
- `process.runtime.description` (full rustc version string)
- `rust.channel` (`"stable"`, `"beta"`, or `"nightly"`)
## Error Handling

The `SdkError` enum covers all failure modes:

| Variant | Cause |
|---|---|
| `Config` | Invalid configuration (malformed TOML, type mismatches) |
| `TraceExporter` | Failed to create trace exporter (invalid endpoint, TLS issues) |
| `MetricExporter` | Failed to create metric exporter |
| `LogExporter` | Failed to create log exporter |
| `TracingSubscriber` | Failed to initialise tracing (already initialised) |
| `Flush` | Failed to flush pending data |
| `Shutdown` | Failed to shut down providers cleanly |
| `InvalidEndpoint` | Endpoint URL missing `http://` or `https://` scheme |
```rust
use opentelemetry_configuration::{OtelSdkBuilder, SdkError};

match OtelSdkBuilder::new().service_name("my-service").build() {
    Ok(_guard) => { /* telemetry is live */ }
    // Variant payloads are omitted here; match individual SdkError
    // variants (e.g. SdkError::InvalidEndpoint) for finer-grained handling.
    Err(err) => eprintln!("telemetry init failed: {err}"),
}
```
## Troubleshooting

### No telemetry appearing in collector
- **Check the endpoint** - Ensure the collector is running and reachable
- **Verify the protocol** - HTTP uses port 4318, gRPC uses port 4317
- **Check signal enablement** - Signals default to enabled, but verify with `.traces(true)` etc.
- **Ensure the guard lives long enough** - The guard must not be dropped before telemetry is generated (see the sketch below)
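A common way to violate the last point (a sketch): creating the guard inside a helper function, so it is dropped, and the providers shut down, as soon as the helper returns:

```rust
use opentelemetry_configuration::{OtelGuard, OtelSdkBuilder};

// Anti-pattern: the guard is dropped when init_telemetry() returns,
// flushing and shutting the providers down before anything is recorded.
fn init_telemetry() {
    let _guard = OtelSdkBuilder::new()
        .service_name("my-service")
        .build()
        .expect("telemetry init failed");
} // _guard dropped here

// Fix: return the guard and keep it alive for the program's lifetime.
fn init_telemetry_fixed() -> OtelGuard {
    OtelSdkBuilder::new()
        .service_name("my-service")
        .build()
        .expect("telemetry init failed")
}
```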
"tracing subscriber already initialised" error
Set init_tracing_subscriber to false if you manage tracing yourself:
let _guard = new
.service_name
.init_tracing_subscriber
.build?;
### Connection refused / timeout errors
- Verify the collector endpoint is accessible from your application
- For gRPC, ensure TLS is configured if required
- Check firewall rules and network policies
### Missing resource attributes

- `ComputeEnvironment::None` disables all automatic detection
- Lambda attributes only appear when `AWS_LAMBDA_FUNCTION_NAME` is set (or `ComputeEnvironment::Lambda` is explicit)
- K8s attributes require the K8s downward API or `ComputeEnvironment::Kubernetes`
## Licence
MIT