# greentic-telemetry

Structured JSON logging helpers built on top of `tracing` for Greentic services.
## Quickstart
To wire the crate into a service, add `greentic-telemetry` to your dependencies and initialise it once at startup; ordinary `tracing` spans and events are then emitted as structured JSON.
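A minimal sketch of the startup wiring; the `init` entry point and its return type are assumptions, not the crate's documented API:

```rust
use tracing::info;

fn main() {
    // Hypothetical initialiser: the real entry point name and signature may differ.
    // Configuration comes from the environment (see the table below),
    // e.g. TELEMETRY_EXPORT=json-stdout for local development.
    greentic_telemetry::init().expect("telemetry init failed");

    // Once the subscriber is installed, ordinary tracing events come out as JSON.
    info!(component = "quickstart", "service started");
}
```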
Run one of the included examples (see the `examples/` directory and the sections below) to view the JSON output.
## Environment overview
| Variable | Description | Default |
|---|---|---|
| `TELEMETRY_EXPORT` | Export mode (`json-stdout`, `otlp-grpc`, `otlp-http`) | `json-stdout` |
| `OTLP_ENDPOINT` | Collector endpoint (e.g. `http://otel-collector:4317`) | unset |
| `OTLP_HEADERS` | Comma-separated headers forwarded to the collector | unset |
| `TELEMETRY_SAMPLING` | `parent` or `traceidratio:<ratio>` | `parent` |
| `CLOUD_PRESET` | Cloud preset (`aws`, `gcp`, `azure`, `datadog`, `loki`, `none`) | `none` |
| `PII_REDACTION_MODE` | `off`, `strict`, `allowlist` | `off` |
| `PII_ALLOWLIST_FIELDS` | Comma-separated allowlist of PII fields kept unchanged (`allowlist` mode) | unset |
| `PII_MASK_REGEXES` | Extra regex masks applied to messages and string fields | unset |
When OTLP configuration fails, the crate logs a warning and keeps emitting JSON to stdout (if enabled).
## Runnable examples

The crate ships runnable demos under `examples/`, including `wasm_host_demo` and `otlp_demo`; both are described in the sections below.
## Context Propagation
Use `inject_carrier` / `extract_carrier_into_span` to round-trip span context and the Greentic cloud IDs across message boundaries: the producer enters a span and calls `inject_carrier` on a mutable header map; later, the consumer calls `extract_carrier_into_span` on a fresh span before entering it.
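A sketch of that round trip; the carrier type, import paths, argument shapes, and span names below are assumptions rather than the crate's exact API:

```rust
use std::collections::HashMap;

use greentic_telemetry::{extract_carrier_into_span, inject_carrier}; // import path assumed
use tracing::info_span;

fn round_trip() {
    // Producer: capture the active span context plus the cloud IDs into the carrier.
    let mut headers: HashMap<String, String> = HashMap::new();
    {
        let span = info_span!("publish_message"); // span name is illustrative
        let _guard = span.enter();
        inject_carrier(&mut headers); // argument shape assumed
    }

    // Later, on the consumer side:
    let span = info_span!("handle_message"); // span name is illustrative
    extract_carrier_into_span(&headers, &span); // argument shape assumed
    let _guard = span.enter();
    // Logs emitted inside this guard inherit the propagated trace and cloud IDs.
}
```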
`inject_carrier` emits W3C `traceparent` / `tracestate` headers and the `x-tenant`, `x-team`, `x-flow`, and `x-run-id` identifiers. `extract_carrier_into_span` restores the span parentage and rehydrates the context so subsequent logs include the inherited IDs. If you have already entered the target span, `extract_carrier` will attempt to apply the context to the current span.
### NATS propagation demo

The NATS demo applies the same round trip to message headers: the publisher injects the carrier into a `HashMap` of headers attached to the outgoing message, and the subscriber extracts it before processing (see the sketch below).
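A compact sketch under the same assumptions as above; the NATS client calls themselves are left as comments so only the carrier handling is shown:

```rust
use std::collections::HashMap;

use greentic_telemetry::{extract_carrier_into_span, inject_carrier}; // import path assumed
use tracing::info_span;

// Publisher: build the carrier and attach it as NATS message headers.
fn publish(payload: &[u8]) {
    let span = info_span!("nats.publish"); // span name is illustrative
    let _guard = span.enter();

    let mut headers: HashMap<String, String> = HashMap::new();
    inject_carrier(&mut headers);
    // Copy `headers` into your NATS client's header map and publish `payload` here.
    let _ = (headers, payload);
}

// Subscriber: rebuild span parentage from the received headers before processing.
fn handle(headers: &HashMap<String, String>, _payload: &[u8]) {
    let span = info_span!("nats.handle"); // span name is illustrative
    extract_carrier_into_span(headers, &span);
    let _guard = span.enter();
    // Logs emitted here carry the inherited tenant/team/flow/run IDs.
}
```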
## Cloud Presets
Set `CLOUD_PRESET` for quick-start wiring. Presets only prefill defaults; you can still override env vars manually.
| Preset | Default `OTLP_ENDPOINT` | Notes |
|---|---|---|
| `aws` | `http://aws-otel-collector:4317` | Targets the AWS Distro for OpenTelemetry collector. |
| `gcp` | `http://otc-collector:4317` | Example for the Google Ops Agent's OTLP receiver. |
| `azure` | `http://otel-collector-azure:4317` | Collector forwarding to the Azure Monitor exporter. |
| `datadog` | `http://datadog-agent:4317` | If `DD_API_KEY` is present, auto-inserts `OTLP_HEADERS=DD_API_KEY=...`. |
| `loki` | n/a | Keeps `json-stdout`; ship through Vector/Grafana Agent for Loki/Tempo. |
`TELEMETRY_EXPORT` is still respected. If it is unset, presets select `otlp-grpc` (except `loki`, which keeps JSON stdout).
## Collector snippets
AWS ADOT sidecar (logs/traces):
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  awsxray:
    local_mode: true
  awscloudwatchlogs:
    log_group_name: /greentic/services
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
    logs:
      receivers: [otlp]
      exporters: [awscloudwatchlogs]
```
GCP Ops Agent OTLP collector (forward to Cloud Trace / Logging):
```yaml
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  googlecloud:
    project: ${PROJECT_ID}
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [googlecloud]
    logs:
      receivers: [otlp]
      exporters: [googlecloud]
```
Azure Monitor exporter via standalone collector:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  azuremonitor:
    instrumentation_key: ${APP_INSIGHTS_KEY}
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [azuremonitor]
    logs:
      receivers: [otlp]
      exporters: [azuremonitor]
```
Datadog agent OTLP:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlphttp:
    endpoint: https://api.datadoghq.com
    headers:
      x-api-key: ${DD_API_KEY}
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      exporters: [otlphttp]
```
Loki + Tempo via Vector:
```yaml
sources:
  otlp_grpc:
    type: otlp
    address: 0.0.0.0:4317
sinks:
  loki:
    type: loki
    inputs: [otlp_grpc]
    endpoint: http://loki:3100
  tempo:
    type: tempo
    inputs: [otlp_grpc]
    endpoint: http://tempo:4317
```
## Metrics
- Counters, gauges, and histograms are exposed via `greentic_telemetry::metrics`.
- When `TELEMETRY_EXPORT` resolves to an OTLP exporter, measurements are forwarded over the same gRPC channel. With `json-stdout`, metrics default to no-ops, so instrumentation never needs guard clauses.

```rust
use greentic_telemetry::metrics::{counter, histogram};

// Metric names and call shapes below are illustrative.
let requests = counter("demo.request.count");
let latency = histogram("demo.request.duration_ms");
requests.add(1);
latency.record(12.5);
```
Every data point automatically includes `service.name`, `service.version`, `deployment.environment`, and the active cloud context (`tenant`, `team`, `flow`, `run_id`). If a tracing span is in scope, exemplar hints (`trace_id`, `span_id`) ride along so compatible collectors can correlate metrics back to traces.
## WASM guests / host tools
The `wit/greentic-telemetry.wit` package exposes a narrow logging interface that WASM guests can rely on. With `wit-bindgen`, the guest generates bindings for that interface and logs through them from ordinary guest code; a hedged sketch follows.
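A hedged guest-side sketch; the world name, generated module path, and log signature are assumptions derived from a typical `wit-bindgen` layout, not from the actual WIT file:

```rust
// wasm_guest.rs
// World/interface names below are assumptions; the real ones live in
// wit/greentic-telemetry.wit.
wit_bindgen::generate!({
    path: "wit",
    world: "guest",
});

fn do_work() {
    // The generated module path and log signature are assumptions based on a
    // typical wit-bindgen layout.
    greentic::telemetry::logging::log("info", "work started");
}
```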
A native host can forward the guest's log calls to `tracing`; a sketch of that bridge follows.
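A hedged host-side sketch showing the bridge into `tracing`; the function shape (level plus message) is an assumption about the WIT interface:

```rust
// Hypothetical host-side bridge: the host receives the guest's log calls
// (level + message here; the real WIT interface may carry more fields)
// and forwards them to tracing.
use tracing::{debug, error, info, warn};

fn forward_guest_log(level: &str, message: &str) {
    match level {
        "error" => error!(target: "wasm_guest", "{message}"),
        "warn" => warn!(target: "wasm_guest", "{message}"),
        "debug" => debug!(target: "wasm_guest", "{message}"),
        _ => info!(target: "wasm_guest", "{message}"),
    }
}
```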
See `examples/wasm_host_demo.rs` for a runnable version.
## PII Redaction
- Configure `PII_REDACTION_MODE=off|strict|allowlist` to mask sensitive values before they reach collectors. `strict` masks common tokens, emails, and phone numbers by default; `allowlist` keeps only the fields in `PII_ALLOWLIST_FIELDS` unchanged (see the example below).
- Extend masking with `PII_MASK_REGEXES` (comma-separated regexes) to scrub custom patterns.
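For illustration only: under `PII_REDACTION_MODE=strict`, a string field carrying an email address is masked before export. The placeholder shown in the comment is an assumption about the output shape, not the crate's exact mask:

```rust
use tracing::info;

// With PII_REDACTION_MODE=strict, messages and string fields are scrubbed
// before they reach the exporter; instrumentation stays unchanged.
info!(customer_email = "jane.doe@example.com", "payment received");
// Exported roughly as (placeholder format assumed):
// {"message":"payment received","customer_email":"<redacted>", ...}
```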
## OTLP demo
`cargo run --example otlp_demo` emits a span (`demo.operation`), structured logs, and metrics (`demo.request.count`, `demo.request.duration_ms`). Set `TELEMETRY_EXPORT=otlp-grpc` and point `OTLP_ENDPOINT` at a collector before running.
## Troubleshooting
- No logs: ensure `RUST_LOG` includes `info` (or higher) and that the collector has a logs pipeline when using OTLP.
- Metrics missing: verify the collector has a metrics pipeline and that it isn't filtering by resource attributes (`service.*`, `deployment.environment`).
- Context lost: make sure headers survive transport (case sensitivity, lower-case keys for NATS, etc.) and call `extract_carrier_into_span` before entering the span that should adopt the remote context.
- Unexpected PII: enable `PII_REDACTION_MODE=strict` and add custom regexes for service-specific tokens.
- Snapshot tests: use `greentic_telemetry::dev::test_init_for_snapshot()` and `capture_logs` to gather deterministic JSON output with a fixed timestamp (a hedged sketch follows below).
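A hedged sketch of such a snapshot-style test; `capture_logs` is assumed here to take a closure and return the captured JSON lines, which may not match its real signature:

```rust
#[cfg(test)]
mod tests {
    use greentic_telemetry::dev::{capture_logs, test_init_for_snapshot};
    use tracing::info;

    #[test]
    fn log_output_is_deterministic() {
        test_init_for_snapshot();
        // `capture_logs` is assumed to run the closure and return the captured
        // JSON lines; check the crate docs for its exact signature.
        let lines = capture_logs(|| {
            info!(user = "alice", "request handled");
        });
        assert!(lines.iter().any(|line| line.contains("request handled")));
    }
}
```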