LLM Observatory Rust SDK
This SDK provides trait-based instrumentation for Large Language Model (LLM) applications, enabling comprehensive observability through OpenTelemetry integration.
§Features
- Automatic tracing of LLM requests and responses
- Cost calculation based on token usage
- Support for streaming completions
- OpenTelemetry-based observability
- Provider-agnostic trait design
- Built-in support for OpenAI, Anthropic, and more
§Quick Start
use llm_observatory_sdk::{LLMObservatory, InstrumentedLLM, OpenAIClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize the observatory
    let observatory = LLMObservatory::builder()
        .with_service_name("my-app")
        .with_otlp_endpoint("http://localhost:4317")
        .build()?;

    // Create an instrumented client
    let client = OpenAIClient::new("your-api-key")
        .with_observatory(observatory);

    // Make an instrumented LLM call
    let response = client.chat_completion()
        .model("gpt-4")
        .message("user", "Hello, world!")
        .send()
        .await?;

    println!("Response: {}", response.content);
    println!("Cost: ${:.6}", response.cost_usd);

    Ok(())
}
§Architecture
The SDK is built around several core concepts:
- LLMObservatory: Central observability manager that handles OpenTelemetry setup
- InstrumentedLLM: Trait for LLM clients with automatic instrumentation (a sketch follows this list)
- OpenAIClient: OpenAI-specific implementation with full API support
- Cost calculation: Automatic cost tracking based on provider pricing
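As a rough guide to the trait's shape, here is a minimal sketch assuming async-trait-style methods and the re-exported request/response types; the method names and signatures are assumptions, and the real trait surface (streaming, configuration hooks) is larger:

use async_trait::async_trait;
use llm_observatory_sdk::{ChatCompletionRequest, ChatCompletionResponse, Result};

// Sketch only: not the actual trait definition from the `traits` module.
#[async_trait]
pub trait InstrumentedLLM {
    /// Provider name recorded on every span (e.g. "openai").
    fn provider(&self) -> &str;

    /// Send a chat completion; implementations wrap the underlying API call
    /// in an OpenTelemetry span and record token usage and cost on it.
    async fn chat_completion(
        &self,
        request: ChatCompletionRequest,
    ) -> Result<ChatCompletionResponse>;
}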
§OpenTelemetry Integration
All LLM operations are automatically traced using OpenTelemetry semantic conventions for GenAI operations, making them compatible with standard observability tools like Jaeger, Prometheus, and Grafana.
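To make the span shape concrete, here is a hand-rolled sketch of the kind of span the SDK emits, written directly against the opentelemetry crate using GenAI semantic-convention attribute names; the values and the exact attribute set the SDK records are illustrative assumptions:

use opentelemetry::{global, trace::{Span, Tracer}, KeyValue};

fn record_llm_span_sketch() {
    let tracer = global::tracer("llm-observatory");
    // Span name and attributes follow the OpenTelemetry GenAI semantic conventions.
    let mut span = tracer
        .span_builder("chat gpt-4")
        .with_attributes(vec![
            KeyValue::new("gen_ai.system", "openai"),
            KeyValue::new("gen_ai.request.model", "gpt-4"),
            KeyValue::new("gen_ai.usage.input_tokens", 12_i64),
            KeyValue::new("gen_ai.usage.output_tokens", 34_i64),
        ])
        .start(&tracer);
    span.end(); // the configured OTLP pipeline exports the finished span
}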
Re-exports§
pub use error::Error;
pub use error::Result;
pub use instrument::InstrumentedSpan;
pub use instrument::SpanBuilder;
pub use observatory::LLMObservatory;
pub use observatory::ObservatoryBuilder;
pub use traits::ChatCompletionRequest;
pub use traits::ChatCompletionResponse;
pub use traits::InstrumentedLLM;
pub use traits::StreamChunk;
pub use openai::OpenAIClient;
pub use openai::OpenAIConfig;
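The StreamChunk re-export backs the streaming support listed under Features. This page does not show how a streaming call is consumed; the following is a hedged sketch that assumes a stream() finisher on the request builder, futures::StreamExt for iteration, and a content field on StreamChunk, none of which are confirmed here:

use futures::StreamExt;
use llm_observatory_sdk::{OpenAIClient, StreamChunk};

async fn stream_sketch(client: &OpenAIClient) -> Result<(), Box<dyn std::error::Error>> {
    // `stream()` is an assumed builder finisher; see the `traits` module for
    // the actual streaming entry point.
    let mut chunks = client.chat_completion()
        .model("gpt-4")
        .message("user", "Tell me a story")
        .stream()
        .await?;

    while let Some(chunk) = chunks.next().await {
        let chunk: StreamChunk = chunk?;
        print!("{}", chunk.content); // `content` is an assumed field name
    }
    Ok(())
}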
Modules§
- cost - Cost calculation utilities for LLM operations (see the sketch after this list).
- error - Error types for the LLM Observatory SDK.
- instrument - Instrumentation utilities for creating and managing OpenTelemetry spans.
- observatory - LLM Observatory core implementation with OpenTelemetry integration.
- openai - OpenAI client implementation with automatic instrumentation.
- traits - Core traits for instrumented LLM clients.
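As an illustration of what the cost module computes, per-call cost is the token counts weighted by the model's per-token rates. The Pricing and TokenUsage field names below are assumptions made for the sketch, not the crate's actual layout:

// Illustrative stand-ins: field names are assumptions, not the SDK's layout.
struct Pricing {
    input_usd_per_1k: f64,
    output_usd_per_1k: f64,
}

struct TokenUsage {
    prompt_tokens: u32,
    completion_tokens: u32,
}

fn cost_usd(pricing: &Pricing, usage: &TokenUsage) -> f64 {
    (usage.prompt_tokens as f64 / 1000.0) * pricing.input_usd_per_1k
        + (usage.completion_tokens as f64 / 1000.0) * pricing.output_usd_per_1k
}

For example, at rates of $0.03 per 1k input tokens and $0.06 per 1k output tokens, a call with 1,000 prompt tokens and 500 completion tokens costs $0.03 + $0.03 = $0.06.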
Structs§
- ChatMessage - Chat message for conversational models.
- Cost - Cost information for an LLM call.
- Latency - Latency metrics for an LLM call.
- LlmOutput - LLM output (completion).
- LlmSpan - Represents a single LLM operation (request/response) as an OpenTelemetry span.
- Metadata - Metadata for an LLM request/response.
- Pricing - Pricing information for a model.
- TokenUsage - Token usage statistics for an LLM call.
Enums§
- CoreError - Error types for LLM Observatory operations.
- LlmInput - LLM input (prompt).
- Provider - LLM provider identifier.
- SpanStatus - Span status following OpenTelemetry conventions.
Constants§
- VERSION - SDK version.
Functions§
- init - Initialize the SDK with default settings (usage sketch below).
- init_with_endpoint - Initialize the SDK with a custom OTLP endpoint.
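A short usage sketch of the two convenience initializers; the argument and return types are assumptions inferred from their one-line descriptions and the CoreResult alias below:

use llm_observatory_sdk::CoreResult;

fn setup() -> CoreResult<()> {
    // Either initialize with defaults...
    llm_observatory_sdk::init()?;
    // ...or, instead, point the OTLP exporter at a specific collector:
    // llm_observatory_sdk::init_with_endpoint("http://localhost:4317")?;
    Ok(())
}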
Type Aliases§
- CoreResult - Result type alias using LLM Observatory’s Error type.