# langfuse-ergonomic
Ergonomic Rust client for Langfuse, the open-source LLM observability platform.
## Features
- 🏗️ Builder Pattern - Intuitive API using the Bon builder pattern library
- 🔄 Async/Await - Full async support with Tokio
- 🔒 Type Safe - Strongly typed with compile-time guarantees
- 🚀 Easy Setup - Simple configuration from environment variables
- 📊 Comprehensive - Support for traces, observations, scores, and more
- 🔁 Batch Processing - Automatic batching with retry logic and chunking
- ⚡ Production Ready - Built-in timeouts, connection pooling, and error handling
- 🏠 Self-Hosted Support - Full support for self-hosted Langfuse instances
## Installation

```toml
[dependencies]
langfuse-ergonomic = "*"
tokio = { version = "1", features = ["full"] }
serde_json = "1"
```
### Optional Features

```toml
[dependencies]
langfuse-ergonomic = { version = "*", features = ["compression"] }
```

`compression` - Enables gzip, brotli, and deflate compression for requests (reduces bandwidth usage)
## Quick Start

```rust
use langfuse_ergonomic::LangfuseClient;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a client from the LANGFUSE_* environment variables
    let client = LangfuseClient::from_env()?;

    // Create a trace with some input data
    let trace = client
        .trace()
        .name("quickstart")
        .input(json!({"question": "What is Langfuse?"}))
        .call()
        .await?;

    println!("Created trace: {}", trace.id);
    Ok(())
}
```
## Configuration

Set these environment variables:

```bash
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASE_URL=https://cloud.langfuse.com  # Optional
```
Or configure explicitly with advanced options:

```rust
use std::time::Duration;
use langfuse_ergonomic::LangfuseClient;

let client = LangfuseClient::builder()
    .public_key("pk-lf-...")
    .secret_key("sk-lf-...")
    .base_url("https://cloud.langfuse.com")
    .timeout(Duration::from_secs(30))         // Custom timeout
    .connect_timeout(Duration::from_secs(10)) // Connection timeout
    .user_agent("my-app/1.0")                 // Custom user agent
    .build();
```
## Examples

Check the examples/ directory for more usage examples, covering:

- Traces
- Trace fetching and management
- Observations (spans, generations, events)
- Scoring and evaluation
- Dataset management
- Prompt management
- Batch processing
- Self-hosted configuration
## Batch Processing
The client supports efficient batch processing with automatic chunking, retry logic, and comprehensive error handling:
### Default Configuration

- Max events per batch: 100
- Max batch size: 3.5 MB (conservative limit for Langfuse Cloud's 5MB limit)
- Auto-flush interval: 5 seconds
- Max retries: 3 attempts with exponential backoff
- Retry jitter: Enabled by default (25% random jitter to avoid thundering herd)
- Backpressure policy: Block (waits when queue is full)
- Max queue size: 10,000 events
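The retry schedule above (exponential backoff plus 25% jitter) can be sketched as follows. `backoff_delay` is an illustrative helper written for this README, not part of the crate's API:

```rust
use std::time::Duration;

/// Sketch of exponential backoff with symmetric jitter, mirroring the
/// defaults described above (not the crate's actual implementation).
/// `rand01` is a random sample in [0, 1] supplied by the caller.
fn backoff_delay(attempt: u32, base_ms: u64, jitter_frac: f64, rand01: f64) -> Duration {
    // Exponential backoff: base * 2^attempt (capped to avoid overflow)
    let exp_ms = base_ms.saturating_mul(1u64 << attempt.min(16)) as f64;
    // Scale by up to +/- jitter_frac to avoid thundering-herd retries
    let jitter = 1.0 + jitter_frac * (2.0 * rand01 - 1.0);
    Duration::from_millis((exp_ms * jitter) as u64)
}

fn main() {
    // With a hypothetical 500 ms base and a mid-range random sample
    // (rand01 = 0.5, i.e. zero jitter), delays double per attempt:
    for attempt in 0..3 {
        println!("attempt {attempt}: {:?}", backoff_delay(attempt, 500, 0.25, 0.5));
    }
}
```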
```rust
use langfuse_ergonomic::{BackpressurePolicy, Batcher, LangfuseClient};
use std::time::Duration;

let client = LangfuseClient::from_env()?;

// Create a batcher with custom configuration
let batcher = Batcher::builder()
    .client(client)
    .max_events(100)                        // Events per batch (default: 100)
    .max_bytes(3_500_000)                   // Max batch size in bytes (default: 3.5MB)
    .flush_interval(Duration::from_secs(5)) // Auto-flush interval (default: 5s)
    .max_retries(3)                         // Retry attempts (default: 3)
    .max_queue_size(10_000)                 // Max events to queue (default: 10,000)
    .backpressure_policy(BackpressurePolicy::Block) // What to do when queue is full
    .build()
    .await;

// Add events - they'll be automatically batched
for event in events {
    batcher.add(event).await?;
}

// Manual flush if needed
let response = batcher.flush().await?;
println!("{response:?}");

// Monitor metrics
let metrics = batcher.metrics();
println!("{metrics:?}");

// Graceful shutdown (flushes remaining events)
let final_response = batcher.shutdown().await?;
```
### Advanced Features
207 Multi-Status Handling: Automatically handles partial failures where some events succeed and others fail.
Backpressure Policies:

- `Block`: Wait when the queue is full (default)
- `DropNew`: Drop new events when the queue is full
- `DropOldest`: Remove the oldest events to make room
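The semantics of the three policies can be illustrated with a toy bounded queue. This is a sketch of the behavior only, not the crate's internals:

```rust
use std::collections::VecDeque;

#[derive(Clone, Copy)]
enum BackpressurePolicy {
    Block,      // caller waits for room (modeled here as "not enqueued yet")
    DropNew,    // the incoming event is discarded
    DropOldest, // the oldest queued event is evicted to make room
}

/// Toy sketch: try to enqueue `event` into a queue bounded at `cap`.
/// Returns true if `event` ended up in the queue.
fn enqueue(queue: &mut VecDeque<u32>, cap: usize, policy: BackpressurePolicy, event: u32) -> bool {
    if queue.len() < cap {
        queue.push_back(event);
        return true;
    }
    match policy {
        // In the real batcher the caller would await until space frees up
        BackpressurePolicy::Block => false,
        // The new event is silently dropped
        BackpressurePolicy::DropNew => false,
        // Evict the oldest event, then enqueue the new one
        BackpressurePolicy::DropOldest => {
            queue.pop_front();
            queue.push_back(event);
            true
        }
    }
}

fn main() {
    let mut q: VecDeque<u32> = (0..3).collect(); // full queue, capacity 3
    assert!(!enqueue(&mut q, 3, BackpressurePolicy::DropNew, 99));
    assert!(enqueue(&mut q, 3, BackpressurePolicy::DropOldest, 99));
    println!("final queue: {q:?}");
}
```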
Metrics & Monitoring:

```rust
let metrics = batcher.metrics();
// Available metrics:
// - queued: current events waiting to be sent
// - flushed: total successfully sent
// - failed: total failed after all retries
// - dropped: total dropped due to backpressure
// - retries: total retry attempts
// - last_error_ts: Unix timestamp of the last error
```
Error Handling:

```rust
match batcher.flush().await {
    Ok(response) => println!("flushed: {response:?}"),
    Err(err) => eprintln!("flush failed: {err}"),
}
```
## API Coverage

### Implemented Features ✅

#### Traces
- Creation - Full trace creation with metadata support
- Fetching - Get individual traces by ID
- Listing - List traces with filtering and pagination
- Management - Delete single or multiple traces
- Session and user tracking
- Tags and custom timestamps
- Input/output data capture
#### Observations
- Spans - Track execution steps and nested operations
- Generations - Monitor LLM calls with token usage
- Events - Log important milestones and errors
- Nested observations with parent-child relationships
- Log levels (DEBUG, INFO, WARNING, ERROR)
#### Scoring
- Numeric scores - Evaluate with decimal values (0.0-1.0)
- Categorical scores - Text-based classifications
- Binary scores - Success/failure tracking
- Rating scores - Star ratings and scales
- Trace-level and observation-level scoring
- Score metadata and comments
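As a small aside, a star rating can be projected onto the 0.0-1.0 numeric score range before submission. `normalize_rating` is a hypothetical helper for illustration, not part of the crate:

```rust
/// Normalize a 1..=max_stars rating into the 0.0-1.0 score range
/// (illustrative helper; not part of langfuse-ergonomic).
fn normalize_rating(stars: u32, max_stars: u32) -> f64 {
    (stars.saturating_sub(1)) as f64 / (max_stars - 1) as f64
}

fn main() {
    // A 4-out-of-5-star rating maps to 0.75
    println!("{}", normalize_rating(4, 5));
}
```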
#### Dataset Management
- Creation - Create datasets with metadata
- Listing - List all datasets with pagination
- Fetching - Get dataset details by name
- Run Management - Get, list, and delete dataset runs
#### Prompt Management
- Fetching - Get prompts by name and version
- Listing - List prompts with filtering
- Creation - Basic prompt creation (placeholder implementation)
#### Batch Processing
- Automatic Batching - Events are automatically grouped into optimal batch sizes
- Size Limits - Respects Langfuse's 3.5MB batch size limit
- Retry Logic - Exponential backoff for failed requests
- Partial Failures - Handles 207 Multi-Status responses
- Background Processing - Non-blocking event submission
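The size-limited chunking described above can be sketched as follows; `chunk_events` is an illustrative function written for this README, not the crate's actual splitting logic:

```rust
/// Sketch: split serialized events into chunks that each stay under
/// `max_events` in count and `max_bytes` in total payload size.
fn chunk_events(events: &[Vec<u8>], max_events: usize, max_bytes: usize) -> Vec<Vec<Vec<u8>>> {
    let mut chunks = Vec::new();
    let mut current: Vec<Vec<u8>> = Vec::new();
    let mut current_bytes = 0usize;
    for event in events {
        let over_count = current.len() >= max_events;
        let over_bytes = !current.is_empty() && current_bytes + event.len() > max_bytes;
        // Start a new chunk when either limit would be exceeded
        if over_count || over_bytes {
            chunks.push(std::mem::take(&mut current));
            current_bytes = 0;
        }
        current_bytes += event.len();
        current.push(event.clone());
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    // Four 2-byte events under a 5-byte limit fit two per chunk
    let events = vec![vec![0u8; 2]; 4];
    let chunks = chunk_events(&events, 100, 5);
    println!("{} chunks", chunks.len());
}
```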
#### Production Features
- Timeouts - Configurable request and connection timeouts
- Compression - Optional gzip, brotli, and deflate support (via the `compression` feature flag)
- HTTP/2 - Efficient connection multiplexing
- Connection Pooling - Reuses connections for better performance
- Error Handling - Structured error types with retry metadata
- Self-Hosted Support - Full compatibility with self-hosted instances
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT license (LICENSE-MIT)
## Contributing
See CONTRIBUTING.md for guidelines.