# langfuse-ergonomic

Ergonomic Rust client for [Langfuse](https://langfuse.com), the open-source LLM observability platform.
## Features

- 🏗️ **Builder Pattern** - Intuitive API built on the [Bon](https://crates.io/crates/bon) builder library
- 🔄 **Async/Await** - Full async support with Tokio
- 🔒 **Type Safe** - Strongly typed with compile-time guarantees
- 🚀 **Easy Setup** - Simple configuration from environment variables
- 📊 **Comprehensive** - Support for traces, observations, scores, and more
- 🔁 **Batch Processing** - Automatic batching with retry logic and chunking
- ⚡ **Production Ready** - Built-in timeouts, compression, and error handling
- 🏠 **Self-Hosted Support** - Full support for self-hosted Langfuse instances
## Installation

```toml
[dependencies]
langfuse-ergonomic = "*"
tokio = { version = "1", features = ["full"] }
serde_json = "1"
```
## Quick Start

```rust
use langfuse_ergonomic::LangfuseClient;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_BASE_URL
    let client = LangfuseClient::from_env()?;

    // Create a trace (builder method names here are illustrative;
    // see the examples/ directory for the exact API)
    let trace = client
        .trace()
        .name("my-first-trace")
        .input(json!({ "question": "What is Langfuse?" }))
        .call()
        .await?;

    println!("Created trace: {:?}", trace);
    Ok(())
}
```
## Configuration

Set these environment variables:

```bash
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASE_URL=https://cloud.langfuse.com  # Optional
```
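As a minimal sketch, resolving the optional base URL from the environment looks roughly like this (the fallback to the hosted cloud endpoint is an assumption, not confirmed crate behaviour):

```rust
use std::env;

// Resolve the Langfuse base URL, falling back to the hosted cloud
// endpoint when LANGFUSE_BASE_URL is unset (fallback value is assumed).
fn resolve_base_url() -> String {
    env::var("LANGFUSE_BASE_URL").unwrap_or_else(|_| "https://cloud.langfuse.com".to_string())
}
```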
Or configure explicitly with advanced options (the literal values below are examples):

```rust
use std::time::Duration;

let client = LangfuseClient::builder()
    .public_key("pk-lf-...")
    .secret_key("sk-lf-...")
    .base_url("https://cloud.langfuse.com")
    .timeout(Duration::from_secs(30))         // Custom timeout
    .connect_timeout(Duration::from_secs(10)) // Connection timeout
    .user_agent("my-app/1.0")                 // Custom user agent
    .build();
```
## Examples

Check the `examples/` directory for more usage examples covering:

- Traces
- Trace fetching and management
- Observations (spans, generations, events)
- Scoring and evaluation
- Dataset management
- Prompt management
- Batch processing
- Self-hosted configuration

Run an example with `cargo run --example <name>`.
## Batch Processing

The client supports efficient batch processing with automatic chunking and retry logic:

```rust
use langfuse_ergonomic::LangfuseClient;
use std::time::Duration;

let client = LangfuseClient::from_env()?;

// Create a batcher with custom configuration
// (the batcher type and method arguments shown here are illustrative)
let batcher = Batcher::builder()
    .client(client)
    .max_events(50)                          // Events per batch
    .flush_interval(Duration::from_secs(5))  // Auto-flush interval
    .max_retries(3)                          // Retry attempts
    .build();

// Add events - they'll be automatically batched
for event in events {
    batcher.add(event).await?;
}

// Manual flush if needed
let response = batcher.flush().await?;
println!("{:?}", response);

// Graceful shutdown
batcher.shutdown().await?;
```
## API Coverage

### Implemented Features ✅
#### Traces

- **Creation** - Full trace creation with metadata support
- **Fetching** - Get individual traces by ID
- **Listing** - List traces with filtering and pagination
- **Management** - Delete single or multiple traces
- Session and user tracking
- Tags and custom timestamps
- Input/output data capture
#### Observations

- **Spans** - Track execution steps and nested operations
- **Generations** - Monitor LLM calls with token usage
- **Events** - Log important milestones and errors
- Nested observations with parent-child relationships
- Log levels (DEBUG, INFO, WARNING, ERROR)
#### Scoring

- **Numeric scores** - Evaluate with decimal values (e.g. 0.0-1.0)
- **Categorical scores** - Text-based classifications
- **Binary scores** - Success/failure tracking
- **Rating scores** - Star ratings and scales
- Trace-level and observation-level scoring
- Score metadata and comments
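The score kinds above can be modeled roughly as a tagged value with per-kind validation (a std-only sketch; the crate's actual types may differ):

```rust
/// Illustrative model of the score kinds listed above.
#[derive(Debug)]
enum ScoreValue {
    Numeric(f64),        // decimal evaluation value, e.g. 0.0-1.0
    Categorical(String), // text-based classification
    Boolean(bool),       // success/failure
}

/// Check that a score value is well-formed before submission.
fn validate(score: &ScoreValue) -> bool {
    match score {
        ScoreValue::Numeric(v) => (0.0..=1.0).contains(v),
        ScoreValue::Categorical(s) => !s.is_empty(),
        ScoreValue::Boolean(_) => true,
    }
}
```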
#### Dataset Management

- **Creation** - Create datasets with metadata
- **Listing** - List all datasets with pagination
- **Fetching** - Get dataset details by name
- **Run Management** - Get, list, and delete dataset runs
#### Prompt Management

- **Fetching** - Get prompts by name and version
- **Listing** - List prompts with filtering
- **Creation** - Basic prompt creation (placeholder implementation)
#### Batch Processing

- **Automatic Batching** - Events are automatically grouped into optimal batch sizes
- **Size Limits** - Respects Langfuse's 3.5 MB batch size limit
- **Retry Logic** - Exponential backoff for failed requests
- **Partial Failures** - Handles 207 Multi-Status responses
- **Background Processing** - Non-blocking event submission
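The size-limited chunking and exponential backoff described above can be sketched with std-only Rust (the 3.5 MB limit comes from the list above; the byte accounting and delay formula here are illustrative assumptions, not the crate's actual implementation):

```rust
use std::time::Duration;

// Illustrative payload cap, roughly Langfuse's 3.5 MB batch limit.
const MAX_BATCH_BYTES: usize = 3_500_000;

/// Group pre-serialized events into batches that each stay under the cap.
fn chunk_by_size(events: &[String]) -> Vec<Vec<String>> {
    let mut batches = Vec::new();
    let mut current = Vec::new();
    let mut current_bytes = 0;
    for e in events {
        // Start a new batch when adding this event would overflow the cap.
        if current_bytes + e.len() > MAX_BATCH_BYTES && !current.is_empty() {
            batches.push(std::mem::take(&mut current));
            current_bytes = 0;
        }
        current_bytes += e.len();
        current.push(e.clone());
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

/// Exponential backoff delay for retry attempt `n` (0-based), capped at 30 s.
fn backoff_delay(attempt: u32) -> Duration {
    let secs = (1u64 << attempt.min(5)).min(30);
    Duration::from_secs(secs)
}
```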
#### Production Features

- **Timeouts** - Configurable request and connection timeouts
- **Compression** - Built-in gzip, brotli, and deflate support
- **HTTP/2** - Efficient connection multiplexing
- **Connection Pooling** - Reuses connections for better performance
- **Error Handling** - Structured error types with retry metadata
- **Self-Hosted Support** - Full compatibility with self-hosted instances
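The retry metadata mentioned above usually reduces to classifying response status codes; a std-only sketch (this follows a common HTTP convention, not the crate's exact logic):

```rust
/// Decide whether an HTTP status is worth retrying.
/// Illustrative convention: retry on rate limiting (429) and server
/// errors (5xx), but not on client errors such as 400 or 401.
fn is_retryable(status: u16) -> bool {
    status == 429 || (500..=599).contains(&status)
}
```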
## License

Licensed under either of:

- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE))
- MIT license ([LICENSE-MIT](LICENSE-MIT))
## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.