§LiteLLM-RS
A Rust implementation of Python LiteLLM: call 100+ LLM APIs using the OpenAI format. A high-performance AI gateway with a unified interface for multiple providers.
§Features
- Python LiteLLM Compatible: Drop-in replacement with the same API design
- OpenAI Compatible: Full compatibility with the OpenAI API format
- Multi-Provider: Supports 100+ AI providers (OpenAI, Anthropic, Azure, Google, etc.)
- Unified Interface: Call any LLM through the same function signature
- High Performance: Built with Rust and Tokio for maximum throughput
- Intelligent Routing: Smart load balancing and failover across providers
- Cost Optimization: Automatic cost tracking and cost-aware provider selection
- Streaming Support: Real-time response streaming
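The unified interface relies on a model-string convention: a bare model name (e.g. `gpt-4`) leaves provider selection to the router, while a `provider/model` prefix (e.g. `anthropic/claude-3-sonnet-20240229`) selects one explicitly. The split can be sketched with the standard library alone; `split_model` is a hypothetical helper for illustration, not part of this crate:

```rust
/// Hypothetical illustration (not crate code): split a LiteLLM-style
/// model string into an optional provider prefix and the model name.
fn split_model(model: &str) -> (Option<&str>, &str) {
    match model.split_once('/') {
        // "anthropic/claude-..." -> explicit provider
        Some((provider, name)) => (Some(provider), name),
        // "gpt-4" -> no prefix; provider inference is left to the router
        None => (None, model),
    }
}

fn main() {
    assert_eq!(
        split_model("anthropic/claude-3-sonnet-20240229"),
        (Some("anthropic"), "claude-3-sonnet-20240229")
    );
    assert_eq!(split_model("gpt-4"), (None, "gpt-4"));
}
```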
§Quick Start - Python LiteLLM Style
use litellm_rs::{completion, user_message, system_message};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Call OpenAI (default provider for gpt-* models)
    let response = completion(
        "gpt-4",
        vec![
            system_message("You are a helpful assistant."),
            user_message("Hello, how are you?"),
        ],
        None,
    ).await?;
    println!("Response: {}", response.choices[0].message.content);

    // Call Anthropic with explicit provider
    let response = completion(
        "anthropic/claude-3-sonnet-20240229",
        vec![user_message("What is the capital of France?")],
        None,
    ).await?;
    println!("Claude says: {}", response.choices[0].message.content);

    Ok(())
}

§Gateway Mode
use litellm_rs::{Gateway, Config};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config::from_file("config/gateway.yaml").await?;
    let gateway = Gateway::new(config).await?;
    gateway.run().await?;
    Ok(())
}

§Re-exports
pub use config::Config;
pub use utils::error::GatewayError;
pub use utils::error::Result;
pub use core::completion::Choice;
pub use core::completion::CompletionOptions;
pub use core::completion::CompletionResponse;
pub use core::completion::ContentPart;
pub use core::completion::LiteLLMError;
pub use core::completion::Message;
pub use core::completion::Router;
pub use core::completion::Usage;
pub use core::completion::acompletion;
pub use core::completion::assistant_message;
pub use core::completion::completion;
pub use core::completion::completion_stream;
pub use core::completion::system_message;
pub use core::completion::user_message;
pub use core::types::MessageContent;
pub use core::types::MessageRole;
pub use core::models::RequestContext;
pub use core::providers::Provider;
pub use core::providers::ProviderError;
pub use core::providers::ProviderRegistry;
pub use core::providers::ProviderType;
pub use core::providers::UnifiedProviderError;
pub use core::models::openai::*;
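The re-exported `system_message`, `user_message`, and `assistant_message` helpers presumably construct a `Message` with the corresponding role. A minimal std-only sketch of that pattern follows; the struct layout here is an assumption for illustration, not the crate's actual `Message`/`MessageRole` definition (the real types are richer, e.g. multi-part content):

```rust
// Hypothetical sketch of the message-helper pattern, not crate code.
#[derive(Debug, Clone, PartialEq)]
enum MessageRole { System, User, Assistant }

#[derive(Debug, Clone, PartialEq)]
struct Message { role: MessageRole, content: String }

fn system_message(content: &str) -> Message {
    Message { role: MessageRole::System, content: content.to_string() }
}
fn user_message(content: &str) -> Message {
    Message { role: MessageRole::User, content: content.to_string() }
}
fn assistant_message(content: &str) -> Message {
    Message { role: MessageRole::Assistant, content: content.to_string() }
}

fn main() {
    let msgs = vec![
        system_message("You are a helpful assistant."),
        user_message("Hello!"),
    ];
    assert_eq!(msgs[0].role, MessageRole::System);
    assert_eq!(msgs[1].content, "Hello!");
}
```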
§Modules
- config
- Configuration management for the Gateway
- core
- Core functionality for the Gateway
- sdk
- Unified LLM Provider SDK
- server
- HTTP server implementation
- services
- Services module
- storage
- Storage layer for the Gateway
- utils
- Utility modules for the LiteLLM Gateway
§Macros
- build_request - Macro to implement standard HTTP request builder
- define_provider_config - Configuration
- dispatch_all_providers - Macro for unified provider dispatch that eliminates repetitive match statements
- extract_usage - Macro to implement usage extraction from response
- global_shared - Macro to create a global shared resource
- impl_error_conversion - Macro to implement error conversion for all provider errors. This eliminates the 15 repetitive From implementations
- impl_health_check - Macro to implement health check using a simple API call
- impl_provider_basics - Macro to implement common provider methods
- impl_streaming - Macro to implement streaming response handler. This eliminates the repetitive 20-line streaming handler pattern
- log_structured - Convenience macros for structured logging
- model_list - Macro to generate model list
- not_implemented - Macro to implement not-implemented methods
- provider_config - Macro to generate provider configuration struct
- safe_unwrap - Macro for safe unwrapping with context
- safe_unwrap_option - Macro for safe option unwrapping with context
- standard_provider - Macro to create standard provider implementation
- time_function - Macro for easy performance timing
- typed_provider - Macro to create a typed provider with specific capabilities
- validate_response - Macro to validate required fields in a response
- verify_capability - Macro to verify provider capabilities at compile time
- with_retry - Macro to implement retry logic
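The `with_retry` macro is documented above only as "implement retry logic". A hedged std-only sketch of the kind of loop such a macro might expand to (a fixed attempt count, no backoff shown; the macro's actual parameters and policy are not documented here):

```rust
// Hypothetical sketch of a retry loop; not the crate's actual macro expansion.
fn retry<T, E, F: FnMut() -> Result<T, E>>(mut attempts: u32, mut op: F) -> Result<T, E> {
    loop {
        match op() {
            Ok(v) => return Ok(v),
            // Out of attempts: surface the last error.
            Err(e) if attempts <= 1 => return Err(e),
            // Otherwise consume an attempt and try again.
            Err(_) => attempts -= 1,
        }
    }
}

fn main() {
    let mut calls = 0;
    // Fails twice, then succeeds; three attempts are enough.
    let result = retry(3, || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(calls) }
    });
    assert_eq!(result, Ok(3));
}
```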
§Structs
§Constants
- DESCRIPTION
- Description of the crate
- NAME
- Name of the crate
- VERSION
- Current version of the crate
§Functions
- build_info - Build