Unified Rust SDK for chat completions, embeddings, images, and video across multiple LLM providers.
litellm-rs is a Rust port of LiteLLM. It provides a single `LiteLLM` client that routes requests to OpenAI-compatible, Anthropic, Gemini, and xAI backends using a `"provider/model"` model-string format.
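The same request-builder API is used for every backend; only the provider prefix in the model string changes. A quick sketch of the format (all model names except `"openai/gpt-4o"` are illustrative placeholders, not verified against each provider's catalog):

```rust
use litellm_rs::ChatRequest;

fn main() {
    // "provider/model": the prefix selects the backend, the remainder is the
    // provider's own model identifier. Only "openai/gpt-4o" appears in these
    // docs; the other names are hypothetical examples of the format.
    let _requests = [
        ChatRequest::new("openai/gpt-4o"),
        ChatRequest::new("anthropic/claude-3-5-sonnet-latest"),
        ChatRequest::new("gemini/gemini-1.5-pro"),
        ChatRequest::new("xai/grok-2"),
    ];
}
```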
§Quick Start
```rust
use litellm_rs::{ChatRequest, LiteLLM, Result};

#[tokio::main] // a Tokio runtime is assumed; the docs do not name an executor
async fn main() -> Result<()> {
    let client = LiteLLM::new()?;
    let resp = client
        .completion(ChatRequest::new("openai/gpt-4o").message("user", "hello"))
        .await?;
    println!("{}", resp.content);
    Ok(())
}
```
§Streaming

```rust
use futures_util::StreamExt;
use litellm_rs::{ChatRequest, LiteLLM, Result};

#[tokio::main] // runtime assumption, as in Quick Start
async fn main() -> Result<()> {
    let client = LiteLLM::new()?;
    let mut stream = client
        .stream_completion(ChatRequest::new("openai/gpt-4o").message("user", "hello"))
        .await?;
    // Chunks arrive incrementally; `?` surfaces any mid-stream error.
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.content);
    }
    Ok(())
}
```
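The chunks compose into the full reply, so a caller can accumulate them instead of printing as they arrive; a sketch assuming `content` is a `String` (consistent with the field access above):

```rust
use futures_util::StreamExt;
use litellm_rs::{ChatRequest, LiteLLM, Result};

#[tokio::main] // runtime assumption, as above
async fn main() -> Result<()> {
    let client = LiteLLM::new()?;
    let mut stream = client
        .stream_completion(ChatRequest::new("openai/gpt-4o").message("user", "hello"))
        .await?;
    let mut full = String::new();
    while let Some(chunk) = stream.next().await {
        // Bail out on the first failed chunk; otherwise append its text.
        full.push_str(&chunk?.content);
    }
    println!("{full}");
    Ok(())
}
```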
§Supported Providers

| Provider | Chat | Streaming | Embeddings | Images | Video |
|---|---|---|---|---|---|
| OpenAI-compatible | yes | yes | yes | yes | yes |
| Anthropic | yes | yes | - | - | - |
| Gemini | yes | - | - | yes | yes |
| xAI | yes | yes | - | - | - |
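Because every backend sits behind the same `completion` call, falling back from one provider to another is just a retry with a different model string. A sketch of that pattern (the Anthropic model name is an illustrative placeholder; Tokio runtime assumed):

```rust
use litellm_rs::{ChatRequest, LiteLLM, Result};

#[tokio::main] // runtime assumption, as above
async fn main() -> Result<()> {
    let client = LiteLLM::new()?;
    let primary = ChatRequest::new("openai/gpt-4o").message("user", "hello");
    let resp = match client.completion(primary).await {
        Ok(resp) => resp,
        // On any error, retry the same prompt against Anthropic
        // (hypothetical model name, not taken from these docs).
        Err(_) => {
            client
                .completion(
                    ChatRequest::new("anthropic/claude-3-5-sonnet-latest")
                        .message("user", "hello"),
                )
                .await?
        }
    };
    println!("{}", resp.content);
    Ok(())
}
```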
§Re-exports
pub use client::LiteLLM;
pub use config::Config;
pub use config::ProviderConfig;
pub use config::ProviderKind;
pub use error::LiteLLMError;
pub use error::Result;
pub use stream::ChatStream;
pub use stream::ChatStreamChunk;
pub use types::*;