Crate litellm_rs

Unified Rust SDK for chat completions, embeddings, images, and video across multiple LLM providers.

litellm-rs is a Rust port of LiteLLM. It provides a single LiteLLM client that routes requests to OpenAI-compatible, Anthropic, Gemini, and xAI backends, selecting the backend from the model string's "provider/model" format (for example, "openai/gpt-4o").
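The same request-building API targets any backend; only the prefix in the model string changes which provider handles the call. A sketch of the convention, with illustrative model names:

use litellm_rs::ChatRequest;

fn main() {
    // The text before '/' selects the provider; the remainder is passed
    // through as the provider-side model name. Names here are illustrative.
    let _openai = ChatRequest::new("openai/gpt-4o").message("user", "hi");
    let _anthropic = ChatRequest::new("anthropic/claude-3-5-sonnet").message("user", "hi");
    let _gemini = ChatRequest::new("gemini/gemini-1.5-flash").message("user", "hi");
    let _xai = ChatRequest::new("xai/grok-2").message("user", "hi");
}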

§Quick Start

use litellm_rs::{ChatRequest, LiteLLM, Result};

// An async runtime is required; tokio is assumed here.
#[tokio::main]
async fn main() -> Result<()> {
    let client = LiteLLM::new()?;
    let resp = client
        .completion(ChatRequest::new("openai/gpt-4o").message("user", "hello"))
        .await?;
    println!("{}", resp.content);
    Ok(())
}
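If you prefer to handle failures at the call site rather than propagating them with `?`, the re-exported LiteLLMError can be inspected directly. A minimal sketch, assuming LiteLLMError implements Display (the standard error convention):

use litellm_rs::{ChatRequest, LiteLLM};

async fn run(client: &LiteLLM) {
    match client
        .completion(ChatRequest::new("openai/gpt-4o").message("user", "hello"))
        .await
    {
        Ok(resp) => println!("{}", resp.content),
        // Assumes LiteLLMError implements Display.
        Err(err) => eprintln!("completion failed: {err}"),
    }
}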

§Streaming

use futures_util::StreamExt;
use litellm_rs::{ChatRequest, LiteLLM, Result};

// An async runtime is required; tokio is assumed here.
#[tokio::main]
async fn main() -> Result<()> {
    let client = LiteLLM::new()?;
    let mut stream = client
        .stream_completion(ChatRequest::new("openai/gpt-4o").message("user", "hello"))
        .await?;
    // Each chunk is itself a Result; `?` surfaces mid-stream errors.
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.content);
    }
    Ok(())
}
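Because each item is a Result, a stream can also be drained into a single String, which is handy in tests. A sketch, assuming the re-exported ChatStream is the type returned by stream_completion and implements Stream + Unpin:

use futures_util::StreamExt;
use litellm_rs::{ChatStream, Result};

// Collect all chunks into one String, stopping at the first error.
async fn collect_text(mut stream: ChatStream) -> Result<String> {
    let mut out = String::new();
    while let Some(chunk) = stream.next().await {
        out.push_str(&chunk?.content);
    }
    Ok(out)
}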

§Supported Providers

| Provider          | Chat | Streaming | Embeddings | Images | Video |
|-------------------|------|-----------|------------|--------|-------|
| OpenAI-compatible | yes  | yes       | yes        | yes    | yes   |
| Anthropic         | yes  | yes       | -          | -      | -     |
| Gemini            | yes  | -         | -          | yes    | yes   |
| xAI               | yes  | yes       | -          | -      | -     |
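Application code that needs to gate on these capabilities can split the provider prefix off the model string itself. A minimal helper relying only on the "provider/model" convention, independent of the crate's own router:

/// Split a "provider/model" spec into (provider, model).
/// Returns None when there is no '/' separator.
fn split_model(spec: &str) -> Option<(&str, &str)> {
    spec.split_once('/')
}

fn main() {
    assert_eq!(split_model("openai/gpt-4o"), Some(("openai", "gpt-4o")));
    assert_eq!(split_model("gpt-4o"), None);
}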

§Re-exports

pub use client::LiteLLM;
pub use config::Config;
pub use config::ProviderConfig;
pub use config::ProviderKind;
pub use error::LiteLLMError;
pub use error::Result;
pub use stream::ChatStream;
pub use stream::ChatStreamChunk;
pub use types::*;

§Modules

client
config
error
http
providers
registry
router
stream
types