Crate rsllm

§RSLLM - Rust LLM Client Library

RSLLM is a Rust-native client library for Large Language Models with multi-provider support, streaming capabilities, and type-safe interfaces.

§Design Philosophy

RSLLM embraces Rust’s core principles:

  • Type Safety: Compile-time guarantees for API contracts
  • Memory Efficiency: Zero-copy operations where possible
  • Async-First: Built around async/await and streaming
  • Multi-Provider: Unified interface for OpenAI, Claude, Ollama, etc.
  • Composable: Easy integration with frameworks like RRAG

§Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Application   │───▶│    RSLLM        │───▶│   LLM Provider  │
│   (RRAG, etc)   │    │    Client       │    │  (OpenAI/etc)   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │
                                ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Streaming     │◀───│   Provider      │◀───│    HTTP/API     │
│   Response      │    │   Abstraction   │    │    Transport    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
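The provider abstraction in the middle row can be sketched as a trait object behind the client. This is a simplified, synchronous illustration only: the real crate is async, and the `complete` method and `MockOpenAI` type here are invented stand-ins for the re-exported `LLMProvider` and concrete provider types.

```rust
// Simplified sketch: the client talks to any backend through one trait.
trait LLMProvider {
    fn name(&self) -> &str;
    // The real crate is async; a blocking signature keeps the sketch minimal.
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

// Hypothetical provider that echoes the prompt instead of calling an API.
struct MockOpenAI;

impl LLMProvider for MockOpenAI {
    fn name(&self) -> &str {
        "openai"
    }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("[openai] echo: {prompt}"))
    }
}

// The client only holds a trait object, so providers are interchangeable.
struct Client {
    provider: Box<dyn LLMProvider>,
}

impl Client {
    fn chat(&self, prompt: &str) -> Result<String, String> {
        self.provider.complete(prompt)
    }
}

fn main() {
    let client = Client { provider: Box::new(MockOpenAI) };
    let reply = client.chat("What is Rust?").unwrap();
    println!("{reply}");
}
```

Swapping backends then only requires constructing a different `Box<dyn LLMProvider>`; the application code above the client never changes.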

§Quick Start

use rsllm::{Client, Provider, ChatMessage, MessageRole};
 
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create client with OpenAI provider
    let client = Client::builder()
        .provider(Provider::OpenAI)
        .api_key("your-api-key")
        .model("gpt-4")
        .build()?;
     
    // Simple chat completion
    let messages = vec![
        ChatMessage::new(MessageRole::User, "What is Rust?")
    ];
     
    let response = client.chat_completion(messages).await?;
    println!("Response: {}", response.content);
     
    Ok(())
}

§Streaming Example

use rsllm::{Client, Provider, ChatMessage, MessageRole};
use futures_util::StreamExt;
 
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .provider(Provider::OpenAI)
        .api_key("your-api-key")
        .model("gpt-4")
        .build()?;
     
    let messages = vec![
        ChatMessage::new(MessageRole::User, "Tell me a story")
    ];
     
    let mut stream = client.chat_completion_stream(messages).await?;
     
    while let Some(chunk) = stream.next().await {
        match chunk? {
            chunk if chunk.is_delta() => {
                print!("{}", chunk.content);
            }
            chunk if chunk.is_done() => {
                println!("\n[DONE]");
                break;
            }
            _ => {}
        }
    }
     
    Ok(())
}

§Re-exports

pub use client::Client;
pub use client::ClientBuilder;
pub use provider::Provider;
pub use provider::ProviderConfig;
pub use provider::LLMProvider;
pub use message::ChatMessage;
pub use message::MessageRole;
pub use message::MessageContent;
pub use message::ToolCall;
pub use response::ChatResponse;
pub use response::CompletionResponse;
pub use response::StreamChunk;
pub use response::EmbeddingResponse;
pub use response::Usage;
pub use streaming::ChatStream;
pub use streaming::CompletionStream;
pub use error::RsllmError;
pub use error::RsllmResult;
pub use config::ClientConfig;
pub use config::ModelConfig;
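Fallible calls return the re-exported `RsllmResult`, so callers typically match on the error when a completion fails. Below is a minimal, self-contained sketch of that pattern; the variants shown are invented for illustration (see the `error` module for the actual definition of `RsllmError`).

```rust
// Hypothetical simplified error type mirroring the RsllmError/RsllmResult pattern.
#[derive(Debug)]
enum RsllmError {
    // Transport-level failure (status code is illustrative).
    Http(u16),
    // Provider rejected the request with a message.
    Provider(String),
}

type RsllmResult<T> = Result<T, RsllmError>;

// Turn an error into a human-readable description by matching variants.
fn describe(err: &RsllmError) -> String {
    match err {
        RsllmError::Http(code) => format!("http error {code}"),
        RsllmError::Provider(msg) => format!("provider error: {msg}"),
    }
}

fn main() {
    let failing: RsllmResult<String> = Err(RsllmError::Http(429));
    match failing {
        Ok(text) => println!("{text}"),
        Err(e) => println!("{}", describe(&e)),
    }
}
```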

§Modules

client - RSLLM Client
config - RSLLM Configuration
error - RSLLM Error Handling
message - RSLLM Message Types
prelude - Prelude module for convenient imports
provider - RSLLM Provider Abstraction
response - RSLLM Response Types
streaming - RSLLM Streaming Support

§Constants

DESCRIPTION - Framework description
NAME - Framework name
VERSION - Version information