Crate rexis_llm


§RSLLM - Rust LLM Client Library

RSLLM is a Rust-native client library for Large Language Models with multi-provider support, streaming capabilities, and type-safe interfaces.

§Design Philosophy

RSLLM embraces Rust’s core principles:

  • Type Safety: Compile-time guarantees for API contracts
  • Memory Safety: Zero-copy operations where possible
  • Async-First: Built around async/await and streaming
  • Multi-Provider: Unified interface for OpenAI, Claude, Ollama, etc.
  • Composable: Easy integration with frameworks like RRAG

§Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Application   │───▶│    RSLLM        │───▶│   LLM Provider  │
│   (RRAG, etc)   │    │    Client       │    │  (OpenAI/etc)   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │
                                ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Streaming     │◀───│   Provider      │◀───│    HTTP/API     │
│   Response      │    │   Abstraction   │    │    Transport    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
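The provider abstraction in the middle row can be pictured as a trait that each backend implements, so the application talks to one interface while the concrete transport is chosen at construction time. The sketch below is illustrative only: the trait and method signatures are assumptions for this example, not the crate's actual `LLMProvider` API (which is async and streaming).

```rust
// Illustrative sketch of the provider-abstraction layer. The trait and
// method names here are assumptions, not the crate's real LLMProvider API.
trait LlmProvider {
    fn name(&self) -> &str;
    // A real implementation would be async and return a stream; a plain
    // String keeps this sketch self-contained.
    fn chat(&self, prompt: &str) -> String;
}

struct OpenAiBackend;
struct OllamaBackend;

impl LlmProvider for OpenAiBackend {
    fn name(&self) -> &str { "openai" }
    fn chat(&self, prompt: &str) -> String {
        format!("[openai] reply to: {prompt}")
    }
}

impl LlmProvider for OllamaBackend {
    fn name(&self) -> &str { "ollama" }
    fn chat(&self, prompt: &str) -> String {
        format!("[ollama] reply to: {prompt}")
    }
}

fn main() {
    // The application holds a `dyn LlmProvider`; swapping backends does
    // not change any call sites.
    let backends: Vec<Box<dyn LlmProvider>> =
        vec![Box::new(OpenAiBackend), Box::new(OllamaBackend)];
    for b in &backends {
        println!("{}: {}", b.name(), b.chat("What is Rust?"));
    }
}
```

This is the design that lets the `Client` in the diagrams above stay identical whether it is backed by OpenAI, Claude, or a local Ollama instance.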

§Quick Start

use rsllm::{Client, Provider, ChatMessage, MessageRole};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create client with OpenAI provider
    let client = Client::builder()
        .provider(Provider::OpenAI)
        .api_key("your-api-key")
        .model("gpt-4")
        .build()?;

    // Simple chat completion
    let messages = vec![
        ChatMessage::new(MessageRole::User, "What is Rust?")
    ];

    let response = client.chat_completion(messages).await?;
    println!("Response: {}", response.content);

    Ok(())
}

§Streaming Example

use rsllm::{Client, Provider, ChatMessage, MessageRole};
use futures_util::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .provider(Provider::OpenAI)
        .api_key("your-api-key")
        .build()?;

    let messages = vec![
        ChatMessage::new(MessageRole::User, "Tell me a story")
    ];

    let mut stream = client.chat_completion_stream(messages).await?;

    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        if chunk.is_delta() {
            print!("{}", chunk.content);
        } else if chunk.is_done() {
            println!("\n[DONE]");
            break;
        }
    }

    Ok(())
}

Re-exports§

pub use client::Client;
pub use client::ClientBuilder;
pub use config::ClientConfig;
pub use config::ModelConfig;
pub use error::RsllmError;
pub use error::RsllmResult;
pub use message::ChatMessage;
pub use message::MessageContent;
pub use message::MessageRole;
pub use message::ToolCall;
pub use provider::LLMProvider;
pub use provider::Provider;
pub use provider::ProviderConfig;
pub use response::ChatResponse;
pub use response::CompletionResponse;
pub use response::EmbeddingResponse;
pub use response::StreamChunk;
pub use response::Usage;
pub use streaming::ChatStream;
pub use streaming::CompletionStream;

Modules§

client
RSLLM Client
config
RSLLM Configuration
error
RSLLM Error Handling
message
RSLLM Message Types
prelude
Prelude module for convenient imports
provider
RSLLM Provider Abstraction
response
RSLLM Response Types
streaming
RSLLM Streaming Support
tools
Tool Calling Support

Macros§

simple_tool

Constants§

DESCRIPTION
Framework description
NAME
Framework name
VERSION
Version information

Attribute Macros§

arg
The #[arg] attribute for marking individual tool parameters
context
The #[context] attribute for marking context parameters
tool
The #[tool] attribute macro for easy tool definition
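As a rough mental model of what a tool definition involves, the sketch below pairs an ordinary function with a descriptor the model can inspect. It is hand-written and hypothetical: the `ToolSpec` struct and its fields are assumptions for illustration, not the code that `#[tool]` or `simple_tool!` actually generates.

```rust
// Hand-written sketch of tool calling: a plain function plus a
// machine-readable descriptor. ToolSpec and its fields are illustrative
// assumptions, not the real expansion of #[tool] / simple_tool!.
struct ToolSpec {
    name: &'static str,
    description: &'static str,
    // Parameter name and a human-readable type, in declaration order.
    params: Vec<(&'static str, &'static str)>,
}

/// The tool body itself: an ordinary function.
fn add(a: i64, b: i64) -> i64 {
    a + b
}

/// The descriptor a provider would surface to the model.
fn add_spec() -> ToolSpec {
    ToolSpec {
        name: "add",
        description: "Add two integers",
        params: vec![("a", "i64"), ("b", "i64")],
    }
}

fn main() {
    let spec = add_spec();
    println!("tool `{}` takes {} params", spec.name, spec.params.len());
    println!("add(2, 3) = {}", add(2, 3));
}
```

Conceptually, the attribute macros remove the need to write the descriptor by hand: annotating the function is enough for it to be registered and described to the provider.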