# RSLLM - Rust LLM Client Library
RSLLM is a Rust-native client library for Large Language Models with multi-provider support, streaming capabilities, and type-safe interfaces.
## Features
- **Multi-Provider Support**: OpenAI, Anthropic Claude, Ollama, and more
- **Streaming Responses**: Real-time token streaming with async iterators
- **Type Safety**: Compile-time guarantees for API contracts
- **Memory Efficient**: Zero-copy operations where possible
- **Easy Integration**: Seamless integration with RAG frameworks like RRAG
- **Configurable**: Flexible configuration with builder patterns
- **Async-First**: Built around async/await from the ground up
## Architecture

```text
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Application   │────▶│      RSLLM      │────▶│  LLM Provider   │
│   (RRAG, etc)   │     │     Client      │     │  (OpenAI/etc)   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │
                                 ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│    Streaming    │◀────│    Provider     │◀────│    HTTP/API     │
│    Response     │     │   Abstraction   │     │    Transport    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
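The Provider Abstraction row is the seam where new backends plug in. As a rough sketch (the trait, method names, and error type here are hypothetical, not RSLLM's public API), each provider could implement a single common async interface:

```rust
use futures::stream::BoxStream;

/// Simplified error type for this sketch.
type LlmError = Box<dyn std::error::Error + Send + Sync>;

/// Hypothetical provider abstraction: each backend (OpenAI, Claude,
/// Ollama) would implement this one async interface.
#[async_trait::async_trait]
trait LlmProvider: Send + Sync {
    /// One-shot completion over the backend's HTTP transport.
    async fn chat(&self, prompt: &str) -> Result<String, LlmError>;

    /// Token-by-token streaming variant.
    async fn chat_stream(
        &self,
        prompt: &str,
    ) -> Result<BoxStream<'static, Result<String, LlmError>>, LlmError>;
}
```

Boxing the stream keeps the trait object-safe, so a client could hold any backend behind `Box<dyn LlmProvider>`.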
## Quick Start

Add RSLLM to your `Cargo.toml`:
```toml
[dependencies]
rsllm = "0.1"
tokio = { version = "1.0", features = ["full"] }
```
### Basic Chat Completion

A minimal request/response round trip. The builder calls match the configuration examples below; the `ChatMessage` type and the `chat` method are an illustrative sketch of the request API:

```rust
use rsllm::{ChatMessage, Client, Provider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .provider(Provider::OpenAI) // provider variant name is illustrative
        .api_key("your-api-key")
        .model("gpt-4")
        .build()?;

    // `chat` and `ChatMessage::user` are assumed names for the request API.
    let response = client.chat(vec![ChatMessage::user("Hello!")]).await?;
    println!("{}", response.content);
    Ok(())
}
```
### Streaming Responses

Tokens are consumed as an async stream via `StreamExt`. The `chat_stream` method and the chunk shape are assumptions in this sketch:

```rust
use futures::StreamExt;
use rsllm::{ChatMessage, Client, Provider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .provider(Provider::OpenAI) // provider variant name is illustrative
        .api_key("your-api-key")
        .model("gpt-4")
        .build()?;

    // `chat_stream` is an assumed name for the streaming entry point.
    let mut stream = client
        .chat_stream(vec![ChatMessage::user("Tell me a story")])
        .await?;

    // Print each token as it arrives.
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.content);
    }
    Ok(())
}
```
### Multiple Providers

The same builder configures every backend; the `Provider` variants, API keys, and model names below are placeholders:

```rust
use rsllm::{Client, Provider};

// OpenAI
let openai_client = Client::builder()
    .provider(Provider::OpenAI)
    .api_key("sk-...")
    .model("gpt-4")
    .build()?;

// Anthropic Claude
let claude_client = Client::builder()
    .provider(Provider::Claude)
    .api_key("sk-ant-...")
    .model("claude-3-sonnet-20240229")
    .build()?;

// Local Ollama
let ollama_client = Client::builder()
    .provider(Provider::Ollama)
    .base_url("http://localhost:11434")
    .model("mistral")
    .build()?;
```
## Configuration

RSLLM supports extensive configuration options. The builder methods come from the original example; the argument values below are placeholders:

```rust
use rsllm::{Client, Provider};
use std::time::Duration;

let client = Client::builder()
    .provider(Provider::OpenAI) // provider variant name is illustrative
    .api_key("your-api-key")
    .model("gpt-4")
    .base_url("https://api.openai.com/v1")
    .timeout(Duration::from_secs(30))
    .max_tokens(1024)
    .temperature(0.7)
    .build()?;
```
## Supported Providers

| Provider | Status | Models | Streaming |
|---|---|---|---|
| OpenAI | ✅ | GPT-4, GPT-3.5 | ✅ |
| Anthropic Claude | ✅ | Claude-3 (Sonnet, Opus, Haiku) | ✅ |
| Ollama | ✅ | Llama, Mistral, CodeLlama | ✅ |
| Azure OpenAI | 🚧 | GPT-4, GPT-3.5 | 🚧 |
| Cohere | 📋 | Command | 📋 |
| Google Gemini | 📋 | Gemini Pro | 📋 |

Legend: ✅ Supported | 🚧 In Progress | 📋 Planned
## Documentation
- API Documentation - Complete API reference
- Examples - Working code examples
- RRAG Integration - RAG framework integration
## Feature Flags

```toml
[dependencies.rsllm]
version = "0.1"
features = [
    "openai",      # OpenAI provider support
    "claude",      # Anthropic Claude support
    "ollama",      # Ollama local model support
    "streaming",   # Streaming response support
    "json-schema", # JSON schema support for structured outputs
]
```
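Because each provider sits behind its own flag, provider-specific code can be compiled conditionally. A minimal sketch, reusing the builder API from the examples above:

```rust
// Compile Ollama support only when the "ollama" feature is enabled.
#[cfg(feature = "ollama")]
fn local_client() -> Result<rsllm::Client, Box<dyn std::error::Error>> {
    Ok(rsllm::Client::builder()
        .provider(rsllm::Provider::Ollama) // variant name is illustrative
        .base_url("http://localhost:11434")
        .model("mistral")
        .build()?)
}
```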
## Integration with RRAG

RSLLM is designed to work seamlessly with the RRAG framework. The RRAG-side type names in this sketch are illustrative:

```rust
use rrag::prelude::*; // assumed prelude path for the RRAG crate
use rsllm::Client;

let llm_client = Client::builder()
    .provider(Provider::OpenAI) // provider variant name is illustrative
    .api_key("your-api-key")
    .build()?;

// `RragSystemBuilder` is an assumed type name; `with_llm_client` and the
// async `build` come from the original example.
let rag_system = RragSystemBuilder::new()
    .with_llm_client(llm_client)
    .build()
    .await?;
```
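Because RRAG consumes the same `Client` built in the earlier examples, any configured provider (OpenAI, Claude, or a local Ollama model) can back the RAG pipeline without changes to the retrieval code.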
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Contributing
Contributions are welcome! Please see our Contributing Guidelines for details.
Part of the RRAG ecosystem - Build powerful RAG applications with Rust.