# RSLLM - Rust LLM Client Library
[Crates.io](https://crates.io/crates/rsllm) · [Documentation](https://docs.rs/rsllm) · [MIT License](https://opensource.org/licenses/MIT)
**RSLLM** is a Rust-native client library for Large Language Models with multi-provider support, streaming capabilities, and type-safe interfaces.
## Features
- **Multi-Provider Support**: OpenAI, Anthropic Claude, Ollama, and more
- **Streaming Responses**: Real-time token streaming with async iterators
- **Type Safety**: Compile-time guarantees for API contracts
- **Memory Efficient**: Zero-copy operations where possible
- **Easy Integration**: Seamless integration with RAG frameworks like RRAG
- **Configurable**: Flexible configuration with builder patterns
- **Async-First**: Built around async/await from the ground up
## Architecture
```text
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Application   │────▶│      RSLLM      │────▶│   LLM Provider  │
│   (RRAG, etc)   │     │      Client     │     │   (OpenAI/etc)  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │
                                 ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│    Streaming    │◀────│     Provider    │◀────│     HTTP/API    │
│     Response    │     │   Abstraction   │     │    Transport    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
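The provider-abstraction layer in the middle of the diagram can be sketched as a trait that the client layer programs against. This is an illustrative sketch only: `LlmProvider`, `EchoProvider`, and `run` are hypothetical names, not rsllm's actual API.

```rust
// A minimal sketch of the provider-abstraction idea shown in the diagram.
// Names here (`LlmProvider`, `EchoProvider`) are illustrative, not rsllm's API.
trait LlmProvider {
    fn name(&self) -> &'static str;
    fn complete(&self, prompt: &str) -> String;
}

// A stand-in provider; a real one would call an HTTP API over the transport layer.
struct EchoProvider;

impl LlmProvider for EchoProvider {
    fn name(&self) -> &'static str {
        "echo"
    }
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

// The client layer depends only on the trait, so providers stay swappable.
fn run(provider: &dyn LlmProvider, prompt: &str) -> String {
    provider.complete(prompt)
}

fn main() {
    let provider = EchoProvider;
    println!("{}", run(&provider, "hello"));
}
```

Because the application only touches the trait, switching from one provider to another is a construction-time decision, not a rewrite.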
## Quick Start
Add RSLLM to your `Cargo.toml`:
```toml
[dependencies]
rsllm = "0.1"
tokio = { version = "1.0", features = ["full"] }
```
### Basic Chat Completion
```rust
use rsllm::{Client, Provider, ChatMessage, MessageRole};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a client with the OpenAI provider
    let client = Client::builder()
        .provider(Provider::OpenAI)
        .api_key("your-api-key")
        .model("gpt-4")
        .build()?;

    // Simple chat completion
    let messages = vec![
        ChatMessage::new(MessageRole::User, "What is Rust?"),
    ];

    let response = client.chat_completion(messages).await?;
    println!("Response: {}", response.content);

    Ok(())
}
```
### Streaming Responses
```rust
use rsllm::{Client, Provider, ChatMessage, MessageRole};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .provider(Provider::OpenAI)
        .api_key("your-api-key")
        .model("gpt-4")
        .build()?;

    let messages = vec![
        ChatMessage::new(MessageRole::User, "Tell me a story"),
    ];

    let mut stream = client.chat_completion_stream(messages).await?;
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.content);
    }

    Ok(())
}
```
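The `chunk?` in the loop above aborts the whole completion as soon as one chunk carries an error. That control flow can be shown without any rsllm or async machinery, using a plain `Iterator` of `Result` chunks as a stand-in for the stream:

```rust
// Sketch of the streaming consumption loop, with a plain `Iterator` standing
// in for the async stream so the error propagation of `chunk?` is visible.
fn collect_chunks<I>(chunks: I) -> Result<String, String>
where
    I: Iterator<Item = Result<String, String>>,
{
    let mut out = String::new();
    for chunk in chunks {
        // A mid-stream error aborts the whole completion, just like `chunk?` above.
        out.push_str(&chunk?);
    }
    Ok(out)
}

fn main() {
    let ok = vec![Ok("Once ".to_string()), Ok("upon a time".to_string())];
    assert_eq!(collect_chunks(ok.into_iter()), Ok("Once upon a time".to_string()));

    let bad = vec![Ok("Once ".to_string()), Err("disconnect".to_string())];
    assert_eq!(collect_chunks(bad.into_iter()), Err("disconnect".to_string()));
}
```

The real stream yields chunks asynchronously, but the success and error paths compose the same way.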
### Multiple Providers
```rust
use rsllm::{Client, Provider};

// OpenAI
let openai_client = Client::builder()
    .provider(Provider::OpenAI)
    .api_key("openai-api-key")
    .model("gpt-4")
    .build()?;

// Anthropic Claude
let claude_client = Client::builder()
    .provider(Provider::Claude)
    .api_key("claude-api-key")
    .model("claude-3-sonnet")
    .build()?;

// Local Ollama
let ollama_client = Client::builder()
    .provider(Provider::Ollama)
    .base_url("http://localhost:11434")
    .model("llama3.1")
    .build()?;
```
## Configuration
RSLLM supports extensive configuration options:
```rust
use rsllm::{Client, Provider};
use std::time::Duration;

let client = Client::builder()
    .provider(Provider::OpenAI)
    .api_key("your-api-key")
    .model("gpt-4")
    .base_url("https://api.openai.com/v1")
    .timeout(Duration::from_secs(60))
    .max_tokens(4096)
    .temperature(0.7)
    .build()?;
```
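The builder pattern behind this API is straightforward to sketch. The struct, field names, and default values below are assumptions for illustration, not rsllm's internals:

```rust
use std::time::Duration;

// Illustrative sketch of a config builder; fields and defaults are assumed,
// not taken from rsllm's actual implementation.
#[derive(Debug, Clone)]
struct ClientConfig {
    model: String,
    timeout: Duration,
    max_tokens: u32,
    temperature: f32,
}

struct ConfigBuilder {
    model: String,
    timeout: Duration,
    max_tokens: u32,
    temperature: f32,
}

impl ConfigBuilder {
    fn new() -> Self {
        // Hypothetical defaults; each setter below overrides one field.
        Self {
            model: "gpt-4".into(),
            timeout: Duration::from_secs(30),
            max_tokens: 1024,
            temperature: 1.0,
        }
    }
    fn model(mut self, m: &str) -> Self { self.model = m.into(); self }
    fn timeout(mut self, t: Duration) -> Self { self.timeout = t; self }
    fn max_tokens(mut self, n: u32) -> Self { self.max_tokens = n; self }
    fn temperature(mut self, t: f32) -> Self { self.temperature = t; self }
    fn build(self) -> ClientConfig {
        ClientConfig {
            model: self.model,
            timeout: self.timeout,
            max_tokens: self.max_tokens,
            temperature: self.temperature,
        }
    }
}

fn main() {
    let cfg = ConfigBuilder::new()
        .model("gpt-4")
        .timeout(Duration::from_secs(60))
        .max_tokens(4096)
        .temperature(0.7)
        .build();
    println!("{cfg:?}");
}
```

Each setter consumes and returns `self`, which is what makes the method chaining in the example above work without mutable intermediate bindings.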
## Supported Providers
| Provider | Status | Models | Streaming |
|----------|--------|--------|-----------|
| OpenAI | Supported | GPT-4, GPT-3.5 | Supported |
| Anthropic Claude | Supported | Claude-3 (Sonnet, Opus, Haiku) | Supported |
| Ollama | Supported | Llama, Mistral, CodeLlama | Supported |
| Azure OpenAI | In progress | GPT-4, GPT-3.5 | In progress |
| Cohere | Planned | Command | Planned |
| Google Gemini | Planned | Gemini Pro | Planned |
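One way a provider selection could map to default endpoints is a simple enum-to-URL match. This is a sketch, not rsllm's actual `Provider` enum: the OpenAI and Ollama URLs appear elsewhere in this README, while the Anthropic URL is an assumption.

```rust
// Illustrative only: a provider enum with assumed default endpoints.
// rsllm's real `Provider` type and defaults may differ.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Provider {
    OpenAI,
    Claude,
    Ollama,
}

impl Provider {
    fn default_base_url(self) -> &'static str {
        match self {
            Provider::OpenAI => "https://api.openai.com/v1",
            Provider::Claude => "https://api.anthropic.com",
            Provider::Ollama => "http://localhost:11434",
        }
    }
}

fn main() {
    // A `base_url(...)` builder call, as in the Ollama example above,
    // would override whichever default the provider supplies.
    println!("{}", Provider::Ollama.default_base_url());
}
```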
## Documentation
- [API Documentation](https://docs.rs/rsllm) - Complete API reference
- [Examples](examples/) - Working code examples
- [RRAG Integration](https://github.com/leval-ai/rrag) - RAG framework integration
## Feature Flags
```toml
[dependencies.rsllm]
version = "0.1"
features = [
    "openai",       # OpenAI provider support
    "claude",       # Anthropic Claude support
    "ollama",       # Ollama local model support
    "streaming",    # Streaming response support
    "json-schema",  # JSON schema support for structured outputs
]
```
## Integration with RRAG
RSLLM is designed to work seamlessly with the [RRAG framework](https://github.com/leval-ai/rrag):
```rust
use rrag::prelude::*;
use rsllm::{Client, Provider};

let llm_client = Client::builder()
    .provider(Provider::OpenAI)
    .api_key("your-api-key")
    .build()?;

let rag_system = RragSystemBuilder::new()
    .with_llm_client(llm_client)
    .build()
    .await?;
```
## License
This project is licensed under the MIT License - see the [LICENSE](../../LICENSE) file for details.
## Contributing
Contributions are welcome! Please see our [Contributing Guidelines](../../CONTRIBUTING.md) for details.
---
**Part of the [RRAG](https://github.com/leval-ai/rrag) ecosystem - Build powerful RAG applications with Rust.**