# RLLM
RLLM is a Rust library that lets you use multiple LLM backends in a single project: OpenAI, Anthropic (Claude), Ollama, DeepSeek, and xAI. With a unified API and builder style (similar to the Stripe experience), you can easily create chat or text completion requests without multiplying structures and crates.
## Key Features
- Multi-backend: Manage OpenAI, Anthropic, Ollama, DeepSeek, xAI through a single entry point.
- Multi-step chains: Create multi-step chains with different backends at each step.
- Templates: Use templates to create complex prompts with variables.
- Builder pattern: Configure your LLM (model, temperature, max_tokens, timeouts...) with a few simple calls.
- Chat & Completions: Two unified traits (`ChatProvider` and `CompletionProvider`) to cover most use cases (see the sketch after this list).
- Extensible: Easily add new backends.
- Rust-friendly: Designed with clear traits, unified error handling, and conditional compilation via features.
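
Because every backend implements the same traits, application code can stay provider-agnostic. A minimal sketch (the `ask` helper is illustrative, and the exact signature of `chat` is assumed to return a `Result<String, _>`):

```rust
use rllm::chat::{ChatMessage, ChatProvider, ChatRole};

// Illustrative helper: accepts any backend through the unified ChatProvider
// trait, so swapping OpenAI for Anthropic or Ollama requires no changes here.
// Assumes `chat` takes a slice of messages and returns Result<String, _>.
fn ask(llm: &impl ChatProvider, prompt: &str) -> Option<String> {
    let messages = vec![ChatMessage {
        role: ChatRole::User,
        content: prompt.into(),
    }];
    llm.chat(&messages).ok()
}
```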
## Installation
Simply add RLLM to your `Cargo.toml`:
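
For example (the version shown is illustrative; check crates.io for the latest release):

```toml
[dependencies]
# Version is illustrative; pin to the latest release from crates.io.
rllm = "1.1"
```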
## Examples
| Name | Description |
|------|-------------|
| `anthropic_example` | Demonstrates integration with Anthropic's Claude model for chat completion |
| `chain_example` | Shows how to create multi-step prompt chains for exploring programming language features |
| `multi_backend_example` | Illustrates chaining multiple LLM backends (OpenAI, Anthropic, DeepSeek) together in a single workflow |
| `ollama_example` | Example of using local LLMs through Ollama integration |
| `openai_example` | Basic OpenAI chat completion example with GPT models |
| `xai_example` | Basic xAI chat completion example with Grok models |
| `deepseek_example` | Basic DeepSeek chat completion example with deepseek-chat models |
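
Each example can typically be run from a checkout of the repository with `cargo run --example <name>`, assuming the examples ship in the crate's `examples/` directory.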
## Usage
```rust
use rllm::{
    builder::{LLMBackend, LLMBuilder},
    chat::{ChatMessage, ChatRole},
};

// Build a provider; any other backend (Anthropic, Ollama, ...) is
// configured the same way through the builder.
let llm = LLMBuilder::new()
    .backend(LLMBackend::OpenAI)
    .api_key(std::env::var("OPENAI_API_KEY").unwrap_or_default())
    .model("gpt-4o")
    .max_tokens(512)
    .temperature(0.7)
    .build()
    .expect("Failed to build LLM");

let messages = vec![ChatMessage {
    role: ChatRole::User,
    content: "Tell me about Rust".into(),
}];

let chat_resp = llm.chat(&messages);
match chat_resp {
    Ok(text) => println!("Chat response:\n{}", text),
    Err(e) => eprintln!("Chat error: {}", e),
}
```