RLLM (Rust LLM) is a unified interface for interacting with Large Language Model providers.
§Overview
This crate provides a consistent API for working with different LLM backends by abstracting away provider-specific implementation details. It supports:
- Chat-based interactions
- Text completion
- Embeddings generation
- Multiple providers (OpenAI, Anthropic, etc.)
- Request validation and retry logic
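To make the abstraction concrete, here is a minimal, self-contained sketch of the builder-style configuration the crate describes. All names here (`Backend`, `LlmBuilder`, `LlmClient`) are illustrative assumptions, not the crate's actual API:

```rust
// Illustrative only: a builder that configures a provider-agnostic client.
// The real crate's builder lives in its builder module; these names are hypothetical.

#[derive(Debug, Clone, PartialEq)]
enum Backend {
    OpenAI,
    Anthropic,
}

#[derive(Debug)]
struct LlmClient {
    backend: Backend,
    model: String,
}

#[derive(Default)]
struct LlmBuilder {
    backend: Option<Backend>,
    model: Option<String>,
}

impl LlmBuilder {
    fn new() -> Self {
        Self::default()
    }

    // Each setter consumes and returns the builder, allowing chained calls.
    fn backend(mut self, b: Backend) -> Self {
        self.backend = Some(b);
        self
    }

    fn model(mut self, m: &str) -> Self {
        self.model = Some(m.to_string());
        self
    }

    // Validation happens once, at build time: a backend is mandatory,
    // while the model falls back to a default.
    fn build(self) -> Result<LlmClient, String> {
        Ok(LlmClient {
            backend: self.backend.ok_or_else(|| "backend is required".to_string())?,
            model: self.model.unwrap_or_else(|| "default-model".to_string()),
        })
    }
}

fn main() {
    let client = LlmBuilder::new()
        .backend(Backend::OpenAI)
        .model("gpt-4o")
        .build()
        .expect("valid configuration");
    println!("configured {:?} with model {}", client.backend, client.model);
}
```

The design choice illustrated here is the one the Overview implies: callers describe *what* they want (backend, model) and the builder produces a client whose downstream interface is identical regardless of which provider was selected.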
§Architecture
The crate is organized into modules that handle different aspects of LLM interactions:
Modules§
- Backend implementations for supported LLM providers like OpenAI, Anthropic, etc.
- Builder pattern for configuring and instantiating LLM providers
- Chain multiple LLM providers together for complex workflows
- Chat-based interactions with language models (e.g. ChatGPT-style conversations)
- Text completion capabilities (e.g. GPT-3-style completion)
- Vector embeddings generation for text
- Error types and handling
- Evaluator for comparing responses from multiple LLM providers
- Validation wrapper for LLM providers, with retry support for responses that fail validation
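The last module's validate-and-retry pattern can be sketched in a few lines. This is a self-contained illustration of the general technique, not the crate's real wrapper; `ValidatedClient` and its fields are hypothetical names:

```rust
// Illustrative only: wrap a response generator with a validator and retry
// until the response passes or the retry budget is exhausted.

struct ValidatedClient<F, V>
where
    F: FnMut() -> String,              // produces a candidate response
    V: Fn(&str) -> Result<(), String>, // checks it, returning an error message on failure
{
    generate: F,
    validate: V,
    max_retries: usize,
}

impl<F, V> ValidatedClient<F, V>
where
    F: FnMut() -> String,
    V: Fn(&str) -> Result<(), String>,
{
    fn request(&mut self) -> Result<String, String> {
        let mut last_err = String::new();
        // One initial attempt plus max_retries retries.
        for _ in 0..=self.max_retries {
            let response = (self.generate)();
            match (self.validate)(&response) {
                Ok(()) => return Ok(response),
                Err(e) => last_err = e,
            }
        }
        Err(format!("validation failed after retries: {last_err}"))
    }
}

fn main() {
    // Simulate a provider that returns garbage twice before a valid answer.
    let mut attempts = 0;
    let mut client = ValidatedClient {
        generate: || {
            attempts += 1;
            if attempts < 3 { String::from("oops") } else { String::from("42") }
        },
        validate: |resp: &str| {
            resp.parse::<i32>().map(|_| ()).map_err(|_| "not a number".to_string())
        },
        max_retries: 5,
    };
    let answer = client.request().expect("eventually valid");
    println!("got valid response: {answer}");
}
```

In a real provider wrapper the generator would be an LLM call and the validator might check JSON shape or content policy, but the control flow is the same: validate, and regenerate on failure up to a bounded number of attempts.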
Traits§
- Core trait that all LLM providers must implement, combining chat, completion, and embedding capabilities into a unified interface
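One common way to express such a "combining" trait in Rust is as an empty trait with supertraits plus a blanket impl, so that any type providing all three capabilities gets the unified trait for free. The sketch below assumes that shape; the trait and method names are illustrative, not the crate's actual items:

```rust
// Illustrative only: a unified trait built from capability supertraits.
// Hypothetical names; the real crate defines its own traits and signatures.

trait ChatProvider {
    fn chat(&self, messages: &[String]) -> String;
}

trait CompletionProvider {
    fn complete(&self, prompt: &str) -> String;
}

trait EmbeddingProvider {
    fn embed(&self, text: &str) -> Vec<f32>;
}

// The unified trait: no methods of its own, just the three capabilities.
trait Llm: ChatProvider + CompletionProvider + EmbeddingProvider {}

// Blanket impl: implementing the three capability traits is sufficient.
impl<T: ChatProvider + CompletionProvider + EmbeddingProvider> Llm for T {}

struct MockBackend;

impl ChatProvider for MockBackend {
    fn chat(&self, messages: &[String]) -> String {
        format!("reply to {} message(s)", messages.len())
    }
}

impl CompletionProvider for MockBackend {
    fn complete(&self, prompt: &str) -> String {
        format!("{prompt}...")
    }
}

impl EmbeddingProvider for MockBackend {
    fn embed(&self, text: &str) -> Vec<f32> {
        vec![text.len() as f32] // toy one-dimensional "embedding"
    }
}

// Callers depend on the unified trait, not on any concrete backend.
fn describe(llm: &dyn Llm) -> String {
    llm.complete("hello")
}

fn main() {
    let backend = MockBackend;
    println!("{}", describe(&backend));
    println!("{}", backend.chat(&[String::from("hi")]));
}
```

The payoff is that downstream code such as chains, evaluators, and validation wrappers can accept `&dyn Llm` (or a generic bound) and work identically across OpenAI, Anthropic, or any other backend.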