LLM (Rust LLM) is a unified interface for interacting with Large Language Model providers.
§Overview
This crate provides a consistent API for working with different LLM backends by abstracting away provider-specific implementation details. It supports:
- Chat-based interactions
- Text completion
- Embeddings generation
- Multiple providers (OpenAI, Anthropic, etc.)
- Request validation and retry logic
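The provider-agnostic design can be sketched with simplified stand-in types. Everything below (trait and struct names, method signatures) is illustrative, not the crate's actual API: the point is that calling code depends on one trait while each backend supplies its own implementation.

```rust
// Sketch: one trait abstracts over provider-specific details, so calling
// code is written once and works with any backend.
trait ChatBackend {
    fn chat(&self, prompt: &str) -> String;
}

// Stand-in backends; a real implementation would issue HTTP requests.
struct MockOpenAI;
struct MockAnthropic;

impl ChatBackend for MockOpenAI {
    fn chat(&self, prompt: &str) -> String {
        format!("openai: {prompt}")
    }
}

impl ChatBackend for MockAnthropic {
    fn chat(&self, prompt: &str) -> String {
        format!("anthropic: {prompt}")
    }
}

// Caller code depends only on the trait object, not a concrete provider.
fn ask(backend: &dyn ChatBackend, prompt: &str) -> String {
    backend.chat(prompt)
}

fn main() {
    println!("{}", ask(&MockOpenAI, "hello"));
    println!("{}", ask(&MockAnthropic, "hello"));
}
```

Swapping providers then means changing only which value is passed to `ask`, not the surrounding logic.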
§Architecture
The crate is organized into modules that handle different aspects of LLM interactions:
Modules§
- api
- Server module for exposing LLM functionality via REST API
- backends
- Backend implementations for supported LLM providers like OpenAI, Anthropic, etc.
- builder
- Builder pattern for configuring and instantiating LLM providers
- chain
- Chain multiple LLM providers together for complex workflows
- chat
- Chat-based interactions with language models (e.g. ChatGPT style)
- completion
- Text completion capabilities (e.g. GPT-3 style completion)
- embedding
- Vector embeddings generation for text
- error
- Error types and handling
- evaluator
- Module for evaluating and comparing responses from multiple LLM providers
- secret_store
- Secret store for storing API keys and other sensitive information
- stt
- Speech-to-text support
- validated_llm
- Validation wrapper for LLM providers with retry capabilities
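The validated_llm idea, wrapping a provider so that each response is checked and re-requested on failure, can be sketched generically. The types and method names here are illustrative only, not the crate's actual API:

```rust
// Illustrative sketch of a validation-with-retry wrapper around a provider.
trait Provider {
    fn complete(&mut self, prompt: &str) -> String;
}

// A flaky stand-in provider that returns an invalid (empty) first answer.
struct Flaky {
    calls: u32,
}

impl Provider for Flaky {
    fn complete(&mut self, prompt: &str) -> String {
        self.calls += 1;
        if self.calls == 1 {
            String::new() // invalid first answer
        } else {
            format!("answer to: {prompt}")
        }
    }
}

// Wrapper: re-asks the inner provider until a validator accepts the
// response or the retry budget is exhausted.
struct Validated<P: Provider> {
    inner: P,
    max_retries: u32,
}

impl<P: Provider> Validated<P> {
    fn complete_validated(
        &mut self,
        prompt: &str,
        is_valid: impl Fn(&str) -> bool,
    ) -> Option<String> {
        // One initial attempt plus up to `max_retries` retries.
        for _ in 0..=self.max_retries {
            let response = self.inner.complete(prompt);
            if is_valid(&response) {
                return Some(response);
            }
        }
        None
    }
}

fn main() {
    let mut v = Validated { inner: Flaky { calls: 0 }, max_retries: 2 };
    let out = v.complete_validated("2+2?", |r| !r.is_empty());
    println!("{out:?}");
}
```

Here the empty first response fails validation and the wrapper transparently retries, so the caller only ever sees a validated answer or `None`.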
Structs§
- FunctionCall
- Contains details about which function to call and with what arguments
- ToolCall
- Represents a function call that an LLM wants to make; a standardized structure used across all providers
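The shape of these two structs can be illustrated with simplified stand-ins. The field names below are assumptions for illustration, not the crate's exact definitions:

```rust
// Simplified stand-ins: a ToolCall wraps a FunctionCall, which names the
// function and carries its arguments (typically a JSON object as a string).
#[derive(Debug, Clone, PartialEq)]
struct FunctionCall {
    name: String,
    arguments: String, // JSON-encoded arguments, kept as a string
}

#[derive(Debug, Clone, PartialEq)]
struct ToolCall {
    id: String,        // provider-assigned call id (hypothetical field)
    call_type: String, // usually "function" (hypothetical field)
    function: FunctionCall,
}

fn main() {
    let call = ToolCall {
        id: "call_1".into(),
        call_type: "function".into(),
        function: FunctionCall {
            name: "get_weather".into(),
            arguments: r#"{"city":"Paris"}"#.into(),
        },
    };
    println!("{} -> {}", call.function.name, call.function.arguments);
}
```

Because every backend maps its native tool-call format into one shared shape like this, downstream code can dispatch on the function name without knowing which provider produced the call.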
Traits§
- LLMProvider
- Core trait that all LLM providers must implement, combining chat, completion and embedding capabilities into a unified interface
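A unified trait of this kind might look like the following sketch. The trait and method names are assumptions, not the crate's actual definitions; the pattern shown is a Rust supertrait that bundles the three capabilities:

```rust
// Sketch: one supertrait bundles the capability traits, so any type
// implementing all of them can be used wherever a provider is expected.
trait Chat {
    fn chat(&self, messages: &[String]) -> String;
}
trait Completion {
    fn complete(&self, prompt: &str) -> String;
}
trait Embedding {
    fn embed(&self, text: &str) -> Vec<f32>;
}

// The unified interface, with a blanket impl so implementing the three
// capability traits is enough to qualify as a provider.
trait LlmProvider: Chat + Completion + Embedding {}
impl<T: Chat + Completion + Embedding> LlmProvider for T {}

struct Mock;
impl Chat for Mock {
    fn chat(&self, messages: &[String]) -> String {
        format!("chat over {} messages", messages.len())
    }
}
impl Completion for Mock {
    fn complete(&self, prompt: &str) -> String {
        format!("completion of: {prompt}")
    }
}
impl Embedding for Mock {
    fn embed(&self, text: &str) -> Vec<f32> {
        vec![text.len() as f32] // toy embedding: one dimension
    }
}

fn main() {
    // All three capabilities are reachable through one trait object.
    let p: &dyn LlmProvider = &Mock;
    println!("{}", p.chat(&["hi".to_string()]));
    println!("{}", p.complete("hello"));
    println!("{:?}", p.embed("abc"));
}
```

The blanket impl is the key design choice in this sketch: backends never name the unified trait directly, yet all of them satisfy it automatically.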