AutoAgents LLM is a unified interface for interacting with Large Language Model providers.
Overview
This crate provides a consistent API for working with different LLM backends by abstracting away provider-specific implementation details. It supports:
- Chat-based interactions
- Text completion
- Embeddings generation
- Multiple providers (OpenAI, Anthropic, etc.)
- Request validation and retry logic
Architecture
The crate is organized into modules that handle different aspects of LLM interactions:
Modules
- backends
- Backend implementations for supported LLM providers like OpenAI, Anthropic, etc.
- builder
- Builder pattern for configuring and instantiating LLM providers.
- chat
- Chat-based interactions with language models (e.g. ChatGPT style)
- completion
- Text completion capabilities (e.g. GPT-3 style completion)
- embedding
- Vector embeddings generation for text
- error
- Error types and handling
- evaluator
- Module for evaluating and comparing responses from multiple LLM providers.
- models
- Listing models support
- secret_store
- Secret store for storing API keys and other sensitive information
Structs
- FunctionCall
- Contains details about which function to call and with what arguments.
- ToolCall
- Represents a function call that an LLM wants to make. This is a standardized structure used across all providers.
Traits
- LLMProvider
- Core trait that all LLM providers must implement, combining chat, completion, and embedding capabilities into a unified interface.