Unified LLM client module.
This module provides a unified interface for all LLM operations across the codebase:
- Summarization — Generating document summaries
- Retrieval — Document tree navigation
- TOC Processing — Table of contents extraction
§Features
- Unified configuration with purpose-specific presets
- Automatic retry with exponential backoff
- JSON response parsing
- Unified error handling
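The retry-with-exponential-backoff behavior can be sketched as follows. The field names on `RetryConfig` here are illustrative assumptions, not the crate's actual definition:

```rust
use std::time::Duration;

/// Hypothetical retry settings; the real `RetryConfig` fields may differ.
struct RetryConfig {
    base_delay_ms: u64,
    max_delay_ms: u64,
}

impl RetryConfig {
    /// Delay before the given attempt (0-based): the base delay doubles
    /// each attempt and is capped at `max_delay_ms`.
    fn delay_for(&self, attempt: u32) -> Duration {
        let exp = self
            .base_delay_ms
            .saturating_mul(1u64 << attempt.min(63));
        Duration::from_millis(exp.min(self.max_delay_ms))
    }
}
```

With `base_delay_ms: 100` and `max_delay_ms: 2000`, attempts 0..=5 wait 100, 200, 400, 800, 1600, then 2000 ms (capped).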
§Architecture
┌─────────────────────────────────────────────────────────────────┐
│                             LlmPool                             │
│                                                                 │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐         │
│   │   summary   │    │  retrieval  │    │     toc     │         │
│   │  LlmClient  │    │  LlmClient  │    │  LlmClient  │         │
│   └──────┬──────┘    └──────┬──────┘    └──────┬──────┘         │
│          │                  │                  │                │
│          └──────────────────┼──────────────────┘                │
│                             │                                   │
│                             ▼                                   │
│                  ┌─────────────────────┐                        │
│                  │    async-openai     │                        │
│                  └─────────────────────┘                        │
└─────────────────────────────────────────────────────────────────┘
§Example
use vectorless::llm::{LlmPool, LlmConfig, RetryConfig};

// Create a pool with default configurations
let pool = LlmPool::from_defaults();

// Use the summary client
let summary = pool.summary().complete(
    "You summarize text concisely.",
    "Long text to summarize...",
).await?;

// Use the retrieval client with JSON output
#[derive(serde::Deserialize)]
struct NavDecision { section: usize }

let decision: NavDecision = pool.retrieval().complete_json(
    "You navigate documents.",
    "Find section about X...",
).await?;
Structs§
- FallbackChain - Fallback chain manager.
- FallbackConfig - Runtime fallback configuration (converted from config::FallbackConfig).
- FallbackResult - Result from a fallback-aware LLM call.
- FallbackStep - A single step in the fallback chain.
- LlmClient - Unified LLM client.
- LlmConfig - LLM client configuration.
- LlmConfigs - Pool of LLM configurations for different purposes.
- LlmPool - Pool of LLM clients for different purposes.
- RetryConfig - Retry configuration for LLM calls.
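The Fallback* types suggest a chain that tries a sequence of providers or models in order until one succeeds. A minimal, std-only sketch of that pattern; the `FallbackStep` and `FallbackChain` definitions here are illustrative stand-ins, not the crate's actual API:

```rust
/// Illustrative stand-in for the crate's fallback step type.
struct FallbackStep {
    model: &'static str,
}

/// Illustrative stand-in for the crate's fallback chain manager.
struct FallbackChain {
    steps: Vec<FallbackStep>,
}

impl FallbackChain {
    /// Try each step in order, returning the first success together
    /// with the model that produced it, or the last error seen.
    fn run<T>(
        &self,
        mut call: impl FnMut(&str) -> Result<T, String>,
    ) -> Result<(T, &'static str), String> {
        let mut last_err = String::from("empty chain");
        for step in &self.steps {
            match call(step.model) {
                Ok(v) => return Ok((v, step.model)),
                Err(e) => last_err = e,
            }
        }
        Err(last_err)
    }
}
```

In the real crate the call would be async and return a `FallbackResult` recording which step succeeded; the synchronous closure keeps this sketch self-contained.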
Enums§
- LlmError - LLM error types.
Type Aliases§
- LlmResult - Specialized result type for LLM operations.
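The alias most likely follows the common Rust convention of fixing the error type to the module's own error enum. A sketch under that assumption, with a stand-in `LlmError` (the crate's enum will have different, richer variants):

```rust
/// Stand-in error enum; the crate's `LlmError` is assumed to be richer.
#[derive(Debug, PartialEq)]
enum LlmError {
    Api(String),
    Parse(String),
}

/// Specialized result type, assumed to follow the usual
/// `type XResult<T> = Result<T, XError>` pattern.
type LlmResult<T> = Result<T, LlmError>;

/// Example consumer: parse a section index from raw LLM output.
fn parse_section(raw: &str) -> LlmResult<usize> {
    raw.trim()
        .parse()
        .map_err(|_| LlmError::Parse(format!("not a section index: {raw:?}")))
}
```

Callers can then use `?` on any `LlmResult` inside functions that also return `LlmResult`, which is what the `.await?` calls in the example above rely on.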