Module resilient_llm


Resilience wrapper providing retry with exponential backoff for LLM providers.

This wrapper retries transient failures with exponential backoff and jitter. It does not retry on permanent errors like authentication or invalid requests.
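The retry schedule described above can be sketched as follows. This is a hypothetical helper, not the crate's actual internals; the `base_ms`/`max_ms` values mirror the `resilient_backoff(200, 2_000)` call in the example below, and the jitter fraction is passed in explicitly here for clarity, whereas a real implementation would typically randomize it per attempt.

```rust
// Hypothetical sketch of an exponential-backoff delay with caller-supplied
// jitter; not the crate's actual implementation.
fn backoff_delay_ms(attempt: u32, base_ms: u64, max_ms: u64, jitter: f64) -> u64 {
    // Exponential growth: base * 2^attempt, saturating to avoid overflow.
    let exp = base_ms.saturating_mul(1u64 << attempt.min(16));
    // Cap at the configured maximum delay.
    let capped = exp.min(max_ms);
    // Shave off up to half the delay as jitter (jitter in [0.0, 1.0]).
    ((capped as f64) * (1.0 - 0.5 * jitter)) as u64
}

fn main() {
    // With base 200 ms, cap 2_000 ms, and no jitter:
    assert_eq!(backoff_delay_ms(0, 200, 2_000, 0.0), 200);   // first retry
    assert_eq!(backoff_delay_ms(1, 200, 2_000, 0.0), 400);   // doubled
    assert_eq!(backoff_delay_ms(4, 200, 2_000, 0.0), 2_000); // capped at max
}
```

Permanent errors (authentication failures, invalid requests) short-circuit this loop: the wrapper classifies them as non-retryable and returns immediately rather than sleeping.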

Example

use llm::builder::{LLMBackend, LLMBuilder};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let llm = LLMBuilder::new()
        .backend(LLMBackend::OpenAI)
        .api_key(std::env::var("OPENAI_API_KEY").unwrap_or_default())
        .model("gpt-4o-mini")
        .resilient(true)
        .resilient_attempts(3)
        .resilient_backoff(200, 2_000)
        .build()?;

    let msgs = [
        llm::chat::ChatMessage::user().content("Say hi succinctly").build(),
    ];
    let resp = llm.chat(&msgs).await?;
    println!("{}", resp);
    Ok(())
}

Structs

ResilienceConfig
Configuration for retry and backoff behavior.
ResilientLLM
Resilient wrapper that retries transient failures using exponential backoff.