
Trait TextGenerator 

pub trait TextGenerator:
    Send
    + Sync
    + Debug {
    // Required methods
    fn model(&self) -> &str;
    fn generate(
        &self,
        prompt: &str,
        opts: &GenOptions,
    ) -> Result<Vec<String>, LlmError>;
}

Text-generation primitive: given a user prompt (and optional system preamble), return one or more completions.

The returned Vec<String> has length exactly opts.n; adapters that natively support only n=1 MUST batch-call n times and surface a coherent error if any sub-call fails. Completion content is implementation-defined: callers that need structure should post-parse (e.g. split on newlines for multi-query).
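A minimal sketch of the n=1 batching contract described above. `GenOptions` and `LlmError` here are simplified stand-ins for the crate's real types (only the fields needed for the example), and `SingleShotAdapter` is a hypothetical adapter over a backend that can only produce one completion per call:

```rust
use std::fmt::Debug;

// Simplified stand-ins for the crate's GenOptions and LlmError types.
#[derive(Debug, Default)]
pub struct GenOptions {
    pub n: usize,
}

#[derive(Debug)]
pub struct LlmError(pub String);

pub trait TextGenerator: Send + Sync + Debug {
    fn model(&self) -> &str;
    fn generate(&self, prompt: &str, opts: &GenOptions) -> Result<Vec<String>, LlmError>;
}

// Hypothetical adapter whose backend only supports n=1 natively:
// generate() loops opts.n times and surfaces the first sub-call error.
#[derive(Debug)]
struct SingleShotAdapter;

impl SingleShotAdapter {
    fn call_once(&self, prompt: &str) -> Result<String, LlmError> {
        // A real adapter would hit the provider's API here.
        Ok(format!("completion for: {prompt}"))
    }
}

impl TextGenerator for SingleShotAdapter {
    fn model(&self) -> &str {
        "mock:single-shot"
    }

    fn generate(&self, prompt: &str, opts: &GenOptions) -> Result<Vec<String>, LlmError> {
        let mut out = Vec::with_capacity(opts.n);
        for _ in 0..opts.n {
            // `?` propagates a coherent error if any sub-call fails,
            // so the caller never sees a partial batch.
            out.push(self.call_once(prompt)?);
        }
        Ok(out)
    }
}
```

The invariant the caller relies on: on `Ok`, the returned vector has exactly `opts.n` elements.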

Required Methods§


fn model(&self) -> &str

Provider + model identifier. Lowercase, colon-separated by convention (e.g. "openai:gpt-4o-mini", "ollama:llama3.2:3b").


fn generate( &self, prompt: &str, opts: &GenOptions, ) -> Result<Vec<String>, LlmError>

Generate completions for prompt.

§Errors

Any LlmError the adapter surfaces. Callers that use this for HyDE / multi-query SHOULD fall back gracefully to the plain query on error (same policy as the reranker fallback), so an LLM outage does not break retrieval.
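The fallback policy above can be sketched as a small helper. `LlmError` is again a simplified stand-in, and `expand_or_fallback` is a hypothetical caller-side wrapper (not part of this trait) that takes the generation step as a closure so the pattern is visible in isolation:

```rust
// Simplified stand-in for the crate's error type.
#[derive(Debug)]
pub struct LlmError(pub String);

// Hypothetical multi-query wrapper: try to expand the query via the LLM,
// but on any error (or an empty result) fall back to the plain query so
// an LLM outage does not break retrieval.
fn expand_or_fallback(
    query: &str,
    generate: impl Fn(&str) -> Result<Vec<String>, LlmError>,
) -> Vec<String> {
    match generate(query) {
        Ok(variants) if !variants.is_empty() => variants,
        // Error or empty output: degrade gracefully to the original query.
        _ => vec![query.to_string()],
    }
}
```

In a real pipeline the closure would call `TextGenerator::generate` and post-parse the completion; the fallback arm is the same either way.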

Implementors§