Trait InferenceProvider 

pub trait InferenceProvider: Send + Sync {
    // Required methods
    fn complete<'life0, 'life1, 'life2, 'async_trait>(
        &'life0 self,
        conversation: &'life1 Conversation,
        options: &'life2 InferenceOptions,
    ) -> Pin<Box<dyn Future<Output = Result<InferenceResponse, InferenceError>> + Send + 'async_trait>>
       where Self: 'async_trait,
             'life0: 'async_trait,
             'life1: 'async_trait,
             'life2: 'async_trait;
    fn provider_name(&self) -> &str;
    fn default_model(&self) -> &str;
    fn supports_native_tools(&self) -> bool;
    fn supports_structured_output(&self) -> bool;
}

Unified trait for inference providers (cloud LLMs and local SLMs).

Wraps the existing LlmClient and SlmRunner to add:

  • Multi-turn conversation support
  • Tool calling
  • Structured output (response_format)
  • Token usage tracking
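
A minimal implementor sketch, assuming the async-trait crate (the boxed-future signature above is exactly what #[async_trait] expands to). EchoProvider and its model ID are hypothetical, and the complete body is left as todo!() because constructing an InferenceResponse is crate-specific:

use async_trait::async_trait;

struct EchoProvider;

#[async_trait]
impl InferenceProvider for EchoProvider {
    async fn complete(
        &self,
        conversation: &Conversation,
        options: &InferenceOptions,
    ) -> Result<InferenceResponse, InferenceError> {
        // Forward `conversation` and `options` to the backing model here.
        todo!("call the underlying LLM/SLM and build an InferenceResponse")
    }

    fn provider_name(&self) -> &str {
        "echo" // hypothetical provider name
    }

    fn default_model(&self) -> &str {
        "echo-v1" // hypothetical model ID
    }

    fn supports_native_tools(&self) -> bool {
        false
    }

    fn supports_structured_output(&self) -> bool {
        false
    }
}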

Required Methods§

fn complete<'life0, 'life1, 'life2, 'async_trait>(
    &'life0 self,
    conversation: &'life1 Conversation,
    options: &'life2 InferenceOptions,
) -> Pin<Box<dyn Future<Output = Result<InferenceResponse, InferenceError>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
    'life1: 'async_trait,
    'life2: 'async_trait,

Run inference on a conversation with the given options.
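
Because the returned future is boxed and Send, the method works through a &dyn InferenceProvider trait object. A minimal caller sketch (run_once is illustrative, not part of this crate):

async fn run_once(
    provider: &dyn InferenceProvider,
    conversation: &Conversation,
    options: &InferenceOptions,
) -> Result<InferenceResponse, InferenceError> {
    // `complete` already returns Pin<Box<dyn Future + Send>>, so it can be
    // awaited directly through the trait object with no extra boxing.
    provider.complete(conversation, options).await
}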

fn provider_name(&self) -> &str

Get the provider’s name for logging and routing.

fn default_model(&self) -> &str

Get the default model ID for this provider.

fn supports_native_tools(&self) -> bool

Check if this provider supports tool calling natively.

fn supports_structured_output(&self) -> bool

Check if this provider supports structured output natively.
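
Together with supports_native_tools, this flag is what a caller would branch on before building a request. A sketch of that pattern, using only the methods on this trait (log_capabilities and the fallback strategies in the comments are illustrative assumptions, not part of this crate):

fn log_capabilities(provider: &dyn InferenceProvider) {
    println!(
        "provider={} default_model={} native_tools={} structured_output={}",
        provider.provider_name(),
        provider.default_model(),
        provider.supports_native_tools(),
        provider.supports_structured_output(),
    );
    if !provider.supports_native_tools() {
        // One common fallback: describe tools in the prompt and parse the
        // model's reply, instead of sending a native tool schema.
    }
    if !provider.supports_structured_output() {
        // Likewise: request JSON in the prompt and validate it afterward,
        // rather than relying on a native response_format option.
    }
}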

Implementors§