Trait InferenceGatewayAPI

pub trait InferenceGatewayAPI {
    // Required methods
    fn list_models(
        &self,
    ) -> impl Future<Output = Result<Vec<ProviderModels>, GatewayError>> + Send;
    fn list_models_by_provider(
        &self,
        provider: Provider,
    ) -> impl Future<Output = Result<ProviderModels, GatewayError>> + Send;
    fn generate_content(
        &self,
        provider: Provider,
        model: &str,
        messages: Vec<Message>,
    ) -> impl Future<Output = Result<GenerateResponse, GatewayError>> + Send;
    fn generate_content_stream(
        &self,
        provider: Provider,
        model: &str,
        messages: Vec<Message>,
    ) -> impl Stream<Item = Result<SSEvents, GatewayError>> + Send;
    fn health_check(
        &self,
    ) -> impl Future<Output = Result<bool, GatewayError>> + Send;
}

Core API interface for the Inference Gateway
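Since every method returns an anonymous `impl Future`/`impl Stream` type, implementors typically satisfy the bounds with `async` blocks. The sketch below is illustrative only: it implements a simplified two-method stand-in for the trait against local stub versions of `ProviderModels` and `GatewayError` (the real crate defines these, plus the remaining methods), and drives the futures with a minimal std-only executor rather than a real runtime.

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Stub types standing in for the crate's real definitions (illustrative only).
#[derive(Debug, Clone, PartialEq)]
struct ProviderModels {
    provider: String,
    models: Vec<String>,
}
#[derive(Debug)]
enum GatewayError {
    Unreachable,
}

// Simplified stand-in for InferenceGatewayAPI (stream method omitted).
trait InferenceGatewayAPI {
    fn list_models(
        &self,
    ) -> impl Future<Output = Result<Vec<ProviderModels>, GatewayError>> + Send;
    fn health_check(&self) -> impl Future<Output = Result<bool, GatewayError>> + Send;
}

struct MockGateway;

impl InferenceGatewayAPI for MockGateway {
    fn list_models(
        &self,
    ) -> impl Future<Output = Result<Vec<ProviderModels>, GatewayError>> + Send {
        // An async block satisfies the `impl Future + Send` bound.
        async {
            Ok(vec![ProviderModels {
                provider: "openai".into(),
                models: vec!["gpt-4o".into()],
            }])
        }
    }
    fn health_check(&self) -> impl Future<Output = Result<bool, GatewayError>> + Send {
        async { Ok(true) }
    }
}

// Minimal executor for futures that resolve without waking (std only).
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw()
        }
        fn noop(_: *const ()) {}
        RawWaker::new(std::ptr::null(), &RawWakerVTable::new(clone, noop, noop, noop))
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let gateway = MockGateway;
    assert!(block_on(gateway.health_check()).unwrap());
    let models = block_on(gateway.list_models()).unwrap();
    assert_eq!(models[0].provider, "openai");
}
```

In real code the futures would be awaited on an async runtime (e.g. Tokio); the hand-rolled `block_on` here only exists to keep the sketch dependency-free.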

Required Methods§


fn list_models( &self, ) -> impl Future<Output = Result<Vec<ProviderModels>, GatewayError>> + Send

Lists available models from all providers

§Errors

Returns a GatewayError if the request to the gateway fails

§Returns

A list of models available from all providers


fn list_models_by_provider( &self, provider: Provider, ) -> impl Future<Output = Result<ProviderModels, GatewayError>> + Send

Lists available models for a specific provider

§Arguments
  • provider - The LLM provider to list models for
§Errors

Returns a GatewayError if the request to the gateway fails

§Returns

A list of models available from the specified provider


fn generate_content( &self, provider: Provider, model: &str, messages: Vec<Message>, ) -> impl Future<Output = Result<GenerateResponse, GatewayError>> + Send

Generates content using a specified model

§Arguments
  • provider - The LLM provider to use
  • model - Name of the model
  • messages - Conversation history and prompt
§Errors

Returns a GatewayError if the request to the gateway fails

§Returns

The generated response
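The `messages` argument carries the conversation history plus the new prompt, in order. A hedged sketch of building that argument, using a local stub for `Message` (the real crate defines its own `Message` type, whose exact fields may differ):

```rust
// Stub type standing in for the crate's Message (illustrative only).
#[derive(Debug, Clone, PartialEq)]
struct Message {
    role: String,
    content: String,
}

fn main() {
    // Conversation history first, then the latest user turn.
    let messages = vec![
        Message {
            role: "system".into(),
            content: "You are a helpful assistant.".into(),
        },
        Message {
            role: "user".into(),
            content: "Hello!".into(),
        },
    ];
    assert_eq!(messages.len(), 2);
    assert_eq!(messages[1].role, "user");
    // The vec would then be passed to a concrete implementor, e.g.
    // client.generate_content(Provider::OpenAI, "gpt-4o", messages).await
    // (hypothetical call shape shown for orientation only).
}
```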


fn generate_content_stream( &self, provider: Provider, model: &str, messages: Vec<Message>, ) -> impl Stream<Item = Result<SSEvents, GatewayError>> + Send

Streams generated content directly from the backend Server-Sent Events (SSE) stream.

§Arguments
  • provider - The LLM provider to use
  • model - Name of the model
  • messages - Conversation history and prompt
§Returns

A stream of Server-Sent Events (SSE) from the Inference Gateway API
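Unlike the other methods, this one returns a stream of events rather than a single future, so callers pull items until the stream ends. The sketch below is a dependency-free stand-in: it defines a local `Stream` trait mirroring the shape of `futures::Stream` (which the real trait uses), a mock SSE source over buffered events, and a draining helper. `SSEvents`' fields here are assumed for illustration.

```rust
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Local stand-ins (illustrative; the real crate defines these).
#[derive(Debug, PartialEq)]
struct SSEvents {
    data: String,
}
#[derive(Debug)]
enum GatewayError {}

// Minimal Stream trait mirroring the shape of futures::Stream.
trait Stream {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}

// A mock SSE stream that yields pre-buffered events.
struct MockSse {
    events: std::vec::IntoIter<SSEvents>,
}
impl Stream for MockSse {
    type Item = Result<SSEvents, GatewayError>;
    fn poll_next(mut self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        Poll::Ready(self.events.next().map(Ok))
    }
}

// Pull items until the stream signals completion with None.
fn drain<S: Stream + Unpin>(mut s: S) -> Vec<S::Item> {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw()
        }
        fn noop(_: *const ()) {}
        RawWaker::new(std::ptr::null(), &RawWakerVTable::new(clone, noop, noop, noop))
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut out = Vec::new();
    while let Poll::Ready(next) = Pin::new(&mut s).poll_next(&mut cx) {
        match next {
            Some(item) => out.push(item),
            None => break,
        }
    }
    out
}

fn main() {
    let stream = MockSse {
        events: vec![
            SSEvents { data: "Hel".into() },
            SSEvents { data: "lo".into() },
        ]
        .into_iter(),
    };
    let chunks: Vec<String> = drain(stream).into_iter().map(|r| r.unwrap().data).collect();
    assert_eq!(chunks.concat(), "Hello");
}
```

With the real `futures::Stream`, the same loop is usually written as `while let Some(event) = stream.next().await { … }` via the `StreamExt` extension trait.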


fn health_check( &self, ) -> impl Future<Output = Result<bool, GatewayError>> + Send

Checks if the API is available

Dyn Compatibility§

This trait is not dyn compatible.

In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.
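The reason is that every method uses return-position `impl Trait` in a trait (RPITIT): the concrete return type differs per implementor, so the compiler cannot build a vtable entry for it, and `dyn InferenceGatewayAPI` is rejected. Callers use generic bounds instead. A sketch of the pattern with a simplified one-method stand-in (the real trait has five such methods):

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Simplified stand-in with one RPITIT method (illustrative only).
trait HealthCheck {
    fn health_check(&self) -> impl Future<Output = bool> + Send;
}

struct AlwaysUp;
impl HealthCheck for AlwaysUp {
    fn health_check(&self) -> impl Future<Output = bool> + Send {
        async { true }
    }
}

// Works: a generic bound, monomorphized per concrete implementor.
fn poll_ready<G: HealthCheck>(gateway: &G) -> bool {
    block_on(gateway.health_check())
}

// Does NOT compile: health_check's return type differs per
// implementor, so no vtable can be built for the trait.
// fn boxed(gateway: Box<dyn HealthCheck>) {}

// Minimal executor for immediately-ready futures (std only).
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw()
        }
        fn noop(_: *const ()) {}
        RawWaker::new(std::ptr::null(), &RawWakerVTable::new(clone, noop, noop, noop))
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    assert!(poll_ready(&AlwaysUp));
}
```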

Implementors§