pub trait LanguageModel: Send + Sync {
    // Required methods
    fn provider(&self) -> &str;
    fn model_id(&self) -> &str;
    fn do_generate<'life0, 'async_trait>(
        &'life0 self,
        options: CallOptions,
    ) -> Pin<Box<dyn Future<Output = Result<GenerateResponse, Box<dyn Error + Sync + Send>>> + Send + 'async_trait>>
    where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn do_stream<'life0, 'async_trait>(
        &'life0 self,
        options: CallOptions,
    ) -> Pin<Box<dyn Future<Output = Result<StreamResponse, Box<dyn Error + Sync + Send>>> + Send + 'async_trait>>
    where
        'life0: 'async_trait,
        Self: 'async_trait;

    // Provided methods
    fn specification_version(&self) -> &str { ... }
    fn supported_urls<'life0, 'async_trait>(
        &'life0 self,
    ) -> Pin<Box<dyn Future<Output = HashMap<String, Vec<String>>> + Send + 'async_trait>>
    where
        'life0: 'async_trait,
        Self: 'async_trait { ... }
    fn generate<P>(&self, prompt: P) -> GenerateBuilder<'_, Self>
    where
        Self: Sized,
        P: TryInto<Prompt>,
        Box<dyn Error + Sync + Send>: From<<P as TryInto<Prompt>>::Error> { ... }
    fn stream<P>(&self, prompt: P) -> StreamBuilder<'_, Self>
    where
        Self: Sized,
        P: TryInto<Prompt>,
        Box<dyn Error + Sync + Send>: From<<P as TryInto<Prompt>>::Error> { ... }
}
Core trait for language model providers.
This trait defines the standard interface that all language model implementations
must satisfy. It provides both high-level builder methods for ergonomic usage and
low-level do_* methods for direct provider integration.
§Implementation Requirements
Implementors must provide:
provider(): Provider identifier (e.g., “openai”, “anthropic”)
model_id(): Model identifier (e.g., “gpt-4”, “claude-3-opus”)
do_generate(): Non-streaming generation implementation
do_stream(): Streaming generation implementation
§Example Implementation
use ai_sdk_provider::{LanguageModel, CallOptions, GenerateResponse, StreamResponse};
use async_trait::async_trait;
use std::error::Error;

struct MyModel {
    api_key: String,
}

#[async_trait]
impl LanguageModel for MyModel {
    fn provider(&self) -> &str { "my-provider" }
    fn model_id(&self) -> &str { "my-model-v1" }

    async fn do_generate(&self, options: CallOptions) -> Result<GenerateResponse, Box<dyn Error + Sync + Send>> {
        // Implementation
        todo!()
    }

    async fn do_stream(&self, options: CallOptions) -> Result<StreamResponse, Box<dyn Error + Sync + Send>> {
        // Implementation
        todo!()
    }
}
§Required Methods
fn provider(&self) -> &str
Returns the provider identifier for this model.
This should be a stable, lowercase identifier for the AI provider (e.g., “openai”, “anthropic”, “cohere”).
fn model_id(&self) -> &str
Returns the specific model identifier.
This should be the exact model name as recognized by the provider (e.g., “gpt-4-turbo”, “claude-3-opus-20240229”).
fn do_generate<'life0, 'async_trait>(
    &'life0 self,
    options: CallOptions,
) -> Pin<Box<dyn Future<Output = Result<GenerateResponse, Box<dyn Error + Sync + Send>>> + Send + 'async_trait>>
where
    'life0: 'async_trait,
    Self: 'async_trait,
Executes a non-streaming generation request.
This method must be implemented by providers to perform the actual API call.
It receives fully configured CallOptions and returns a complete GenerateResponse
containing all generated content, usage statistics, and metadata.
Most users should prefer the high-level generate() method over calling this directly.
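The Pin<Box<dyn Future<…>>> return type above is the shape that #[async_trait] expands an async fn into. A self-contained sketch of that desugaring, using hypothetical stand-ins (the CallOptions, GenerateResponse, Model, and EchoModel types below are placeholders for illustration, not the crate's real items):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical stand-ins for the crate's CallOptions / GenerateResponse,
// reduced to the minimum needed to show the signature shape.
struct CallOptions { prompt: String }
struct GenerateResponse { text: String }
type BoxError = Box<dyn std::error::Error + Send + Sync>;

// The manually desugared form #[async_trait] produces for `async fn do_generate`.
trait Model {
    fn do_generate<'a>(
        &'a self,
        options: CallOptions,
    ) -> Pin<Box<dyn Future<Output = Result<GenerateResponse, BoxError>> + Send + 'a>>;
}

struct EchoModel;

impl Model for EchoModel {
    fn do_generate<'a>(
        &'a self,
        options: CallOptions,
    ) -> Pin<Box<dyn Future<Output = Result<GenerateResponse, BoxError>> + Send + 'a>> {
        // Box::pin(async move { ... }) is what the macro expansion does.
        Box::pin(async move {
            Ok(GenerateResponse { text: format!("echo: {}", options.prompt) })
        })
    }
}

// Minimal noop-waker poll helper: the async block above never awaits,
// so a single poll is guaranteed to complete it.
static NOOP_VTABLE: RawWakerVTable = RawWakerVTable::new(noop_clone, noop, noop, noop);
fn noop_clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &NOOP_VTABLE) }
fn noop(_: *const ()) {}

fn block_on_ready<T>(mut fut: Pin<Box<dyn Future<Output = T> + Send + '_>>) -> T {
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &NOOP_VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => unreachable!("future completes without awaiting"),
    }
}

fn main() {
    let model = EchoModel;
    let resp = block_on_ready(model.do_generate(CallOptions { prompt: "hi".into() }))
        .expect("generation failed");
    println!("{}", resp.text); // prints "echo: hi"
}
```

With #[async_trait] on the implementing type, the boxing and pinning shown here are generated automatically, which is why the example implementation earlier writes a plain async fn.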
fn do_stream<'life0, 'async_trait>(
    &'life0 self,
    options: CallOptions,
) -> Pin<Box<dyn Future<Output = Result<StreamResponse, Box<dyn Error + Sync + Send>>> + Send + 'async_trait>>
where
    'life0: 'async_trait,
    Self: 'async_trait,
Executes a streaming generation request.
This method must be implemented by providers to establish a streaming connection.
It receives fully configured CallOptions and returns a StreamResponse containing
an async stream of response parts.
Most users should prefer the high-level stream() method over calling this directly.
§Provided Methods
fn specification_version(&self) -> &str
Returns the specification version implemented by this model.
The default implementation returns “v3”. Override this only if implementing a different specification version.
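The default-plus-override pattern can be sketched with a hypothetical stand-in trait (Versioned, the model structs, and the "v2" value below are illustrative only; the real trait is LanguageModel):

```rust
// Stand-in mirroring the default-method pattern described above.
trait Versioned {
    fn specification_version(&self) -> &str {
        "v3" // default, per the documentation
    }
}

struct CurrentModel;
impl Versioned for CurrentModel {} // inherits the "v3" default

struct LegacyModel;
impl Versioned for LegacyModel {
    fn specification_version(&self) -> &str {
        "v2" // hypothetical older specification version
    }
}

fn main() {
    assert_eq!(CurrentModel.specification_version(), "v3");
    assert_eq!(LegacyModel.specification_version(), "v2");
    println!("ok");
}
```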
fn supported_urls<'life0, 'async_trait>(
    &'life0 self,
) -> Pin<Box<dyn Future<Output = HashMap<String, Vec<String>>> + Send + 'async_trait>>
where
    'life0: 'async_trait,
    Self: 'async_trait,
Returns URLs supported by this model for various operations.
The default implementation returns an empty map. Providers can override this to expose endpoint URLs for debugging or advanced configuration.
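An override might populate the map with endpoint URLs. A self-contained sketch using hypothetical stand-ins (the trait, model types, and example.com URL are not from the crate):

```rust
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

type UrlMap = HashMap<String, Vec<String>>;

// Stand-in mirroring the async-trait shape of supported_urls;
// the default returns an empty map, as the documentation states.
trait SupportedUrls {
    fn supported_urls<'a>(&'a self) -> Pin<Box<dyn Future<Output = UrlMap> + Send + 'a>> {
        Box::pin(async { UrlMap::new() })
    }
}

struct DefaultModel;
impl SupportedUrls for DefaultModel {} // inherits the empty-map default

struct DebuggableModel;
impl SupportedUrls for DebuggableModel {
    fn supported_urls<'a>(&'a self) -> Pin<Box<dyn Future<Output = UrlMap> + Send + 'a>> {
        Box::pin(async {
            let mut urls = UrlMap::new();
            // Hypothetical operation -> endpoint mapping for debugging.
            urls.insert(
                "generate".to_string(),
                vec!["https://api.example.com/v1/generate".to_string()],
            );
            urls
        })
    }
}

// Minimal noop-waker poll helper (the async blocks above never await).
static NOOP_VTABLE: RawWakerVTable = RawWakerVTable::new(noop_clone, noop, noop, noop);
fn noop_clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &NOOP_VTABLE) }
fn noop(_: *const ()) {}

fn block_on_ready<T>(mut fut: Pin<Box<dyn Future<Output = T> + Send + '_>>) -> T {
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &NOOP_VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => unreachable!("future completes without awaiting"),
    }
}

fn main() {
    assert!(block_on_ready(DefaultModel.supported_urls()).is_empty());
    let urls = block_on_ready(DebuggableModel.supported_urls());
    assert_eq!(urls["generate"].len(), 1);
    println!("ok");
}
```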
fn generate<P>(&self, prompt: P) -> GenerateBuilder<'_, Self>
Creates a builder for a non-streaming generation request.
This high-level method provides a fluent interface for configuring and executing
generation requests. The builder supports both explicit execution via .send()
and implicit execution by awaiting the builder directly.
§Example
let response = model.generate("Explain photosynthesis")
.temperature(0.7)
.max_tokens(500)
.await?;
println!("{}", response.text);
fn stream<P>(&self, prompt: P) -> StreamBuilder<'_, Self>
Creates a builder for a streaming generation request.
Streaming enables processing partial responses as they arrive from the provider,
allowing for real-time display and reduced latency to first token. The stream
yields StreamPart items containing text deltas, tool calls, and metadata.
§Example
use ai_sdk_provider::StreamPart;
use tokio_stream::StreamExt;
let mut stream = model.stream("Write a poem")
.max_tokens(500)
.await?;
while let Some(part) = stream.next().await {
match part? {
StreamPart::TextDelta(delta) => print!("{}", delta),
_ => {}
}
}