pub struct GenAIProvider { /* private fields */ }

Thread-safe LLM provider implementation using Arc<RwLock<...>>.

This provider can be cheaply cloned and shared across multiple agents. Each clone shares the same underlying client and rate-limiting state.
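The cheap-clone, shared-state design can be sketched with a simplified stand-in. The SharedProvider type below is hypothetical and only illustrates the Arc<RwLock<...>> pattern, not the crate's actual internals:

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for GenAIProvider's shared interior: every clone
// holds the same Arc, so usage counters stay in sync across clones.
#[derive(Clone)]
struct SharedProvider {
    total_tokens: Arc<RwLock<usize>>,
}

impl SharedProvider {
    fn new() -> Self {
        SharedProvider { total_tokens: Arc::new(RwLock::new(0)) }
    }

    fn add_tokens(&self, count: usize) {
        *self.total_tokens.write().unwrap() += count;
    }

    fn total(&self) -> usize {
        *self.total_tokens.read().unwrap()
    }
}

fn main() {
    let provider = SharedProvider::new();
    let clone = provider.clone(); // cheap: only the Arc pointer is duplicated

    clone.add_tokens(42); // tokens recorded through the clone...
    assert_eq!(provider.total(), 42); // ...are visible through the original
}
```

Because only the Arc is copied, handing a clone to each agent is cheap while keeping a single source of truth for token accounting.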
Implementations

impl GenAIProvider
pub fn new_with_config(
    provider_type: Option<&str>,
    api_key: Option<&str>,
) -> Result<Self>

Creates a new GenAI provider with explicit configuration.
pub async fn get_total_tokens_used(&self) -> usize

Gets the total number of tokens used across all requests.
pub async fn get_request_count(&self) -> usize

Gets the total number of requests made.
pub async fn add_tokens(&self, count: usize)

Adds count tokens to the running total.
pub async fn get_available_models(&self, provider: &str) -> Result<Vec<String>>

Retrieves all available models for a specific provider.
pub async fn generate_response_simple(
    &self,
    model: &str,
    prompt: &str,
) -> Result<String>

Generates a simple text response without streaming. Includes exponential-backoff retries for rate limits and transient errors.
pub async fn generate_response_with_retry(
    &self,
    model: &str,
    prompt: &str,
    max_retries: usize,
) -> Result<String>

Generates a response with a configurable retry count and exponential backoff.
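The retry-with-exponential-backoff behavior described above can be sketched as a small helper. This is an illustrative pattern, not the crate's implementation: the real methods are async and provider-specific, while this sketch uses a blocking sleep and a hypothetical retry_with_backoff function:

```rust
use std::thread::sleep;
use std::time::Duration;

// Retries a fallible operation up to max_retries extra times, doubling the
// delay after each failure (hypothetical helper, sketching the pattern).
fn retry_with_backoff<T, E>(
    max_retries: usize,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            Err(_) if attempt < max_retries => {
                // Delay doubles after each failure: 10ms, 20ms, 40ms, ...
                sleep(Duration::from_millis(10 * (1u64 << attempt)));
                attempt += 1;
            }
            Err(err) => return Err(err), // retries exhausted: surface the error
        }
    }
}

fn main() {
    let mut calls = 0;
    // An operation that fails twice (e.g. rate limited), then succeeds.
    let result: Result<&str, &str> = retry_with_backoff(3, || {
        calls += 1;
        if calls < 3 { Err("rate limited") } else { Ok("ok") }
    });
    assert_eq!(result, Ok("ok"));
    assert_eq!(calls, 3);
}
```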
pub async fn generate_response_stream_to_channel(
    &self,
    model: &str,
    prompt: &str,
    tx: UnboundedSender<String>,
) -> Result<()>

Generates a streaming response and sends chunks through the provided mpsc channel.
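The producer/consumer shape of chunk streaming can be sketched with std's mpsc standing in for tokio's UnboundedSender (the real method is async; stream_chunks below is a hypothetical illustration):

```rust
use std::sync::mpsc;
use std::thread;

// Sends response chunks over the channel as they become available,
// stopping early if the receiver has hung up.
fn stream_chunks(chunks: Vec<String>, tx: mpsc::Sender<String>) {
    for chunk in chunks {
        if tx.send(chunk).is_err() {
            break; // receiver dropped: no one is listening anymore
        }
    }
    // tx is dropped here, closing the channel and ending the consumer's loop.
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Producer thread plays the role of the streaming LLM response.
    let producer = thread::spawn(move || {
        stream_chunks(
            vec!["Hel".to_string(), "lo, ".to_string(), "world".to_string()],
            tx,
        );
    });

    // Consumer assembles chunks as they arrive; iteration ends when the
    // channel closes.
    let full: String = rx.iter().collect();
    producer.join().unwrap();
    assert_eq!(full, "Hello, world");
}
```

Passing the sender into the method lets the caller decide how to consume chunks (print incrementally, forward to a UI, accumulate) without the provider knowing about it.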
pub async fn generate_response_with_history(
    &self,
    model: &str,
    messages: Vec<ChatMessage>,
) -> Result<String>

Generates a response using the full conversation history in messages.
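A history is an ordered list of role-tagged messages. The Role and ChatMessage types below are hypothetical, minimal stand-ins (the real ChatMessage comes from the underlying genai client), sketching the kind of Vec this method consumes:

```rust
// Hypothetical stand-ins for illustration only; not the crate's real types.
#[derive(Debug, Clone, PartialEq)]
enum Role { System, User, Assistant }

#[derive(Debug, Clone)]
struct ChatMessage {
    role: Role,
    content: String,
}

// Earlier turns give the model context for answering the final user message.
fn build_history() -> Vec<ChatMessage> {
    vec![
        ChatMessage { role: Role::System, content: "You are a helpful assistant.".into() },
        ChatMessage { role: Role::User, content: "What is Rust?".into() },
        ChatMessage { role: Role::Assistant, content: "A systems programming language.".into() },
        ChatMessage { role: Role::User, content: "Is it memory safe?".into() },
    ]
}

fn main() {
    let history = build_history();
    assert_eq!(history.len(), 4);
    // The final message is the user turn the model should respond to.
    assert_eq!(history.last().unwrap().role, Role::User);
}
```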
pub async fn generate_response_with_options(
    &self,
    model: &str,
    prompt: &str,
    options: ChatOptions,
) -> Result<String>

Generates a response using custom chat options.
pub fn get_supported_providers() -> Vec<&'static str>

Gets a list of supported providers.
pub async fn get_available_providers(&self) -> Result<Vec<String>>

Gets all available providers.
pub async fn test_model(&self, model: &str) -> Result<bool>

Tests whether a model is available and working.
Trait Implementations

impl Clone for GenAIProvider

fn clone(&self) -> GenAIProvider

Returns a copy of the value. Since clones share the same underlying Arc, this is cheap.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.