pub struct AiClient {
    pub manifest: ProtocolManifest,
    pub transport: Arc<HttpTransport>,
    pub pipeline: Arc<Pipeline>,
    pub loader: Arc<ProtocolLoader>,
    /* private fields */
}
Unified AI client that works with any provider through protocol configuration.
Implementations
impl AiClient
pub async fn signals(&self) -> SignalsSnapshot
Snapshot current runtime signals (facts only) for application-layer orchestration.
pub async fn new(model: &str) -> Result<Self>
Create a new client for a specific model.
The model identifier should be in the format “provider/model-name” (e.g., “anthropic/claude-3-5-sonnet”).
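A minimal sketch of how a “provider/model-name” identifier splits into its two parts. `split_model_id` is not part of the crate's public API; it only illustrates the identifier format that `new` documents.

```rust
// Hypothetical helper, not a crate API: splits "provider/model-name"
// into (provider, model-name), rejecting malformed identifiers.
fn split_model_id(model: &str) -> Option<(&str, &str)> {
    let (provider, name) = model.split_once('/')?;
    if provider.is_empty() || name.is_empty() {
        return None;
    }
    Some((provider, name))
}

fn main() {
    assert_eq!(
        split_model_id("anthropic/claude-3-5-sonnet"),
        Some(("anthropic", "claude-3-5-sonnet"))
    );
    // No slash, or an empty half, is not a valid identifier.
    assert_eq!(split_model_id("no-slash"), None);
    assert_eq!(split_model_id("anthropic/"), None);
}
```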
pub fn chat(&self) -> ChatRequestBuilder<'_>
Create a chat request builder.
pub async fn chat_batch(
    &self,
    requests: Vec<ChatBatchRequest>,
    concurrency_limit: Option<usize>,
) -> Vec<Result<UnifiedResponse>>
Execute multiple chat requests concurrently with an optional concurrency limit.
Notes:
- Results preserve input order.
- Internally uses the same “streaming → UnifiedResponse” path for consistency.
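The two notes above describe a common pattern: run jobs with a bounded number in flight while keeping results in input order. The sketch below illustrates that pattern with std scoped threads only; it is not the crate's implementation, which is async and uses the streaming path.

```rust
use std::thread;

// Illustrative only: run `jobs` with at most `limit` concurrent workers,
// returning results in the same order as the input.
fn run_bounded<T: Send, R: Send>(
    jobs: Vec<T>,
    limit: usize,
    f: impl Fn(T) -> R + Sync,
) -> Vec<R> {
    let mut out = Vec::with_capacity(jobs.len());
    let mut iter = jobs.into_iter();
    loop {
        // Take the next chunk of at most `limit` jobs.
        let batch: Vec<T> = iter.by_ref().take(limit.max(1)).collect();
        if batch.is_empty() {
            break;
        }
        // Each chunk runs concurrently; chunks run sequentially, so at most
        // `limit` jobs are in flight. Joining handles in spawn order keeps
        // output order identical to input order.
        let f = &f;
        let results: Vec<R> = thread::scope(|s| {
            let handles: Vec<_> = batch
                .into_iter()
                .map(|job| s.spawn(move || f(job)))
                .collect();
            handles.into_iter().map(|h| h.join().unwrap()).collect()
        });
        out.extend(results);
    }
    out
}

fn main() {
    let doubled = run_bounded((1..=5).collect::<Vec<i32>>(), 2, |x| x * 2);
    assert_eq!(doubled, vec![2, 4, 6, 8, 10]);
}
```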
pub async fn chat_batch_smart(
    &self,
    requests: Vec<ChatBatchRequest>,
) -> Vec<Result<UnifiedResponse>>
Smart batch execution with a conservative, developer-friendly default heuristic.
- For very small batches, run sequentially to reduce overhead.
- For larger batches, run with a bounded concurrency.
You can override the chosen concurrency via the AI_LIB_BATCH_CONCURRENCY environment variable.
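A sketch of the kind of heuristic described above. The crate's actual thresholds are not documented here, so the constants below (sequential up to 2 requests, cap of 8) are assumptions; only the AI_LIB_BATCH_CONCURRENCY override is taken from the docs.

```rust
// Hypothetical heuristic, not the crate's actual logic: pick a concurrency
// level from the batch size, unless an explicit override is given.
fn choose_concurrency(batch_len: usize, env_override: Option<usize>) -> usize {
    // A positive override (e.g. parsed from AI_LIB_BATCH_CONCURRENCY) wins.
    if let Some(n) = env_override.filter(|&n| n > 0) {
        return n;
    }
    match batch_len {
        // Very small batches: run sequentially to reduce overhead.
        0..=2 => 1,
        // Larger batches: bounded concurrency (the cap of 8 is an assumption).
        n => n.min(8),
    }
}

fn main() {
    // In the real client the override would come from
    // std::env::var("AI_LIB_BATCH_CONCURRENCY") parsed as usize.
    assert_eq!(choose_concurrency(2, None), 1);
    assert_eq!(choose_concurrency(5, None), 5);
    assert_eq!(choose_concurrency(20, None), 8);
    assert_eq!(choose_concurrency(100, Some(4)), 4);
}
```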
pub async fn report_feedback(&self, event: FeedbackEvent) -> Result<()>
Report user feedback (optional). This delegates to the injected FeedbackSink.
pub async fn update_rate_limits(&self, headers: &HeaderMap)
Update rate limiter state from response headers using protocol-mapped names.
This method is public for testing purposes.
pub async fn call_model(
    &self,
    request: UnifiedRequest,
) -> Result<UnifiedResponse>
Unified entry point for calling a model. Handles text, streaming, and error fallback automatically.
pub async fn call_model_with_stats(
    &self,
    request: UnifiedRequest,
) -> Result<(UnifiedResponse, CallStats)>
Call a model and also return per-call stats (latency, retries, request ids, endpoint, usage, etc.).
This is intended for higher-level model selection and observability.
pub fn validate_request(&self, request: &ChatRequestBuilder<'_>) -> Result<()>
Validate that the request's required capabilities are supported.
Trait Implementations
impl EndpointExt for AiClient
async fn call_service(&self, service_name: &str) -> Result<Value>
Call a generic service by name.
async fn list_remote_models(&self) -> Result<Vec<String>>
List models available from the provider.