§ChatDelta AI Client Library
A Rust library for connecting to multiple AI APIs (OpenAI, Google Gemini, Anthropic Claude) with a unified interface. Supports parallel execution, retry logic, and configurable parameters.
§Example
use chatdelta::{AiClient, ClientConfig, create_client};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = ClientConfig::builder()
        .timeout(Duration::from_secs(30))
        .retries(3)
        .temperature(0.7)
        .max_tokens(1024)
        .build();

    let client = create_client("openai", "your-api-key", "gpt-4o", config)?;
    let response = client.send_prompt("Hello, world!").await?;
    println!("{}", response);
    Ok(())
}

Re-exports§
pub use http::HttpConfig;
pub use http::get_provider_client;
pub use http::SHARED_CLIENT;
pub use metrics::ClientMetrics;
pub use metrics::MetricsSnapshot;
pub use metrics::RequestTimer;
pub use utils::execute_with_retry;
pub use utils::RetryStrategy;
pub use clients::*;
pub use error::*;
Modules§
- clients
- AI client implementations
- error
- Error types for the ChatDelta AI client library
- http
- Optimized HTTP client configuration for AI providers
- metrics
- Performance metrics collection for ChatDelta clients
- middleware
- Common middleware for AI provider clients
- observability
- Observability pipeline for metrics export and structured logging
- utils
- Utility helpers, including the retry functions re-exported above
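The utils module re-exports execute_with_retry and RetryStrategy. As a rough illustration of the retry-with-backoff pattern only (the function name, closure signature, and backoff constants below are assumptions, not the crate's actual API), a fixed-attempt loop with exponential backoff might look like:

```rust
use std::thread::sleep;
use std::time::Duration;

// Hypothetical sketch of a retry helper; chatdelta's real
// `execute_with_retry` / `RetryStrategy` may differ.
fn retry_with_backoff<T, E>(
    mut attempts: u32,
    mut delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    loop {
        match op() {
            Ok(v) => return Ok(v),
            // Out of attempts: surface the last error.
            Err(e) if attempts <= 1 => return Err(e),
            Err(_) => {
                attempts -= 1;
                sleep(delay);
                delay *= 2; // exponential backoff between attempts
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    // Fails twice, then succeeds on the third call.
    let result = retry_with_backoff(3, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(42) }
    });
    assert_eq!(result, Ok(42));
    assert_eq!(calls, 3);
    println!("succeeded after {} calls", calls);
}
```

The `retries(3)` builder call in the example above configures this kind of behavior for transient API failures.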
Structs§
- AiResponse
- AI response with content and metadata
- ChatSession
- A session for managing multi-turn conversations with an AI client.
- ClientConfig
- Configuration for AI clients
- ClientConfigBuilder
- Builder for ClientConfig
- Conversation
- Represents a conversation with message history
- Message
- Represents a single message in a conversation
- ResponseMetadata
- Response metadata containing additional information from the AI provider
- StreamChunk
- Streaming response chunk
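Conversation and Message model multi-turn history. As a minimal self-contained sketch of that shape (the field names and string-based role representation here are assumptions for illustration, not the crate's definitions):

```rust
// Illustrative stand-ins for chatdelta's Message/Conversation types;
// the real structs may use different fields and role types.
#[derive(Debug, Clone, PartialEq)]
struct Message {
    role: String, // e.g. "user" or "assistant"
    content: String,
}

#[derive(Debug, Default)]
struct Conversation {
    messages: Vec<Message>,
}

impl Conversation {
    // Append one turn to the history.
    fn push(&mut self, role: &str, content: &str) {
        self.messages.push(Message {
            role: role.to_string(),
            content: content.to_string(),
        });
    }
}

fn main() {
    let mut convo = Conversation::default();
    convo.push("user", "Hello, world!");
    convo.push("assistant", "Hi! How can I help?");
    assert_eq!(convo.messages.len(), 2);
    assert_eq!(convo.messages[0].role, "user");
    println!("{} messages in history", convo.messages.len());
}
```

A ChatSession wraps this kind of accumulating history together with a client, so each new prompt is sent with the prior turns as context.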
Traits§
- AiClient
- Common trait implemented by all AI clients
Functions§
- create_client
- Factory function to create AI clients
- execute_parallel
- Execute multiple AI clients in parallel and return all results
- execute_parallel_conversation
- Execute multiple AI clients in parallel with a conversation and return all results
- generate_summary
- Generate a summary using one of the provided clients
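execute_parallel fans the same prompt out to several clients and collects every result. A conceptual sketch of that fan-out using plain threads (the trait and function below are simplified stand-ins; the crate's AiClient trait is async and execute_parallel's real signature may differ):

```rust
use std::thread;

// Simplified synchronous stand-in for the AiClient trait.
trait MockClient: Send + 'static {
    fn send_prompt(&self, prompt: &str) -> Result<String, String>;
}

// A fake provider that just echoes the prompt with its name.
struct EchoClient(&'static str);

impl MockClient for EchoClient {
    fn send_prompt(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("{}: {}", self.0, prompt))
    }
}

// Fan the prompt out to all clients in parallel; return every result,
// successes and failures alike, so callers can compare providers.
fn execute_parallel_sketch(
    clients: Vec<Box<dyn MockClient>>,
    prompt: &str,
) -> Vec<Result<String, String>> {
    let prompt = prompt.to_string();
    let handles: Vec<_> = clients
        .into_iter()
        .map(|c| {
            let p = prompt.clone();
            thread::spawn(move || c.send_prompt(&p))
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let clients: Vec<Box<dyn MockClient>> = vec![
        Box::new(EchoClient("openai")),
        Box::new(EchoClient("gemini")),
    ];
    let results = execute_parallel_sketch(clients, "Hello");
    assert_eq!(results.len(), 2);
    assert!(results.iter().all(|r| r.is_ok()));
    println!("{} results", results.len());
}
```

Returning all results rather than the first success is what makes generate_summary possible downstream: one client can be asked to summarize or reconcile the others' answers.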