# ChatDelta
A unified Rust library for connecting to multiple AI APIs (OpenAI, Google Gemini, Anthropic Claude) with a common interface. Supports parallel execution, conversations, streaming, retry logic, and extensive configuration options.
## Features
- **Unified Interface**: Single trait (`AiClient`) for all AI providers
- **Multiple Providers**: OpenAI ChatGPT, Google Gemini, Anthropic Claude
- **Conversation Support**: Multi-turn conversations with message history
- **Streaming Responses**: Real-time streaming support (where available)
- **Parallel Execution**: Run multiple AI models concurrently
- **Builder Pattern**: Fluent configuration with `ClientConfig::builder()`
- **Advanced Error Handling**: Detailed error types with specific categories
- **Retry Logic**: Configurable retry attempts with exponential backoff
- **Async/Await**: Built on Tokio for efficient async operations
- **Type Safety**: Full Rust type safety with comprehensive error handling
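The retry feature listed above uses exponential backoff: the delay doubles before each successive attempt. A minimal, self-contained sketch of the idea (illustrative only, not the crate's internal implementation):

```rust
use std::time::Duration;

/// Compute the delay before the nth retry attempt.
/// Illustrative sketch, not chatdelta's actual internals.
fn backoff_delay(attempt: u32, base: Duration) -> Duration {
    // Delay doubles with each attempt: base, 2*base, 4*base, ...
    base * 2u32.pow(attempt)
}

fn main() {
    let base = Duration::from_millis(500);
    for attempt in 0..4 {
        println!("retry {} after {:?}", attempt, backoff_delay(attempt, base));
    }
}
```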
## Quick Start
Add this to your `Cargo.toml`:

```toml
[dependencies]
chatdelta = "0.2"
tokio = { version = "1", features = ["full"] }
```
## Usage

### Basic Example

```rust
// NOTE: function and type names in these examples are reconstructed
// sketches; check the crate's API docs for exact signatures.
use chatdelta::{AiClient, ClientConfig, create_client};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = ClientConfig::builder()
        .timeout(Duration::from_secs(30))
        .retries(3)
        .build();

    // Provider name, API key, and model select the backend.
    let client = create_client("openai", "your-api-key", "gpt-4", config)?;
    let response = client.send_prompt("Explain Rust lifetimes in one sentence.").await?;
    println!("{}", response);
    Ok(())
}
```
### Conversation Example

```rust
// NOTE: the conversation types and methods shown are illustrative
// sketches; consult the crate docs for the exact conversation API.
use chatdelta::{AiClient, ClientConfig, create_client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = ClientConfig::builder()
        .system_message("You are a helpful assistant.")
        .build();
    let client = create_client("anthropic", "your-api-key", "claude-3-5-sonnet-20241022", config)?;

    // Multi-turn conversation: each reply is appended to the history.
    let mut history = Vec::new();
    history.push(("user".to_string(), "What is ownership in Rust?".to_string()));
    let reply = client.send_conversation(&history).await?;
    history.push(("assistant".to_string(), reply));

    history.push(("user".to_string(), "How does borrowing relate to it?".to_string()));
    let reply = client.send_conversation(&history).await?;
    println!("{}", reply);
    Ok(())
}
```
### Parallel Execution

```rust
// NOTE: `execute_parallel` and its return shape are reconstructed
// sketches; check the crate docs for the exact signature.
use chatdelta::{ClientConfig, create_client, execute_parallel};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = ClientConfig::builder().build();
    let clients = vec![
        create_client("openai", "openai-key", "gpt-4", config.clone())?,
        create_client("google", "google-key", "gemini-1.5-pro", config.clone())?,
        create_client("anthropic", "anthropic-key", "claude-3-5-sonnet-20241022", config)?,
    ];

    // Send the same prompt to every model concurrently.
    let results = execute_parallel(clients, "Summarize the borrow checker.").await;
    for (name, result) in results {
        match result {
            Ok(response) => println!("{name}: {response}"),
            Err(e) => eprintln!("{name} failed: {e}"),
        }
    }
    Ok(())
}
```
### Streaming Responses

```rust
// NOTE: the streaming method name is an illustrative sketch;
// check the crate docs for the exact streaming API.
use chatdelta::{ClientConfig, create_client};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = ClientConfig::builder().build();
    let client = create_client("openai", "your-api-key", "gpt-4", config)?;

    // Print chunks as they arrive instead of waiting for the full reply.
    let mut stream = client.send_prompt_streaming("Tell me a story.").await?;
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?);
    }
    Ok(())
}
```
## Supported Providers

### OpenAI

- Provider: `"openai"`, `"gpt"`, or `"chatgpt"`
- Models: `"gpt-4"`, `"gpt-3.5-turbo"`, etc.
- API Key: OpenAI API key

### Google Gemini

- Provider: `"google"` or `"gemini"`
- Models: `"gemini-1.5-pro"`, `"gemini-1.5-flash"`, etc.
- API Key: Google AI API key

### Anthropic Claude

- Provider: `"anthropic"` or `"claude"`
- Models: `"claude-3-5-sonnet-20241022"`, `"claude-3-haiku-20240307"`, etc.
- API Key: Anthropic API key
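Conceptually, the aliases above all normalize to one of three backends. A hypothetical, self-contained sketch of that mapping (the crate's real lookup may differ):

```rust
/// Map the accepted provider aliases to a canonical backend name.
/// Illustrative only; chatdelta's internal routing may differ.
fn canonical_provider(name: &str) -> Option<&'static str> {
    match name.to_ascii_lowercase().as_str() {
        "openai" | "gpt" | "chatgpt" => Some("openai"),
        "google" | "gemini" => Some("google"),
        "anthropic" | "claude" => Some("anthropic"),
        _ => None, // unknown providers are rejected
    }
}

fn main() {
    assert_eq!(canonical_provider("ChatGPT"), Some("openai"));
    assert_eq!(canonical_provider("gemini"), Some("google"));
    assert_eq!(canonical_provider("unknown"), None);
}
```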
## Configuration

`ClientConfig` supports extensive configuration through a builder pattern (argument values below are examples):

```rust
use chatdelta::ClientConfig;
use std::time::Duration;

let config = ClientConfig::builder()
    .timeout(Duration::from_secs(30))     // Request timeout
    .retries(3)                           // Number of retry attempts
    .temperature(0.7)                     // Response creativity (0.0-2.0)
    .max_tokens(1024)                     // Maximum response length
    .top_p(0.9)                           // Top-p sampling (0.0-1.0)
    .frequency_penalty(0.0)               // Frequency penalty (-2.0 to 2.0)
    .presence_penalty(0.0)                // Presence penalty (-2.0 to 2.0)
    .system_message("You are concise.")   // System message for conversation context
    .build();
```
### Configuration Options

| Parameter | Description | Default | Supported By |
|---|---|---|---|
| `timeout` | HTTP request timeout | 30 seconds | All |
| `retries` | Number of retry attempts | 0 | All |
| `temperature` | Response creativity (0.0-2.0) | None | All |
| `max_tokens` | Maximum response length | 1024 | All |
| `top_p` | Top-p sampling (0.0-1.0) | None | OpenAI |
| `frequency_penalty` | Frequency penalty (-2.0 to 2.0) | None | OpenAI |
| `presence_penalty` | Presence penalty (-2.0 to 2.0) | None | OpenAI |
| `system_message` | System message for conversations | None | All |
## Error Handling

The library provides comprehensive error handling through the `ClientError` enum with detailed error types:

```rust
// NOTE: variant names below are illustrative; see the crate docs
// for the full `ClientError` definition.
use chatdelta::ClientError;

match result {
    Ok(response) => println!("{}", response),
    Err(ClientError::Network(msg)) => eprintln!("Network error: {}", msg),
    Err(ClientError::Authentication(msg)) => eprintln!("Auth error: {}", msg),
    Err(ClientError::Api(msg)) => eprintln!("API error: {}", msg),
    Err(e) => eprintln!("Other error: {}", e),
}
```
### Error Categories

- **Network**: Connection issues, timeouts, DNS resolution failures
- **API**: Rate limits, quota exceeded, invalid models, server errors
- **Authentication**: Invalid API keys, expired tokens, insufficient permissions
- **Configuration**: Invalid parameters, missing required fields
- **Parse**: JSON parsing errors, missing response fields
- **Stream**: Streaming-specific errors, connection lost, invalid chunks
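One practical use of these categories is deciding which failures are worth retrying. A hypothetical sketch (the enum below mirrors the categories above, not necessarily the crate's exact `ClientError` shape):

```rust
/// Simplified stand-in for the error categories listed above.
#[derive(Debug)]
enum ErrorKind {
    Network,
    Api,
    Authentication,
    Configuration,
    Parse,
    Stream,
}

/// Transient failures (network hiccups, rate limits, dropped streams)
/// are usually safe to retry; the rest need a code or config change.
fn is_retryable(kind: &ErrorKind) -> bool {
    matches!(kind, ErrorKind::Network | ErrorKind::Api | ErrorKind::Stream)
}

fn main() {
    assert!(is_retryable(&ErrorKind::Network));
    assert!(!is_retryable(&ErrorKind::Authentication));
    println!("ok");
}
```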
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Contributing

We welcome contributions! To get started, clone the repository and install the Rust toolchain. Before opening a pull request, run the following commands:

```sh
# Check formatting
cargo fmt --all -- --check

# Run the linter
cargo clippy --all-targets

# Execute tests
cargo test
```
This project uses GitHub Actions to run the same checks automatically on every pull request.