ai-lib 🦀✨
Unified, Reliable & Performant Multi-Provider AI SDK for Rust
A production-grade, provider-agnostic SDK that gives you one coherent Rust API for 17+ AI platforms (OpenAI, Groq, Anthropic, Gemini, Mistral, Cohere, Azure OpenAI, Ollama, DeepSeek, Qwen, Wenxin, Hunyuan, iFlytek Spark, Kimi, HuggingFace, TogetherAI, xAI Grok, etc.).
Eliminate fragmented auth flows, streaming formats, error semantics, model naming quirks, and inconsistent function calling. Scale from a one-line script to a multi-region, multi-vendor system without rewriting integration code.
🚀 Elevator Pitch (TL;DR)
ai-lib unifies:
- Chat & multimodal requests across heterogeneous model providers
- Streaming (SSE + emulated) with consistent deltas
- Function calling semantics
- Batch workflows
- Reliability primitives (retry, backoff, timeout, proxy, health, load strategies)
- Model selection (cost / performance / health / weighted)
- Observability hooks
- Progressive configuration (env → builder → explicit injection → custom transport)
You focus on product logic; ai-lib handles infrastructure friction.
📋 Table of Contents
- When to Use / When Not To
- Architecture Overview
- Progressive Complexity Ladder
- Quick Start
- Core Concepts
- Key Feature Clusters
- Code Examples (Essentials)
- Configuration & Diagnostics
- Reliability & Resilience
- Model Management & Load Balancing
- Observability & Metrics
- Security & Privacy
- Supported Providers
- Examples Catalog
- Performance Characteristics
- Roadmap
- FAQ
- Contributing
- License & Citation
- Why Choose ai-lib?
🎯 When to Use / When Not To
Scenario | ✅ Use ai-lib | ⚠️ Probably Not |
---|---|---|
Rapidly switch between AI providers | ✅ | |
Unified streaming output | ✅ | |
Production reliability (retry, proxy, timeout) | ✅ | |
Load balancing / cost / performance strategies | ✅ | |
Hybrid local (Ollama) + cloud vendors | ✅ | |
One-off script calling only OpenAI | | ⚠️ Use the official SDK |
Deep vendor-exclusive beta APIs | | ⚠️ Use the vendor SDK directly |
🏗️ Architecture Overview
┌─────────────────────────────────────────────────────────────┐
│                      Your Application                       │
└───────────────▲──────────────────────────▲─────────────────┘
                │                          │
         High-Level API             Advanced Controls
                │                          │
      AiClient / Builder  ↔  Model Mgmt / Metrics / Batch / Tools
                │
      ┌───────── Unified Abstraction Layer ──────────────┐
      │ Provider Adapters (Hybrid: Config + Independent) │
      └──────┬────────────┬────────────┬─────────────────┘
             │            │            │
     OpenAI / Groq   Gemini / Mistral   Ollama / Regional / Others
             │
     Transport (HTTP + Streaming + Retry + Proxy + Timeout)
             │
     Common Types (Request / Messages / Content / Tools / Errors)
Design principles:
- Hybrid adapter model (config-driven where possible, custom where necessary)
- Strict core types = consistent ergonomics
- Extensible: plug custom transport & metrics without forking
- Progressive layering: start simple, scale safely
🪜 Progressive Complexity Ladder
Level | Intent | API Surface |
---|---|---|
L1 | One-off / scripting | AiClient::quick_chat_text() |
L2 | Basic integration | AiClient::new(provider) |
L3 | Controlled runtime | AiClientBuilder (timeout, proxy, base URL) |
L4 | Reliability & scale | Connection pool, batch, streaming, retries |
L5 | Optimization | Model arrays, selection strategies, metrics |
L6 | Extension | Custom transport, custom metrics, instrumentation |
⚡️ Quick Start
Install
Add to Cargo.toml (dependency keys reconstructed from the versions above; check crates.io for current releases):

[dependencies]
ai-lib = "0.2.12"
tokio = { version = "1", features = ["full"] }
futures = "0.3"
Fastest Possible
A minimal sketch (the exact quick_chat_text signature may differ; see the crate docs):

use ai_lib::{AiClient, Provider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let text = AiClient::quick_chat_text(Provider::Groq, "Hello, ai-lib!").await?;
    println!("{text}");
    Ok(())
}
Standard Chat
A sketch using the unified types from Core Concepts (constructor shapes are assumptions):

use ai_lib::{AiClient, ChatCompletionRequest, Message, Provider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = AiClient::new(Provider::Groq)?;
    let request = ChatCompletionRequest::new(
        "model-name".into(), // placeholder model id
        vec![Message::user("Say hi in one word.")],
    );
    let response = client.chat_completion(request).await?;
    println!("{:?}", response);
    Ok(())
}
Streaming
A sketch of the unified streaming loop (chunk/delta shapes are assumptions):

use futures::StreamExt;

let mut stream = client.chat_completion_stream(request).await?;
while let Some(chunk) = stream.next().await {
    let chunk = chunk?;
    // append the chunk's delta text to your output buffer here
}
🧠 Core Concepts
Concept | Purpose |
---|---|
Provider | Enumerates all supported vendors |
AiClient / Builder | Main entrypoint; configuration envelope |
ChatCompletionRequest | Unified request payload |
Message / Content | Text / Image / Audio / (future structured) |
Function / Tool | Unified function calling semantics |
Streaming Event | Provider-normalized delta stream |
ModelManager / ModelArray | Strategy-driven model orchestration |
ConnectionOptions | Explicit runtime overrides |
Metrics Trait | Custom observability integration |
Transport | Injectable HTTP + streaming implementation |
💡 Key Feature Clusters
- Unified provider abstraction (no per-vendor branching)
- Universal streaming (SSE + fallback emulation)
- Multimodal primitives (text/image/audio)
- Function calling (consistent tool schema)
- Batch processing (sequential / bounded concurrency / smart strategy)
- Reliability: retry, error classification, timeout, proxy, pool
- Model management: performance / cost / health / round-robin / weighted
- Observability: pluggable metrics & timing
- Security: isolation, no default content logging
- Extensibility: custom transport, metrics, strategy injection
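To make the streaming unification concrete, here is an illustrative sketch in plain Rust (not ai-lib's internal code) of the kind of SSE frame filtering a unified streaming layer performs: `data:` lines carry JSON payloads, comment/keep-alive lines are skipped, and a `[DONE]` sentinel ends the stream.

```rust
// Hypothetical helper: extract the payload from one SSE line, if any.
fn parse_sse_line(line: &str) -> Option<&str> {
    let payload = line.strip_prefix("data:")?.trim();
    if payload == "[DONE]" || payload.is_empty() {
        return None; // end-of-stream sentinel or empty keep-alive
    }
    Some(payload)
}

fn main() {
    let frames = [
        "data: {\"delta\":\"Hel\"}",
        ": keep-alive comment", // SSE comments start with ':'
        "data: {\"delta\":\"lo\"}",
        "data: [DONE]",
    ];
    let payloads: Vec<&str> = frames.iter().filter_map(|l| parse_sse_line(l)).collect();
    assert_eq!(payloads, vec!["{\"delta\":\"Hel\"}", "{\"delta\":\"lo\"}"]);
    println!("parsed {} delta frames", payloads.len());
}
```

Real adapters additionally normalize each provider's JSON payload into a common delta event type.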
🧪 Essential Examples (Condensed)
Provider Switching
Provider variant names below are illustrative; see the Provider enum for the full list.

let groq = AiClient::new(Provider::Groq)?;
let gemini = AiClient::new(Provider::Gemini)?;
let claude = AiClient::new(Provider::Anthropic)?;
Function Calling
A sketch (Tool::new_json and the builder methods come from the original snippet; argument shapes and the serde_json usage are assumptions):

use ai_lib::{FunctionCallPolicy, Tool};

let tool = Tool::new_json(
    "get_weather",
    "Return the current weather for a city",
    serde_json::json!({
        "type": "object",
        "properties": { "city": { "type": "string" } }
    }),
);
let req = ChatCompletionRequest::new(model, messages)
    .with_functions(vec![tool])
    .with_function_call(FunctionCallPolicy::Auto);
Batch
Argument lists are sketched; check the crate docs for exact signatures:

let responses = client.chat_completion_batch(requests).await?;   // sequential / bounded concurrency
let smart = client.chat_completion_batch_smart(requests).await?; // adaptive strategy
Multimodal (Image)
Constructor names are assumptions based on the Message / Content types above:

let msg = Message::user(Content::image_url("https://example.com/photo.jpg"));
Retry Awareness
Built-in retries handle transient failures automatically; error.is_retryable() (see FAQ) lets you layer custom handling:

match client.chat_completion(request).await {
    Ok(resp) => println!("{:?}", resp),
    Err(e) if e.is_retryable() => { /* transient: consider failover to another provider */ }
    Err(e) => return Err(e.into()), // permanent: surface immediately
}
🔧 Configuration & Diagnostics
Environment Variables (Convention-Based)

# API keys (one per provider; the names below are illustrative examples of the convention)
export GROQ_API_KEY=...
export OPENAI_API_KEY=...
# Optional base URLs (per provider)
# Proxy
# Global timeout (seconds)
Explicit Overrides
A sketch using ConnectionOptions (with_options comes from the original snippet; field names are assumptions):

use ai_lib::{AiClient, ConnectionOptions, Provider};

let client = AiClient::with_options(
    Provider::Groq,
    ConnectionOptions {
        base_url: Some("https://gateway.example.com".into()),
        timeout: Some(std::time::Duration::from_secs(30)),
        disable_proxy: true,
        ..Default::default()
    },
)?;

Config Validation
🛡️ Reliability & Resilience
Aspect | Capability |
---|---|
Retry | Exponential backoff + classification |
Errors | Distinguishes transient vs permanent |
Timeout | Per-request configurable |
Proxy | Global / per-connection / disable |
Connection Pool | Tunable size + lifetime |
Health | Endpoint state + strategy-based avoidance |
Load Strategies | Round-robin / weighted / health / performance / cost |
Fallback | Multi-provider arrays / manual layering |
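The retry and backoff behavior in the table can be pictured with a plain-Rust sketch (CallError, backoff_delay, and retry are hypothetical names, not ai-lib API): transient errors trigger capped exponential backoff, permanent errors fail fast.

```rust
use std::time::Duration;

// Hypothetical transient-vs-permanent classification (ai-lib has its own error type).
#[derive(Debug, PartialEq)]
enum CallError { Transient, Permanent }

// Exponential backoff: base * 2^attempt, capped at cap_ms.
fn backoff_delay(attempt: u32, base_ms: u64, cap_ms: u64) -> Duration {
    let exp = base_ms.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(exp.min(cap_ms))
}

fn retry<F>(mut call: F, max_attempts: u32) -> Result<String, CallError>
where F: FnMut() -> Result<String, CallError> {
    for attempt in 0..max_attempts {
        match call() {
            Ok(v) => return Ok(v),
            Err(CallError::Permanent) => return Err(CallError::Permanent), // fail fast
            Err(CallError::Transient) => {
                let _delay = backoff_delay(attempt, 100, 5_000);
                // Real code would sleep for `_delay` here (tokio::time::sleep in async).
            }
        }
    }
    Err(CallError::Transient)
}

fn main() {
    let mut fails = 2; // succeed on the third attempt
    let result = retry(|| {
        if fails > 0 { fails -= 1; Err(CallError::Transient) } else { Ok("ok".into()) }
    }, 5);
    assert_eq!(result, Ok("ok".to_string()));
    assert_eq!(backoff_delay(3, 100, 5_000), Duration::from_millis(800));
}
```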
🧠 Model Management & Load Balancing
A sketch (ModelManager / ModelArray come from the original snippet; strategy variants and constructor arguments are assumptions):

use ai_lib::{ModelArray, ModelManager};

let mut manager = ModelManager::new()
    .with_strategy(/* e.g. cost- or performance-based selection */);

let mut array = ModelArray::new()
    .with_strategy(/* e.g. weighted round-robin */);
array.add_endpoint(/* endpoint descriptor */);
Supports:
- Performance tiers
- Cost comparison
- Health-based filtering
- Weighted distributions
- Future-ready for adaptive strategies
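As an illustration of how a weighted strategy distributes traffic, here is a plain-Rust sketch (Endpoint and pick are hypothetical names, not ai-lib's implementation) of deterministic weighted round-robin: a rolling counter falls into cumulative weight brackets.

```rust
// Hypothetical endpoint descriptor with a routing weight.
struct Endpoint { name: &'static str, weight: u32 }

// Pick the endpoint whose cumulative-weight bracket contains `tick % total`.
fn pick<'a>(endpoints: &'a [Endpoint], tick: u32) -> &'a str {
    let total: u32 = endpoints.iter().map(|e| e.weight).sum();
    let mut slot = tick % total; // assumes a non-empty list with total > 0
    for e in endpoints {
        if slot < e.weight { return e.name; }
        slot -= e.weight;
    }
    unreachable!("total covers all slots");
}

fn main() {
    let eps = [
        Endpoint { name: "us-east", weight: 3 },
        Endpoint { name: "eu-west", weight: 1 },
    ];
    // 3-of-4 requests go to the heavier endpoint.
    let picks: Vec<&str> = (0..4).map(|t| pick(&eps, t)).collect();
    assert_eq!(picks, vec!["us-east", "us-east", "us-east", "eu-west"]);
}
```

Health- and cost-based strategies follow the same shape, with weights derived from live metrics instead of static config.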
📊 Observability & Metrics
Implement the Metrics trait to bridge Prometheus, OpenTelemetry, StatsD, etc. A sketch (new_with_metrics comes from the original snippet; the trait surface is assumed):

struct MyMetrics;
// impl ai_lib::Metrics for MyMetrics { /* record counters & timings */ }

let client = AiClient::new_with_metrics(Provider::Groq, std::sync::Arc::new(MyMetrics))?;
🔒 Security & Privacy
Feature | Description |
---|---|
No implicit logging | Requests/responses not logged by default |
Key isolation | API keys sourced from env or explicit struct |
Proxy control | Allow / disable / override |
TLS | Standard HTTPS with validation |
Auditing hooks | Use metrics layer for compliance audit counters |
Local-first | Ollama integration for sensitive contexts |
🌐 Supported Providers (Snapshot)
Provider | Adapter Type | Streaming | Notes |
---|---|---|---|
Groq | config-driven | ✅ | Ultra-low latency |
OpenAI | independent | ✅ | Function calling |
Anthropic (Claude) | config-driven | ✅ | High quality |
Google Gemini | independent | 🔄 (unified) | Multimodal focus |
Mistral | independent | ✅ | European models |
Cohere | independent | ✅ | RAG optimized |
HuggingFace | config-driven | ✅ | Open models |
TogetherAI | config-driven | ✅ | Cost-efficient |
DeepSeek | config-driven | ✅ | Reasoning models |
Qwen | config-driven | ✅ | Chinese ecosystem |
Baidu Wenxin | config-driven | ✅ | Enterprise CN |
Tencent Hunyuan | config-driven | ✅ | Cloud integration |
iFlytek Spark | config-driven | ✅ | Voice + multimodal |
Moonshot Kimi | config-driven | ✅ | Long context |
Azure OpenAI | config-driven | ✅ | Enterprise compliance |
Ollama | config-driven | ✅ | Local / airgapped |
xAI Grok | config-driven | ✅ | Real-time oriented |
(Streaming column: 🔄 = unified adaptation / fallback)
🗂️ Examples Catalog (in /examples)
Category | Examples |
---|---|
Getting Started | quickstart / basic_usage / builder_pattern |
Configuration | explicit_config / proxy_example / custom_transport_config |
Streaming | test_streaming / cohere_stream |
Reliability | custom_transport |
Multi-provider | config_driven_example / model_override_demo |
Model Mgmt | model_management |
Batch | batch_processing |
Function Calling | function_call_openai / function_call_exec |
Multimodal | multimodal_example |
Architecture Demo | architecture_progress |
Specialized | ascii_horse / hello_groq |
📈 Performance (Indicative & Methodology-Based)
The figures below describe the SDK layer overhead of ai-lib itself, not model inference time.
They are representative (not guarantees) and come from controlled benchmarks using a mock transport unless otherwise noted.
Metric | Observed Range (Typical) | Precise Definition | Measurement Context |
---|---|---|---|
SDK overhead per request | ~0.6–0.9 ms | Time from building a ChatCompletionRequest to handing off the HTTP request | Release build, mock transport, 256B prompt, single thread warm |
Streaming added latency | <2 ms | Additional latency introduced by ai-lib's streaming parsing vs direct reqwest SSE | 500 runs, Groq llama3-8b, averaged |
Baseline memory footprint | ~1.7 MB | Resident set after initializing one AiClient + connection pool | Linux (x86_64), pool=16, no batching |
Sustainable mock throughput | 11K–13K req/s | Completed request futures per second (short prompt) | Mock transport, concurrency=512, pool=32 |
Real provider short-prompt throughput | Provider-bound | End-to-end including network + provider throttling | Heavily dependent on vendor limits |
Streaming chunk parse cost | ~8–15 µs / chunk | Parsing + dispatch of one SSE delta | Synthetic 30–50 token streams |
Batch concurrency scaling | Near-linear to ~512 tasks | Degradation point before scheduling contention | Tokio multi-threaded runtime |
🔬 Methodology
- Hardware: AMD 7950X (32 threads), 64GB RAM, NVMe SSD, Linux 6.x
- Toolchain: Rust 1.79 (stable), --release, LTO=thin, default allocator
- Isolation: Mock transport used to exclude network + provider inference variance
- Warm-up: Discard first 200 iterations (cache & allocator stabilization)
- Timing: std::time::Instant for macro throughput; Criterion for micro overhead
- Streaming: Synthetic SSE frames with realistic token cadence (8–25 ms)
- Provider tests: Treated as illustrative only (subject to rate limiting & regional latency)
🧪 Reproducing (Once Bench Suite Is Added)
The concrete cargo bench invocations will ship with the bench suite; the planned targets are:
# Micro overhead (request build + serialize)
# Mock high-concurrency throughput
# Streaming parsing cost
Planned benchmark layout (forthcoming):
/bench
micro/
bench_overhead.rs
bench_stream_parse.rs
macro/
mock_throughput.rs
streaming_latency.rs
provider/ (optional gated)
groq_latency.rs
📖 Interpretation Guidelines
- "SDK overhead" = ai-lib internal processing (type construction, serialization, dispatch prep); it excludes remote model latency.
- "Throughput" figures assume fast-returning mock responses; real-world cloud throughput is usually constrained by provider rate limits.
- Memory numbers are resident set snapshots; production systems with logging/metrics may add overhead.
- Results will vary on different hardware, OS schedulers, allocator strategies, and runtime tuning.
⚠️ Disclaimers
These metrics are indicative, not contractual guarantees. Always benchmark with your workload, prompt sizes, model mix, and deployment environment.
A reproducible benchmark harness and JSON snapshot baselines will be versioned in the repository to track regressions.
💡 Optimization Tips
- Use .with_pool_config(size, idle_timeout) for high-throughput scenarios
- Prefer streaming for low-latency UX
- Batch related short prompts with concurrency limits
- Avoid redundant client instantiation (reuse clients)
- Consider provider-specific rate limits and regional latency
🗺️ Roadmap (Planned Sequence)
Stage | Planned Feature |
---|---|
1 | Advanced backpressure & adaptive rate coordination |
2 | Built-in caching layer (request/result stratified) |
3 | Live configuration hot-reload |
4 | Plugin / interceptor system |
5 | GraphQL surface |
6 | WebSocket native streaming |
7 | Enhanced security (key rotation, KMS integration) |
8 | Public benchmark harness + nightly regression checks |
🧪 Performance Monitoring Roadmap
Public benchmark harness + nightly (mock-only) regression checks are planned to:
- Detect performance regressions early
- Provide historical trend data
- Allow contributors to validate impact of PRs
❓ FAQ
Question | Answer |
---|---|
How do I A/B test providers? | Use ModelArray with a load strategy |
Is retry built-in? | Automatic classification + backoff; you can layer custom loops |
Can I disable the proxy? | .without_proxy() or disable_proxy = true in options |
Can I mock for tests? | Inject a custom transport |
Do you log PII? | No logging of content by default |
Function calling differences? | Normalized via Tool + FunctionCallPolicy |
Local inference supported? | Yes, via Ollama (self-hosted) |
How to know if an error is retryable? | error.is_retryable() helper |
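The "inject a custom transport" answer follows the standard trait-injection pattern. A minimal plain-Rust sketch of the idea (the real ai-lib Transport trait and Client types differ; all names here are hypothetical):

```rust
// Hypothetical transport abstraction: anything that can send a request body.
trait Transport {
    fn send(&self, body: &str) -> Result<String, String>;
}

// A mock that never touches the network, for deterministic tests.
struct MockTransport;
impl Transport for MockTransport {
    fn send(&self, _body: &str) -> Result<String, String> {
        Ok("{\"content\":\"mocked\"}".to_string())
    }
}

// The client is generic over its transport, so tests swap in the mock.
struct Client<T: Transport> { transport: T }

impl<T: Transport> Client<T> {
    fn chat(&self, prompt: &str) -> Result<String, String> {
        self.transport.send(prompt)
    }
}

fn main() {
    let client = Client { transport: MockTransport };
    assert_eq!(client.chat("hi").unwrap(), "{\"content\":\"mocked\"}");
    println!("mock transport ok");
}
```

In production code the same generic slot holds the real HTTP + streaming transport, so application logic is identical in tests and deployment.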
🤝 Contributing
- Fork & clone repo
- Create a feature branch:
git checkout -b feature/your-feature
- Run tests:
cargo test
- Add example if introducing new capability
- Follow adapter layering (prefer config-driven before custom)
- Open PR with rationale + benchmarks (if performance-affecting)
We value: clarity, test coverage, minimal surface area creep, incremental composability.
📄 License
Dual licensed under either:
- MIT
- Apache License (Version 2.0)
You may choose the license that best fits your project.
📚 Citation
🌟 Why Choose ai-lib?
Dimension | Value |
---|---|
Engineering Velocity | One abstraction = fewer bespoke adapters |
Risk Mitigation | Multi-provider fallback & health routing |
Operational Robustness | Retry, pooling, diagnostics, metrics |
Cost Control | Cost/performance strategy knobs |
Extensibility | Pluggable transport & metrics |
Future-Proofing | Clear roadmap + hybrid adapter pattern |
Ergonomics | Progressive APIβno premature complexity |
Performance | Minimal latency & memory overhead |