# StakAI

A provider-agnostic Rust SDK for AI completions, with streaming support.

Built by Stakpak 🚀
## Features
- 🔌 Multi-provider: Unified interface for OpenAI, Anthropic, and Google Gemini
- 🌊 Streaming support: Real-time streaming responses with unified event types
- 🦀 Type-safe: Strong typing with compile-time guarantees
- ⚡ Zero-cost abstractions: Static dispatch for optimal performance
- 🎯 Ergonomic API: Builder patterns and intuitive interfaces
- 🔧 Custom headers: Full control over HTTP headers for all providers
- 🔄 Auto-registration: Providers automatically registered from environment variables
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
stakai = "0.1"  # crate name assumed from the project title
tokio = { version = "1", features = ["full"] }
```
### Basic Usage
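A minimal request/response round trip. This is a sketch: `Client`, `Request`, `Message`, and the `text()` accessor are assumed names based on the examples below, not necessarily the SDK's exact API.

```rust
use stakai::{Client, Message, Request}; // illustrative import paths

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Providers are auto-registered from environment variables (see Configuration).
    let client = Client::new();

    let request = Request::builder("gpt-4o") // model ID is a placeholder
        .add_message(Message::user("Say hello in one sentence."))
        .build();

    let response = client.generate(request).await?;
    println!("{}", response.text()); // accessor name assumed

    Ok(())
}
```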
### Streaming
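Streaming returns unified events as they arrive. A sketch along the same lines (the `stream` method and the `StreamEvent` variants are assumed names):

```rust
use futures::StreamExt;
use stakai::{Client, Message, Request, StreamEvent}; // illustrative import paths

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    let request = Request::builder("gpt-4o") // model ID is a placeholder
        .add_message(Message::user("Write a haiku about Rust."))
        .build();

    let mut stream = client.stream(request).await?;
    while let Some(event) = stream.next().await {
        match event? {
            StreamEvent::TextDelta(delta) => print!("{delta}"),
            _ => {} // tool calls, reasoning deltas, usage, etc.
        }
    }

    Ok(())
}
```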
## Supported Providers
| Provider | Status | Models | Features |
|---|---|---|---|
| OpenAI | ✅ | GPT-5, GPT-4.1, o3/o4, GPT-4o | Streaming, Tools, Vision, Reasoning |
| Anthropic | ✅ | Claude 4.5, Claude 4.1 | Streaming, Extended Thinking |
| Google Gemini | ✅ | Gemini 3, Gemini 2.5, Gemini 2.0 | Streaming, Vision, Agentic Coding |
See PROVIDERS.md for detailed provider documentation.
## Configuration

### Environment Variables
The SDK automatically registers providers when their API keys are found:
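For example (these are the conventional variable names for each provider; the exact names the SDK reads are assumed here):

- `OPENAI_API_KEY`: registers the OpenAI provider
- `ANTHROPIC_API_KEY`: registers the Anthropic provider
- `GEMINI_API_KEY`: registers the Google Gemini provider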
### Custom Configuration

A sketch; type names, argument values, and the API key shown are illustrative:

```rust
use stakai::{AnthropicConfig, AnthropicProvider, Client, ProviderRegistry}; // illustrative paths

// Custom provider configuration (type names and values are illustrative)
let config = AnthropicConfig::new("sk-ant-...")
    .with_version("2023-06-01")
    .with_beta_feature("some-beta-feature");
let provider = AnthropicProvider::new(config)?;

// Custom registry
let registry = ProviderRegistry::new()
    .register(provider);

let client = Client::builder()
    .with_registry(registry)
    .build();
```
### Custom Headers

```rust
use stakai::{Message, Request}; // illustrative paths

let request = Request::builder("gpt-4o")           // model ID is a placeholder
    .add_message(Message::user("Hello!"))
    .add_header("X-Request-Id", "req-123")         // header names/values are examples
    .add_header("X-Custom-Header", "custom-value")
    .build();
```
## Examples

### OpenAI

```rust
use stakai::{Client, Message, Request}; // illustrative paths

let client = Client::new();

let request = Request::builder("gpt-4o") // model ID is a placeholder
    .add_message(Message::user("Explain Rust ownership in one paragraph."))
    .temperature(0.7)
    .build();

let response = client.generate(request).await?;
println!("{}", response.text()); // accessor name assumed
```
### Anthropic (Claude)

```rust
use stakai::{Client, Message, Request}; // illustrative paths

let client = Client::new();

let request = Request::builder("claude-sonnet-4-5") // model ID is a placeholder
    .add_message(Message::user("Explain Rust lifetimes in one paragraph."))
    .max_tokens(1024) // Required for Anthropic
    .build();

let response = client.generate(request).await?;
println!("{}", response.text()); // accessor name assumed
```
### Google Gemini

```rust
use stakai::{Client, Message, Request}; // illustrative paths

let client = Client::new();

let request = Request::builder("gemini-2.5-flash") // model ID is a placeholder
    .add_message(Message::user("Explain Rust traits in one paragraph."))
    .temperature(0.7)
    .build();

let response = client.generate(request).await?;
println!("{}", response.text()); // accessor name assumed
```
### Multi-Provider Comparison

Ask the same question across providers by switching the model (a sketch using the same illustrative API as above):

```rust
use stakai::{Client, Message, Request}; // illustrative paths

let client = Client::new();
let question = "What is the meaning of life?";

// Try all providers (model IDs are placeholders)
for model in ["gpt-4o", "claude-sonnet-4-5", "gemini-2.5-flash"] {
    let request = Request::builder(model)
        .add_message(Message::user(question))
        .build();

    let response = client.generate(request).await?;
    println!("{model}: {}", response.text()); // accessor name assumed
}
```
## Provider Options (Reasoning/Thinking)

Provider-specific options follow the Vercel AI SDK pattern, using an enum:

```rust
use stakai::{Client, ProviderOptions, Request}; // illustrative paths

let client = Client::new();

// Anthropic extended thinking (variant and field names are illustrative)
let request = Request::new("claude-sonnet-4-5") // model ID is a placeholder
    .with_provider_options(ProviderOptions::Anthropic {
        thinking_budget_tokens: Some(8192),
    });

let response = client.generate(request).await?;

// Access reasoning output
if let Some(reasoning) = response.reasoning {
    println!("Reasoning: {reasoning}");
}
```
For OpenAI reasoning models:

```rust
use stakai::{ProviderOptions, Request}; // illustrative paths

let request = Request::new("o3") // model ID is a placeholder
    .with_provider_options(ProviderOptions::OpenAI {
        reasoning_effort: Some("high".into()), // field name is illustrative
    });
```
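The enum itself is not shown above; following the Vercel AI SDK pattern it might look roughly like this (a hypothetical shape matching the two snippets, not the SDK's actual definition):

```rust
// Hypothetical shape of the provider options enum; variant and field names are assumptions.
pub enum ProviderOptions {
    Anthropic { thinking_budget_tokens: Option<u32> },
    OpenAI { reasoning_effort: Option<String> },
}
```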
For streaming, reasoning is delivered via `ReasoningDelta` events:

```rust
use futures::StreamExt;
use stakai::StreamEvent; // illustrative path

// Variant names other than ReasoningDelta are assumptions.
while let Some(event) = stream.next().await {
    match event? {
        StreamEvent::ReasoningDelta(delta) => print!("{delta}"),
        StreamEvent::TextDelta(delta) => print!("{delta}"),
        _ => {}
    }
}
```
### Run Examples

```bash
# Set your API keys (variable names assumed; see Configuration above)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."

# Run examples (example names are placeholders)
cargo run --example basic
cargo run --example streaming
```
## Architecture

The SDK uses a provider-agnostic design:

```text
Client API → Provider Registry → Provider Trait → OpenAI/Anthropic/etc.
```
- Client: High-level ergonomic API
- Registry: Runtime provider management
- Provider Trait: Unified interface for all providers
- Providers: Concrete implementations (OpenAI, Anthropic, etc.)
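In code, the unified interface might look roughly like this (a sketch: the trait shape and type names are assumptions based on the description above, with the `async_trait` crate assumed for object safety):

```rust
use async_trait::async_trait;
use futures::stream::BoxStream;

// Stand-ins for the SDK's real request/response/event/error types.
pub struct Request;
pub struct Response;
pub struct StreamEvent;
pub struct Error;

// Object-safe so the registry can hold providers behind `dyn Provider`.
#[async_trait]
pub trait Provider: Send + Sync {
    /// One-shot completion.
    async fn generate(&self, request: Request) -> Result<Response, Error>;

    /// Streaming completion, yielding unified stream events.
    async fn stream(
        &self,
        request: Request,
    ) -> Result<BoxStream<'static, Result<StreamEvent, Error>>, Error>;
}
```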
## Roadmap

### Completed ✅
- OpenAI provider with full support
- Anthropic provider (Claude) with full support
- Google Gemini provider with full support
- Streaming support for all providers
- Tool/function calling for all providers
- Multi-modal support (vision/images)
- Extended thinking/reasoning support (Anthropic)
- Provider-specific options (Vercel AI SDK pattern)
- Custom headers support
- Auto-registration from environment
- Unified error handling
- Provider-specific configurations
### Planned 📋
- OpenAI reasoning effort support (o1/o3/o4 models)
- Gemini thinking config support
- Embeddings API
- Rate limiting & retries
- Response caching
- Prompt caching (Anthropic)
- Audio support
- Batch API support
- More providers (Cohere, Mistral, xAI, etc.)
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## About Stakpak

This SDK is built and maintained by Stakpak, a DevOps automation and infrastructure management platform.
## License
MIT OR Apache-2.0