Turbine LLM
One interface, all LLMs - A unified Rust library for calling multiple LLM providers with growing model support.
🚀 Switch between OpenAI, Anthropic, Gemini, and Groq with minimal code changes. Perfect for building AI applications that need provider flexibility.
Sponsored by Renaiss AI
Turbine LLM is developed and maintained with support from Renaiss AI, bridging the gap between AI potential and business reality.
Features
- Unified API: Single interface for multiple LLM providers
- Simple & Clean: Minimal, straightforward code - no complexity
- Text & JSON Output: Support for both text and structured JSON responses
- Async/Await: Built with Tokio for high-performance async operations
- Type-Safe: Full Rust type safety with proper error handling
- Growing Support: New providers and models added regularly
Why Turbine?
- Provider Independence: Easily switch providers or use multiple simultaneously
- Consistent Interface: Same code works across all providers
- Production Ready: Proper error handling, async support, comprehensive docs
- Actively Maintained: Regular updates with new models and providers
Supported Providers
Currently integrated:
- ✅ OpenAI (GPT-4, GPT-3.5, etc.)
- ✅ Anthropic (Claude 3.5 Sonnet, Haiku, etc.)
- ✅ Google Gemini (Gemini 2.0, 1.5, etc.)
- ✅ Groq (Llama, Mixtral, etc.)
Coming soon:
- 🔜 Cohere
- 🔜 Mistral AI
- 🔜 Perplexity
New providers and models added regularly. Check CHANGELOG.md for updates.
Installation
Add this to your `Cargo.toml` (the crate name below is assumed from the project title and may differ on crates.io):

```toml
[dependencies]
turbine-llm = "0.2"
tokio = { version = "1", features = ["full"] }
```
Quick Start
1. Simplified API (Recommended) 🚀
The easiest way to get started - just pass a model string. A minimal sketch, assuming the crate is imported as `turbine_llm` and the response exposes a `text` field:

```rust
use turbine_llm::TurbineClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The provider is detected automatically from the model string.
    let client = TurbineClient::from_model("gpt-4o-mini")?;
    let response = client.send("Hello, world!").await?;
    println!("{}", response.text);
    Ok(())
}
```
Supported model string formats:
- Explicit provider: `"openai/gpt-4o-mini"`, `"google/gemini-flash"`, `"anthropic/claude-3-5-sonnet"`
- Inferred from name: `"gpt-4o"`, `"claude-3-5-sonnet"`, `"gemini-flash"`, `"llama-3.3-70b"`
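The inference rule above can be pictured with a small helper (hypothetical illustration, not part of the crate's API):

```rust
/// Infer a provider from a model string, mirroring the rules above:
/// an explicit "provider/model" prefix wins; otherwise the model
/// name's prefix decides. Illustrative sketch only.
fn infer_provider(model: &str) -> Option<&'static str> {
    // Explicit "provider/model" form takes precedence.
    if let Some((provider, _model)) = model.split_once('/') {
        return match provider {
            "openai" => Some("openai"),
            "anthropic" => Some("anthropic"),
            "google" => Some("google"),
            "groq" => Some("groq"),
            _ => None,
        };
    }
    // Otherwise infer from the model name itself.
    match model {
        m if m.starts_with("gpt-") => Some("openai"),
        m if m.starts_with("claude-") => Some("anthropic"),
        m if m.starts_with("gemini-") => Some("google"),
        m if m.starts_with("llama-") || m.starts_with("mixtral-") => Some("groq"),
        _ => None,
    }
}
```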
If the API key isn't in your environment, you'll be prompted to enter it interactively.
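The fallback behavior can be sketched with the standard library alone (illustrative only; Turbine's actual prompt and variable names may differ):

```rust
use std::env;
use std::io::{self, BufRead, Write};

/// Look up an API key in the environment, falling back to an
/// interactive prompt on stdin. Illustrative sketch, not Turbine's
/// actual implementation.
fn resolve_api_key(var: &str) -> io::Result<String> {
    // Prefer the environment variable when it is set.
    if let Ok(key) = env::var(var) {
        return Ok(key);
    }
    // Otherwise prompt the user and read one line from stdin.
    print!("Enter value for {var}: ");
    io::stdout().flush()?;
    let mut key = String::new();
    io::stdin().lock().read_line(&mut key)?;
    Ok(key.trim().to_string())
}
```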
With a system prompt (argument order assumed: system prompt first, then the user message):

```rust
let response = client
    .send_with_system("You are a helpful assistant.", "Hello!")
    .await?;
```
2. Traditional API (Full Control)
For advanced use cases, use the traditional builder pattern. A sketch under the same naming assumptions as above:

```rust
use turbine_llm::{LLMRequest, Message, Provider, TurbineClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = TurbineClient::new(Provider::OpenAI)?;
    let request = LLMRequest::new("gpt-4o-mini")
        .with_message(Message::user("Hello!"))
        .with_max_tokens(256);
    let response = client.send_request(request).await?;
    println!("{}", response.text);
    Ok(())
}
```
3. JSON Output
Request structured JSON from any provider. A sketch assuming an `OutputFormat` enum with a `Json` variant:

```rust
use turbine_llm::{LLMRequest, Message, OutputFormat, Provider, TurbineClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = TurbineClient::new(Provider::OpenAI)?;
    let request = LLMRequest::new("gpt-4o-mini")
        .with_message(Message::user("List three primary colors as a JSON array."))
        .with_output_format(OutputFormat::Json);
    let response = client.send_request(request).await?;
    println!("{}", response.text);
    Ok(())
}
```
4. Multi-turn Conversations
Build up a conversation by passing the prior turns (sketch; message contents are illustrative):

```rust
let request = LLMRequest::new("gpt-4o-mini")
    .with_messages(vec![
        Message::user("What is Rust?"),
        Message::assistant("Rust is a systems programming language."),
        Message::user("What makes it memory-safe?"),
    ]);
```
API Reference
TurbineClient
Simplified Constructor (Recommended)
Sketched usage (arguments shown are illustrative):

```rust
// Automatic provider detection from the model string
let client = TurbineClient::from_model("gpt-4o-mini")?;

// Simple message sending
let response = client.send("Hello!").await?;

// With a system prompt
let response = client.send_with_system("You are concise.", "Hello!").await?;
```
With Explicit API Key
Pass the API key directly instead of relying on environment variables (argument order assumed; `"sk-..."` is a placeholder):

```rust
// With a provider enum
let client = TurbineClient::new_with_key(Provider::OpenAI, "sk-...");

// With a model string
let client = TurbineClient::from_model_with_key("gpt-4o-mini", "sk-...")?;
let response = client.send("Hello!").await?;
```
Traditional Constructor
```rust
let client = TurbineClient::new(Provider::Anthropic)?;
let response = client.send_request(request).await?;
```
Provider
Select which LLM provider to use (for traditional API):
Note: When using from_model(), the provider is automatically detected from the model string.
LLMRequest Builder
Construct requests with optional parameters (example values are illustrative):

```rust
let request = LLMRequest::new("gpt-4o-mini")
    .with_system_prompt("You are a helpful assistant.") // Optional
    .with_message(Message::user("Hello!"))              // Add a single message
    .with_messages(vec![Message::user("Hi again!")])    // Add multiple messages
    .with_max_tokens(1024)                              // Optional, default: 1024
    .with_temperature(0.7)                              // Optional, 0.0-2.0
    .with_top_p(0.9)                                    // Optional
    .with_output_format(OutputFormat::Text);            // Text (default) or Json
```
Message Helpers
```rust
Message::user("A user message")
Message::assistant("An assistant reply")
Message::system("A system instruction")
```
Model Examples
OpenAI
- `gpt-4o` - Latest GPT-4 Omni
- `gpt-4o-mini` - Faster, cost-effective
- `gpt-3.5-turbo` - Fast and efficient
Anthropic
- `claude-3-5-sonnet-20241022` - Most capable
- `claude-3-5-haiku-20241022` - Fast and affordable
Gemini
- `gemini-2.0-flash-exp` - Latest experimental
- `gemini-1.5-pro` - Production ready
Groq
- `llama-3.3-70b-versatile` - Powerful Llama model
- `mixtral-8x7b-32768` - Mixtral with large context
Error Handling
Handle failures by matching on the returned `Result` (a sketch; the crate's concrete error type and variants are not shown here):

```rust
match client.send_request(request).await {
    Ok(response) => println!("{}", response.text),
    Err(e) => eprintln!("Request failed: {e}"),
}
```
Examples
Run the included examples with `cargo run --example <name>`. The repository ships examples covering:
- the simplified API (recommended for beginners)
- basic text generation
- JSON output
- multi-turn conversation
Documentation
- API Documentation
- Context7 Docs - AI-powered documentation
- Examples
- Changelog
- Contributing
Troubleshooting
API Key Not Found
Error: API key not found for provider: OpenAI
Solution: Make sure the provider's environment variable is set, e.g. for OpenAI (variable name assumed to follow the provider's standard convention):

```sh
export OPENAI_API_KEY="sk-..."
```
Model Not Found
Different providers use different model names. Check the Model Examples section for correct model identifiers.
Rate Limiting
If you hit rate limits, implement exponential backoff or switch providers temporarily.
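A doubling backoff schedule with a cap can be computed like this (illustrative sketch using the standard library, not a Turbine feature):

```rust
use std::time::Duration;

/// Compute the delay before retry `attempt` (0-based): double the
/// base delay on each attempt and cap the result at `max`.
/// Illustrative sketch of exponential backoff.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    // 2^attempt, saturating instead of overflowing for large attempts.
    let factor = 2u32.saturating_pow(attempt);
    base.saturating_mul(factor).min(max)
}
```

Before each retry, sleep for `backoff_delay(attempt, ...)` (e.g. with `tokio::time::sleep`), and give up after a bounded number of attempts.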
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
Acknowledgments
Developed with ❤️ by the Rust community and sponsored by Renaiss AI.