# converge-provider
Multi-provider LLM abstraction layer for the Converge runtime.

Website: converge.zone | Docs: docs.rs
## Supported Providers
| Provider | Models |
|---|---|
| Anthropic | Claude 3.5 Sonnet, Haiku, Opus |
| OpenAI | GPT-4o, GPT-4o-mini |
| Google Gemini | Gemini Pro, Flash |
| Qwen | Qwen-Max, Qwen-Plus |
| DeepSeek | DeepSeek Chat, Coder |
| Mistral | Mistral Large, Medium |
| Grok | xAI models |
| Perplexity | Online models |
| OpenRouter | Multi-provider gateway |
| Baidu | ERNIE |
| Zhipu | GLM-4 |
| Kimi | Moonshot |
## Features

### Model Selection
- Cost-aware selection (`CostClass`: VeryLow → VeryHigh)
- Latency requirements (tokens/second thresholds)
- Quality requirements (capability levels)
- `AgentRequirements` builder for declarative selection
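The idea behind cost-aware selection can be sketched as an ordered cost class plus a filter over candidate models. This is an illustrative standalone sketch, not the crate's actual implementation; `Model`, its fields, and `select` here are hypothetical:

```rust
// Illustrative sketch of cost-aware selection (not the crate's real code).
// Cost classes derive Ord, so VeryLow < Low < ... < VeryHigh.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum CostClass {
    VeryLow,
    Low,
    Medium,
    High,
    VeryHigh,
}

// Hypothetical candidate-model record for the sketch.
struct Model {
    name: &'static str,
    cost: CostClass,
    quality: u8,
}

// Pick the cheapest model that still meets a minimum quality level.
fn select(models: &[Model], min_quality: u8) -> Option<&Model> {
    models
        .iter()
        .filter(|m| m.quality >= min_quality)
        .min_by_key(|m| m.cost)
}

fn main() {
    let models = [
        Model { name: "small-model",  cost: CostClass::VeryLow,  quality: 6 },
        Model { name: "medium-model", cost: CostClass::Medium,   quality: 8 },
        Model { name: "large-model",  cost: CostClass::VeryHigh, quality: 10 },
    ];
    // quality >= 7 rules out the cheapest model; the medium one wins on cost.
    let chosen = select(&models, 7).expect("at least one model qualifies");
    println!("{}", chosen.name);
}
```

Latency and quality requirements compose the same way: each is another predicate in the filter chain before the cost minimization.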
### Prompt Engineering
- EDN format for token-efficient prompts (~40% reduction)
- XML format for Claude models
- JSON format for OpenAI models
- Structured response parsing
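To see where EDN's token savings come from, compare the same payload in both formats: EDN drops quoted keys and comma separators. The payloads below are hypothetical examples, not the crate's actual prompt schema:

```rust
// Hypothetical payloads (not crate API): the same structured prompt rendered
// as JSON and as EDN. EDN's keyword keys and whitespace separators make it
// byte- and token-denser than the equivalent JSON.
const JSON_PROMPT: &str = r#"{"role": "user", "task": "summarize", "max_tokens": 256}"#;
const EDN_PROMPT: &str = "{:role user :task summarize :max-tokens 256}";

fn main() {
    println!(
        "JSON: {} bytes, EDN: {} bytes",
        JSON_PROMPT.len(),
        EDN_PROMPT.len()
    );
    // EDN is the shorter encoding of the same data.
    assert!(EDN_PROMPT.len() < JSON_PROMPT.len());
}
```

The exact reduction depends on the payload and tokenizer; the ~40% figure above refers to typical prompts, not this toy example.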
## Installation

```toml
[dependencies]
converge-provider = "0.2"
```
## Example

```rust
use converge_provider::{from_env, LlmRequest};

// Create provider from environment
let provider = from_env()?;

// Make a request (request fields elided here)
let request = LlmRequest {
    ..Default::default()
};
let response = provider.complete(request)?;
println!("{:?}", response);
```
## Model Selection

```rust
use converge_provider::{AgentRequirements, CostClass, ModelSelector};

let selector = ModelSelector::default();
let requirements = AgentRequirements::fast_and_cheap()
    .with_max_cost(CostClass::Medium);
let model = selector.select(&requirements)?;
```
## License
MIT