Converge Provider
Providers produce observations, never decisions. Converge converges; providers adapt.
Multi-provider capability adapters for the Converge runtime.
Website: converge.zone | Docs: docs.rs | Crates.io: converge-provider
What Is a Provider?
A provider is an adapter that connects Converge workflows to external systems without leaking non-determinism into the core engine.
Providers implement capability ports (traits) that define how to:
- Call LLMs (Claude, GPT, Gemini, etc.)
- Search the web (Perplexity)
- Query vector stores (LanceDB, Qdrant)
- Access external APIs (future: email, CRM, payments)
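As a sketch, a capability port is just a trait the core defines and an adapter implements. The trait below is illustrative; the real `LlmProvider` trait in `converge-core` has a richer signature:

```rust
// Illustrative capability port; the real trait in converge-core differs.
pub trait LlmProvider {
    /// Send a prompt; the returned text is an untrusted observation.
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

// A toy adapter standing in for a vendor-specific client.
pub struct EchoProvider;

impl LlmProvider for EchoProvider {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}
```

Because agents depend only on the trait, swapping Anthropic for OpenAI (or a local Ollama model) is a construction-time choice, not a code change.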
What a Provider IS
| Aspect | Description |
|---|---|
| Adapter | Translates Converge requests to vendor-specific API calls |
| Capability port | Implements traits like LlmProvider, Embedding, VectorRecall |
| Observation producer | Returns structured results with provenance metadata |
| Stateless | No hidden lifecycle state; each call is independent |
| Traceable | All calls include request hashes, latency, cost, and correlation IDs |
What a Provider is NOT
| Anti-pattern | Why Not |
|---|---|
| Not an agent | Agents live in converge-core; providers are tools agents use |
| Not orchestration | No workflows, no scheduling, no control flow |
| Not domain logic | Business rules live in converge-domain |
| Not a decision maker | Providers return observations; validators promote to Facts |
| Not a background worker | No queues, no pub/sub, no async loops |
Trust Model
Provider outputs are untrusted by default.
┌─────────────────────────────────────────────────────────────────┐
│ TRUST BOUNDARY │
├─────────────────────────────────────────────────────────────────┤
│ Provider (untrusted) │ Core Engine (authoritative) │
│ ───────────────────────── │ ───────────────────────────── │
│ Returns observations │ Validates observations │
│ May hallucinate │ Enforces invariants │
│ May fail or timeout │ Promotes to Facts │
│ Has no authority │ Owns Context │
└─────────────────────────────────────────────────────────────────┘
The Flow
- Agent (in `converge-domain`) calls a provider
- Provider returns a `ProviderObservation` (never a `Fact`)
- Agent wraps the observation in a `ProposedFact` with provenance
- Validator (in `converge-core`) checks constraints
- Engine promotes valid proposals to authoritative `Fact`s
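The promotion path can be sketched with toy types. All names and signatures below are hypothetical stand-ins for the real types in `converge-core`:

```rust
// Toy model of the observation -> proposal -> fact pipeline.
pub struct ProviderObservation { pub content: String, pub provider: String }
pub struct ProposedFact { pub claim: String, pub source: String }
pub struct Fact { pub claim: String }

// The agent wraps an observation with provenance; it never mints a Fact.
pub fn propose(obs: ProviderObservation) -> ProposedFact {
    ProposedFact { claim: obs.content, source: obs.provider }
}

// The validator enforces constraints; only valid proposals become Facts.
pub fn validate(p: ProposedFact) -> Option<Fact> {
    if p.claim.is_empty() || p.source.is_empty() {
        None
    } else {
        Some(Fact { claim: p.claim })
    }
}
```

The point of the split is authority: the provider and agent can only propose, and nothing becomes a `Fact` without passing the validator.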
Provenance
Every provider call returns its result together with provenance metadata in a `ProviderObservation`.
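A plausible shape for that metadata, with field names inferred from the trace fields this crate documents; the actual struct definition may differ:

```rust
// Illustrative shape of per-call metadata; real field names may differ.
#[derive(Debug)]
pub struct ProviderObservation {
    pub provider: String,           // e.g. "anthropic"
    pub model: String,              // model identifier
    pub request_hash: String,       // canonical request fingerprint
    pub latency_ms: u64,            // call duration
    pub tokens_in: u32,             // prompt tokens
    pub tokens_out: u32,            // completion tokens
    pub cost_estimate: Option<f64>, // estimated cost, if known
    pub content: String,            // the raw, untrusted model output
}
```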
Integration with Converge
Dependency Rules
converge-core ← Defines traits (LlmProvider, Embedding, etc.)
│ Engine, Context, Agent, Fact, ProposedFact
│
▼
converge-provider ← Implements traits (adapters)
│ AnthropicProvider, OpenAiProvider, etc.
│ NO dependency on converge-domain
│
▼
converge-domain ← Uses providers in agents
Domain logic, use cases, validators
Key rules:
- `converge-provider` depends on `converge-core` (for traits only)
- `converge-provider` does NOT depend on `converge-domain`
- `converge-domain` depends on both `converge-core` and `converge-provider`
Using Providers in Agents
```rust
// Illustrative; module paths and trait names may differ in the actual crate.
use converge_core::LlmProvider;
use converge_provider::AnthropicProvider;
```
Installation
```toml
[dependencies]
converge-provider = "0.2"
```
Related Crates
| Crate | Version | Description |
|---|---|---|
| converge-core | 0.6.1 | Runtime engine, agent traits, capabilities |
| converge-provider | 0.2.3 | 14+ LLM providers, model selection |
| converge-domain | 0.2.3 | 12 business use cases |
Supported Providers
LLM Providers
| Provider | Models | Region | Env Variable |
|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet, Haiku, Opus 4 | US | ANTHROPIC_API_KEY |
| OpenAI | GPT-4o, GPT-4o-mini, GPT-4 Turbo | US | OPENAI_API_KEY |
| Google Gemini | Gemini Pro, Flash | US | GOOGLE_API_KEY |
| Alibaba Qwen | Qwen-Max, Qwen-Plus, Qwen3-VL | CN | DASHSCOPE_API_KEY |
| DeepSeek | DeepSeek Chat, Coder | CN | DEEPSEEK_API_KEY |
| Mistral | Mistral Large, Medium | EU | MISTRAL_API_KEY |
| xAI Grok | Grok models | US | XAI_API_KEY |
| Perplexity | Online models (web search) | US | PERPLEXITY_API_KEY |
| OpenRouter | Multi-provider gateway | US | OPENROUTER_API_KEY |
| Baidu ERNIE | ERNIE models | CN | BAIDU_API_KEY |
| Zhipu GLM | GLM-4 models | CN | ZHIPU_API_KEY |
| Kimi (Moonshot) | Moonshot models | CN | MOONSHOT_API_KEY |
| Apertus | EU digital sovereignty | EU | APERTUS_API_KEY |
| Ollama | Local models (Llama, Mistral, etc.) | Local | (none) |
Capability Providers
| Capability | Implementations |
|---|---|
| Embedding | Qwen3-VL multimodal |
| Reranking | Qwen3-VL |
| VectorRecall | In-memory, LanceDB, Qdrant (planned) |
| GraphRecall | In-memory, Neo4j (planned) |
Quick Start
Basic LLM Call
```rust
// Illustrative; request/response type and field names are assumptions.
use converge_provider::AnthropicProvider;
use converge_core::LlmRequest;

// Create provider from environment variable (ANTHROPIC_API_KEY)
let provider = AnthropicProvider::from_env()?;

// Make a request
let request = LlmRequest::new("Summarize the attached document.")
    .with_max_tokens(1024)
    .with_temperature(0.2);

let response = provider.complete(&request)?;

println!("{}", response.text);
println!("{:?}", response.provenance);
```
Model Selection
```rust
// Illustrative; selector and result type names are assumptions.
use converge_provider::{ModelSelector, SelectionResult};
use converge_provider::AgentRequirements;

let selector = ModelSelector::default();

// Fast and cheap for simple tasks
let requirements = AgentRequirements::fast_extraction();
let result: SelectionResult = selector.select(&requirements)?;

println!("{:?}", result);
```
Factory Pattern
```rust
// Illustrative; the factory function's exact path and signature are assumptions.
use converge_provider::can_create_provider;

// Check if provider is available (API key set)
if can_create_provider("anthropic") {
    // ...construct it via the factory...
}
```
Adding a New Provider
1. Implement the Trait
```rust
// Illustrative skeleton; the real trait surface is defined in converge-core.
use converge_core::LlmProvider;

pub struct MyProvider { /* HTTP client, config */ }

impl LlmProvider for MyProvider {
    // ...implement the required methods...
}
```
2. Add Environment Support
3. Register in Factory
Add to src/factory.rs:
```rust
// Illustrative; the constructor name is an assumption.
"my-provider" => Ok(Box::new(MyProvider::from_env()?)),
```
Safety Checklist
- No global state or singletons
- No background threads or async loops
- No dependency on `converge-domain`
- All errors are typed (`LlmError`)
- Provenance included in responses
- Timeouts are explicit and bounded
- No implicit retries (caller decides)
Feature Flags
```toml
[dependencies]
converge-provider = { version = "0.2", features = ["lancedb"] }
```
| Feature | Description |
|---|---|
| `lancedb` | LanceDB embedded vector store |
| `qdrant` | Qdrant distributed vector store (planned) |
| `neo4j` | Neo4j graph store (planned) |
| `all-vector` | All vector stores |
| `all-stores` | All stores |
Error Handling
Providers use explicit, typed errors such as `LlmError`.
No hidden retries. If a call fails, the agent decides whether to retry.
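As a sketch, such an error type might look like the following. Only the `LlmError` name comes from this crate; the variants and the helper below are assumptions:

```rust
// Illustrative error taxonomy; the crate's actual LlmError variants may differ.
#[derive(Debug, PartialEq)]
pub enum LlmError {
    Timeout { elapsed_ms: u64 },
    RateLimited { retry_after_ms: Option<u64> },
    InvalidResponse(String),
    Transport(String),
}

// The provider never retries; a caller can use a predicate like this to decide.
pub fn is_retryable(err: &LlmError) -> bool {
    matches!(err, LlmError::Timeout { .. } | LlmError::RateLimited { .. })
}
```

Keeping retry policy out of the provider means an agent can match on the variant and apply its own backoff, budget, or fallback-model strategy.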
Observability
All provider calls emit tracing spans.
Fields available in traces:
- `provider` — Provider name
- `model` — Model identifier
- `request_hash` — Canonical request fingerprint
- `latency_ms` — Call duration
- `tokens_in` / `tokens_out` — Token usage
- `cost_estimate` — Estimated cost (if known)
Architecture
See docs/ARCHITECTURE.md for:
- Detailed layering rules
- Error taxonomy
- Observability requirements
- Caching policy
Repository
This crate is part of the Converge project.
Standalone repo: github.com/kpernyer/converge-provider
License
MIT