Capability adapters for the Converge runtime.
Providers produce observations, never decisions. Converge converges; providers adapt.
This crate provides capability adapters (providers) that connect Converge workflows to external systems. Each provider implements `ChatBackend` for LLM completions, or other capability traits for embedding, search, etc.
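As a rough sketch of the adapter pattern described above, the following defines a hypothetical `ChatBackend` trait with a stub implementation. The names `ChatRequest`, `ChatResponse`, and `EchoBackend` are illustrative only, not this crate's actual signatures.

```rust
// Hypothetical sketch: these names are illustrative, not the crate's real API.
struct ChatRequest {
    prompt: String,
}

struct ChatResponse {
    content: String,
}

trait ChatBackend {
    fn complete(&self, request: &ChatRequest) -> ChatResponse;
}

// A stub backend standing in for a real adapter (e.g. an HTTP client to an LLM API).
struct EchoBackend;

impl ChatBackend for EchoBackend {
    fn complete(&self, request: &ChatRequest) -> ChatResponse {
        ChatResponse {
            content: format!("echo: {}", request.prompt),
        }
    }
}

fn main() {
    let backend = EchoBackend;
    let response = backend.complete(&ChatRequest { prompt: "hello".into() });
    println!("{}", response.content);
}
```

Because workflows depend only on the trait, swapping one backend for another is a construction-site change, not a call-site change.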
## What Is a Provider?
A provider is an adapter that:

- Implements capability traits (`ChatBackend`, `Embedding`, `VectorRecall`, etc.)
- Returns observations (not facts, not decisions)
- Includes provenance metadata for tracing
- Is stateless (no hidden lifecycle state)
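The observation-plus-provenance shape listed above can be sketched as follows. `Observation`, `Provenance`, and `StubProvider` are hypothetical names for the pattern, not this crate's real types.

```rust
// Illustrative sketch only: these types are hypothetical, not the crate's API.
#[derive(Debug)]
struct Provenance {
    provider: String, // which adapter produced this observation
    model: String,    // upstream model identifier, for tracing
}

#[derive(Debug)]
struct Observation {
    content: String,        // what the external system reported
    provenance: Provenance, // tracing metadata, always attached
}

// A stateless provider: configuration only, no hidden lifecycle state.
struct StubProvider {
    model: String,
}

impl StubProvider {
    fn observe(&self, input: &str) -> Observation {
        Observation {
            content: input.to_uppercase(), // stands in for an external call
            provenance: Provenance {
                provider: "stub".into(),
                model: self.model.clone(),
            },
        }
    }
}

fn main() {
    let provider = StubProvider { model: "demo-1".into() };
    let obs = provider.observe("signal");
    println!("{} via {}", obs.content, obs.provenance.provider);
}
```

The point of the shape: downstream code receives data it can trace, never a decision it must trust.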
A provider is NOT:

- An agent (agents live in `converge-core`)
- Orchestration (no workflows, no scheduling)
- Domain logic (business rules live in `converge-domain`)
## Available Backends

### LLM Backends (`ChatBackend` implementations)
- [`AnthropicBackend`] - Claude API (Anthropic)
- [`OpenAiBackend`] - GPT-4, GPT-3.5 (OpenAI)
- [`OpenRouterBackend`] - Any model via OpenRouter (openrouter.ai)
- [`GeminiBackend`] - Gemini Pro (Google)
- [`MistralBackend`] - Mistral chat completions (Mistral AI)
## Structured Output
All live chat backends accept `ResponseFormat::Json`, but they do not enforce it identically at request time:

- OpenAI and Mistral use the native `response_format` API field
- Gemini uses the native `response_mime_type` field
- Anthropic uses the documented system-instruction JSON pattern
All live chat backends then apply a shared response contract before returning content:

- `Json`, `Yaml`, and `Toml` outputs are validated centrally
- trivial outer code fences are stripped for those machine formats
- a provider that returns prose for a structured request fails with `LlmError::ResponseFormatMismatch`
Anthropic's instruction-based JSON handling is provider-native and correct for Claude. The difference is enforcement strength at request time, not post-response correctness.
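The fence-stripping and mismatch steps of the shared contract can be sketched as below. This is a minimal stand-in: the real crate's error type (`LlmError::ResponseFormatMismatch`) and central validation (e.g. a full JSON parse) are replaced with simplified equivalents for illustration.

```rust
// Simplified stand-in for the crate's LlmError::ResponseFormatMismatch.
#[derive(Debug, PartialEq)]
enum ContractError {
    ResponseFormatMismatch,
}

/// Strip a trivial outer code fence (e.g. ```json ... ```) from a reply.
fn strip_outer_fence(raw: &str) -> &str {
    let trimmed = raw.trim();
    if let Some(rest) = trimmed.strip_prefix("```") {
        // Skip the optional language tag on the opening fence line.
        if let Some((_, body)) = rest.split_once('\n') {
            if let Some(inner) = body.strip_suffix("```") {
                return inner.trim();
            }
        }
    }
    trimmed
}

/// Naive stand-in for central JSON validation (a real impl would parse fully).
fn check_json(content: &str) -> Result<&str, ContractError> {
    let c = content.trim();
    if c.starts_with('{') && c.ends_with('}') {
        Ok(c)
    } else {
        Err(ContractError::ResponseFormatMismatch)
    }
}

fn main() {
    let fenced = "```json\n{\"ok\": true}\n```";
    let body = strip_outer_fence(fenced);
    assert_eq!(check_json(body), Ok("{\"ok\": true}"));
    // Prose in response to a structured request fails the contract.
    assert_eq!(check_json("plain prose"), Err(ContractError::ResponseFormatMismatch));
    println!("contract checks passed");
}
```

Centralizing these checks is what lets request-time enforcement differ per provider while post-response behavior stays uniform.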