converge-provider 0.2.4

Converge Provider

LLM provider implementations for Converge: multi-provider capability adapters for the Converge runtime.

Providers produce observations, never decisions. Converge converges; providers adapt.

Website: converge.zone | Docs: docs.rs | Crates.io: converge-provider


What Is a Provider?

A provider is an adapter that connects Converge workflows to external systems without leaking non-determinism into the core engine.

Providers implement capability ports (traits) that define how to:

  • Call LLMs (Claude, GPT, Gemini, etc.)
  • Search the web (Perplexity)
  • Query vector stores (LanceDB, Qdrant)
  • Access external APIs (future: email, CRM, payments)

What a Provider IS

| Aspect | Description |
| --- | --- |
| Adapter | Translates Converge requests into vendor-specific API calls |
| Capability port | Implements traits such as LlmProvider, Embedding, VectorRecall |
| Observation producer | Returns structured results with provenance metadata |
| Stateless | No hidden lifecycle state; each call is independent |
| Traceable | All calls include request hashes, latency, cost, and correlation IDs |

What a Provider is NOT

| Anti-pattern | Why not |
| --- | --- |
| Not an agent | The Agent trait lives in converge-core; providers are tools agents use |
| Not orchestration | No workflows, no scheduling, no control flow |
| Not domain logic | Business rules live in converge-domain |
| Not a decision maker | Providers return observations; validators promote them to Facts |
| Not a background worker | No queues, no pub/sub, no async loops |

Trust Model

Provider outputs are untrusted by default.

┌─────────────────────────────────────────────────────────────────┐
│                        TRUST BOUNDARY                           │
├─────────────────────────────────────────────────────────────────┤
│  Provider (untrusted)        │  Core Engine (authoritative)    │
│  ─────────────────────────   │  ─────────────────────────────  │
│  Returns observations        │  Validates observations         │
│  May hallucinate             │  Enforces invariants            │
│  May fail or timeout         │  Promotes to Facts              │
│  Has no authority            │  Owns Context                   │
└─────────────────────────────────────────────────────────────────┘

The Flow

  1. Agent (in converge-domain) calls a provider
  2. Provider returns a ProviderObservation (never a Fact)
  3. Agent wraps observation in ProposedFact with provenance
  4. Validator (in converge-core) checks constraints
  5. Engine promotes valid proposals to authoritative Facts

Provenance

Every provider call returns metadata:

ProviderObservation {
    observation_id: "obs-abc123",      // Stable reference ID
    request_hash: "sha256:...",        // Canonical request fingerprint
    vendor: "anthropic",
    model: "claude-3-5-sonnet-20241022",
    latency_ms: 1234,
    cost_estimate: Some(0.003),        // USD if known
    content: "...",                    // The actual response
}
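The request_hash field gives identical canonical requests identical fingerprints, which is what makes calls cacheable and traceable. A minimal sketch of the idea, hashing the canonical fields in a fixed order (the function name is illustrative, and std's DefaultHasher stands in for the sha256 digest the real field implies):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative sketch: derive a stable fingerprint from the canonical
/// request fields. Hashing fields in a fixed order keeps the result
/// deterministic for identical requests.
fn request_fingerprint(model: &str, prompt: &str, max_tokens: u32) -> String {
    let mut hasher = DefaultHasher::new();
    model.hash(&mut hasher);
    prompt.hash(&mut hasher);
    max_tokens.hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

fn main() {
    let a = request_fingerprint("claude-3-5-sonnet-20241022", "Analyze: Q4", 1000);
    let b = request_fingerprint("claude-3-5-sonnet-20241022", "Analyze: Q4", 1000);
    // Identical canonical requests produce identical fingerprints.
    assert_eq!(a, b);
    println!("request_hash = {}", a);
}
```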

Integration with Converge

Dependency Rules

converge-core          ← Defines traits (LlmProvider, Embedding, etc.)
       │                  Engine, Context, Agent, Fact, ProposedFact
       │
       ▼
converge-provider      ← Implements traits (adapters)
       │                  AnthropicProvider, OpenAiProvider, etc.
       │                  NO dependency on converge-domain
       │
       ▼
converge-domain        ← Uses providers in agents
                          Domain logic, use cases, validators

Key rules:

  • converge-provider depends on converge-core (for traits only)
  • converge-provider does NOT depend on converge-domain
  • converge-domain depends on both converge-core and converge-provider
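These rules show up directly in each crate's manifest. A sketch of what converge-domain's dependency section might look like (versions illustrative):

```toml
# converge-domain/Cargo.toml (illustrative)
[dependencies]
converge-core = "0.6"
converge-provider = "0.2"
# converge-core and converge-provider never depend back on converge-domain
```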

Using Providers in Agents

use converge_core::{Agent, AgentEffect, Context, ProposedFact, ContextKey};
use converge_core::llm::{LlmProvider, LlmRequest};
use converge_provider::AnthropicProvider;

struct MarketAnalysisAgent {
    provider: Box<dyn LlmProvider>,
}

impl Agent for MarketAnalysisAgent {
    fn execute(&self, ctx: &Context) -> AgentEffect {
        // 1. Build request from context
        let prompt = format!("Analyze: {}", ctx.get_seeds());
        let request = LlmRequest::new(prompt);

        // 2. Call provider (returns observation, not fact)
        let response = self.provider.complete(&request);

        // 3. Wrap as ProposedFact with provenance
        match response {
            Ok(obs) => AgentEffect::with_proposed_fact(ProposedFact::new(
                ContextKey::Hypotheses,
                &format!("{}-analysis", self.name()),
                &obs.content,
            ).with_provenance(&obs.provenance())),
            Err(e) => AgentEffect::with_trace(format!("Provider error: {}", e)),
        }
    }
}

Installation

[dependencies]
converge-provider = "0.2"

Related Crates

| Crate | Version | Description |
| --- | --- | --- |
| converge-core | 0.6.1 | Runtime engine, agent traits, capabilities |
| converge-provider | 0.2.4 | 14+ LLM providers, model selection |
| converge-domain | 0.2.3 | 12 business use cases |

Supported Providers

LLM Providers

| Provider | Models | Region | Env Variable |
| --- | --- | --- | --- |
| Anthropic | Claude 3.5 Sonnet, Haiku, Opus 4 | US | ANTHROPIC_API_KEY |
| OpenAI | GPT-4o, GPT-4o-mini, GPT-4 Turbo | US | OPENAI_API_KEY |
| Google Gemini | Gemini Pro, Flash | US | GOOGLE_API_KEY |
| Alibaba Qwen | Qwen-Max, Qwen-Plus, Qwen3-VL | CN | DASHSCOPE_API_KEY |
| DeepSeek | DeepSeek Chat, Coder | CN | DEEPSEEK_API_KEY |
| Mistral | Mistral Large, Medium | EU | MISTRAL_API_KEY |
| xAI Grok | Grok models | US | XAI_API_KEY |
| Perplexity | Online models (web search) | US | PERPLEXITY_API_KEY |
| OpenRouter | Multi-provider gateway | US | OPENROUTER_API_KEY |
| Baidu ERNIE | ERNIE models | CN | BAIDU_API_KEY |
| Zhipu GLM | GLM-4 models | CN | ZHIPU_API_KEY |
| Kimi (Moonshot) | Moonshot models | CN | MOONSHOT_API_KEY |
| Apertus | EU digital sovereignty | EU | APERTUS_API_KEY |
| Ollama | Local models (Llama, Mistral, etc.) | Local | (none) |

Capability Providers

| Capability | Implementations |
| --- | --- |
| Embedding | Qwen3-VL multimodal |
| Reranking | Qwen3-VL |
| VectorRecall | In-memory, LanceDB, Qdrant (planned) |
| GraphRecall | In-memory, Neo4j (planned) |

Quick Start

Basic LLM Call

use converge_provider::AnthropicProvider;
use converge_core::llm::{LlmProvider, LlmRequest};

// Create provider from environment variable
let provider = AnthropicProvider::from_env("claude-3-5-sonnet-20241022")?;

// Make a request
let request = LlmRequest::new("Analyze market trends for Q4")
    .with_max_tokens(1000)
    .with_temperature(0.7);

let response = provider.complete(&request)?;
println!("Response: {}", response.content);
println!("Provenance: {}", response.provenance());

Model Selection

use converge_provider::{ModelSelector, SelectionResult};
use converge_core::llm::AgentRequirements;

let selector = ModelSelector::default();

// Fast and cheap for simple tasks
let requirements = AgentRequirements::fast_extraction();

let result: SelectionResult = selector.select(&requirements)?;
println!("Selected: {} / {}", result.selected.provider, result.selected.model);

Factory Pattern

use converge_provider::{create_provider, can_create_provider};

// Check if provider is available (API key set)
if can_create_provider("anthropic") {
    let provider = create_provider("anthropic", "claude-3-5-sonnet-20241022")?;
    // Use provider...
}

Adding a New Provider

1. Implement the Trait

use converge_core::llm::{LlmProvider, LlmRequest, LlmResponse, LlmError};

pub struct MyProvider {
    api_key: String,
    model: String,
    client: reqwest::blocking::Client,
}

impl LlmProvider for MyProvider {
    fn name(&self) -> &str { "my-provider" }
    fn model(&self) -> &str { &self.model }

    fn complete(&self, request: &LlmRequest) -> Result<LlmResponse, LlmError> {
        // 1. Transform `request` into the vendor's wire format
        // 2. Make the HTTP call via self.client
        // 3. Transform the vendor response into an LlmResponse
        // 4. Include provenance metadata
        todo!("call the vendor API and map its response")
    }

    fn provenance(&self, request_id: &str) -> String {
        format!("my-provider:{}:{}", self.model, request_id)
    }
}

2. Add Environment Support

impl MyProvider {
    pub fn from_env(model: &str) -> Result<Self, LlmError> {
        let api_key = std::env::var("MY_PROVIDER_API_KEY")
            .map_err(|_| LlmError::auth("MY_PROVIDER_API_KEY not set"))?;
        Ok(Self::new(api_key, model))
    }
}

3. Register in Factory

Add to src/factory.rs:

"my-provider" => Ok(Box::new(MyProvider::from_env(model)?)),

Safety Checklist

  • No global state or singletons
  • No background threads or async loops
  • No dependency on converge-domain
  • All errors are typed (LlmError)
  • Provenance included in responses
  • Timeouts are explicit and bounded
  • No implicit retries (caller decides)

Feature Flags

[dependencies]
converge-provider = { version = "0.2", features = ["lancedb"] }

| Feature | Description |
| --- | --- |
| lancedb | LanceDB embedded vector store |
| qdrant | Qdrant distributed vector store (planned) |
| neo4j | Neo4j graph store (planned) |
| all-vector | All vector stores |
| all-stores | All stores |

Error Handling

Providers use explicit, typed errors:

pub enum LlmError {
    Auth(String),           // API key invalid or missing
    RateLimit(String),      // Rate limited, include retry-after if known
    Network(String),        // Connection failed
    Parse(String),          // Response parsing failed
    Provider(String),       // Vendor-specific error
    Timeout(String),        // Request timed out
    Budget(String),         // Cost/token budget exceeded
}

No hidden retries. If a call fails, the agent decides whether to retry.
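Because retry policy lives with the caller, agents typically branch on the error variant: transient failures (rate limits, network, timeouts) may be retried with a bounded attempt count, while auth or parse errors fail immediately. A minimal sketch, using a simplified stand-in enum whose variant names mirror LlmError (all names here are illustrative):

```rust
// Simplified stand-in mirroring a few LlmError variants.
#[derive(Debug)]
enum CallError {
    Auth(String),
    RateLimit(String),
    Network(String),
    Timeout(String),
}

/// Only transient failures are worth retrying; auth errors never are.
fn is_retryable(e: &CallError) -> bool {
    matches!(
        e,
        CallError::RateLimit(_) | CallError::Network(_) | CallError::Timeout(_)
    )
}

/// Caller-side retry with a bounded attempt count. The provider itself
/// stays retry-free; the agent owns this policy.
fn call_with_retries<F>(mut call: F, max_attempts: u32) -> Result<String, CallError>
where
    F: FnMut() -> Result<String, CallError>,
{
    let mut attempt = 0;
    loop {
        attempt += 1;
        match call() {
            Ok(resp) => return Ok(resp),
            Err(e) if is_retryable(&e) && attempt < max_attempts => {
                // A real agent would back off here (e.g. sleep with jitter).
                continue;
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    let mut failures = 2;
    let flaky = || {
        if failures > 0 {
            failures -= 1;
            Err(CallError::Network("connection reset".into()))
        } else {
            Ok("observation".to_string())
        }
    };
    // Succeeds on the third attempt; an Auth error would fail immediately.
    assert_eq!(call_with_retries(flaky, 5).unwrap(), "observation");
}
```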


Observability

All provider calls emit tracing spans:

#[tracing::instrument(skip(self, request), fields(
    provider = %self.name(),
    model = %self.model(),
    request_hash = %request.hash(),
))]
fn complete(&self, request: &LlmRequest) -> Result<LlmResponse, LlmError> {
    // ...
}

Fields available in traces:

  • provider — Provider name
  • model — Model identifier
  • request_hash — Canonical request fingerprint
  • latency_ms — Call duration
  • tokens_in / tokens_out — Token usage
  • cost_estimate — Estimated cost (if known)

Architecture

See docs/ARCHITECTURE.md for:

  • Detailed layering rules
  • Error taxonomy
  • Observability requirements
  • Caching policy

Repository

This crate is part of the Converge project.

Standalone repo: github.com/kpernyer/converge-provider

License

MIT