# Model Provider API

`fprovider` defines the model provider abstraction for Fiddlesticks. Its job is simple: provide a clean, provider-agnostic way to talk to language models.

Everything else in the system (chat, agents, tools) depends on this layer instead of coupling directly to OpenAI, Claude, or anything else.
## What lives here
- Core provider traits
- Provider-agnostic request / response types
- Streaming abstractions (tokens, tool calls, events)
- Provider-specific adapters (behind features)
This crate does not:
- Define agent logic
- Define conversation state machines
- Execute tools
- Manage memory or persistence
Those concerns live higher up the stack.
## Supported Providers
The currently supported providers are:
- OpenCode Zen
- OpenAI
- Claude (Anthropic)
Each provider implements the same core traits so they can be swapped without changing agent or chat logic.
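The shared surface can be pictured as a single trait. The following is a minimal sketch, not the crate's actual definition: only `ModelProvider` is a name taken from this README; the request/response types, method names, and the stub adapter are illustrative.

```rust
use std::future::Future;
use std::pin::Pin;

// Illustrative request/response/error shapes; the real crate defines richer ones.
pub struct ChatRequest { pub prompt: String }
pub struct ChatResponse { pub text: String }
pub struct ProviderError(pub String);

// Object-safe sketch of the shared trait every adapter implements.
pub trait ModelProvider: Send + Sync {
    fn name(&self) -> &str;
    fn complete<'a>(
        &'a self,
        request: ChatRequest,
    ) -> Pin<Box<dyn Future<Output = Result<ChatResponse, ProviderError>> + Send + 'a>>;
}

// A stub adapter: swapping providers means swapping this impl, nothing else.
pub struct EchoProvider;

impl ModelProvider for EchoProvider {
    fn name(&self) -> &str { "echo" }

    fn complete<'a>(
        &'a self,
        request: ChatRequest,
    ) -> Pin<Box<dyn Future<Output = Result<ChatResponse, ProviderError>> + Send + 'a>> {
        Box::pin(async move { Ok(ChatResponse { text: request.prompt }) })
    }
}
```

Because downstream crates hold a `dyn ModelProvider`, adding a new backend is purely additive: implement the trait, register it, and no agent or chat code changes.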
## Design Goals
- Minimal surface area – only what every provider must support
- Async-first – providers are expected to be network-bound
- Streaming-friendly – even if some providers start non-streaming
- Feature-gated implementations – avoid pulling heavy deps unless needed
- No provider leakage – downstream crates should not need provider-specific types
## High-Level Flow

```text
fharness / fchat
      |
      v
fprovider (traits + adapters)
      |
      v
External model APIs
```
## Using `fprovider` from other crates
### 1) Add dependency

Provider-agnostic usage (recommended default):

```toml
[dependencies]
fprovider = { path = "../fprovider" }
```

If your crate needs OpenAI adapter support, enable the feature:

```toml
[dependencies]
fprovider = { path = "../fprovider", features = ["provider-openai"] }
```
### 2) Build requests with provider-agnostic types

A sketch of the builder flow (the `ChatRequest` name and the exact builder methods are assumptions; check the crate's request type for the real names):

```rust
use fprovider::ChatRequest;

let request = ChatRequest::builder()
    .message("Hello!")
    .temperature(0.7)
    .max_tokens(512)
    .build()?;
```
### 3) Depend on traits, not SDK types

Higher crates should accept `dyn ModelProvider` so provider choice is runtime-configurable. A sketch (the function shown is illustrative):

```rust
use std::sync::Arc;
use fprovider::ModelProvider;

pub async fn run_chat(provider: Arc<dyn ModelProvider>) {
    // Chat/agent logic only sees the provider-agnostic trait.
}
```
### 4) Register and resolve providers

A sketch (assuming a `ProviderRegistry` with `register`/`get`; exact names may differ):

```rust
use fprovider::ProviderRegistry;

let mut registry = ProviderRegistry::new();
// registry.register(openai_provider);
let provider = registry
    .get("openai")
    .expect("provider not registered");
```
### 5) OpenAI adapter example

A sketch using the credential and provider types named elsewhere in this README (`SecureCredentialManager`, `OpenAiProvider`); the `HttpTransport` path and the constructor signatures are assumptions:

```rust
use std::sync::Arc;
use reqwest::Client;
use fprovider::openai::HttpTransport;
use fprovider::{OpenAiProvider, ProviderRegistry, SecureCredentialManager};

let credentials = SecureCredentialManager::new();
credentials.set_openai_api_key("sk-...")?;

let transport = HttpTransport::new(Client::new());
let openai = OpenAiProvider::new(Arc::new(credentials), transport);

let mut registry = ProviderRegistry::new();
registry.register(openai);
```
### 6) Streaming consumption

`stream(...)` returns a stream implementing `futures_core::Stream<Item = Result<StreamEvent, ProviderError>>`. This is provider-agnostic and works with standard async ecosystem helpers.
Stream invariants:

- Events are emitted in provider/source order.
- Delta events (`TextDelta`, `ToolCallDelta`) can appear zero or more times.
- Completion milestones (`MessageComplete`, `ResponseComplete`), when present, arrive after deltas.
- Once the stream returns `None`, no additional events are emitted.
A sketch of a consumer loop (the `stream` argument and `handle_event` helper are illustrative):

```rust
use futures_util::StreamExt;
use fprovider::*;

let mut events = provider.stream(request).await?;
while let Some(event) = events.next().await {
    // Each item is Result<StreamEvent, ProviderError>.
    handle_event(event?);
}
```
### 7) OpenAI auth precedence policy

When `provider-openai` is enabled, `OpenAiProvider` resolves credentials in this strict order:

1. API key configured via `SecureCredentialManager::set_openai_api_key`
2. Browser session configured via `SecureCredentialManager::set_openai_browser_session`

Browser sessions are only used if no API key is configured. If a browser session has `expires_at` set and the timestamp is in the past, authentication fails with an authentication error instead of falling through to transport calls.
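The precedence rule above can be sketched as plain logic. The types and function below are simplified illustrations, not this crate's API; only the ordering and the `expires_at` behavior follow the policy stated here.

```rust
use std::time::SystemTime;

// Simplified credential shapes for illustration only.
pub struct BrowserSession {
    pub token: String,
    pub expires_at: Option<SystemTime>,
}

pub enum Auth { ApiKey(String), Browser(String) }
pub enum AuthError { Expired, Missing }

// API key always wins; the browser session is consulted only when no key
// is configured, and an expired session fails instead of falling through.
pub fn resolve(
    api_key: Option<String>,
    session: Option<BrowserSession>,
) -> Result<Auth, AuthError> {
    if let Some(key) = api_key {
        return Ok(Auth::ApiKey(key));
    }
    match session {
        Some(s) => {
            if let Some(exp) = s.expires_at {
                if exp < SystemTime::now() {
                    return Err(AuthError::Expired);
                }
            }
            Ok(Auth::Browser(s.token))
        }
        None => Err(AuthError::Missing),
    }
}
```

Failing fast on an expired session keeps the error at the credential layer, so transport calls never run with known-bad auth.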
### 8) Standard retry/backoff and operational hooks

`fprovider` exposes provider-agnostic resilience primitives:

- `RetryPolicy`: standardized retry attempt limits and exponential backoff settings
- `ProviderOperationHooks`: lifecycle hooks for attempts, retries, success, and failure
- `execute_with_retry(...)`: helper that applies policy + hooks around async operations

Example:
A sketch (the `RetryPolicy` fields and the `execute_with_retry` signature shown are assumptions):

```rust
use std::time::Duration;
use fprovider::*;

// Field names below are illustrative.
let policy = RetryPolicy {
    max_attempts: 3,
    initial_backoff: Duration::from_millis(100),
    ..Default::default()
};
let hooks = NoopOperationHooks;

let value = execute_with_retry(&policy, &hooks, || async { call_model().await })
    .await?;
let _ = value;
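For reference, exponential backoff as named above typically computes delays like this. The function is a standard doubling-with-cap sketch, not necessarily this crate's exact formula (real policies often add jitter):

```rust
use std::time::Duration;

// Standard exponential backoff: delay(n) = initial * 2^n, capped at `max`.
fn backoff_delay(initial: Duration, max: Duration, attempt: u32) -> Duration {
    initial
        // 2^attempt, saturating so very high attempt counts just hit the cap.
        .checked_mul(1u32.checked_shl(attempt).unwrap_or(u32::MAX))
        .unwrap_or(max)
        .min(max)
}
```

With `initial = 100ms` and `max = 10s`, attempts 0, 1, 2, 3 wait 100ms, 200ms, 400ms, 800ms, and later attempts saturate at 10s.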
## Feature flags

- `provider-openai`: OpenAI adapter and HTTP transport
- `provider-claude`: Claude adapter surface (in progress)
- `provider-opencode-zen`: OpenCode Zen adapter over OpenAI-compatible transport