pub struct ProviderConfig {
pub name: String,
pub model: String,
pub base_url: Option<String>,
pub api_key: Option<String>,
pub retry: Option<RetryProviderConfig>,
pub prompt_caching: bool,
pub cascade: Option<CascadeConfig>,
pub circuit: ProviderCircuitConfig,
}
LLM provider configuration.
When running as a cloud-delegated runtime (daemon mode with no agents), the provider section can be omitted — per-request provider keys are used instead.
Fields

name: String

model: String

base_url: Option<String>
Custom API endpoint URL (overrides the provider's default). Useful for self-hosted models, Azure, or proxies.

api_key: Option<String>
Direct API key (alternative to an environment variable). Prefer environment variables in production; this field is intended for testing and local development.

retry: Option<RetryProviderConfig>
Retry configuration for transient LLM API failures.

prompt_caching: bool
Enables Anthropic prompt caching (system prompt + tool definitions). Only effective for the anthropic provider. Defaults to false.

cascade: Option<CascadeConfig>
Model cascading configuration. When enabled, cheaper models are tried first, escalating to the main model only when the confidence gate rejects a response.

circuit: ProviderCircuitConfig
Circuit breaker configuration for this provider. When absent, sensible defaults are used (5 failures → 30 s open, capped at 300 s).
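As a rough illustration of how these fields fit together, the sketch below constructs a ProviderConfig for the anthropic provider with prompt caching enabled. This is a hypothetical, self-contained sketch, not the crate's actual API: the companion types (RetryProviderConfig, CascadeConfig, ProviderCircuitConfig) and their fields are stubbed here for illustration only, and the model name is an assumed placeholder.

```rust
// Hypothetical sketch: the companion config types below are stubs,
// not the crate's real definitions.

#[derive(Debug, Default)]
struct RetryProviderConfig {
    max_attempts: u32, // assumed field, for illustration
}

#[derive(Debug, Default)]
struct CascadeConfig;

#[derive(Debug, Default)]
struct ProviderCircuitConfig;

#[derive(Debug)]
struct ProviderConfig {
    name: String,
    model: String,
    base_url: Option<String>,
    api_key: Option<String>,
    retry: Option<RetryProviderConfig>,
    prompt_caching: bool,
    cascade: Option<CascadeConfig>,
    circuit: ProviderCircuitConfig,
}

// Build a config for the anthropic provider: default endpoint,
// API key resolved from the environment, retries on, no cascade.
fn default_anthropic_config() -> ProviderConfig {
    ProviderConfig {
        name: "anthropic".to_string(),
        model: "claude-model".to_string(), // placeholder model name
        base_url: None,                    // use the provider's default endpoint
        api_key: None,                     // prefer an environment variable
        retry: Some(RetryProviderConfig { max_attempts: 3 }),
        prompt_caching: true,              // only effective for anthropic
        cascade: None,                     // no cheap-model cascade
        circuit: ProviderCircuitConfig::default(), // stubbed defaults
    }
}

fn main() {
    let cfg = default_anthropic_config();
    println!("provider={} model={}", cfg.name, cfg.model);
}
```

Leaving base_url and api_key as None exercises the documented defaults: the provider's stock endpoint is used and the key comes from the environment rather than the config file.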