pub struct Backend {
pub kind: Kind,
pub base_url: String,
pub model: String,
/* private fields */
}
A multi-provider LLM backend.
Supports Ollama (via its OpenAI-compatible API), OpenAI, and Anthropic.
All three providers use the same internal Message/ToolCall format —
Anthropic’s content-block schema is translated at the wire boundary.
§Example
use agnt_net::Backend;
let ollama = Backend::ollama("gemma4:e4b");
let openai = Backend::openai("gpt-4o-mini", "sk-...");
let anthropic = Backend::anthropic("claude-sonnet-4-6", "sk-ant-...");
§Fields
§kind: Kind
Which provider schema to use on the wire.
§base_url: String
Base URL for the provider’s API.
§model: String
Model identifier passed in every request.
§Implementations
impl Backend
pub fn ollama(model: &str) -> Backend
Create a backend pointing at a local Ollama server.
Uses http://localhost:11434/v1 by default (the OpenAI-compatible endpoint).
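Since base_url is a public field, the default can be overridden to reach a remote Ollama host. A minimal sketch, where the host address is illustrative:
use agnt_net::Backend;

// Illustrative host; keep the /v1 suffix so the OpenAI-compatible endpoint is used.
let mut backend = Backend::ollama("gemma4:e4b");
backend.base_url = "http://192.168.1.50:11434/v1".to_string();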
pub fn anthropic(model: &str, api_key: &str) -> Backend
Create a backend for Anthropic’s native API.
Message format is automatically translated to Anthropic’s content-block
schema at the wire boundary — you still work with the OpenAI-style
Message type internally.
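A minimal usage sketch; sourcing the key from an ANTHROPIC_API_KEY environment variable is an assumption here, not something this constructor requires:
use agnt_net::Backend;

// The env var name is a common convention, assumed for this sketch.
let api_key = std::env::var("ANTHROPIC_API_KEY").expect("ANTHROPIC_API_KEY not set");
let backend = Backend::anthropic("claude-sonnet-4-6", &api_key);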
pub fn with_timeouts(
    self,
    connect: Duration,
    read: Duration,
) -> Result<Backend, String>
Override the HTTP timeouts for this backend instance.
Builds a fresh ureq Agent with the supplied connect/read timeouts and
attaches it to this Backend. Subsequent requests made via this
instance will use the custom Agent instead of the process-wide shared
one.
Returns an error if TLS initialization fails.
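A sketch of a typical call; the timeout values are arbitrary:
use std::time::Duration;
use agnt_net::Backend;

// with_timeouts consumes self and returns Result, since building the Agent can fail.
let backend = Backend::ollama("gemma4:e4b")
    .with_timeouts(Duration::from_secs(5), Duration::from_secs(60))
    .expect("TLS initialization failed");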