pub struct LlmConfig { /* private fields */ }
Reusable LLM configuration. Each agent that needs an LLM holds its own
LlmConfig and calls LlmConfig::request to start a chat request.
Build one with LlmConfig::builder for explicit settings, or with
LlmConfig::from_env to read from AGENT_LINE_* environment variables.
Multiple agents can share one config, or each can hold its own (a cheap,
fast model for one step, a strong reasoning model for another), as in the
sketch below.
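For example, a minimal sketch of two agents holding separate configs
derived from the environment. The agent types and model names here are
invented for illustration and are not part of this crate:

```rust
// Illustrative only: `Planner` and `Summarizer` are made-up agent types,
// and the model names are placeholders.
struct Planner { llm: LlmConfig }
struct Summarizer { llm: LlmConfig }

// A strong reasoning model for planning, a cheap fast model for summaries.
let planner = Planner { llm: LlmConfig::from_env().with_model("qwen3:32b") };
let summarizer = Summarizer { llm: LlmConfig::from_env().with_model("llama3.2:3b") };
```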
Implementations
impl LlmConfig
pub fn builder() -> LlmConfigBuilder
Start building an explicit LLM configuration.
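A hypothetical sketch of explicit construction. The setter names and the
finishing build() call are assumptions, since LlmConfigBuilder's methods
are not shown on this page:

```rust
// Hypothetical builder calls: `provider`, `llm_url`, `model`, `api_key`,
// and `build` are assumed names, not confirmed by this documentation.
let config = LlmConfig::builder()
    .provider("openai")
    .llm_url("https://api.openai.com/v1")
    .model("gpt-4o-mini")
    .api_key(std::env::var("OPENAI_API_KEY").unwrap())
    .build();
```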
pub fn from_env() -> Self
Build an LLM configuration from AGENT_LINE_* environment variables.
Reads the following variables:

- AGENT_LINE_PROVIDER
- AGENT_LINE_LLM_URL
- AGENT_LINE_MODEL
- AGENT_LINE_API_KEY
- AGENT_LINE_NUM_CTX (Ollama context window)
- AGENT_LINE_MAX_TOKENS (OpenAI/Anthropic response cap; falls back to
  AGENT_LINE_NUM_CTX if unset)

Defaults to a local Ollama configuration when nothing is set.
If AGENT_LINE_DEBUG is set, the resolved config is logged to stderr
once.
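For instance, a process launched with these variables set picks them up
at startup; with nothing set, the local Ollama defaults apply:

```rust
// Typically set in the shell before launch, e.g.:
//   AGENT_LINE_PROVIDER=openai \
//   AGENT_LINE_MODEL=gpt-4o-mini \
//   AGENT_LINE_API_KEY=... \
//   AGENT_LINE_MAX_TOKENS=4096 cargo run
let config = LlmConfig::from_env(); // falls back to local Ollama defaults
```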
pub fn with_model(self, model: impl Into<String>) -> Self
Return a copy of this config with a different model name. All other fields (provider, base URL, API key, token budgets) are preserved.
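For example (the model name is a placeholder):

```rust
let base = LlmConfig::from_env();
// Swap in a different model; provider, base URL, API key, and token
// budgets carry over unchanged.
let reasoning = base.with_model("deepseek-r1:14b");
```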
pub fn request(&self) -> LlmRequestBuilder
Start building an LLM chat request that uses this config.
Each call creates a fresh LlmRequestBuilder; chain .system(),
.user(), and .send() on the result. The config itself is not
consumed, so an agent can call self.llm.request() repeatedly.
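A sketch of a typical call from inside an agent, assuming send() is async
and fallible; neither is confirmed on this page, so adjust the .await and
? to the actual signature:

```rust
// `.await` and `?` assume an async `send()` returning a Result; the
// prompt strings are placeholders.
let reply = self.llm
    .request()
    .system("You are a concise code reviewer.")
    .user("Summarize this diff in one sentence.")
    .send()
    .await?;
```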