pub struct AgentBuilder { /* private fields */ }

§Implementations

impl AgentBuilder
pub fn new(llm: Arc<dyn StreamingModelProvider>) -> Self
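A minimal construction sketch; `MyProvider` is a hypothetical type standing in for any `StreamingModelProvider` implementation, not part of this API:

```rust
use std::sync::Arc;

// Hypothetical provider type; any StreamingModelProvider impl works here.
let llm: Arc<dyn StreamingModelProvider> = Arc::new(MyProvider::default());
let builder = AgentBuilder::new(llm);
```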
pub async fn from_spec(
    spec: &AgentSpec,
    base_prompts: Vec<Prompt>,
) -> Result<Self>
Create a builder from a resolved AgentSpec.
The LLM provider is derived from spec.model via ModelProviderParser.
base_prompts are prepended before the spec’s own prompts.
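A hedged sketch of the flow; `load_agent_spec` and the `Prompt::from` constructor shown are assumptions for illustration, not documented API:

```rust
// `load_agent_spec` is a hypothetical helper that parses a spec file.
let spec: AgentSpec = load_agent_spec("reviewer.toml")?;

// These prompts are prepended before the prompts defined in the spec.
let base_prompts = vec![Prompt::from("You are a careful code reviewer.")];

// The LLM provider is resolved from `spec.model` internally.
let builder = AgentBuilder::from_spec(&spec, base_prompts).await?;
```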
pub fn system_prompt(self, prompt: Prompt) -> Self
Append a prompt to the system prompt.
Multiple prompts are concatenated with double newlines.
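For example, chaining two calls (sketch; the `Prompt::from` constructor is an assumption):

```rust
let builder = AgentBuilder::new(llm)
    .system_prompt(Prompt::from("You are a helpful assistant."))
    .system_prompt(Prompt::from("Answer concisely."));
// The resulting system prompt is the two texts joined by "\n\n".
```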
pub fn tools(self, tx: Sender<McpCommand>, tools: Vec<ToolDefinition>) -> Self
pub fn tool_timeout(self, timeout: Duration) -> Self
Set the timeout for tool execution.
If a tool does not return a result within this duration, it will be marked as failed and the agent will continue processing.
Default: 20 minutes
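A sketch of overriding the default (assuming `llm` is the provider handle from above):

```rust
use std::time::Duration;

// Raise the timeout from the 20-minute default to 45 minutes for
// long-running tools; a tool exceeding it is marked failed and the
// agent continues processing.
let builder = AgentBuilder::new(llm).tool_timeout(Duration::from_secs(45 * 60));
```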
pub fn compaction(self, config: CompactionConfig) -> Self
Configure context compaction settings.
By default, agents automatically compact context when token usage exceeds 85% of the context window, preventing overflow during long-running tasks.
§Examples
// Custom threshold
agent(llm).compaction(CompactionConfig::with_threshold(0.9))
// Disable compaction entirely
agent(llm).compaction(CompactionConfig::disabled())
// Full customization
agent(llm).compaction(
CompactionConfig::with_threshold(0.85)
.keep_recent_tool_results(3)
.min_messages(20)
)

pub fn disable_compaction(self) -> Self
Disable context compaction entirely.
Overflow errors from the model will be surfaced directly to callers.
pub fn max_auto_continues(self, max: u32) -> Self
Configure the maximum number of auto-continue attempts.
When the LLM stops without making tool calls, the agent may inject a continuation prompt and restart the LLM stream for resumable stop reasons (for example, token length limits).
This setting limits how many times the agent will attempt to continue
before giving up and returning AgentMessage::Done.
Default: 3
§Example
// Allow up to 5 auto-continue attempts
agent(llm).max_auto_continues(5)
// Disable auto-continue entirely
agent(llm).max_auto_continues(0)

pub fn prompt_cache_key(self, key: String) -> Self
Set a prompt cache key for LLM provider request routing.
This is typically a session ID (UUID) that remains stable across all turns within a conversation, improving prompt cache hit rates.
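A sketch of typical use; the literal `session_id` value is an assumed stable identifier, e.g. a UUID generated once per session:

```rust
// Reuse one key for every turn of the same conversation so the
// provider can route requests to the same prompt cache.
let session_id = String::from("3f2b6c84-9d1e-4f7a-8c05-2a61d9e0b417");
let builder = AgentBuilder::new(llm).prompt_cache_key(session_id);
```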
pub fn messages(self, messages: Vec<ChatMessage>) -> Self
Pre-populate the context with conversation history (e.g. from a restored session).
These messages are inserted after the system prompt.
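A restore sketch; `load_history` is a hypothetical helper standing in for whatever session store persists the conversation:

```rust
// Assumed helper that returns the saved Vec<ChatMessage> for a session.
let history: Vec<ChatMessage> = load_history("session-1234")?;

// The history is inserted after the system prompt.
let builder = AgentBuilder::new(llm).messages(history);
```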