pub struct AgentLoop { /* private fields */ }
The core ReAct agent loop.
Drives an LLM through iterative reasoning and tool execution until the model produces a final text answer or the iteration limit is reached.
The loop does not own the LLM client; instead, run() takes
&dyn LlmClient so the same loop can be reused with different backends.
Implementations
impl AgentLoop
pub fn builder() -> AgentLoopBuilder
Create a builder with default settings.
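Typical construction goes through the builder. A minimal sketch; the setter name `max_iterations` and the finisher `build()` are assumptions for illustration — consult AgentLoopBuilder for the actual methods.

```rust
// Hypothetical builder calls -- method names are illustrative, not verified.
let agent = AgentLoop::builder()
    .max_iterations(10)
    .build();
```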
pub fn channel_config(&self) -> &ChannelConfig
Returns the channel configuration for this agent loop.
Used by AgentSession to create bounded channels with the configured capacities and policies.
pub async fn run(
    &self,
    llm: &dyn LlmClient,
    messages: Vec<Message>,
    on_event: impl Fn(AgentEvent) + Send + Sync,
) -> Result<AgentResult>
Run the agent loop to completion.
Iteratively calls the LLM, executes any requested tools, and feeds
tool results back into the conversation until the LLM produces a
final text response or max_iterations is exceeded.
The on_event callback is invoked for each notable event (tool
started, tool completed, text chunk). Pass |_| {} for a no-op.
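A minimal calling sketch, assuming some `OpenAiClient` type implementing LlmClient and a `Message::user` constructor — both assumptions, not part of this API:

```rust
// Any LlmClient implementation works here; the loop does not own the client.
let llm = OpenAiClient::new(api_key);
let messages = vec![Message::user("Summarize the report.")];

// No-op event callback; swap in a real handler to observe tool activity.
let result = agent.run(&llm, messages, |_| {}).await?;
println!("{:?}", result);
```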
pub async fn run_with_ops(
    &self,
    llm: &dyn LlmClient,
    messages: Vec<Message>,
    ops_rx: Option<Receiver<AgentOp>>,
    on_event: impl Fn(AgentEvent) + Send + Sync,
) -> Result<AgentResult>
Run with an optional interactive operations channel.
Pass ops_rx: None for behavior equivalent to run.
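A sketch of wiring the operations channel. The `channel` constructor and an `AgentOp::Cancel` variant are assumptions for illustration; use whatever channel type and op variants the crate actually exports.

```rust
// Bounded ops channel (assumed constructor and capacity).
let (ops_tx, ops_rx) = channel(8);

// Another task can interrupt the loop mid-run, e.g.:
// ops_tx.send(AgentOp::Cancel).await?;

let result = agent
    .run_with_ops(&llm, messages, Some(ops_rx), |_| {})
    .await?;
```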
pub async fn run_streaming(
    &self,
    llm: &dyn LlmClient,
    messages: Vec<Message>,
    on_event: impl Fn(AgentEvent) + Send + Sync,
) -> Result<AgentResult>
Streaming variant of run.
Uses LlmClient::chat_stream so that on_event receives
AgentEvent::TextChunk for each text delta as it arrives, plus
AgentEvent::TextDone with the full accumulated text and
AgentEvent::IterationStarted at each reasoning iteration.
Tool execution is identical to the non-streaming path.
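A streaming handler can accumulate the text deltas itself. Below is a self-contained sketch: only the variant names IterationStarted, TextChunk, and TextDone come from this page; their payload types, and the `Collector` helper, are assumptions. Because `on_event` is `Fn` (not `FnMut`) and must be `Send + Sync`, accumulating across calls needs interior mutability, hence the `Mutex`.

```rust
use std::sync::Mutex;

// Assumed payloads; only the variant names are documented above.
pub enum AgentEvent {
    IterationStarted(usize),
    TextChunk(String),
    TextDone(String),
}

// Accumulates streamed deltas behind a Mutex so a `Fn + Send + Sync`
// closure can mutate it.
pub struct Collector {
    buf: Mutex<String>,
}

impl Collector {
    pub fn new() -> Self {
        Collector { buf: Mutex::new(String::new()) }
    }

    // The closure passed to run_streaming would forward events here,
    // e.g. `|ev| collector.on_event(ev)`.
    pub fn on_event(&self, ev: AgentEvent) {
        match ev {
            AgentEvent::IterationStarted(n) => eprintln!("-- iteration {n}"),
            AgentEvent::TextChunk(delta) => self.buf.lock().unwrap().push_str(&delta),
            AgentEvent::TextDone(_full) => {} // full accumulated text also arrives here
        }
    }

    pub fn text(&self) -> String {
        self.buf.lock().unwrap().clone()
    }
}
```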