pub struct ChatAgent { /* private fields */ }
A simple chat agent that processes messages through an LLM provider with tool support.
This is the framework’s ready-to-use agent for text message -> response flows.
It manages conversation history, streams responses from the provider, and
automatically dispatches tool calls through a BuiltinToolExecutor.
§Example
use brainwires_agents::ChatAgent;
use brainwires_tools::{BuiltinToolExecutor, ToolRegistry};
use brainwires_core::{ChatOptions, ToolContext};
use std::sync::Arc;
let provider = /* create a provider */;
let registry = ToolRegistry::with_builtins();
let context = ToolContext::default();
let executor = Arc::new(BuiltinToolExecutor::new(registry, context));
let options = ChatOptions::default();
let mut agent = ChatAgent::new(provider, executor, options)
.with_system_prompt("You are a helpful assistant.")
.with_max_tool_rounds(5);
let response = agent.process_message("Hello!").await?;
println!("{}", response);

Implementations§
impl ChatAgent

pub fn new(
    provider: Arc<dyn Provider>,
    executor: Arc<BuiltinToolExecutor>,
    options: ChatOptions,
) -> Self
Create a new ChatAgent.
Defaults max_tool_rounds to 10.
pub fn with_max_tool_rounds(self, rounds: usize) -> Self
Set the maximum number of tool-call rounds before the agent stops.
pub fn with_pre_execute_hook(self, hook: Arc<dyn ToolPreHook>) -> Self
Attach a pre-execution hook that can allow or reject tool calls before they run.
pub fn with_system_prompt(self, prompt: &str) -> Self
Add a system prompt as the first message in the conversation.
If messages already exist, the system message is inserted at position 0.
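The position-0 insertion rule can be sketched with plain Vec logic; the Message enum below is a stand-in for the crate's real message type, not its actual API:

```rust
// Toy sketch of the system-prompt insertion rule (hypothetical types).
#[derive(Debug, Clone, PartialEq)]
enum Message {
    System(String),
    User(String),
}

fn with_system_prompt(mut messages: Vec<Message>, prompt: &str) -> Vec<Message> {
    // The system message always lands at index 0, whether or not
    // the conversation already contains messages.
    messages.insert(0, Message::System(prompt.to_string()));
    messages
}
```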
pub async fn process_message(&mut self, input: &str) -> Result<String>
Process a user message and return the final assistant text response.
This is the core completion loop:
- Adds the user message to history
- Streams the provider response, collecting text and tool calls
- If tool calls are present, executes them and loops
- Returns the final accumulated text once no more tool calls remain (or max_tool_rounds is reached)
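The shape of that loop can be sketched with a stubbed provider in place of a real LLM; the Turn enum and run_loop function below are illustrative only, not the crate's internals:

```rust
// Toy sketch of the completion loop: keep executing tool rounds until
// the provider returns plain text, or the round cap is hit.
enum Turn {
    Text(String),
    ToolCalls(Vec<String>), // tool names the model asked to run
}

fn run_loop(mut provider: impl FnMut(usize) -> Turn, max_tool_rounds: usize) -> String {
    let mut text = String::new();
    for round in 0..=max_tool_rounds {
        match provider(round) {
            Turn::Text(t) => {
                text.push_str(&t);
                break; // no more tool calls: return the accumulated text
            }
            Turn::ToolCalls(calls) if round < max_tool_rounds => {
                for _call in calls {
                    // a real agent dispatches each call through the
                    // executor and appends the results to history here
                }
            }
            Turn::ToolCalls(_) => break, // max_tool_rounds reached
        }
    }
    text
}
```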
pub async fn process_message_streaming<F>(
    &mut self,
    input: &str,
    on_chunk: F,
) -> Result<String>
Process a user message with streaming — calls on_chunk for each text
fragment as it arrives from the provider.
Returns the full accumulated text once the completion loop finishes.
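The streaming contract (callback per fragment, full text returned at the end) can be sketched with a plain slice of chunks standing in for the provider stream:

```rust
// Hypothetical sketch of the on_chunk contract, not the crate's code.
fn stream_text<F: FnMut(&str)>(chunks: &[&str], mut on_chunk: F) -> String {
    let mut full = String::new();
    for chunk in chunks {
        on_chunk(chunk);      // the caller sees each fragment as it arrives
        full.push_str(chunk); // while the agent accumulates the total
    }
    full
}
```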
pub fn restore_messages(&mut self, messages: Vec<Message>)
Replace the entire message history with the provided messages.
This is used by session persistence to restore a previously saved conversation when an agent session is recreated.
pub fn clear_history(&mut self)
Clear all messages (including any system prompt).
pub fn trim_history(&mut self, max_messages: usize)
Keep only the last max_messages messages, preserving the system prompt
at position 0 if one exists.
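The trim rule can be sketched with tuples standing in for messages; (bool, String) below means (is_system, content) and is a toy representation, not the crate's types:

```rust
// Toy sketch: keep the last `max_messages` entries, but pin a system
// message at index 0 if the history starts with one.
fn trim_history(history: &mut Vec<(bool, String)>, max_messages: usize) {
    let has_system = history.first().map(|m| m.0).unwrap_or(false);
    if history.len() <= max_messages {
        return; // already within the budget
    }
    if has_system {
        let system = history[0].clone();
        // The system prompt consumes one slot of the budget.
        let tail_len = max_messages.saturating_sub(1);
        let tail: Vec<(bool, String)> = history[history.len() - tail_len..].to_vec();
        history.clear();
        history.push(system);
        history.extend(tail);
    } else {
        let tail = history[history.len() - max_messages..].to_vec();
        *history = tail;
    }
}
```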
pub fn message_count(&self) -> usize
Return the number of messages in the conversation.
pub fn cumulative_usage(&self) -> &Usage
Return the accumulated token usage for this agent session.
Counts prompt + completion tokens across all completions. Updated
whenever the provider emits a StreamChunk::Usage event.
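The accumulation itself is simple addition per usage event; the Usage struct below is a toy stand-in with assumed field names, not the crate's real type:

```rust
// Hypothetical sketch of cumulative usage: each usage event's counts
// are added onto the session total.
#[derive(Default, Debug, PartialEq)]
struct Usage {
    prompt_tokens: u64,
    completion_tokens: u64,
}

impl Usage {
    fn add(&mut self, other: &Usage) {
        self.prompt_tokens += other.prompt_tokens;
        self.completion_tokens += other.completion_tokens;
    }
}
```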
pub fn reset_usage(&mut self)
Reset the cumulative token usage counter.
pub async fn compact_history(&mut self) -> Result<()>
Compact conversation history by trimming older messages.
This is a simple, LLM-free compaction that keeps the system prompt
(if any) and a fixed number of the most recent messages. For LLM-powered
summarisation, use the DreamSummarizer from brainwires-autonomy.