#[non_exhaustive]
pub struct Agent<M>
where
    M: CompletionModel,
{
pub name: Option<String>,
pub description: Option<String>,
pub model: Arc<M>,
pub preamble: Option<String>,
pub static_context: Vec<Document>,
pub temperature: Option<f64>,
pub max_tokens: Option<u64>,
pub additional_params: Option<Value>,
pub tool_server_handle: ToolServerHandle,
pub dynamic_context: Arc<RwLock<Vec<(usize, Box<dyn VectorStoreIndexDyn>)>>>,
pub tool_choice: Option<ToolChoice>,
}
Struct representing an LLM agent. An agent combines a completion model with a preamble (i.e. a system prompt), a static set of context documents, and tools. All static context documents and tools are always provided to the agent when it is prompted.
§Example
use rig::{completion::Prompt, providers::openai};

let openai = openai::Client::from_env();

let comedian_agent = openai
    .agent("gpt-4o")
    .preamble("You are a comedian here to entertain the user using humour and jokes.")
    .temperature(0.9)
    .build();

let response = comedian_agent.prompt("Entertain me!")
    .await
    .expect("Failed to prompt the agent");

§Fields (Non-exhaustive)
This struct is marked as non-exhaustive: it cannot be constructed in external crates using the traditional Struct { .. } syntax; it cannot be matched against without a wildcard ..; and struct update syntax will not work.

name: Option<String>
    Name of the agent, used for logging and debugging
description: Option<String>
    Agent description. Primarily useful when using sub-agents as part of an agent workflow and converting agents to other formats.

model: Arc<M>
    Completion model (e.g. OpenAI’s gpt-3.5-turbo-1106, Cohere’s command-r)

preamble: Option<String>
    System prompt

static_context: Vec<Document>
    Context documents always available to the agent

temperature: Option<f64>
    Temperature of the model

max_tokens: Option<u64>
    Maximum number of tokens for the completion

additional_params: Option<Value>
    Additional parameters to be passed to the model

tool_server_handle: ToolServerHandle

dynamic_context: Arc<RwLock<Vec<(usize, Box<dyn VectorStoreIndexDyn>)>>>
    List of vector store indexes, each paired with the number of documents to sample from it

tool_choice: Option<ToolChoice>
    Whether or not the underlying LLM should be forced to use a tool before providing a response.
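Since the struct is non-exhaustive, these fields are normally set through the agent builder rather than constructed directly. A minimal sketch of wiring the optional fields above through the builder; the `context` and `max_tokens` method names are assumptions inferred from the field names and should be checked against the rig version in use:

```rust
use rig::providers::openai;

// Sketch: each builder call below maps onto one of the fields listed above.
// `context` and `max_tokens` are assumed builder method names.
let openai = openai::Client::from_env();

let agent = openai
    .agent("gpt-4o")
    .preamble("You are a helpful assistant.") // `preamble` field
    .context("The user's name is Alice.")     // appended to `static_context`
    .temperature(0.7)                         // `temperature` field
    .max_tokens(512)                          // `max_tokens` field
    .build();
```

Fields left unset on the builder remain `None` on the resulting `Agent`, in which case the provider's defaults apply.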
§Trait Implementations

impl<M> Chat for Agent<M>
where
    M: CompletionModel,

impl<M> Completion<M> for Agent<M>
where
    M: CompletionModel,
async fn completion(
    &self,
    prompt: impl Into<Message> + WasmCompatSend,
    chat_history: Vec<Message>,
) -> Result<CompletionRequestBuilder<M>, CompletionError>

Generates a completion request builder from the given prompt and chat_history.
This function is meant to be called by the user to further customize the
request at prompt time before sending it.
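The two-step flow this enables can be sketched as follows: `completion` returns the builder, the caller overrides settings, then dispatches it. This is a sketch under assumptions; the `temperature` and `send` methods on `CompletionRequestBuilder` are assumed from the builder pattern and should be verified against the rig version in use:

```rust
use rig::completion::{Completion, Message};
use rig::providers::openai;

// Sketch: customize a completion request at prompt time before sending it.
// Assumes `CompletionRequestBuilder` exposes `temperature` and `send`.
async fn customized_prompt() -> Result<(), Box<dyn std::error::Error>> {
    let agent = openai::Client::from_env().agent("gpt-4o").build();

    let request = agent
        .completion("Summarize Rust's ownership model.", Vec::<Message>::new())
        .await?            // -> CompletionRequestBuilder<M>
        .temperature(0.2); // override the agent's default for this request only

    let _response = request.send().await?;
    Ok(())
}
```

This is why `completion` returns a builder rather than a response: per-request overrides compose on top of the agent's stored defaults without mutating the `Agent` itself.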