#[non_exhaustive]
pub struct Agent<M: CompletionModel> {
pub name: Option<String>,
pub model: M,
pub preamble: String,
pub static_context: Vec<Document>,
pub static_tools: Vec<String>,
pub temperature: Option<f64>,
pub max_tokens: Option<u64>,
pub additional_params: Option<Value>,
pub dynamic_context: Vec<(usize, Box<dyn VectorStoreIndexDyn>)>,
pub dynamic_tools: Vec<(usize, Box<dyn VectorStoreIndexDyn>)>,
pub tools: ToolSet,
}
Struct representing an LLM agent. An agent is a completion model combined with a preamble (i.e. a system prompt) and a static set of context documents and tools. All static context documents and tools are provided to the model on every prompt.
Example

use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    // Create an OpenAI client from the OPENAI_API_KEY environment variable.
    let openai = openai::Client::from_env();

    let comedian_agent = openai
        .agent("gpt-4o")
        .preamble("You are a comedian here to entertain the user using humour and jokes.")
        .temperature(0.9)
        .build();

    // Prompting is async, so it must run inside an async runtime (here, Tokio).
    let response = comedian_agent
        .prompt("Entertain me!")
        .await
        .expect("Failed to prompt the agent");
    println!("{response}");
}
Fields (Non-exhaustive)

This struct is marked as non-exhaustive: additional fields may be added in future versions. Therefore, it cannot be constructed in external crates using the traditional Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.

name: Option<String>
Name of the agent used for logging and debugging
model: M
Completion model (e.g.: OpenAI’s gpt-3.5-turbo-1106, Cohere’s command-r)
preamble: String
System prompt
static_context: Vec<Document>
Context documents always available to the agent
static_tools: Vec<String>
Tools that are always available to the agent (identified by their name)
temperature: Option<f64>
Temperature of the model
max_tokens: Option<u64>
Maximum number of tokens for the completion
additional_params: Option<Value>
Additional parameters to be passed to the model
dynamic_context: Vec<(usize, Box<dyn VectorStoreIndexDyn>)>
Dynamic context sources: vector store indexes, each paired with the number of documents to sample from it at prompt time
dynamic_tools: Vec<(usize, Box<dyn VectorStoreIndexDyn>)>
Dynamic tool sources: vector store indexes, each paired with the number of tools to sample from it at prompt time
tools: ToolSet
Actual tool implementations
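The dynamic_context and dynamic_tools fields are normally populated through the agent builder rather than set directly. A hedged sketch, assuming a pre-built vector store index (`index`, construction omitted) implementing VectorStoreIndexDyn, and assuming the builder exposes a `dynamic_context(sample, index)` method as in recent rig versions:

```rust
use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    let openai = openai::Client::from_env();

    // `index` is assumed to be a pre-built vector store index over
    // your documents; building it (embeddings, store) is omitted here.
    let agent = openai
        .agent("gpt-4o")
        .preamble("Answer using the retrieved context documents.")
        // At prompt time, the 3 documents most similar to the prompt
        // are sampled from `index` and appended to the static context.
        .dynamic_context(3, index)
        .build();

    let answer = agent
        .prompt("What does the knowledge base say about agents?")
        .await
        .expect("prompt failed");
    println!("{answer}");
}
```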
Trait Implementations
impl<M: CompletionModel> Chat for Agent<M>
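The Chat implementation lets you pass prior conversation turns alongside a new prompt. A minimal sketch, assuming rig's `Message::user` and `Message::assistant` constructors for building the history, and an `agent` built as in the example above:

```rust
use rig::completion::{Chat, Message};

// Prior turns of the conversation.
let history = vec![
    Message::user("Tell me a joke about compilers."),
    Message::assistant("Why did the compiler quit? Too many unresolved issues."),
];

// The new prompt is answered in the context of the prior turns.
let reply = agent.chat("Another one, please!", history).await?;
```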
impl<M: CompletionModel> Completion<M> for Agent<M>
async fn completion(
    &self,
    prompt: impl Into<Message> + Send,
    chat_history: Vec<Message>,
) -> Result<CompletionRequestBuilder<M>, CompletionError>
Generates a completion request builder for the given prompt and chat_history. This function is meant to be called by the user to further customize the request at prompt time before sending it.

impl<M: CompletionModel> Prompt for &Agent<M>

impl<M: CompletionModel> Prompt for Agent<M>
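Prompt and Chat are convenience entry points; when a single request needs different parameters, `completion` hands back the underlying builder before anything is sent. A hedged sketch, assuming the request builder exposes `temperature` and `send` as elsewhere in rig's completion API:

```rust
use rig::completion::Completion;

// Build the request, override a parameter for this call only,
// then send it manually.
let response = agent
    .completion("Summarize this in one sentence.", vec![])
    .await?            // -> CompletionRequestBuilder<M>
    .temperature(0.0)  // per-request override of the agent's default
    .send()
    .await?;
```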
impl<M> StreamingChat<M, <M as CompletionModel>::StreamingResponse> for Agent<M>
fn stream_chat(
    &self,
    prompt: impl Into<Message> + Send,
    chat_history: Vec<Message>,
) -> StreamingPromptRequest<'_, M>
Stream a chat with history to the model
impl<M: CompletionModel> StreamingCompletion<M> for Agent<M>
async fn stream_completion(
    &self,
    prompt: impl Into<Message> + Send,
    chat_history: Vec<Message>,
) -> Result<CompletionRequestBuilder<M>, CompletionError>
Generate a streaming completion from a request
impl<M> StreamingPrompt<M, <M as CompletionModel>::StreamingResponse> for Agent<M>
fn stream_prompt(
    &self,
    prompt: impl Into<Message> + Send,
) -> StreamingPromptRequest<'_, M>
Stream a simple prompt to the model
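stream_prompt returns a request that resolves to a token stream rather than a complete response. A hedged sketch, assuming the awaited request yields a stream of text chunks consumable via `StreamExt::next` from the futures crate (in practice the items may be richer content variants):

```rust
use futures::StreamExt;

// Await the streaming request, then print chunks as they arrive.
let mut stream = agent.stream_prompt("Tell me a long story.").await;
while let Some(chunk) = stream.next().await {
    match chunk {
        Ok(text) => print!("{text}"),
        Err(e) => eprintln!("stream error: {e}"),
    }
}
```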
impl<M: CompletionModel> Tool for Agent<M>
type Error = PromptError
The error type of the tool.
async fn definition(&self, _prompt: String) -> ToolDefinition
A method returning the tool definition. The user prompt can be used to
tailor the definition to the specific use case.
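Because Agent itself implements Tool, one agent can be registered as a tool of another, which enables simple multi-agent composition. A minimal sketch, assuming the builder's `tool` method accepts any Tool implementation:

```rust
// A specialist agent that will act as a tool.
let translator = openai
    .agent("gpt-4o")
    .preamble("Translate the user's input into French.")
    .build();

// An orchestrator agent that can delegate to the specialist.
let orchestrator = openai
    .agent("gpt-4o")
    .preamble("Use your tools to answer; delegate translation requests.")
    .tool(translator)
    .build();
```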
Auto Trait Implementations
impl<M> Freeze for Agent<M> where M: Freeze
impl<M> !RefUnwindSafe for Agent<M>
impl<M> Send for Agent<M>
impl<M> Sync for Agent<M>
impl<M> Unpin for Agent<M> where M: Unpin
impl<M> !UnwindSafe for Agent<M>
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.