pub struct StreamingPromptRequest<'a, M>
where
    M: CompletionModel,
{ /* private fields */ }
A builder for creating streaming prompt requests with customizable options. Uses generics to track which options have been set during the build process.
If you expect the agent to call tools repeatedly, make sure you call .multi_turn() to allow more turns: the default depth is 0, meaning only one tool round-trip. Otherwise, awaiting the request (which sends the prompt) can return crate::completion::request::PromptError::MaxDepthError if the agent decides to call tools back to back.
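The generics-based builder pattern and the depth semantics described above can be illustrated with a self-contained sketch. Note that `PromptBuilder` below is a hypothetical stand-in written for this example, not rig's actual implementation:

```rust
// Minimal sketch of a consume-and-return builder whose defaults mirror
// the docs: depth starts at 0 (one tool round-trip) and history is empty.
// `PromptBuilder` is a hypothetical stand-in, not rig's real type.
#[derive(Debug, Clone)]
struct PromptBuilder {
    prompt: String,
    max_depth: usize, // 0 => a MaxDepthError-like failure on back-to-back tool calls
    history: Vec<String>,
}

impl PromptBuilder {
    fn new(prompt: impl Into<String>) -> Self {
        Self { prompt: prompt.into(), max_depth: 0, history: Vec::new() }
    }

    // Chaining by value, like rig's multi_turn/with_history.
    fn multi_turn(mut self, depth: usize) -> Self {
        self.max_depth = depth;
        self
    }

    fn with_history(mut self, history: Vec<String>) -> Self {
        self.history = history;
        self
    }

    // Simulate "sending": fail if the model would need more tool
    // round-trips than the configured depth allows.
    fn send(self, tool_calls_needed: usize) -> Result<String, String> {
        if tool_calls_needed > self.max_depth + 1 {
            Err("MaxDepthError".to_string())
        } else {
            Ok(format!("answered: {}", self.prompt))
        }
    }
}

fn main() {
    // Default depth 0: two back-to-back tool calls fail.
    assert!(PromptBuilder::new("hi").send(2).is_err());
    // Raising the depth with .multi_turn() allows them.
    let ok = PromptBuilder::new("hi").multi_turn(3).send(2);
    assert_eq!(ok.unwrap(), "answered: hi");
    println!("ok");
}
```

The by-value `self` in each setter is what lets the real builder swap generic parameters as options are set, so misuse can be rejected at compile time rather than at runtime.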
Implementations
impl<'a, M> StreamingPromptRequest<'a, M>
where
    M: CompletionModel + 'static,
    <M as CompletionModel>::StreamingResponse: Send + GetTokenUsage,
pub fn new(agent: &'a Agent<M>, prompt: impl Into<Message>) -> Self
Create a new StreamingPromptRequest with the given agent and prompt.
pub fn multi_turn(self, depth: usize) -> Self
Set the maximum depth for multi-turn conversations (i.e., the maximum number of turns an LLM can spend calling tools before writing a text response).
If the maximum number of turns is exceeded, the request will return a crate::completion::request::PromptError::MaxDepthError.
pub fn with_history(self, history: Vec<Message>) -> Self
Add chat history to the prompt request.
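Putting the builder methods together, a request might be chained like this. This is an illustrative sketch only: the surrounding async context and the `agent` and `previous_messages` bindings are assumed, and it is not verified against a specific rig version:

```rust
// Hedged sketch: assumes an existing `agent: Agent<M>` and a
// `previous_messages: Vec<Message>` from an earlier session.
let request = StreamingPromptRequest::new(&agent, "Summarize the report")
    .multi_turn(5)                    // allow up to 5 tool round-trips
    .with_history(previous_messages); // seed the conversation context
// Awaiting sends the request; it may yield
// PromptError::MaxDepthError if the agent exceeds the configured depth.
```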