pub struct PromptRequest<'a, S: PromptType, M: CompletionModel> { /* private fields */ }
A builder for creating prompt requests with customizable options. Uses generics to track which options have been set during the build process.
If you expect the agent to call tools repeatedly, make sure you use .multi_turn() to allow additional turns: the default is 0, meaning only a single tool round-trip. Otherwise, awaiting the request (which sends it) can return crate::completion::request::PromptError::MaxDepthError if the agent decides to call tools back to back.
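As a rough sketch of the basic flow (the import paths below are assumptions and may need adjusting to wherever PromptRequest lives in your version of the crate), a request is built from an existing Agent and sent simply by awaiting it:

```rust
use rig::agent::{prompt_request::PromptRequest, Agent}; // paths assumed; adjust to your rig version
use rig::completion::{CompletionModel, PromptError};

// Sketch only: `agent` is assumed to be built elsewhere. With the default of 0
// extra turns, at most one tool round-trip is allowed before a text answer
// must be produced.
async fn ask<M: CompletionModel>(agent: &Agent<M>) -> Result<String, PromptError> {
    PromptRequest::new(agent, "What is the capital of France?").await
}
```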
Implementations
impl<'a, M: CompletionModel> PromptRequest<'a, Standard, M>
pub fn new(agent: &'a Agent<M>, prompt: impl Into<Message>) -> Self
Create a new PromptRequest with the given prompt and model
pub fn extended_details(self) -> PromptRequest<'a, Extended, M>
Enable returning extended details for responses (includes aggregated token usage).
Note: This changes the type of the response from .send to return a PromptResponse struct instead of a simple String. This is useful for tracking token usage across multiple turns of conversation.
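For illustration, a hedged sketch of what that might look like (import paths are assumptions, and the fields of PromptResponse are not spelled out here, so the example only binds the value):

```rust
use rig::agent::{prompt_request::PromptRequest, Agent}; // paths assumed; adjust to your rig version
use rig::completion::{CompletionModel, PromptError};

async fn ask_with_usage<M: CompletionModel>(agent: &Agent<M>) -> Result<(), PromptError> {
    // Because of .extended_details(), awaiting yields a PromptResponse rather than a String.
    let response = PromptRequest::new(agent, "Explain Rust lifetimes briefly")
        .extended_details()
        .multi_turn(3)
        .await?;
    // `response` carries the final text plus aggregated token usage across turns;
    // see the PromptResponse docs for its exact fields.
    let _ = response;
    Ok(())
}
```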
impl<'a, S: PromptType, M: CompletionModel> PromptRequest<'a, S, M>
pub fn multi_turn(self, depth: usize) -> PromptRequest<'a, S, M>
Set the maximum depth for multi-turn conversations (i.e., the maximum number of turns an LLM can spend calling tools before writing a text response).
If the maximum turn number is exceeded, the request will return a crate::completion::request::PromptError::MaxDepthError.
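A hedged sketch of raising the depth and handling the failure case, without relying on the exact shape of the error variant:

```rust
use rig::agent::{prompt_request::PromptRequest, Agent}; // paths assumed; adjust to your rig version
use rig::completion::{CompletionModel, PromptError};

async fn ask_with_tools<M: CompletionModel>(agent: &Agent<M>) -> Result<String, PromptError> {
    let result = PromptRequest::new(agent, "Book a table and email me the confirmation")
        // Allow up to 10 tool round-trips before a plain-text answer is required.
        .multi_turn(10)
        .await;

    if let Err(err) = &result {
        // With too small a depth this can be a MaxDepthError; log it and decide whether
        // to retry with a larger depth or surface it to the caller.
        eprintln!("prompt failed: {err}");
    }
    result
}
```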
pub fn with_history(self, history: &'a mut Vec<Message>) -> PromptRequest<'a, S, M>
Add chat history to the prompt request
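A sketch of threading existing history through the request; the mutable borrow suggests the request can also record the new exchange back into the same Vec, but treat that as an assumption and verify it against the crate's behaviour:

```rust
use rig::agent::{prompt_request::PromptRequest, Agent}; // paths assumed; adjust to your rig version
use rig::completion::{CompletionModel, Message, PromptError};

async fn ask_in_context<M: CompletionModel>(
    agent: &Agent<M>,
    history: &mut Vec<Message>, // prior turns, owned by the caller
) -> Result<String, PromptError> {
    // The request reads the existing history; because the borrow is mutable it can
    // presumably append the new exchange to it as well (assumption).
    PromptRequest::new(agent, "And what changed in the second quarter?")
        .with_history(history)
        .multi_turn(2)
        .await
}
```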
Trait Implementations
impl<'a, M: CompletionModel> IntoFuture for PromptRequest<'a, Extended, M>
type Output = Result<PromptResponse, PromptError>
type IntoFuture = Pin<Box<dyn Future<Output = <PromptRequest<'a, Extended, M> as IntoFuture>::Output> + Send + 'a>>
fn into_future(self) -> Self::IntoFuture
impl<'a, M: CompletionModel> IntoFuture for PromptRequest<'a, Standard, M>
Due to RFC 2515, we have to use a BoxFuture for the IntoFuture implementation. In the future, we should be able to use impl Future<...> directly via the associated type.