pub struct GenerateTextBuilder<M, P> { /* private fields */ }
Builder for text generation.
This builder allows configuring the model, prompt, tools, and other settings before executing the generation.
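A typical call chain might look like the following. This is an illustrative sketch only: the entry-point function (`generate_text()`), the model type (`OpenAiModel`), and the `text` field on the result are placeholder names not confirmed by this excerpt.

```rust
// Hypothetical usage sketch; constructor and field names are assumptions.
let result = generate_text()
    .model(OpenAiModel::new("gpt-4o"))   // required before execute()
    .prompt("Summarize the Rust borrow checker in one paragraph.")
    .temperature(0.2)                    // mostly deterministic output
    .max_tokens(256)                     // cap response length
    .execute()
    .await?;
println!("{}", result.text);             // result field name assumed
```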
Implementations
impl GenerateTextBuilder<(), ()>
impl<M, P> GenerateTextBuilder<M, P>
pub fn tools(self, tools: Vec<Arc<dyn Tool>>) -> Self
Configures tools that the language model can invoke during generation.
When tools are provided, the model gains the ability to call them to retrieve information, perform actions, or integrate with external systems. The model must support tool use for this to have any effect.
Arguments
tools - A vector of tool implementations available to the model.
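A sketch of supplying tools, assuming hypothetical `WeatherTool` and `SearchTool` types that implement the `Tool` trait (neither is part of this documentation), and an assumed `generate_text()` entry point:

```rust
use std::sync::Arc;

// Hypothetical tool implementations; only `Vec<Arc<dyn Tool>>` is
// confirmed by the signature above.
let tools: Vec<Arc<dyn Tool>> = vec![
    Arc::new(WeatherTool::new()),
    Arc::new(SearchTool::new()),
];

let builder = generate_text()          // constructor name assumed
    .model(model)
    .prompt("What's the weather in Oslo?")
    .tools(tools)
    .max_steps(4);                     // allow a few tool-call rounds
```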
pub fn max_steps(self, max_steps: u32) -> Self
Sets the maximum number of sequential generation steps.
A step is a single invocation of the language model. This controls how many times the model will be called in succession. Useful for tool use scenarios where the model needs multiple rounds to complete a task.
Arguments
max_steps - The maximum number of model invocations to allow.
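In a tool-use scenario, each round typically consumes one step: the model is invoked, requests a tool, the tool runs, and the model is invoked again with the result. Under that assumption, `max_steps(1)` would effectively disable multi-round tool use. A sketch (the `generate_text()` entry point is an assumed name):

```rust
// Hypothetical: cap the model at five sequential invocations so a
// tool-calling conversation cannot loop indefinitely.
let builder = generate_text()
    .model(model)
    .prompt("Plan a three-step research task and carry it out.")
    .tools(tools)
    .max_steps(5);
```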
pub fn temperature(self, temperature: f32) -> Self
Sets the sampling temperature for controlling output randomness.
Temperature controls the randomness of the model’s outputs. The valid range is typically 0.0 to 2.0, though this depends on the model. Lower values (closer to 0.0) make output more deterministic and focused, while higher values produce more diverse and creative output.
Arguments
temperature - The temperature value to use.
pub fn max_tokens(self, max_tokens: u32) -> Self
Sets the maximum number of tokens to generate in the output.
This controls the maximum length of the model’s response. Generation will stop when the model reaches this limit. The exact interpretation of “tokens” depends on the tokenizer used by the model provider.
Arguments
max_tokens - The maximum number of tokens to generate.
pub fn retry_policy(self, retry_policy: RetryPolicy) -> Self
Sets the retry policy for handling transient API failures.
The retry policy defines how the SDK should handle temporary failures such as rate limiting or service unavailability. Permanent errors are not retried regardless of policy.
Arguments
retry_policy - The retry policy to apply.
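A sketch of attaching a policy. The `RetryPolicy::exponential_backoff` constructor shown here is a hypothetical name, not confirmed by this excerpt; consult the `RetryPolicy` documentation for the actual API.

```rust
// Hypothetical: retry transient failures (rate limits, 5xx) up to
// three times with exponential backoff. Permanent errors are not
// retried regardless of policy.
let policy = RetryPolicy::exponential_backoff(3); // constructor name assumed

let builder = generate_text()          // entry point name assumed
    .model(model)
    .prompt("Hello")
    .retry_policy(policy);
```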
impl<P> GenerateTextBuilder<(), P>
pub fn model<Mod: LanguageModel + 'static>(
    self,
    model: Mod,
) -> GenerateTextBuilder<Arc<dyn LanguageModel>, P>
Sets the language model to use for generation.
This method is required before calling execute() or execute_async().
The model is wrapped in an Arc to allow thread-safe shared ownership.
Arguments
model - An implementation of LanguageModel that will perform the generation.
Returns
A new builder with the model configured, transitioning to a state where only a prompt is required.
impl<M> GenerateTextBuilder<M, ()>
pub fn prompt(
    self,
    prompt: impl Into<String>,
) -> GenerateTextBuilder<M, Vec<Message>>
Sets the prompt from a string.
This convenience method converts the string into a user message and stores it as the conversation prompt. It is equivalent to calling messages() with a single user message.
Arguments
prompt - A string that will be used as the user message.
Returns
A new builder with the prompt configured, transitioning to a state where the builder is ready for execution if a model has been set.
pub fn messages(
    self,
    messages: Vec<Message>,
) -> GenerateTextBuilder<M, Vec<Message>>
Sets the prompt from a vector of conversation messages.
This method allows providing a full conversation history, including previous user messages, assistant responses, and tool results. The messages will be sent to the model in order.
Arguments
messages - A vector of messages representing the conversation history.
Returns
A new builder with the prompt configured, transitioning to a state where the builder is ready for execution if a model has been set.
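A sketch of passing a multi-turn history. The `Message::system` / `Message::user` / `Message::assistant` constructors are assumed names; the excerpt only documents that a `Vec<Message>` is accepted and sent to the model in order.

```rust
// Hypothetical Message constructors; verify against the Message docs.
let history = vec![
    Message::system("You are a concise assistant."),
    Message::user("What is ownership in Rust?"),
    Message::assistant("Ownership is Rust's compile-time memory model..."),
    Message::user("How does borrowing relate to it?"),
];

let builder = generate_text()   // entry point name assumed
    .model(model)
    .messages(history);         // full conversation, sent in order
```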
impl<M, P> GenerateTextBuilder<M, P>
pub fn on_preliminary_tool_result(
    self,
    callback: Arc<dyn Fn(ToolResultPart) -> BoxFuture<'static, ()> + Send + Sync>,
) -> Self
Set a callback for preliminary tool results.
This is useful when you want to receive updates about tool execution (e.g., partial outputs from long-running tools) while the tool is still running, even in a non-streaming generation context.
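A sketch of registering the callback. Since the signature requires a `BoxFuture<'static, ()>`, wrapping an `async move` block in `Box::pin` satisfies the return type; the entry point name and the way `ToolResultPart` is printed are assumptions.

```rust
use std::sync::Arc;

// Receives partial output while a long-running tool is still executing,
// even in a non-streaming generation.
let builder = generate_text()   // entry point name assumed
    .model(model)
    .prompt("Run the long analysis tool.")
    .tools(tools)
    .on_preliminary_tool_result(Arc::new(|part: ToolResultPart| {
        Box::pin(async move {
            // Debug-print assumed; inspect ToolResultPart for real fields.
            println!("partial tool output: {:?}", part);
        })
    }));
```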
impl GenerateTextBuilder<Arc<dyn LanguageModel>, Vec<Message>>
pub async fn execute(self) -> Result<GenerateTextResult>
Execute the text generation.
This will send the prompt to the model, handle tool calls if tools are provided, and return the final result.
Returns
A Result containing GenerateTextResult on success, or a GenerateError on failure.
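A sketch of consuming the result. The `text` field on `GenerateTextResult` is an assumed name; only the `Result<GenerateTextResult>` return type is confirmed by this excerpt.

```rust
// Hypothetical: note that execute() is only available once both a model
// and a prompt have been set, per the impl bounds above.
match builder.execute().await {
    Ok(result) => println!("model said: {}", result.text), // field assumed
    Err(err) => eprintln!("generation failed: {err}"),
}
```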