pub struct CreateResponseArgs { /* private fields */ }
Builder for `CreateResponse`.
Implementations

impl CreateResponseArgs
pub fn input<VALUE: Into<Input>>(&mut self, value: VALUE) -> &mut Self
Text, image, or file inputs to the model, used to generate a response.
pub fn model<VALUE: Into<String>>(&mut self, value: VALUE) -> &mut Self
Model ID used to generate the response, like `gpt-4o`.
OpenAI offers a wide range of models with different capabilities,
performance characteristics, and price points.
pub fn include<VALUE: Into<Vec<String>>>(&mut self, value: VALUE) -> &mut Self
Specify additional output data to include in the model response.

Supported values:

- `file_search_call.results`: Include the search results of the file search tool call.
- `message.input_image.image_url`: Include image URLs from the input message.
- `computer_call_output.output.image_url`: Include image URLs from the computer call output.
- `reasoning.encrypted_content`: Include an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (for example, when the `store` parameter is set to `false`, or when an organization is enrolled in the zero-data-retention program).

If `None`, no additional data is returned.
pub fn instructions<VALUE: Into<String>>(&mut self, value: VALUE) -> &mut Self
Inserts a system (or developer) message as the first item in the model’s context.
When used along with `previous_response_id`, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
pub fn max_output_tokens<VALUE: Into<u32>>(&mut self, value: VALUE) -> &mut Self
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
pub fn metadata<VALUE: Into<HashMap<String, String>>>(
    &mut self,
    value: VALUE,
) -> &mut Self
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
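These limits can be checked client-side before sending a request. The helper below is a hypothetical sketch, not part of this crate; only the limit values (16 pairs, 64-character keys, 512-character values) come from the documentation above:

```rust
use std::collections::HashMap;

/// Hypothetical client-side check mirroring the documented metadata limits:
/// at most 16 pairs, keys up to 64 characters, values up to 512 characters.
fn validate_metadata(metadata: &HashMap<String, String>) -> Result<(), String> {
    if metadata.len() > 16 {
        return Err(format!("too many metadata pairs: {}", metadata.len()));
    }
    for (key, value) in metadata {
        if key.chars().count() > 64 {
            return Err(format!("key too long: {key}"));
        }
        if value.chars().count() > 512 {
            return Err(format!("value too long for key: {key}"));
        }
    }
    Ok(())
}
```

Validating up front turns a server-side rejection into a local error before any network round trip.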
pub fn parallel_tool_calls<VALUE: Into<bool>>(
    &mut self,
    value: VALUE,
) -> &mut Self
Whether to allow the model to run tool calls in parallel.
pub fn previous_response_id<VALUE: Into<String>>(
    &mut self,
    value: VALUE,
) -> &mut Self
The unique ID of the previous response to the model. Use this to create multi-turn conversations.
pub fn reasoning<VALUE: Into<ReasoningConfig>>(
    &mut self,
    value: VALUE,
) -> &mut Self
o-series models only: Configuration options for reasoning models.
pub fn service_tier<VALUE: Into<ServiceTier>>(
    &mut self,
    value: VALUE,
) -> &mut Self
Specifies the latency tier to use for processing the request.
This parameter is relevant for customers subscribed to the Scale tier service.
Supported values:

- `auto`: If the Project is Scale tier enabled, the system will utilize Scale tier credits until they are exhausted; if the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- `default`: The request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- `flex`: The request will be processed with the Flex Processing service tier.

When not set, the default behavior is `auto`.

When this parameter is set, the response body will include the `service_tier` utilized.
pub fn store<VALUE: Into<bool>>(&mut self, value: VALUE) -> &mut Self
Whether to store the generated model response for later retrieval via API.
pub fn temperature<VALUE: Into<f32>>(&mut self, value: VALUE) -> &mut Self
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both.
pub fn text<VALUE: Into<TextConfig>>(&mut self, value: VALUE) -> &mut Self
Configuration options for a text response from the model. Can be plain text or structured JSON data.
pub fn tool_choice<VALUE: Into<ToolChoice>>(
    &mut self,
    value: VALUE,
) -> &mut Self
How the model should select which tool (or tools) to use when generating a response.
pub fn tools<VALUE: Into<Vec<ToolDefinition>>>(
    &mut self,
    value: VALUE,
) -> &mut Self
An array of tools the model may call while generating a response. Can include built-in tools (file_search, web_search_preview, computer_use_preview) or custom function definitions.
pub fn top_p<VALUE: Into<f32>>(&mut self, value: VALUE) -> &mut Self
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both.
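To make the "top 10% probability mass" idea concrete, the illustrative helper below (not part of this crate) counts how many tokens survive a nucleus cutoff, considering tokens from most to least likely:

```rust
/// Illustrative sketch of what `top_p` means: keep the smallest set of
/// tokens whose cumulative probability reaches `top_p`, taking the most
/// likely tokens first.
fn nucleus_cutoff(mut probs: Vec<f32>, top_p: f32) -> usize {
    // Sort descending so the most likely tokens are counted first.
    probs.sort_by(|a, b| b.partial_cmp(a).unwrap());
    let mut cumulative = 0.0_f32;
    let mut kept = 0;
    for p in probs {
        kept += 1;
        cumulative += p;
        if cumulative >= top_p {
            break;
        }
    }
    kept
}
```

For a distribution of [0.5, 0.3, 0.1, 0.1], a `top_p` of 0.3 keeps only the single most likely token, while 0.75 keeps the top two.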
pub fn truncation<VALUE: Into<Truncation>>(&mut self, value: VALUE) -> &mut Self
The truncation strategy to use for the model response:

- `auto`: drop items in the middle to fit the context window.
- `disabled`: error if exceeding the context window.
pub fn user<VALUE: Into<String>>(&mut self, value: VALUE) -> &mut Self
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
pub fn build(&self) -> Result<CreateResponse, OpenAIError>
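Taken together, the setters above chain into a single request. The following is a minimal sketch, assuming the builder derives `Default` (as derive_builder-style builders typically do); the model name and input are illustrative:

```rust
// Sketch only -- assumes `CreateResponseArgs` is in scope and derives Default.
// Setter names match the methods documented above; `build()` returns
// Result<CreateResponse, OpenAIError>, so `?` propagates a validation error.
let request = CreateResponseArgs::default()
    .model("gpt-4o")
    .input("Summarize nucleus sampling in one sentence.")
    .max_output_tokens(256u32)
    .temperature(0.2)
    .store(false)
    .build()?;
```

Each setter takes `&mut self` and returns `&mut Self`, so calls chain and any subset of fields may be set before `build()`.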
Trait Implementations
impl Clone for CreateResponseArgs
fn clone(&self) -> CreateResponseArgs
Returns a copy of the value.
1.0.0 · fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.