pub struct CreateResponse {
pub metadata: Option<Metadata>,
pub temperature: Option<f64>,
pub top_p: Option<f64>,
pub user: Option<String>,
pub service_tier: Option<ServiceTier>,
pub previous_response_id: Option<String>,
pub model: ModelIdsResponses,
pub reasoning: Option<Reasoning>,
pub max_output_tokens: Option<i64>,
pub instructions: Option<String>,
pub text: Option<ResponseProperties_Text>,
pub tools: Option<Vec<Tool>>,
pub tool_choice: Option<ResponseProperties_ToolChoice>,
pub truncation: Option<String>,
pub input: CreateResponse_Variant3_Input,
pub include: Option<Vec<Includable>>,
pub parallel_tool_calls: Option<bool>,
pub store: Option<bool>,
pub stream: Option<bool>,
}
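This struct mirrors the JSON body of a request to the OpenAI Responses API. A minimal, self-contained sketch of that mapping, using simplified stand-in types rather than this crate's richer ones (serde and serde_json assumed as dependencies):

use serde::Serialize;

// Illustrative stand-ins only; the real CreateResponse uses richer
// types (ModelIdsResponses, CreateResponse_Variant3_Input, ...).
#[derive(Serialize)]
struct RequestSketch {
    model: String,
    input: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    temperature: Option<f64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    max_output_tokens: Option<i64>,
}

fn main() {
    let req = RequestSketch {
        model: "gpt-4o".into(),
        input: "Say hello.".into(),
        temperature: Some(0.7),
        max_output_tokens: Some(256),
    };
    // Optional fields left as None are omitted from the JSON body:
    // {"model":"gpt-4o","input":"Say hello.","temperature":0.7,"max_output_tokens":256}
    println!("{}", serde_json::to_string(&req).unwrap());
}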
Fields
metadata: Option<Metadata>
Set of key-value pairs that can be attached to an object, useful for storing additional information in a structured format.
temperature: Option<f64>
What sampling temperature to use, between 0 and 2.
top_p: Option<f64>
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
user: Option<String>
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
service_tier: Option<ServiceTier>
Specifies the latency tier to use for processing the request.
previous_response_id: Option<String>
The unique ID of the previous response to the model.
model: ModelIdsResponses
Model ID used to generate the response, like gpt-4o or o3.
reasoning: Option<Reasoning>
Configuration options for reasoning models (o-series models only).
max_output_tokens: Option<i64>
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
instructions: Option<String>
Inserts a system (or developer) message as the first item in the model’s context.
text: Option<ResponseProperties_Text>
Configuration options for a text response from the model. Can be plain text or structured JSON data.
tools: Option<Vec<Tool>>
An array of tools the model may call while generating a response.
tool_choice: Option<ResponseProperties_ToolChoice>
How the model should select which tool (or tools) to use when generating a response.
truncation: Option<String>
The truncation strategy to use for the model response.
input: CreateResponse_Variant3_Input
Text, image, or file inputs to the model, used to generate a response.
include: Option<Vec<Includable>>
Specify additional output data to include in the model response.
parallel_tool_calls: Option<bool>
Whether to allow the model to run tool calls in parallel.
store: Option<bool>
Whether to store the generated model response for later retrieval via API.
stream: Option<bool>
If set to true, the model response data will be streamed to the client as it is generated using server-sent events.
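When stream is Some(true), the response body arrives as server-sent events rather than a single JSON object. A hedged sketch of reading the raw event stream (assumes tokio, futures-util, and reqwest with its stream and json features; the endpoint URL is the OpenAI Responses API, not part of this crate):

use futures_util::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = serde_json::json!({
        "model": "gpt-4o",
        "input": "Say hello.",
        "stream": true,
    });
    let resp = reqwest::Client::new()
        .post("https://api.openai.com/v1/responses")
        .bearer_auth(std::env::var("OPENAI_API_KEY")?)
        .json(&body)
        .send()
        .await?;
    // Each chunk carries one or more `data: {...}` SSE lines;
    // a real client would buffer them and parse per event.
    let mut stream = resp.bytes_stream();
    while let Some(chunk) = stream.next().await {
        print!("{}", String::from_utf8_lossy(&chunk?));
    }
    Ok(())
}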
Trait Implementations

impl Clone for CreateResponse
fn clone(&self) -> CreateResponse
fn clone_from(&mut self, source: &Self)
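Clone makes it easy to keep one fully built request around and derive variants from it. A sketch, assuming base is an already-constructed CreateResponse:

// Derive a low-temperature variant without mutating the base request.
fn deterministic_variant(base: &CreateResponse) -> CreateResponse {
    let mut req = base.clone();
    req.temperature = Some(0.0); // pin sampling down
    req.top_p = None;            // avoid mixing the two sampling knobs
    req
}

clone_from overwrites an existing value in place and may reuse its heap allocations, which matters mainly when cloning repeatedly in a loop.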