pub struct Response {
pub created_at: u64,
pub error: Option<ErrorObject>,
pub id: String,
pub incomplete_details: Option<IncompleteDetails>,
pub instructions: Option<String>,
pub max_output_tokens: Option<u32>,
pub metadata: Option<HashMap<String, String>>,
pub model: String,
pub object: String,
pub output: Vec<OutputContent>,
pub output_text: Option<String>,
pub parallel_tool_calls: Option<bool>,
pub previous_response_id: Option<String>,
pub reasoning: Option<ReasoningConfig>,
pub store: Option<bool>,
pub service_tier: Option<ServiceTier>,
pub status: Status,
pub temperature: Option<f32>,
pub text: Option<TextConfig>,
pub tool_choice: Option<ToolChoice>,
pub tools: Option<Vec<ToolDefinition>>,
pub top_p: Option<f32>,
pub truncation: Option<Truncation>,
pub usage: Option<Usage>,
pub user: Option<String>,
}
The complete response returned by the Responses API.
Fields
created_at: u64
Unix timestamp (in seconds) when this Response was created.

error: Option<ErrorObject>
Error object if the API failed to generate a response.

id: String
Unique identifier for this response.

incomplete_details: Option<IncompleteDetails>
Details about why the response is incomplete, if any.

instructions: Option<String>
Instructions that were inserted as the first item in context.

max_output_tokens: Option<u32>
The value of max_output_tokens that was honored.

metadata: Option<HashMap<String, String>>
Metadata tags/values that were attached to this response.

model: String
Model ID used to generate the response.

object: String
The object type, always "response".

output: Vec<OutputContent>
The array of content items generated by the model.

output_text: Option<String>
SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs.

parallel_tool_calls: Option<bool>
Whether parallel tool calls were enabled.

previous_response_id: Option<String>
Previous response ID, if this response is part of a multi-turn conversation.

reasoning: Option<ReasoningConfig>
Reasoning configuration echoed back (effort, summary settings).

store: Option<bool>
Whether the generated model response is stored for later retrieval via the API.

service_tier: Option<ServiceTier>
The service tier that actually processed this response.

status: Status
The status of the response generation.

temperature: Option<f32>
Sampling temperature that was used.

text: Option<TextConfig>
Text format configuration echoed back (plain, json_object, json_schema).

tool_choice: Option<ToolChoice>
How the model chose, or was forced to choose, a tool.

tools: Option<Vec<ToolDefinition>>
Tool definitions that were provided.

top_p: Option<f32>
Nucleus sampling cutoff that was used.

truncation: Option<Truncation>
Truncation strategy that was applied.

usage: Option<Usage>
Token usage statistics for this request.

user: Option<String>
End-user ID for which this response was generated.