pub struct Response {
    pub created_at: u64,
    pub error: Option<ErrorObject>,
    pub id: String,
    pub incomplete_details: Option<IncompleteDetails>,
    pub instructions: Option<String>,
    pub max_output_tokens: Option<u32>,
    pub metadata: Option<HashMap<String, String>>,
    pub model: String,
    pub object: String,
    pub output: Vec<OutputContent>,
    pub parallel_tool_calls: Option<bool>,
    pub previous_response_id: Option<String>,
    pub reasoning: Option<ReasoningConfig>,
    pub service_tier: Option<ServiceTier>,
    pub status: Status,
    pub temperature: Option<f32>,
    pub text: Option<TextConfig>,
    pub tool_choice: Option<ToolChoice>,
    pub tools: Option<Vec<ToolDefinition>>,
    pub top_p: Option<f32>,
    pub truncation: Option<Truncation>,
    pub usage: Option<Usage>,
    pub user: Option<String>,
}
The complete response returned by the Responses API.
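As a quick orientation, the sketch below reads a few of the fields documented here from an already-obtained Response. It is illustrative only: the function name is hypothetical, how the Response value is obtained (an HTTP client, deserialization, etc.) is outside the scope of this page, and only field names and types shown in the definition above are used.

// Illustrative only: prints a short summary of a Response using fields
// whose types appear in the struct definition above.
fn summarize(response: &Response) {
    println!("response {} from model {}", response.id, response.model);
    println!("created at (unix seconds): {}", response.created_at);
    println!("output items: {}", response.output.len());

    // Optional fields echo back request settings or carry extra detail.
    if let Some(max) = response.max_output_tokens {
        println!("max_output_tokens honored: {max}");
    }
    if response.error.is_some() {
        println!("the API reported an error; inspect the error field");
    }
    if response.incomplete_details.is_some() {
        println!("the response is incomplete; inspect incomplete_details");
    }
}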
Fields
created_at: u64
Unix timestamp (in seconds) when this Response was created.
error: Option<ErrorObject>
Error object if the API failed to generate a response.
id: String
Unique identifier for this response.
incomplete_details: Option<IncompleteDetails>
Details about why the response is incomplete, if any.
instructions: Option<String>
Instructions that were inserted as the first item in context.
max_output_tokens: Option<u32>
The value of max_output_tokens that was honored.
metadata: Option<HashMap<String, String>>
Metadata key/value pairs that were attached to this response (see the sketch after this field list).
model: String
Model ID used to generate the response.
object: String
The object type, which is always response.
output: Vec<OutputContent>
The array of content items generated by the model.
parallel_tool_calls: Option<bool>
Whether parallel tool calls were enabled.
previous_response_id: Option<String>
ID of the previous response, if this response was created as part of a multi-turn conversation.
reasoning: Option<ReasoningConfig>
Reasoning configuration echoed back (effort, summary settings).
service_tier: Option<ServiceTier>
The service tier that actually processed this response.
status: Status
The status of the response generation.
temperature: Option<f32>
Sampling temperature that was used.
text: Option<TextConfig>
Text format configuration echoed back (plain, json_object, json_schema).
tool_choice: Option<ToolChoice>
How the model chose or was forced to choose a tool.
tools: Option<Vec<ToolDefinition>>
Tool definitions that were provided.
top_p: Option<f32>
Nucleus sampling cutoff that was used.
truncation: Option<Truncation>
Truncation strategy that was applied.
usage: Option<Usage>
Token usage statistics for this request.
user: Option<String>
End-user ID for which this response was generated.
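The optional Option-wrapped fields above (temperature, top_p, tools) and the metadata map can be read in the same way. The sketch below is illustrative and assumes nothing beyond the field names and types listed on this page; the function name is hypothetical.

use std::collections::HashMap;

// Illustrative only: reports the echoed request settings and any metadata
// attached to the response, using the Option-wrapped fields listed above.
fn report_settings(response: &Response) {
    if let Some(temperature) = response.temperature {
        println!("temperature: {temperature}");
    }
    if let Some(top_p) = response.top_p {
        println!("top_p: {top_p}");
    }
    if let Some(tools) = &response.tools {
        println!("{} tool definition(s) were provided", tools.len());
    }

    // metadata is a plain String -> String map of tags attached to the response.
    let empty: HashMap<String, String> = HashMap::new();
    for (key, value) in response.metadata.as_ref().unwrap_or(&empty) {
        println!("metadata {key} = {value}");
    }
}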