pub struct TokenCountsBody {
pub conversation: Option<ConversationParam>,
pub input: Option<InputParam>,
pub instructions: Option<String>,
pub model: Option<String>,
pub parallel_tool_calls: Option<bool>,
pub previous_response_id: Option<String>,
pub reasoning: Option<Reasoning>,
pub text: Option<ResponseTextParam>,
pub tool_choice: Option<ToolChoiceParam>,
pub tools: Option<Vec<Tool>>,
pub truncation: Option<Truncation>,
}

Fields

conversation: Option<ConversationParam>
The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request. Input items and output items from this response are automatically added to this conversation after this response completes.
input: Option<InputParam>
Text, image, or file inputs to the model, used to generate a response.
instructions: Option<String>
A system (or developer) message inserted into the model’s context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
model: Option<String>
Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
parallel_tool_calls: Option<bool>
Whether to allow the model to run tool calls in parallel.
previous_response_id: Option<String>
The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.
reasoning: Option<Reasoning>
gpt-5 and o-series models only. Configuration options for reasoning models.
text: Option<ResponseTextParam>
Configuration options for a text response from the model. Can be plain text or structured JSON data.
tool_choice: Option<ToolChoiceParam>
How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.
tools: Option<Vec<Tool>>
An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.
truncation: Option<Truncation>
The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model’s context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
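Because every field is an `Option`, a request body can specify only the fields it needs. A minimal sketch of that pattern (the struct below is a local stand-in that mirrors only the primitive-typed fields of TokenCountsBody; the nested param types such as ConversationParam and InputParam are omitted, and the field values are hypothetical):

```rust
// Stand-in mirroring the Option-heavy shape of TokenCountsBody.
// Deriving Default lets unspecified fields stay None.
#[derive(Clone, Debug, Default)]
struct TokenCountsBodySketch {
    instructions: Option<String>,
    model: Option<String>,
    parallel_tool_calls: Option<bool>,
    previous_response_id: Option<String>,
}

fn main() {
    // Populate only what this hypothetical request needs;
    // every other field defaults to None.
    let body = TokenCountsBodySketch {
        model: Some("gpt-4o".to_string()),
        instructions: Some("You are a terse assistant.".to_string()),
        ..Default::default()
    };

    // Unset fields remain None rather than requiring placeholder values.
    assert_eq!(body.model.as_deref(), Some("gpt-4o"));
    assert!(body.previous_response_id.is_none());
    println!("{:?}", body);
}
```

The same struct-update syntax (`..Default::default()`) applies to the real struct if it implements Default; otherwise each field must be written out explicitly.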
Trait Implementations

impl Clone for TokenCountsBody

fn clone(&self) -> TokenCountsBody

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.