pub struct TokenCountsBodyArgs { /* private fields */ }
Builder for TokenCountsBody.
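A typical flow is to set the fields you need and then call build(). The sketch below is not taken from the crate's examples: the TokenCountsBodyArgs::default() entry point and the plain-string-to-InputParam conversion are assumptions; only the setter names and build() are documented on this page.

// Hedged sketch: `default()` and the `&str` input conversion are assumed.
let body = TokenCountsBodyArgs::default()
    .model("gpt-4o")
    .instructions("You are a concise assistant.")
    .input("How many tokens will this prompt use?") // assumes Into<InputParam> for &str
    .build()?; // Result<TokenCountsBody, OpenAIError>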
Implementations
impl TokenCountsBodyArgs
pub fn conversation<VALUE: Into<ConversationParam>>(
    &mut self,
    value: VALUE,
) -> &mut Self
The conversation that this response belongs to. Items from this
conversation are prepended to input_items for this response request.
Input items and output items from this response are automatically added to this
conversation after this response completes.
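For example, a sketch; how a ConversationParam value is obtained is not shown on this page, so conv below is a placeholder:

// `conv` stands in for a ConversationParam built elsewhere, e.g. from an
// existing conversation; its construction is an assumption here.
let body = TokenCountsBodyArgs::default()
    .conversation(conv)
    .input("Next user turn")
    .build()?;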
pub fn input<VALUE: Into<InputParam>>(&mut self, value: VALUE) -> &mut Self
Text, image, or file inputs to the model, used to generate a response.
pub fn instructions<VALUE: Into<String>>(&mut self, value: VALUE) -> &mut Self
A system (or developer) message inserted into the model’s context.
When used along with previous_response_id, the instructions from a previous response will
not be carried over to the next response. This makes it simple to swap out system (or
developer) messages in new responses.
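A sketch of swapping instructions on a follow-up request; the response ID and the default() entry point are placeholders, not values taken from this page:

// The instructions set here apply only to this request; they are not
// carried over from the response named by previous_response_id.
let body = TokenCountsBodyArgs::default()
    .previous_response_id("resp_123") // hypothetical ID
    .instructions("From now on, answer in French.")
    .input("Continue the conversation.")
    .build()?;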
pub fn model<VALUE: Into<String>>(&mut self, value: VALUE) -> &mut Self
Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a
wide range of models with different capabilities, performance characteristics,
and price points. Refer to the model guide
to browse and compare available models.
pub fn parallel_tool_calls<VALUE: Into<bool>>(
    &mut self,
    value: VALUE,
) -> &mut Self
Whether to allow the model to run tool calls in parallel.
pub fn previous_response_id<VALUE: Into<String>>(
    &mut self,
    value: VALUE,
) -> &mut Self
The unique ID of the previous response to the model. Use this to create multi-turn
conversations. Learn more about conversation state.
Cannot be used in conjunction with conversation.
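A minimal multi-turn sketch; prior_response_id is a placeholder String captured from an earlier response, and conversation is deliberately left unset since the two options conflict:

// Chain a follow-up request onto an earlier response without a conversation.
let follow_up = TokenCountsBodyArgs::default()
    .model("gpt-4o")
    .previous_response_id(prior_response_id) // placeholder from a prior turn
    .input("And the second question?")
    .build()?;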
pub fn reasoning<VALUE: Into<Reasoning>>(&mut self, value: VALUE) -> &mut Self
gpt-5 and o-series models only. Configuration options for reasoning models.
pub fn text<VALUE: Into<ResponseTextParam>>(
    &mut self,
    value: VALUE,
) -> &mut Self
Configuration options for a text response from the model. Can be plain text or structured JSON data.
pub fn tool_choice<VALUE: Into<ToolChoiceParam>>(
    &mut self,
    value: VALUE,
) -> &mut Self
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
pub fn tools<VALUE: Into<Vec<Tool>>>(&mut self, value: VALUE) -> &mut Self
An array of tools the model may call while generating a response. You can specify which tool
to use by setting the tool_choice parameter.
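A sketch combining tools with tool_choice; how Tool and ToolChoiceParam values are constructed is not covered here, so my_tools and choice are placeholders:

// `my_tools: Vec<Tool>` and `choice: ToolChoiceParam` are assumed to be built
// elsewhere; only the setter calls below come from this page.
let body = TokenCountsBodyArgs::default()
    .model("gpt-4o")
    .tools(my_tools)
    .tool_choice(choice)
    .parallel_tool_calls(true)
    .build()?;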
pub fn truncation<VALUE: Into<Truncation>>(&mut self, value: VALUE) -> &mut Self
The truncation strategy to use for the model response.
auto: If the input to this Response exceeds the model’s context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
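For example, a sketch; Truncation::Auto is an assumed variant name matching the auto strategy described above:

// Opt in to automatic truncation instead of the default 400 error.
let body = TokenCountsBodyArgs::default()
    .model("gpt-4o")
    .input(long_input) // placeholder for a large InputParam
    .truncation(Truncation::Auto) // assumed variant name
    .build()?;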
pub fn build(&self) -> Result<TokenCountsBody, OpenAIError>
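The Result can also be handled explicitly instead of with ?; which conditions produce an error (for example, a missing required field) is not stated on this page, so the sketch only shows the match:

match TokenCountsBodyArgs::default().model("gpt-4o").build() {
    Ok(_body) => {
        // use `_body` with the token-counting endpoint (client call not shown)
    }
    Err(err) => eprintln!("could not build TokenCountsBody: {err}"),
}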
Trait Implementations
impl Clone for TokenCountsBodyArgs
fn clone(&self) -> TokenCountsBodyArgs
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.