pub struct CreateChatCompletionRequest {
pub metadata: Option<Metadata>,
pub top_logprobs: Option<i64>,
pub temperature: Option<Number>,
pub top_p: Option<Number>,
pub user: Option<String>,
pub safety_identifier: Option<String>,
pub prompt_cache_key: Option<String>,
pub service_tier: Option<ServiceTier>,
pub messages: Vec<ChatCompletionRequestMessage>,
pub model: ModelIdsShared,
pub modalities: Option<ResponseModalities>,
pub verbosity: Option<Verbosity>,
pub reasoning_effort: Option<ReasoningEffort>,
pub max_completion_tokens: Option<i64>,
pub frequency_penalty: Option<Number>,
pub presence_penalty: Option<Number>,
pub web_search_options: Option<WebSearchOptions>,
pub response_format: Option<ResponseFormat>,
pub audio: Option<Audio>,
pub store: Option<bool>,
pub stream: Option<bool>,
pub stop: Option<StopConfiguration>,
pub logit_bias: Option<IndexMap<String, i64>>,
pub logprobs: Option<bool>,
pub max_tokens: Option<i64>,
pub n: Option<i64>,
pub prediction: Option<Prediction>,
pub seed: Option<i64>,
pub stream_options: Option<ChatCompletionStreamOptions>,
pub tools: Option<Vec<Item>>,
pub tool_choice: Option<ChatCompletionToolChoiceOption>,
pub parallel_tool_calls: Option<ParallelToolCalls>,
pub function_call: Option<FunctionCall>,
pub functions: Option<Vec<ChatCompletionFunctions>>,
}
Fields
metadata: Option<Metadata>
top_logprobs: Option<i64>
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
temperature: Option<Number>
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p, but not both.
top_p: Option<Number>
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature, but not both.
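Given a request: CreateChatCompletionRequest, either knob can be set directly through its public field. A minimal sketch, assuming the Number type used here is serde_json::Number (an assumption; check this crate's re-export):

// Sketch only: assumes `Number` is serde_json::Number. Pick one sampling knob, not both.
request.temperature = serde_json::Number::from_f64(0.2); // lower = more focused and deterministic
request.top_p = None;                                    // leave nucleus sampling at its default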
user: Option<String>
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse.
safety_identifier: Option<String>
A stable identifier used to help detect users of your application that may be violating OpenAI’s usage policies. The IDs should be strings that uniquely identify each user. We recommend hashing their username or email address in order to avoid sending us any identifying information.
prompt_cache_key: Option<String>
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field.
service_tier: Option<ServiceTier>
messages: Vec<ChatCompletionRequestMessage>
A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, images, and audio.
model: ModelIdsShared
Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
modalities: Option<ResponseModalities>
verbosity: Option<Verbosity>
reasoning_effort: Option<ReasoningEffort>
max_completion_tokens: Option<i64>
An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
frequency_penalty: Option<Number>
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
presence_penalty: Option<Number>
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
web_search_options: Option<WebSearchOptions>
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
response_format: Option<ResponseFormat>
An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} }
enables
Structured Outputs which ensures the model will match your supplied JSON
schema. Learn more in the Structured Outputs
guide.
Setting to { "type": "json_object" }
enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
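For reference, the two configurations described above serialize to JSON shaped roughly like this; a sketch of the wire format using serde_json, not this crate's ResponseFormat API (the schema fields shown are illustrative):

// Wire-format sketch only.
let structured_outputs = serde_json::json!({
    "type": "json_schema",
    "json_schema": {
        "name": "example_schema",        // illustrative name
        "schema": { "type": "object" },  // your JSON Schema goes here
        "strict": true
    }
});
let json_mode = serde_json::json!({ "type": "json_object" });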
audio: Option<Audio>
Parameters for audio output. Required when audio output is requested with modalities: ["audio"].
store: Option<bool>
Whether or not to store the output of this chat completion request for use in our model distillation or evals products.
Supports text and image inputs. Note: image inputs over 8MB will be dropped.
stream: Option<bool>
If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below, along with the streaming responses guide, for more information on how to handle the streaming events.
stop: Option<StopConfiguration>
logit_bias: Option<IndexMap<String, i64>>
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
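Given a request: CreateChatCompletionRequest, the bias map can be built with IndexMap and assigned to the public field; a minimal sketch with made-up token IDs:

use indexmap::IndexMap;

let mut bias: IndexMap<String, i64> = IndexMap::new();
bias.insert("50256".to_string(), -100); // hypothetical token ID; -100 effectively bans the token
bias.insert("15339".to_string(), 10);   // hypothetical token ID; a mild boost
request.logit_bias = Some(bias);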
logprobs: Option<bool>
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
max_tokens: Option<i64>
The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.
This value is now deprecated in favor of max_completion_tokens, and is not compatible with o-series models.
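In practice that means preferring the newer field on a built request; a short sketch using the public fields:

request.max_completion_tokens = Some(1024); // preferred bound (visible output plus reasoning tokens)
request.max_tokens = None;                  // deprecated; leave unset, especially for o-series models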
n: Option<i64>
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
prediction: Option<Prediction>
Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time. This is most common when you are regenerating a file with only minor changes to most of the content.
seed: Option<i64>
This feature is in Beta.
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
stream_options: Option<ChatCompletionStreamOptions>
tools: Option<Vec<Item>>
A list of tools the model may call. You can provide either custom tools or function tools.
tool_choice: Option<ChatCompletionToolChoiceOption>
parallel_tool_calls: Option<ParallelToolCalls>
function_call: Option<FunctionCall>
Deprecated in favor of tool_choice.
Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function.
none is the default when no functions are present. auto is the default if functions are present.
functions: Option<Vec<ChatCompletionFunctions>>
Deprecated in favor of tools.
A list of functions the model may generate JSON inputs for.
Implementations
impl CreateChatCompletionRequest
pub fn builder() -> CreateChatCompletionRequestBuilder<((), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), ())>
Create a builder for building CreateChatCompletionRequest.
On the builder, call .metadata(...) (optional), .top_logprobs(...) (optional), .temperature(...) (optional), .top_p(...) (optional), .user(...) (optional), .safety_identifier(...) (optional), .prompt_cache_key(...) (optional), .service_tier(...) (optional), .messages(...), .model(...), .modalities(...) (optional), .verbosity(...) (optional), .reasoning_effort(...) (optional), .max_completion_tokens(...) (optional), .frequency_penalty(...) (optional), .presence_penalty(...) (optional), .web_search_options(...) (optional), .response_format(...) (optional), .audio(...) (optional), .store(...) (optional), .stream(...) (optional), .stop(...) (optional), .logit_bias(...) (optional), .logprobs(...) (optional), .max_tokens(...) (optional), .n(...) (optional), .prediction(...) (optional), .seed(...) (optional), .stream_options(...) (optional), .tools(...) (optional), .tool_choice(...) (optional), .parallel_tool_calls(...) (optional), .function_call(...) (optional), .functions(...) (optional) to set the values of the fields.
Finally, call .build() to create the instance of CreateChatCompletionRequest.
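A minimal sketch of that flow, setting only the required fields through the builder and filling a few of the public Option fields afterwards. How ModelIdsShared and ChatCompletionRequestMessage values are constructed is crate-specific, so they are left as placeholders here:

// Sketch only: `model` and `messages` construction depends on this crate's types.
let model: ModelIdsShared = todo!("a model id such as \"gpt-4o\"");
let messages: Vec<ChatCompletionRequestMessage> = todo!("the conversation so far");

let mut request = CreateChatCompletionRequest::builder()
    .messages(messages)
    .model(model)
    .build();

// Every other field is a public Option and can also be set directly:
request.n = Some(1);
request.seed = Some(42);
request.store = Some(false);
request.max_completion_tokens = Some(512);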
Trait Implementations
impl Clone for CreateChatCompletionRequest
fn clone(&self) -> CreateChatCompletionRequest
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.