pub struct ChatCompletionParametersBuilder { /* private fields */ }

Builder for ChatCompletionParameters.
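A minimal usage sketch. The builder is assumed to implement Default (as derive_builder-generated builders typically do), and the exact ChatMessage constructor and import paths vary by crate version, so treat the details below as assumptions rather than the definitive API:

```rust
// Sketch only: assumes the crate's prelude exports these types.
let parameters = ChatCompletionParametersBuilder::default()
    .model("gpt-4o")
    .messages(vec![/* ChatMessage values for the conversation */])
    .max_completion_tokens(256u32)
    .build()?; // Err if a required field (e.g. messages) was never set
```

Each setter takes `&mut self` and returns `&mut Self`, so calls chain; build() then validates the accumulated state and returns a Result.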
Implementations

impl ChatCompletionParametersBuilder
pub fn messages<VALUE: Into<Vec<ChatMessage>>>(&mut self, value: VALUE) -> &mut Self
A list of messages comprising the conversation so far.
pub fn model<VALUE: Into<String>>(&mut self, value: VALUE) -> &mut Self
ID of the model to use.
pub fn store<VALUE: Into<bool>>(&mut self, value: VALUE) -> &mut Self
Whether or not to store the output of this chat completion request for use in our model distillation or evals products.
pub fn reasoning_effort<VALUE: Into<ReasoningEffort>>(&mut self, value: VALUE) -> &mut Self
Constrains effort on reasoning for reasoning models.
pub fn metadata<VALUE: Into<HashMap<String, String>>>(&mut self, value: VALUE) -> &mut Self
Developer-defined tags and values used for filtering completions in the dashboard.
pub fn frequency_penalty<VALUE: Into<f32>>(&mut self, value: VALUE) -> &mut Self
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
pub fn logit_bias<VALUE: Into<HashMap<String, i32>>>(&mut self, value: VALUE) -> &mut Self
Modify the likelihood of specified tokens appearing in the completion.
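As a sketch, the bias map can be assembled with a plain HashMap before handing it to the builder. Keys are token IDs rendered as strings; the specific token ID below is an illustrative placeholder, not a real vocabulary entry:

```rust
use std::collections::HashMap;

/// Build a logit-bias map keyed by token ID rendered as a string.
/// Accepted bias values run from -100 to 100: -100 effectively bans
/// the token, while 100 effectively forces its selection.
fn ban_token(token_id: u32) -> HashMap<String, i32> {
    let mut bias = HashMap::new();
    bias.insert(token_id.to_string(), -100);
    bias
}

fn main() {
    // 50256 is an illustrative placeholder token ID.
    let bias = ban_token(50256);
    // builder.logit_bias(bias) then accepts any Into<HashMap<String, i32>>.
    println!("{bias:?}");
}
```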
pub fn logprobs<VALUE: Into<bool>>(&mut self, value: VALUE) -> &mut Self
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the ‘content’ of ‘message’.
pub fn top_logprobs<VALUE: Into<u32>>(&mut self, value: VALUE) -> &mut Self
An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. ‘logprobs’ must be set to ‘true’ if this parameter is used.
pub fn max_tokens<VALUE: Into<u32>>(&mut self, value: VALUE) -> &mut Self
The maximum number of tokens to generate in the completion. Deprecated in favor of max_completion_tokens, but still used by vLLM.
pub fn max_completion_tokens<VALUE: Into<u32>>(&mut self, value: VALUE) -> &mut Self
An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
pub fn n<VALUE: Into<u32>>(&mut self, value: VALUE) -> &mut Self
How many chat completion choices to generate for each input message.
pub fn modalities<VALUE: Into<Vec<Modality>>>(&mut self, value: VALUE) -> &mut Self
Output types that you would like the model to generate for this request.
pub fn prediction<VALUE: Into<PredictedOutput>>(&mut self, value: VALUE) -> &mut Self
Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time. This is most common when you are regenerating a file with only minor changes to most of the content.
pub fn audio<VALUE: Into<AudioParameters>>(&mut self, value: VALUE) -> &mut Self
Parameters for audio output. Required when audio output is requested with modalities: [“audio”].
pub fn presence_penalty<VALUE: Into<f32>>(&mut self, value: VALUE) -> &mut Self
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
pub fn response_format<VALUE: Into<ChatCompletionResponseFormat>>(&mut self, value: VALUE) -> &mut Self
An object specifying the format that the model must output. Compatible with GPT-4o, GPT-4o mini, GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to { “type”: “json_schema”, “json_schema”: {…} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Setting to { “type”: “json_object” } enables JSON mode, which ensures the message the model generates is valid JSON.
pub fn seed<VALUE: Into<u32>>(&mut self, value: VALUE) -> &mut Self
Deprecated (still used by vLLM). This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed; refer to the system_fingerprint response parameter to monitor changes in the backend.
pub fn stop<VALUE: Into<StopToken>>(&mut self, value: VALUE) -> &mut Self
Up to 4 sequences where the API will stop generating further tokens.
pub fn stream<VALUE: Into<bool>>(&mut self, value: VALUE) -> &mut Self
If set, partial messages will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
pub fn stream_options<VALUE: Into<ChatCompletionStreamOptions>>(&mut self, value: VALUE) -> &mut Self
Options for streaming response. Only set this when you set stream: true.
pub fn temperature<VALUE: Into<f32>>(&mut self, value: VALUE) -> &mut Self
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
pub fn top_p<VALUE: Into<f32>>(&mut self, value: VALUE) -> &mut Self
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
pub fn tools<VALUE: Into<Vec<ChatCompletionTool>>>(&mut self, value: VALUE) -> &mut Self
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.
pub fn tool_choice<VALUE: Into<ChatCompletionToolChoice>>(&mut self, value: VALUE) -> &mut Self
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {“type”: “function”, “function”: {“name”: “my_function”}} forces the model to call that tool.
pub fn parallel_tool_calls<VALUE: Into<bool>>(&mut self, value: VALUE) -> &mut Self
Whether to enable parallel function calling during tool use.
pub fn safety_identifier<VALUE: Into<String>>(&mut self, value: VALUE) -> &mut Self
A stable identifier used to help detect users of your application that may be violating OpenAI’s usage policies.
pub fn prompt_cache_key<VALUE: Into<String>>(&mut self, value: VALUE) -> &mut Self
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates.
pub fn web_search_options<VALUE: Into<WebSearchOptions>>(&mut self, value: VALUE) -> &mut Self
This tool searches the web for relevant results to use in a response.
pub fn extra_body<VALUE: Into<Value>>(&mut self, value: VALUE) -> &mut Self
Allows arbitrary JSON to be passed as an extra_body parameter, for provider-specific features on OpenAI-compatible endpoints.
pub fn query_params<VALUE: Into<HashMap<String, String>>>(&mut self, value: VALUE) -> &mut Self
Azure OpenAI and some other providers may require special query parameters to be set on the request URL. This field allows you to specify those query parameters.
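For instance, Azure OpenAI requires an api-version query parameter on the request URL. A sketch of building such a map with std's HashMap (the version string below is an illustrative placeholder):

```rust
use std::collections::HashMap;

/// Query parameters that Azure OpenAI requires on the request URL.
fn azure_query_params(api_version: &str) -> HashMap<String, String> {
    let mut params = HashMap::new();
    params.insert("api-version".to_string(), api_version.to_string());
    params
}

fn main() {
    // "2024-06-01" is an illustrative placeholder version.
    let params = azure_query_params("2024-06-01");
    // builder.query_params(params) accepts any Into<HashMap<String, String>>.
    println!("{params:?}");
}
```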
pub fn build(&self) -> Result<ChatCompletionParameters, ChatCompletionParametersBuilderError>
Trait Implementations
impl Clone for ChatCompletionParametersBuilder

fn clone(&self) -> ChatCompletionParametersBuilder
fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.