Struct openai_flows::chat::ChatOptions
pub struct ChatOptions<'a> {
    pub model: ChatModel,
    pub restart: bool,
    pub system_prompt: Option<&'a str>,
    pub pre_prompt: Option<&'a str>,
    pub post_prompt: Option<&'a str>,
    pub temperature: Option<f32>,
    pub top_p: Option<f32>,
    pub stop: Option<Vec<String>>,
    pub max_tokens: Option<u16>,
    pub presence_penalty: Option<f32>,
    pub frequency_penalty: Option<f32>,
    pub logit_bias: Option<HashMap<String, i8>>,
    pub response_format: Option<ResponseFormat>,
}
A struct for setting the options of a chat completion request.
Fields
model: ChatModel
The ID or name of the model to use for completion.
restart: bool
When true, a new conversation will be created.
system_prompt: Option<&'a str>
The prompt of the system role.
pre_prompt: Option<&'a str>
The prompt that will be prepended to the user's prompt without being saved in the conversation history.
post_prompt: Option<&'a str>
The prompt that will be appended to the user's prompt without being saved in the conversation history.
temperature: Option<f32>
What sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic.
top_p: Option<f32>
An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass.
stop: Option<Vec<String>>
Up to 4 sequences where the API will stop generating further tokens.
max_tokens: Option<u16>
The maximum number of tokens to generate in the chat completion.
presence_penalty: Option<f32>
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
frequency_penalty: Option<f32>
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
logit_bias: Option<HashMap<String, i8>>
Modify the likelihood of specified tokens appearing in the completion. Maps token IDs (given as strings) to bias values.
response_format: Option<ResponseFormat>
An object specifying the format that the model must output. Used to enable JSON mode.
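Since every field except model and restart is optional, the options are typically built with struct-update syntax so that only the relevant fields are set explicitly. Below is a self-contained sketch using a stand-in struct that mirrors the definition above; the ChatModel variants shown (GPT35Turbo, GPT4), the String stand-in for ResponseFormat, and the token ID in the bias map are assumptions for illustration, not the real crate's API surface.

```rust
use std::collections::HashMap;

// Stand-in for openai_flows::chat::ChatModel; variant names are assumptions.
#[derive(Debug, Default)]
enum ChatModel {
    #[default]
    GPT35Turbo,
    GPT4,
}

// Mirrors the ChatOptions definition above; response_format uses String
// here in place of the crate's ResponseFormat type.
#[derive(Debug, Default)]
struct ChatOptions<'a> {
    model: ChatModel,
    restart: bool,
    system_prompt: Option<&'a str>,
    pre_prompt: Option<&'a str>,
    post_prompt: Option<&'a str>,
    temperature: Option<f32>,
    top_p: Option<f32>,
    stop: Option<Vec<String>>,
    max_tokens: Option<u16>,
    presence_penalty: Option<f32>,
    frequency_penalty: Option<f32>,
    logit_bias: Option<HashMap<String, i8>>,
    response_format: Option<String>,
}

fn build_options() -> ChatOptions<'static> {
    // Bias map: token IDs (as strings) to bias values.
    // "50256" is a made-up example ID, not a meaningful token.
    let mut bias = HashMap::new();
    bias.insert("50256".to_string(), -100);

    // Struct-update syntax: set a few fields, default the rest.
    ChatOptions {
        model: ChatModel::GPT4,
        restart: true,
        system_prompt: Some("You are a helpful assistant."),
        temperature: Some(0.7),
        max_tokens: Some(256),
        logit_bias: Some(bias),
        ..Default::default()
    }
}

fn main() {
    let options = build_options();
    println!("{:?}", options);
}
```

Because ChatOptions derives Default, `..Default::default()` fills every unset field with its default (None for the Option fields, false for restart), which keeps call sites short as new options are added to the struct.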