Struct openai_flows::chat::ChatOptions
pub struct ChatOptions<'a> {
pub model: ChatModel,
pub restart: bool,
pub system_prompt: Option<&'a str>,
pub temperature: Option<f32>,
pub top_p: Option<f32>,
pub stop: Option<Vec<String>>,
pub max_tokens: Option<u16>,
pub presence_penalty: Option<f32>,
pub frequency_penalty: Option<f32>,
pub logit_bias: Option<HashMap<String, i8>>,
}
A struct for setting the options of a chat completion request.
Fields

model: ChatModel
The ID or name of the model to use for completion.

restart: bool
When true, a new conversation will be created.

system_prompt: Option<&'a str>
The prompt of the system role.

temperature: Option<f32>
What sampling temperature to use, between 0 and 2.

top_p: Option<f32>
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.

stop: Option<Vec<String>>
Up to 4 sequences where the API will stop generating further tokens.

max_tokens: Option<u16>
The maximum number of tokens to generate in the chat completion.

presence_penalty: Option<f32>
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

frequency_penalty: Option<f32>
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

logit_bias: Option<HashMap<String, i8>>
Modify the likelihood of specified tokens appearing in the completion.
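
Example

A minimal sketch of constructing a ChatOptions value. Every field from the definition above is set explicitly, so no Default impl is assumed; ChatModel::GPT35Turbo is used as a plausible variant name, so substitute whichever ChatModel variant the crate actually provides.

use std::collections::HashMap;
use openai_flows::chat::{ChatModel, ChatOptions};

fn build_options<'a>() -> ChatOptions<'a> {
    ChatOptions {
        // Assumed variant name; any ChatModel variant works here.
        model: ChatModel::GPT35Turbo,
        // Continue the current conversation rather than starting a new one.
        restart: false,
        system_prompt: Some("You are a helpful assistant."),
        // Between 0 and 2; lower values give more deterministic output.
        temperature: Some(0.7),
        // Leave top_p unset when temperature is set; OpenAI recommends
        // tuning one or the other, not both.
        top_p: None,
        // Stop generating as soon as this sequence appears.
        stop: Some(vec!["\n\n".to_string()]),
        max_tokens: Some(256),
        presence_penalty: None,
        frequency_penalty: None,
        // Keys are tokenizer token IDs as strings; a bias of -100
        // effectively bans the token from the completion.
        logit_bias: Some(HashMap::from([("50256".to_string(), -100)])),
    }
}

The resulting value is then passed to the crate's chat completion entry point; see the crate-level documentation for the exact call signature.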