Struct chat_gpt_rs::request::Request
pub struct Request {
pub model: Model,
pub messages: Vec<Message>,
pub temperature: Option<f64>,
pub top_p: Option<f64>,
pub n: Option<i32>,
pub stop: Option<Vec<String>>,
pub max_tokens: Option<i32>,
pub presence_penalty: Option<f64>,
pub frequency_penalty: Option<f64>,
pub user: Option<String>,
}
A request to the OpenAI chat completions API.
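For a sense of how this struct is used end to end, here is a minimal sketch that builds a request and sends it. It assumes the crate's Token and Api types, an Api::chat method, a Model::Gpt35Turbo variant, and a tokio runtime; none of those are documented on this page, so check the crate root for the exact names.

use chat_gpt_rs::prelude::*;

#[tokio::main]
async fn main() {
    // Assumed names: Token::new, Api::new, and Api::chat come from the
    // wider crate and may differ from this sketch.
    let token = Token::new("your-api-key");
    let api = Api::new(token);

    let request = Request {
        model: Model::Gpt35Turbo,
        messages: vec![Message {
            role: "user".to_string(),
            content: "Hello!".to_string(),
        }],
        // Assumes Request implements Default; if it does not, set each
        // optional field to None explicitly.
        ..Default::default()
    };

    // Send the request; the response type is defined elsewhere in the crate.
    let _response = api.chat(request).await;
}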
Fields
model: Model
ID of the model to use. Currently, only gpt-3.5-turbo and gpt-3.5-turbo-0301 are supported.
messages: Vec<Message>
The messages to generate chat completions for, in the chat format.
temperature: Option<f64>
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
top_p: Option<f64>
An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens in the top 10% of probability mass are considered.
n: Option<i32>
How many chat completion choices to generate for each input message.
stop: Option<Vec<String>>
Up to 4 sequences where the API will stop generating further tokens.
max_tokens: Option<i32>
The maximum number of tokens to generate for each chat completion choice.
presence_penalty: Option<f64>
A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
frequency_penalty: Option<f64>
A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
user: Option<String>
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
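To illustrate the optional fields above, the following sketch builds a request with every tuning parameter set explicitly. The parameter values, the Model::Gpt35Turbo variant, the Message field names, and the build_tuned_request helper are illustrative assumptions, not part of this page.

use chat_gpt_rs::prelude::*;

fn build_tuned_request() -> Request {
    Request {
        model: Model::Gpt35Turbo, // assumed variant name for gpt-3.5-turbo
        messages: vec![Message {
            role: "user".to_string(),
            content: "Summarize Rust's ownership rules.".to_string(),
        }],
        temperature: Some(0.2),  // low temperature: focused, deterministic output
        top_p: None,             // alter temperature or top_p, generally not both
        n: Some(1),              // generate a single completion choice
        stop: Some(vec!["\n\n".to_string()]), // stop at the first blank line
        max_tokens: Some(256),   // cap each completion at 256 tokens
        presence_penalty: Some(0.0),  // neutral: no push toward new topics
        frequency_penalty: Some(0.5), // mildly discourage verbatim repetition
        user: Some("user-1234".to_string()), // stable end-user id for abuse monitoring
    }
}

Fields set to None are presumably left to the API's server-side defaults, which is the usual reason such parameters are typed as Option.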