Struct tokio_openai::ChatRequest
pub struct ChatRequest {
pub model: ChatModel,
pub messages: Vec<Msg>,
pub temperature: f64,
pub top_p: f64,
pub n: u32,
pub stop_at: Vec<String>,
pub max_tokens: u32,
}
Fields
model: ChatModel
messages: Vec<Msg>
temperature: f64
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
OpenAI generally recommends altering this or top_p, but not both.
top_p: f64
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
OpenAI generally recommends altering this or temperature, but not both.
n: u32
How many chat completion choices to generate for each input message.
stop_at: Vec<String>
Sequences at which the API will stop generating further tokens.
max_tokens: u32
Maximum number of tokens to generate. If 0, no limit is applied.
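Putting the fields together: a minimal construction sketch. The ChatModel::Gpt3_5Turbo variant and the Msg::user constructor are illustrative assumptions and are not documented on this page; check the crate's ChatModel and Msg docs for the real names. Per the notes above, temperature is lowered while top_p is left at its neutral value of 1.0.

use tokio_openai::{ChatModel, ChatRequest, Msg};

fn focused_request() -> ChatRequest {
    ChatRequest {
        // Hypothetical variant name; see the ChatModel enum for the real ones.
        model: ChatModel::Gpt3_5Turbo,
        // Hypothetical constructor; build a Msg however the crate provides.
        messages: vec![Msg::user("Summarize the Rust borrow checker.")],
        temperature: 0.2, // low temperature: focused, deterministic output
        top_p: 1.0,       // left neutral; alter temperature or top_p, not both
        n: 1,             // one completion choice per input message
        stop_at: vec![],  // no stop sequences
        max_tokens: 0,    // 0 means no limit on generated tokens
    }
}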