pub struct ChatCompletionRequest {
    pub model: String,
    pub messages: Vec<ChatCompletionMessage>,
    pub temperature: Option<f64>,
    pub max_tokens: Option<u32>,
    pub top_p: Option<f64>,
    pub stream: Option<bool>,
    pub stop: Option<Vec<String>>,
    pub seed: Option<u64>,
}
Represents a request to the OpenAI chat completion API.
model: The language model to use for the chat completion.
messages: The messages to provide as context for the chat completion.
temperature: The temperature parameter to control the randomness of the generated response.
max_tokens: The maximum number of tokens to generate in the response.
top_p: The top-p parameter to control nucleus sampling.
stream: Whether to stream the response or return it all at once.
stop: A list of strings that stop the generation when encountered.
seed: The seed value to use for the random number generator.
Fields
model: String
messages: Vec<ChatCompletionMessage>
temperature: Option<f64>
max_tokens: Option<u32>
top_p: Option<f64>
stream: Option<bool>
stop: Option<Vec<String>>
seed: Option<u64>
Implementations
impl ChatCompletionRequest
This struct provides a builder-style API for constructing a ChatCompletionRequest with various optional parameters. The new method creates a new instance with default values, and the other methods set individual parameters.
pub fn new(model: &str, messages: Vec<ChatCompletionMessage>) -> Self
Creates a new ChatCompletionRequest instance with the given model and messages.
Arguments
model - The language model to use for the chat completion.
messages - The messages to provide as context for the chat completion.
pub fn temperature(self, temperature: f64) -> Self
Sets the temperature parameter for the chat completion request.
The temperature parameter controls the randomness of the generated response. Higher values (up to 1.0) make the output more random, while lower values make it more deterministic.
Arguments
temperature - The temperature value to use.
pub fn max_tokens(self, max_tokens: u32) -> Self
Sets the maximum number of tokens to generate in the response.
Arguments
max_tokens - The maximum number of tokens to generate.
pub fn top_p(self, top_p: f64) -> Self
Sets the top-p parameter for the chat completion request.
The top-p parameter controls nucleus sampling, a technique that samples from the smallest set of tokens whose cumulative probability exceeds p.
Arguments
top_p - The top-p value to use.
pub fn stream(self, stream: bool) -> Self
Sets whether to stream the response or return it all at once.
Arguments
stream - Whether to stream the response or not.
Trait Implementations
impl Clone for ChatCompletionRequest
fn clone(&self) -> ChatCompletionRequest
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.