pub struct CreateCompletionRequest {
pub model: CreateCompletionRequest_Model,
pub prompt: Option<CreateCompletionRequest_Prompt>,
pub best_of: Option<i64>,
pub echo: Option<bool>,
pub frequency_penalty: Option<f64>,
pub logit_bias: Option<CreateCompletionRequest_LogitBias>,
pub logprobs: Option<i64>,
pub max_tokens: Option<i64>,
pub n: Option<i64>,
pub presence_penalty: Option<f64>,
pub seed: Option<i64>,
pub stop: Option<StopConfiguration>,
pub stream: Option<bool>,
pub stream_options: Option<ChatCompletionStreamOptions>,
pub suffix: Option<String>,
pub temperature: Option<f64>,
pub top_p: Option<f64>,
pub user: Option<String>,
}
Fields
model: CreateCompletionRequest_Model
prompt: Option<CreateCompletionRequest_Prompt>
best_of: Option<i64>
Generates best_of completions server-side and returns the “best” (the one with the highest log probability per token).
echo: Option<bool>
Echo back the prompt in addition to the completion
frequency_penalty: Option<f64>
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
logit_bias: Option<CreateCompletionRequest_LogitBias>
Modify the likelihood of specified tokens appearing in the completion.
logprobs: Option<i64>
Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens.
max_tokens: Option<i64>
The maximum number of tokens that can be generated in the completion.
n: Option<i64>
How many completions to generate for each prompt.
presence_penalty: Option<f64>
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
seed: Option<i64>
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
stop: Option<StopConfiguration>
Sequences where the API will stop generating further tokens.
stream: Option<bool>
Whether to stream back partial progress.
stream_options: Option<ChatCompletionStreamOptions>
suffix: Option<String>
The suffix that comes after a completion of inserted text.
temperature: Option<f64>
What sampling temperature to use, between 0 and 2.
top_p: Option<f64>
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
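As a toy illustration of what “top_p probability mass” means (a sketch of the nucleus-sampling cutoff, not the server’s actual implementation):

```rust
// Toy illustration of nucleus (top-p) filtering: keep the smallest set of
// tokens whose cumulative probability reaches top_p. The function and its
// signature are hypothetical, for explanation only.
fn nucleus_cutoff(mut probs: Vec<f64>, top_p: f64) -> usize {
    // Sort probabilities in descending order.
    probs.sort_by(|a, b| b.partial_cmp(a).unwrap());
    let mut cum = 0.0;
    for (i, p) in probs.iter().enumerate() {
        cum += p;
        if cum >= top_p {
            return i + 1; // number of tokens kept
        }
    }
    probs.len()
}

fn main() {
    // With top_p = 0.9, the top two tokens cover only 0.8 of the mass,
    // so a third token is needed to reach the threshold.
    let kept = nucleus_cutoff(vec![0.5, 0.3, 0.15, 0.05], 0.9);
    assert_eq!(kept, 3);
    println!("{kept} tokens kept");
}
```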
user: Option<String>
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
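A typical request sets only a few of these optional fields and leaves the rest as None. The pattern can be sketched with a trimmed-down local stand-in for the struct (the real type and its nested field types live in the generating crate; the stand-in name, the reduced field set, and the derived Default are assumptions for illustration):

```rust
// Hypothetical, trimmed stand-in for the generated struct: only a few of
// the 18 fields, just to show the Option-heavy construction pattern.
#[derive(Clone, Debug, Default)]
struct CreateCompletionRequestSketch {
    model: String,          // stands in for CreateCompletionRequest_Model
    prompt: Option<String>, // stands in for CreateCompletionRequest_Prompt
    max_tokens: Option<i64>,
    temperature: Option<f64>,
    seed: Option<i64>,
}

fn build_request() -> CreateCompletionRequestSketch {
    // Set the fields you care about; ..Default::default() leaves the
    // remaining optional fields as None.
    CreateCompletionRequestSketch {
        model: "gpt-3.5-turbo-instruct".to_string(),
        prompt: Some("Say hello".to_string()),
        max_tokens: Some(16),
        temperature: Some(0.7),
        ..Default::default()
    }
}

fn main() {
    let req = build_request();
    assert!(req.seed.is_none()); // untouched fields default to None
    println!("{req:?}");
}
```

Because every tuning knob is an Option, untouched parameters fall back to the server-side defaults rather than being sent explicitly.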
Trait Implementations
impl Clone for CreateCompletionRequest
fn clone(&self) -> CreateCompletionRequest
fn clone_from(&mut self, source: &Self)