pub struct Completion<'a> {
pub prompt: Option<Cow<'a, str>>,
pub suffix: Option<Cow<'a, str>>,
pub max_tokens: u32,
pub temperature: f32,
pub top_p: f32,
pub n: u32,
pub stream: bool,
pub logprobs: Option<u32>,
pub echo: bool,
pub stop: Option<Vec<Cow<'a, str>>>,
pub presence_penalty: f32,
pub frequency_penalty: f32,
pub best_of: u32,
pub logit_bias: Option<HashMap<Cow<'a, str>, i32>>,
pub user: Option<Cow<'a, str>>,
}
Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
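A minimal construction sketch follows; the concrete values are illustrative assumptions, and whether the crate also provides a builder or a Default implementation is not shown here.

use std::borrow::Cow;

let request = Completion {
    prompt: Some(Cow::Borrowed("Say this is a test")),
    suffix: None,
    max_tokens: 16,     // assumed value; keep prompt + max_tokens within the context length
    temperature: 1.0,   // assumed default
    top_p: 1.0,         // assumed default
    n: 1,
    stream: false,
    logprobs: None,
    echo: false,
    stop: None,
    presence_penalty: 0.0,
    frequency_penalty: 0.0,
    best_of: 1,
    logit_bias: None,
    user: Some(Cow::Borrowed("user-1234")),
};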
Fields
prompt: Option<Cow<'a, str>>
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
suffix: Option<Cow<'a, str>>
The suffix that comes after a completion of inserted text.
max_tokens: u32
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model’s context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
temperature: f32
What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p but not both.
top_p: f32
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
n: u32
How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
stream: bool
Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
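As a rough sketch of what consuming such a stream involves (the HTTP client that yields the event lines is an assumption, not part of this struct):

// Returns the JSON payload of one event, or None once "data: [DONE]" arrives.
fn parse_sse_line(line: &str) -> Option<&str> {
    // Data-only events arrive as lines prefixed with "data: ".
    let payload = line.strip_prefix("data: ")?;
    // The stream is terminated by a "data: [DONE]" message.
    if payload == "[DONE]" { None } else { Some(payload) }
}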
logprobs: Option<u32>
Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. If you need more than this, please contact support@openai.com and describe your use case.
echo: bool
Echo back the prompt in addition to the completion.
stop: Option<Vec<Cow<'a, str>>>
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
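A sketch of supplying stop sequences for this field (the sequences themselves are illustrative):

use std::borrow::Cow;

let stop: Option<Vec<Cow<'_, str>>> = Some(vec![Cow::Borrowed("\n"), Cow::Borrowed("###")]);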
presence_penalty: f32
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
frequency_penalty: f32
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
best_of: u32
Generates best_of completions server-side and returns the “best” (the one with the highest log probability per token). Results cannot be streamed.
When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n.
Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
logit_bias: Option<HashMap<Cow<'a, str>, i32>>
Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
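A sketch of building this map (the token IDs are illustrative; look them up with the GPT-2/GPT-3 tokenizer for your own text):

use std::borrow::Cow;
use std::collections::HashMap;

let mut logit_bias: HashMap<Cow<'_, str>, i32> = HashMap::new();
logit_bias.insert(Cow::Borrowed("50256"), -100); // -100 effectively bans this token
logit_bias.insert(Cow::Borrowed("11"), 5);       // a small positive bias makes this token more likely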
user: Option<Cow<'a, str>>
A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse.
Trait Implementations
impl<'a> Clone for Completion<'a>
fn clone(&self) -> Completion<'a>
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.