Struct CreateCompletionRequest

pub struct CreateCompletionRequest {
    pub model: Model,
    pub prompt: Option<PromptInput>,
    pub max_tokens: Option<u32>,
    pub temperature: Option<f64>,
    pub top_p: Option<f64>,
    pub n: Option<u32>,
    pub best_of: Option<u32>,
    pub stream: Option<bool>,
    pub stream_options: Option<ChatCompletionStreamOptions>,
    pub logprobs: Option<u32>,
    pub echo: Option<bool>,
    pub stop: Option<StopSequence>,
    pub presence_penalty: Option<f64>,
    pub frequency_penalty: Option<f64>,
    pub logit_bias: Option<HashMap<String, i32>>,
    pub user: Option<String>,
    pub seed: Option<i64>,
    pub suffix: Option<String>,
}

A request struct for creating text completions with the OpenAI API.

This struct fully reflects the extended specification from OpenAI, including fields such as best_of, seed, and suffix.
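A minimal sketch of constructing a request: a few fields are set explicitly and the rest fall back to the struct's Default implementation. The .into() conversions for model and prompt are assumptions about the Model and PromptInput types, not a documented API; use whatever constructors those types actually expose.

let request = CreateCompletionRequest {
    model: "gpt-3.5-turbo-instruct".into(),           // assumed Into<Model> conversion
    prompt: Some("Write a haiku about Rust.".into()), // assumed Into<PromptInput> conversion
    max_tokens: Some(64),
    temperature: Some(0.7),
    ..Default::default() // every other field is left at its default (unset)
};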

Fields

model: Model

Required. ID of the model to use. For example: "gpt-3.5-turbo-instruct", "davinci-002", or "text-davinci-003".

prompt: Option<PromptInput>

Required. The prompt(s) to generate completions for. Defaults to <|endoftext|> if not provided.

Can be a single string, an array of strings, an array of integers (token IDs), or an array of arrays of integers (multiple token sequences).

max_tokens: Option<u32>

The maximum number of tokens to generate in the completion. Defaults to 16.

The combined length of prompt + max_tokens cannot exceed the model’s context length.

temperature: Option<f64>

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

We generally recommend altering this or top_p but not both.

top_p: Option<f64>

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

n: Option<u32>

How many completions to generate for each prompt. Defaults to 1.

Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure you have reasonable settings for max_tokens and stop.

best_of: Option<u32>

Generates best_of completions server-side and returns the “best” (the one with the highest log probability per token). When used together with n, best_of controls how many candidates are generated and n how many are returned, so best_of must be greater than n. Defaults to 1.

Note: This parameter can quickly consume your token quota if best_of is large.
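For illustration, the sketch below generates five candidates server-side and returns the two with the highest per-token log probability; all other fields (model, prompt, and so on) are elided via Default for brevity.

let request = CreateCompletionRequest {
    best_of: Some(5), // five candidate completions are generated server-side
    n: Some(2),       // only the two best candidates are returned
    ..Default::default()
};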

stream: Option<bool>

Whether to stream back partial progress. Defaults to false.

If set to true, tokens will be sent as data-only server-sent events (SSE) as they become available, with the stream terminated by a data: [DONE] message.

stream_options: Option<ChatCompletionStreamOptions>

Additional options controlling streaming behaviour; only relevant when stream is set to true.

logprobs: Option<u32>

Include the log probabilities on the logprobs most likely tokens, along with the chosen tokens. A value of 5 returns the 5 most likely tokens. Defaults to null.

echo: Option<bool>

Echo back the prompt in addition to the completion. Defaults to false.

stop: Option<StopSequence>

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. Defaults to null.

presence_penalty: Option<f64>

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Defaults to 0.

frequency_penalty: Option<f64>

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.

logit_bias: Option<HashMap<String, i32>>

Modify the likelihood of specified tokens appearing in the completion. Maps token IDs to a bias value from -100 to 100. Defaults to null.
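As an illustration only, a bias map can be built as below; the token IDs are placeholders, since real IDs depend on the tokenizer used by the chosen model.

use std::collections::HashMap;

let mut bias: HashMap<String, i32> = HashMap::new();
bias.insert("50256".to_string(), -100); // -100 effectively bans this (placeholder) token
bias.insert("1639".to_string(), 10);    // positive values make the token more likely

let request = CreateCompletionRequest {
    logit_bias: Some(bias),
    ..Default::default()
};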

user: Option<String>

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. This is optional, but recommended. Example: "user-1234".

seed: Option<i64>

If specified, the system will make a best effort to sample deterministically. Repeated requests with the same seed and parameters should return the same result (best-effort).

Determinism is not guaranteed, and you should refer to the system_fingerprint in the response to monitor backend changes.
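For best-effort reproducibility, fix the seed and keep the other sampling parameters identical between requests, as in this sketch:

let request = CreateCompletionRequest {
    seed: Some(42),         // any fixed value; repeat it to request the same sampling
    temperature: Some(0.0), // a low temperature further reduces run-to-run variance
    ..Default::default()
};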

suffix: Option<String>

The suffix that comes after a completion of inserted text. This parameter is only supported for gpt-3.5-turbo-instruct. Defaults to null.
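A sketch of the insert-style usage this field enables, with the completion generated between prompt and suffix (the .into() conversions are the same assumptions as in the example above):

let request = CreateCompletionRequest {
    model: "gpt-3.5-turbo-instruct".into(),                 // assumed Into<Model> conversion
    prompt: Some("fn add(a: i32, b: i32) -> i32 {".into()), // text before the insertion point
    suffix: Some("}".to_string()),                          // text after the insertion point
    max_tokens: Some(32),
    ..Default::default()
};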

Trait Implementations

impl Clone for CreateCompletionRequest

fn clone(&self) -> CreateCompletionRequest

Returns a duplicate of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for CreateCompletionRequest

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Default for CreateCompletionRequest

fn default() -> CreateCompletionRequest

Returns the “default value” for a type.

impl Serialize for CreateCompletionRequest

fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>
where __S: Serializer,

Serialize this value into the given Serde serializer.
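Because the struct implements Serialize (and Default), it can be turned into the JSON body sent to the completions endpoint. The sketch below assumes serde_json is available as a dependency:

let request = CreateCompletionRequest::default();
let body = serde_json::to_string(&request)   // serde_json is an assumed dependency
    .expect("a default request should serialize");
println!("{body}");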

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> CloneToUninit for T
where T: Clone,

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬 This is a nightly-only experimental API (clone_to_uninit). Performs copy-assignment from self to dest.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> PolicyExt for T
where T: ?Sized,

fn and<P, B, E>(self, other: P) -> And<T, P>
where T: Policy<B, E>, P: Policy<B, E>,

Creates a new Policy that returns Action::Follow only if both self and other return Action::Follow.

fn or<P, B, E>(self, other: P) -> Or<T, P>
where T: Policy<B, E>, P: Policy<B, E>,

Creates a new Policy that returns Action::Follow if either self or other returns Action::Follow.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.

impl<T> ErasedDestructor for T
where T: 'static,