pub struct ChatCompletionRequestBody {
pub model: String,
pub messages: Vec<Message>,
pub store: Option<bool>,
pub frequency_penalty: Option<f32>,
pub logit_bias: Option<FxHashMap<String, i32>>,
pub logprobs: Option<bool>,
pub top_logprobs: Option<u8>,
pub max_completion_tokens: Option<u64>,
pub n: Option<u32>,
pub modalities: Option<Vec<String>>,
pub presence_penalty: Option<f32>,
pub temperature: Option<f32>,
pub response_format: Option<ChatCompletionResponseFormat>,
}
Request body structure for OpenAI Chat Completion API.
This structure contains all the parameters that can be sent to the OpenAI Chat Completion endpoint. Most fields are optional and will be omitted from the JSON if not set.
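The omission of unset optional fields is presumably achieved with serde attributes such as `skip_serializing_if = "Option::is_none"` on the crate's derive. As a dependency-free illustration only (the `render_body` helper below is hypothetical, not part of the openai_tools API), here is how `None` fields drop out of the serialized JSON rather than appearing as `null`:

```rust
// Hypothetical sketch: mimics "optional fields are omitted from the JSON
// if not set" by hand, using only the standard library.
fn render_body(model: &str, temperature: Option<f32>, n: Option<u32>) -> String {
    let mut parts = vec![format!("\"model\":\"{}\"", model)];
    if let Some(t) = temperature {
        parts.push(format!("\"temperature\":{}", t));
    }
    if let Some(n) = n {
        parts.push(format!("\"n\":{}", n));
    }
    format!("{{{}}}", parts.join(","))
}

fn main() {
    // `n` is None, so the key is omitted entirely, not serialized as null.
    let json = render_body("gpt-4o-mini", Some(0.7), None);
    assert_eq!(json, "{\"model\":\"gpt-4o-mini\",\"temperature\":0.7}");
    println!("{}", json);
}
```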
§Fields
model - ID of the model to use (required)
messages - List of conversation messages (required)
store - Whether to store the output for the user
frequency_penalty - Penalty for token frequency (-2.0 to 2.0)
logit_bias - Modify likelihood of specified tokens
logprobs - Whether to return log probabilities
top_logprobs - Number of most likely tokens to return (0-20)
max_completion_tokens - Maximum tokens to generate
n - Number of completion choices to generate
modalities - Output types the model should generate
presence_penalty - Penalty for token presence (-2.0 to 2.0)
temperature - Sampling temperature (0-2)
response_format - Output format specification
§Example
use openai_tools::chat::ChatCompletionRequestBody;
use openai_tools::common::Message;
let mut body = ChatCompletionRequestBody::new("gpt-4o-mini".to_string());
body.messages = vec![Message::from_string("user".to_string(), "Hello!".to_string())];
body.temperature = Some(0.7);
Fields§
§model: String
ID of the model to use. (https://platform.openai.com/docs/models#model-endpoint-compatibility)
messages: Vec<Message>
A list of messages comprising the conversation so far.
store: Option<bool>
Whether to store the output of this chat completion request for the user. false by default.
frequency_penalty: Option<f32>
-2.0 ~ 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
logit_bias: Option<FxHashMap<String, i32>>
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens to an associated bias value from -100 to 100.
logprobs: Option<bool>
Whether to return log probabilities of the output tokens or not.
top_logprobs: Option<u8>
0 ~ 20. Specify the number of most likely tokens to return at each token position, each with an associated log probability.
max_completion_tokens: Option<u64>
An upper bound for the number of tokens that can be generated for a completion.
n: Option<u32>
How many chat completion choices to generate for each input message. 1 by default.
modalities: Option<Vec<String>>
Output types that you would like the model to generate for this request. ["text"] for most models.
presence_penalty: Option<f32>
-2.0 ~ 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
temperature: Option<f32>
0 ~ 2. What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
response_format: Option<ChatCompletionResponseFormat>
An object specifying the format that the model must output. (https://platform.openai.com/docs/guides/structured-outputs)
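The ranges documented above (temperature 0-2, penalties -2.0 to 2.0, top_logprobs 0-20) are enforced server-side by the API; nothing on this page indicates the crate validates them client-side. As a standalone sketch of those bounds (`clamp_params` is a hypothetical helper, not part of the openai_tools API):

```rust
// Hypothetical helper illustrating the documented parameter ranges.
fn clamp_params(temperature: f32, frequency_penalty: f32, top_logprobs: i32) -> (f32, f32, u8) {
    (
        temperature.clamp(0.0, 2.0),        // temperature: 0 ~ 2
        frequency_penalty.clamp(-2.0, 2.0), // frequency_penalty: -2.0 ~ 2.0
        top_logprobs.clamp(0, 20) as u8,    // top_logprobs: 0 ~ 20
    )
}

fn main() {
    // Out-of-range inputs are pulled back to the documented bounds.
    assert_eq!(clamp_params(3.5, -4.0, 42), (2.0, -2.0, 20));
    assert_eq!(clamp_params(0.5, 1.0, 5), (0.5, 1.0, 5));
}
```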
Implementations§
impl ChatCompletionRequestBody
pub fn new(model_id: String) -> Self
Creates a new ChatCompletionRequestBody with the specified model ID.
All other fields are initialized to their default values and can be configured using the builder pattern methods.
§Arguments
model_id - The ID of the OpenAI model to use
§Returns
A new ChatCompletionRequestBody instance with the model ID set.
§Example
use openai_tools::chat::ChatCompletionRequestBody;
let body = ChatCompletionRequestBody::new("gpt-4o-mini".to_string());
assert_eq!(body.model, "gpt-4o-mini");
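The "builder pattern methods" mentioned above are crate-specific and their names are not shown on this page. Purely as a generic illustration of the pattern (`RequestSketch` and its methods are hypothetical, not openai_tools API), chainable setters in Rust are typically shaped like this:

```rust
// Hypothetical struct demonstrating the chainable-setter builder style.
#[derive(Default, Debug, PartialEq)]
struct RequestSketch {
    model: String,
    temperature: Option<f32>,
}

impl RequestSketch {
    fn new(model: &str) -> Self {
        Self { model: model.to_string(), ..Default::default() }
    }
    // Each setter takes `self` by value and returns it, enabling chaining.
    fn temperature(mut self, t: f32) -> Self {
        self.temperature = Some(t);
        self
    }
}

fn main() {
    let body = RequestSketch::new("gpt-4o-mini").temperature(0.7);
    assert_eq!(body.model, "gpt-4o-mini");
    assert_eq!(body.temperature, Some(0.7));
}
```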
Trait Implementations§
impl Clone for ChatCompletionRequestBody
fn clone(&self) -> ChatCompletionRequestBody
const fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for ChatCompletionRequestBody
impl Default for ChatCompletionRequestBody
fn default() -> ChatCompletionRequestBody
impl<'de> Deserialize<'de> for ChatCompletionRequestBody
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where
    __D: Deserializer<'de>,
Auto Trait Implementations§
impl Freeze for ChatCompletionRequestBody
impl RefUnwindSafe for ChatCompletionRequestBody
impl Send for ChatCompletionRequestBody
impl Sync for ChatCompletionRequestBody
impl Unpin for ChatCompletionRequestBody
impl UnwindSafe for ChatCompletionRequestBody
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.
impl<T> Pointable for T
impl<T> PolicyExt for T
where
    T: ?Sized,
impl<R, P> ReadPrimitive<R> for P
fn read_from_little_endian(read: &mut R) -> Result<Self, Error>
Read this value from the supplied reader. Same as ReadEndian::read_from_little_endian().