pub struct RequestBody {
pub messages: Vec<Message>,
pub model: String,
pub stream: bool,
pub frequency_penalty: Option<f32>,
pub presence_penalty: Option<f32>,
pub max_tokens: Option<u32>,
pub max_completion_tokens: Option<u32>,
pub response_format: Option<ResponseFormat>,
pub safety_identifier: Option<String>,
pub seed: Option<i64>,
pub n: Option<u32>,
pub stop: Option<StopKeywords>,
pub stream_options: Option<StreamOptions>,
pub temperature: Option<f32>,
pub top_p: Option<f32>,
pub tools: Option<Vec<RequestTool>>,
pub tool_choice: Option<ToolChoice>,
pub logprobs: Option<bool>,
pub top_logprobs: Option<u32>,
pub extra_body: Option<ExtraBody>,
pub extra_body_map: Option<Map<String, Value>>,
}
Fields
messages: Vec<Message>
A list of messages comprising the conversation so far.
model: String
Name of the model to use to generate the response.
stream: bool
Although this field is optional in the upstream API, you should set it explicitly so the request matches the kind of response you expect; get_stream_response requires it to be true.
frequency_penalty: Option<f32>
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
presence_penalty: Option<f32>
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
max_tokens: Option<u32>
The maximum number of tokens that can be generated in the chat completion.
Deprecated according to OpenAI's Python SDK in favour of max_completion_tokens.
max_completion_tokens: Option<u32>
An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
response_format: Option<ResponseFormat>
An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
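For reference, a small sketch of the two wire shapes described above, written with serde_json rather than this crate's ResponseFormat type; the schema contents are a made-up illustration.

use serde_json::json;

fn main() {
    // Older JSON mode: the generated message is guaranteed to be valid JSON.
    let json_mode = json!({ "type": "json_object" });

    // Structured Outputs: the generated message must match the supplied schema.
    // The schema below is hypothetical.
    let structured = json!({
        "type": "json_schema",
        "json_schema": {
            "name": "answer",
            "schema": {
                "type": "object",
                "properties": { "text": { "type": "string" } },
                "required": ["text"]
            }
        }
    });

    println!("{json_mode}\n{structured}");
}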
safety_identifier: Option<String>
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The ID should be a string that uniquely identifies each user. It is recommended to hash their username or email address in order to avoid sending any identifying information.
seed: Option<i64>
If specified, the system will make a best effort to sample deterministically. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
n: Option<u32>
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
stop: Option<StopKeywords>
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
stream_options: Option<StreamOptions>
Options for streaming responses. Only set this when stream is true.
temperature: Option<f32>
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. It is generally recommended to alter this or top_p, but not both.
top_p: Option<f32>
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
It is generally recommended to alter this or temperature, but not both.
tools: Option<Vec<RequestTool>>
A list of tools the model may call.
tool_choice: Option<ToolChoice>
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
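As a rough sketch, the forcing shape quoted above is shown next to a matching function tool definition in OpenAI wire format, built with serde_json instead of the crate's RequestTool and ToolChoice types; the function name and parameters are hypothetical.

use serde_json::json;

fn main() {
    // A single function tool (hypothetical function, for illustration only).
    let tools = json!([{
        "type": "function",
        "function": {
            "name": "my_function",
            "description": "Illustrative function description.",
            "parameters": { "type": "object", "properties": {} }
        }
    }]);

    // Force the model to call that specific tool.
    let tool_choice = json!({
        "type": "function",
        "function": { "name": "my_function" }
    });

    println!("{tools}\n{tool_choice}");
}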
logprobs: Option<bool>
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
top_logprobs: Option<u32>
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
extra_body: Option<ExtraBody>
Additional request fields that are not part of the standard OpenAI API.
extra_body_map: Option<Map<String, Value>>
Additional request fields that are not part of the standard OpenAI API and are not covered by the ExtraBody struct.
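As a rough illustration, a provider-specific flag could be attached through the map variant; this sketch assumes Map and Value are serde_json's types, and the enable_thinking key is purely hypothetical.

use openai_interface::chat::request::{Message, RequestBody};
use serde_json::{Map, Value};

fn main() {
    let mut extra = Map::new();
    // Hypothetical provider-specific flag, for illustration only.
    extra.insert("enable_thinking".to_string(), Value::Bool(true));

    let request = RequestBody {
        messages: vec![Message::User {
            content: "Hello".to_string(),
            name: None,
        }],
        model: "deepseek-chat".to_string(),
        stream: false,
        extra_body_map: Some(extra),
        ..Default::default()
    };
    let _ = request;
}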
Implementations
impl RequestBody
pub async fn get_response(&self, url: &str, key: &str) -> Result<String>
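Presumably the non-streaming counterpart of get_stream_response below. A minimal sketch of a call, assuming a chat-completions endpoint; the URL and API key are placeholders.

use openai_interface::chat::request::{Message, RequestBody};

#[tokio::main]
async fn main() {
    let request = RequestBody {
        messages: vec![Message::User {
            content: "Say hello briefly.".to_string(),
            name: None,
        }],
        model: "deepseek-chat".to_string(),
        stream: false, // non-streaming request
        ..Default::default()
    };

    // Placeholder endpoint and key; substitute your provider's values.
    let body = request
        .get_response("https://api.deepseek.com/chat/completions", "YOUR_API_KEY")
        .await
        .unwrap();
    println!("{}", body);
}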
pub async fn get_stream_response(
    &self,
    url: &str,
    api_key: &str,
) -> Result<BoxStream<'static, Result<String, Error>>, Error>
Gets a streaming response. You must ensure that self.stream is true; otherwise this method will panic.
Example
use std::sync::LazyLock;

use futures_util::StreamExt;
use openai_interface::chat::request::{Message, RequestBody};

static DEEPSEEK_API_KEY: LazyLock<&str> =
    LazyLock::new(|| include_str!("../.././keys/deepseek_domestic_key").trim());
const DEEPSEEK_CHAT_URL: &'static str = "https://api.deepseek.com/chat/completions";
const DEEPSEEK_MODEL: &'static str = "deepseek-chat";

#[tokio::main]
async fn main() {
    let request = RequestBody {
        messages: vec![
            Message::System {
                content: "This is a request for testing purposes. Reply briefly".to_string(),
                name: None,
            },
            Message::User {
                content: "What's your name?".to_string(),
                name: None,
            },
        ],
        model: DEEPSEEK_MODEL.to_string(),
        stream: true,
        ..Default::default()
    };

    let mut response = request
        .get_stream_response(DEEPSEEK_CHAT_URL, *DEEPSEEK_API_KEY)
        .await
        .unwrap();

    while let Some(chunk) = response.next().await {
        println!("{}", chunk.unwrap());
    }
}