pub struct OpenAIChatCompletionRequest {
pub model: String,
pub messages: Vec<OpenAIChatMessage>,
pub temperature: Option<f32>,
pub max_tokens: Option<u32>,
pub top_p: Option<f32>,
pub frequency_penalty: Option<f32>,
pub presence_penalty: Option<f32>,
pub stop: Option<Vec<String>>,
pub user: Option<String>,
pub provider: Option<String>,
pub stream: Option<bool>,
pub logit_bias: Option<Value>,
pub logprobs: Option<bool>,
pub top_logprobs: Option<u32>,
pub n: Option<u32>,
pub response_format: Option<ResponseFormat>,
pub tools: Option<Vec<Tool>>,
pub tool_choice: Option<ToolChoice>,
pub thinking_config: Option<ThinkingConfig>,
pub thinking: Option<Value>,
}
OpenAI-compatible request payload with full message replay support.
Fields
model: String - The identifier of the model to use for the completion.
messages: Vec<OpenAIChatMessage> - Full OpenAI-compatible message history.
temperature: Option<f32> - The sampling temperature to use.
max_tokens: Option<u32> - The maximum number of tokens to generate in the completion.
top_p: Option<f32> - Nucleus sampling parameter.
frequency_penalty: Option<f32> - Frequency penalty.
presence_penalty: Option<f32> - Presence penalty.
stop: Option<Vec<String>> - Stop sequences.
user: Option<String> - End-user identifier.
provider: Option<String> - Router/provider hint.
stream: Option<bool> - If true, the response is streamed as SSE events.
logit_bias: Option<Value> - Logit bias map.
logprobs: Option<bool> - Whether to return log probabilities.
top_logprobs: Option<u32> - Number of top log probabilities to return.
n: Option<u32> - Number of completion choices to generate.
response_format: Option<ResponseFormat> - Structured response format.
tools: Option<Vec<Tool>> - Tools available to the model.
tool_choice: Option<ToolChoice> - Tool selection strategy.
thinking_config: Option<ThinkingConfig> - Gemini thinking configuration.
thinking: Option<Value> - Anthropic extended-thinking configuration (thinking.budget_tokens). Serialised as the thinking top-level field so it is passed through to OpenRouter/Anthropic as {"type":"enabled","budget_tokens":N}.
Implementations
impl OpenAIChatCompletionRequest
pub fn new(model: impl Into<String>, messages: Vec<OpenAIChatMessage>) -> Self
Creates a new OpenAI-compatible chat completion request.
pub fn with_temperature(self, temperature: f32) -> Self
Sets the sampling temperature.
pub fn with_max_tokens(self, max_tokens: u32) -> Self
Sets the maximum number of tokens to generate.
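The with_* methods above follow Rust's consuming-builder pattern: each takes self by value, sets one field, and returns the updated value so calls can be chained. A minimal sketch, using a hypothetical Request type rather than the crate's actual definition:

```rust
// Hypothetical stand-in for OpenAIChatCompletionRequest, showing the
// consuming-builder pattern only; field set and names are illustrative.
struct Request {
    model: String,
    temperature: Option<f32>,
    max_tokens: Option<u32>,
}

impl Request {
    fn new(model: impl Into<String>) -> Self {
        Request {
            model: model.into(),
            temperature: None,
            max_tokens: None,
        }
    }

    // Consumes the request and returns it with the temperature set.
    fn with_temperature(mut self, temperature: f32) -> Self {
        self.temperature = Some(temperature);
        self
    }

    // Consumes the request and returns it with the token limit set.
    fn with_max_tokens(mut self, max_tokens: u32) -> Self {
        self.max_tokens = Some(max_tokens);
        self
    }
}
```

Usage then chains naturally: `Request::new("model-x").with_temperature(0.7).with_max_tokens(256)` (the model name is a placeholder). Because each method consumes and returns the value, no mutable binding is needed between steps.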
pub fn with_provider(self, provider: impl Into<String>) -> Self
Sets a provider hint.
pub fn with_stream(self, stream: bool) -> Self
Enables or disables streaming.
pub fn with_top_p(self, top_p: f32) -> Self
Sets the nucleus sampling parameter.
pub fn with_frequency_penalty(self, frequency_penalty: f32) -> Self
Sets the frequency penalty.
pub fn with_presence_penalty(self, presence_penalty: f32) -> Self
Sets the presence penalty.
pub fn with_logit_bias(self, logit_bias: Value) -> Self
Sets the logit bias map.
pub fn with_logprobs(self, logprobs: bool) -> Self
Enables or disables log probabilities.
pub fn with_top_logprobs(self, top_logprobs: u32) -> Self
Sets the number of top log probabilities to return.
pub fn with_response_format(self, response_format: ResponseFormat) -> Self
Sets the structured response format.
pub fn with_tools(self, tools: Vec<Tool>) -> Self
Sets the tools available to the model.
pub fn with_tool_choice(self, tool_choice: ToolChoice) -> Self
Sets the tool selection strategy.
pub fn with_thinking_config(self, thinking_config: ThinkingConfig) -> Self
Sets the Gemini thinking configuration.
pub fn with_include_thoughts(self, include_thoughts: bool) -> Self
Enables or disables thought summaries.
pub fn with_thinking_level(self, thinking_level: ThinkingLevel) -> Self
Sets the Gemini 3 thinking level.
pub fn with_thinking_budget(self, thinking_budget: i32) -> Self
Sets the Gemini 2.5 thinking budget.
pub fn with_anthropic_thinking(self, budget_tokens: i32) -> Self
Sets the Anthropic extended-thinking configuration.
Serialised as thinking: {"type":"enabled","budget_tokens":N}, the format expected by Anthropic's API via OpenRouter/Rainy API.
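The exact JSON shape the thinking field serialises to can be sketched with a small helper; the function name is hypothetical, but the output matches the {"type":"enabled","budget_tokens":N} format documented above:

```rust
// Hypothetical helper illustrating the serialised form of the `thinking`
// field; the real crate produces this via serde, not manual formatting.
fn anthropic_thinking_json(budget_tokens: i32) -> String {
    format!(r#"{{"type":"enabled","budget_tokens":{}}}"#, budget_tokens)
}
```

For example, `with_anthropic_thinking(4096)` would put `{"type":"enabled","budget_tokens":4096}` under the top-level thinking key of the request body.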
pub fn validate_openai_compatibility(&self) -> Result<(), String>
Validates compatibility using the same parameter rules as the simple chat request.
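The crate does not document the exact rules here, but a validator of this shape typically range-checks optional sampling parameters. A sketch under that assumption (the ranges below follow the OpenAI API's documented bounds, not necessarily this crate's):

```rust
// Hypothetical range checks; the crate's actual rules may differ.
// OpenAI documents temperature in [0, 2] and top_p in [0, 1].
fn validate(temperature: Option<f32>, top_p: Option<f32>) -> Result<(), String> {
    if let Some(t) = temperature {
        if !(0.0..=2.0).contains(&t) {
            return Err(format!("temperature {} out of range [0, 2]", t));
        }
    }
    if let Some(p) = top_p {
        if !(0.0..=1.0).contains(&p) {
            return Err(format!("top_p {} out of range [0, 1]", p));
        }
    }
    Ok(())
}
```

Returning Result<(), String> lets callers surface the first violation as a plain error message, matching this method's signature.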
pub fn supports_thinking(&self) -> bool
Checks whether the selected model supports thinking features.
pub fn requires_thought_signatures(&self) -> bool
Checks whether the selected model requires thought signatures for function calling.
Trait Implementations
impl Clone for OpenAIChatCompletionRequest
fn clone(&self) -> OpenAIChatCompletionRequest
fn clone_from(&mut self, source: &Self)