pub struct Body {
    pub model: String,
    pub instructions: Option<String>,
    pub plain_text_input: Option<String>,
    pub messages_input: Option<Vec<Message>>,
    pub tools: Option<Vec<Tool>>,
    pub structured_output: Option<Format>,
    pub temperature: Option<f64>,
    pub max_output_tokens: Option<usize>,
    pub max_tool_calls: Option<usize>,
    pub metadata: Option<HashMap<String, Value>>,
    pub parallel_tool_calls: Option<bool>,
    pub include: Option<Vec<Include>>,
    pub background: Option<bool>,
    pub conversation: Option<String>,
    pub previous_response_id: Option<String>,
    pub reasoning: Option<Reasoning>,
    pub safety_identifier: Option<String>,
    pub service_tier: Option<String>,
    pub store: Option<bool>,
    pub stream: Option<bool>,
    pub stream_options: Option<StreamOptions>,
    pub top_logprobs: Option<usize>,
    pub top_p: Option<f64>,
    pub truncation: Option<Truncation>,
}
Represents the body of a request to the OpenAI Responses API
This struct contains all the parameters for making requests to the OpenAI Responses API. It supports both plain text and structured message input, along with extensive configuration options for tools, reasoning, output formatting, and response behavior.
§Required Parameters
- model: The ID of the model to use
- Either plain_text_input OR messages_input (mutually exclusive)
§API Reference
Based on the OpenAI Responses API specification: https://platform.openai.com/docs/api-reference/responses/create
§Examples
§Simple Text Input
use openai_tools::responses::request::Body;
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("What is the weather like?".to_string()),
    ..Default::default()
};
§With Messages and Tools
use openai_tools::responses::request::Body;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;
let messages = vec![
    Message::from_string(Role::User, "Help me with coding")
];
let body = Body {
    model: "gpt-4".to_string(),
    messages_input: Some(messages),
    instructions: Some("You are a helpful coding assistant".to_string()),
    max_output_tokens: Some(1000),
    ..Default::default()
};
Fields§
§model: String
The ID of the model to use for generating responses
Specifies which OpenAI model to use for response generation. Common values include “gpt-4”, “gpt-4-turbo”, “gpt-3.5-turbo”.
§Required
This field is required for all requests.
§Examples
"gpt-4"
- Latest GPT-4 model"gpt-4-turbo"
- GPT-4 Turbo for faster responses"gpt-3.5-turbo"
- More cost-effective option
§instructions: Option<String>
Optional instructions to guide the model’s behavior and response style
Provides system-level instructions that define how the model should behave, its personality, response format, or any other behavioral guidance.
§Examples
"You are a helpful assistant that provides concise answers"
"Respond only with JSON formatted data"
"Act as a professional code reviewer"
§plain_text_input: Option<String>
Plain text input for simple text-based requests
Use this for straightforward text input when you don’t need the structure of messages with roles. This is mutually exclusive with messages_input.
§Mutually Exclusive
Cannot be used together with messages_input. Choose one based on your needs:
- Use plain_text_input for simple, single-turn interactions
- Use messages_input for conversation history or role-based interactions
§Examples
"What is the capital of France?"
"Summarize this article: [article content]"
"Write a haiku about programming"
§messages_input: Option<Vec<Message>>
Structured message input for conversation-style interactions
Use this when you need conversation history, different message roles (user, assistant, system), or structured dialogue. This is mutually exclusive with plain_text_input.
§Mutually Exclusive
Cannot be used together with plain_text_input.
§Message Roles
- System: Instructions for the model’s behavior
- User: User input or questions
- Assistant: Previous model responses (for conversation history)
§Examples
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;
let messages = vec![
    Message::from_string(Role::System, "You are a helpful assistant"),
    Message::from_string(Role::User, "Hello!"),
    Message::from_string(Role::Assistant, "Hi there! How can I help you?"),
    Message::from_string(Role::User, "What's 2+2?"),
];
§tools: Option<Vec<Tool>>
Optional tools that the model can use during response generation
Provides the model with access to external tools like web search, code execution, file access, or custom functions. The model will automatically decide when and how to use these tools based on the query.
§Tool Types
- Web search tools for finding current information
- Code interpreter for running and analyzing code
- File search tools for accessing document collections
- Custom function tools for specific business logic
§Examples
use openai_tools::common::tool::Tool;
use openai_tools::common::parameters::ParameterProperty;
let tools = vec![
    Tool::function("search", "Search the web", Vec::<(&str, ParameterProperty)>::new(), false),
    Tool::function("calculate", "Perform calculations", Vec::<(&str, ParameterProperty)>::new(), false),
];
§structured_output: Option<Format>
Optional structured output format specification
Defines the structure and format for the model’s response output. Use this when you need the response in a specific JSON schema format or other structured format for programmatic processing.
§Examples
use openai_tools::common::structured_output::Schema;
use openai_tools::responses::request::Format;
let format = Format::new(Schema::responses_json_schema("response_schema"));
§temperature: Option<f64>
Optional sampling temperature for controlling response randomness
Controls the randomness and creativity of the model’s responses. Higher values make the output more random and creative, while lower values make it more focused, deterministic, and consistent.
§Range
- Range: 0.0 to 2.0
- Default: 1.0 (if not specified)
- Minimum: 0.0 (most deterministic, least creative)
- Maximum: 2.0 (most random, most creative)
§Recommended Values
- 0.0 - 0.3: Highly focused and deterministic
  - Best for: Factual questions, code generation, translations
  - Behavior: Very consistent, predictable responses
- 0.3 - 0.7: Balanced creativity and consistency
  - Best for: General conversation, explanations, analysis
  - Behavior: Good balance between creativity and reliability
- 0.7 - 1.2: More creative and varied responses
  - Best for: Creative writing, brainstorming, ideation
  - Behavior: More diverse and interesting outputs
- 1.2 - 2.0: Highly creative and unpredictable
  - Best for: Experimental creative tasks, humor, unconventional ideas
  - Behavior: Very diverse but potentially less coherent
§Usage Guidelines
- Start with 0.7 for most applications as a good default
- Use 0.0-0.3 when you need consistent, reliable responses
- Use 0.8-1.2 for creative tasks that still need coherence
- Avoid values above 1.5 unless you specifically want very random outputs
§API Reference
Corresponds to the temperature parameter in the OpenAI Responses API:
https://platform.openai.com/docs/api-reference/responses/create
§Examples
use openai_tools::responses::request::Responses;
// Deterministic, factual responses
let mut client_factual = Responses::new();
client_factual.temperature(0.2);
// Balanced creativity and consistency
let mut client_balanced = Responses::new();
client_balanced.temperature(0.7);
// High creativity for brainstorming
let mut client_creative = Responses::new();
client_creative.temperature(1.1);
§max_output_tokens: Option<usize>
Optional maximum number of tokens to generate in the response
Controls the maximum length of the generated response. The actual response may be shorter if the model naturally concludes or hits other stopping conditions.
§Range
- Minimum: 1
- Maximum: Depends on the model (typically 4096-8192 for most models)
§Default Behavior
If not specified, the model will use its default maximum output length.
§Examples
- Some(100) - Short responses, good for summaries or brief answers
- Some(1000) - Medium responses, suitable for detailed explanations
- Some(4000) - Long responses, for comprehensive analysis or long-form content
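For instance, a minimal sketch capping a summary at a short length (Body is assumed to implement Default, as in the examples above; the prompt text is illustrative):
use openai_tools::responses::request::Body;
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("Summarize the plot of Hamlet in two sentences.".to_string()),
    // Cap the generated response at roughly a short paragraph.
    max_output_tokens: Some(200),
    ..Default::default()
};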
§max_tool_calls: Option<usize>
Optional maximum number of tool calls to make
Limits how many tools the model can invoke during response generation. This helps control cost and response time when using multiple tools.
§Range
- Minimum: 0 (no tool calls allowed)
- Maximum: Implementation-dependent
§Use Cases
- Set to Some(1) for single tool usage
- Set to Some(0) to disable tool usage entirely
- Leave as None for unlimited tool usage (subject to other constraints)
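A sketch combining a single web-search tool with a one-call limit (the Tool::function call follows the tools example above; tool and prompt names are illustrative):
use openai_tools::responses::request::Body;
use openai_tools::common::tool::Tool;
use openai_tools::common::parameters::ParameterProperty;
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("What is the latest Rust release?".to_string()),
    tools: Some(vec![
        Tool::function("search", "Search the web", Vec::<(&str, ParameterProperty)>::new(), false),
    ]),
    // Bound cost and latency: allow at most one tool invocation.
    max_tool_calls: Some(1),
    ..Default::default()
};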
§metadata: Option<HashMap<String, Value>>
Optional metadata to include with the request
Arbitrary key-value pairs that can be attached to the request for tracking, logging, or passing additional context that doesn’t affect the model’s behavior.
§Common Use Cases
- Request tracking: {"request_id": "req_123", "user_id": "user_456"}
- A/B testing: {"experiment": "variant_a", "test_group": "control"}
- Analytics: {"session_id": "sess_789", "feature": "chat"}
§Examples
use std::collections::HashMap;
use serde_json::Value;
let mut metadata = HashMap::new();
metadata.insert("user_id".to_string(), Value::String("user123".to_string()));
metadata.insert("session_id".to_string(), Value::String("sess456".to_string()));
metadata.insert("priority".to_string(), Value::Number(serde_json::Number::from(1)));
§parallel_tool_calls: Option<bool>
Optional flag to enable parallel tool calls
When enabled, the model can make multiple tool calls simultaneously rather than sequentially. This can significantly improve response time when multiple independent tools need to be used.
§Default
If not specified, defaults to the model’s default behavior (usually true).
§When to Use
- Some(true): Enable when tools are independent and can run in parallel
- Some(false): Disable when tools have dependencies or order matters
§Examples
- Weather + Stock prices: Can run in parallel (true)
- File read + File analysis: Should run sequentially (false)
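A brief sketch enabling concurrent calls for independent lookups (tool definitions omitted for brevity; the prompt is illustrative):
use openai_tools::responses::request::Body;
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("Compare the current weather in Tokyo and London.".to_string()),
    // The two lookups are independent, so let the model issue them concurrently.
    parallel_tool_calls: Some(true),
    ..Default::default()
};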
§include: Option<Vec<Include>>
Optional fields to include in the output
Specifies additional metadata and information to include in the response beyond the main generated content. This can include tool call details, reasoning traces, log probabilities, and more.
§Available Inclusions
- Web search call sources and results
- Code interpreter execution outputs
- Image URLs from various sources
- Log probabilities for generated tokens
- Reasoning traces and encrypted content
§Examples
use openai_tools::responses::request::Include;
let includes = vec![
    Include::WebSearchCall,
    Include::LogprobsInOutput,
    Include::ReasoningEncryptedContent,
];
§background: Option<bool>
Optional flag to enable background processing
When enabled, allows the request to be processed in the background, potentially improving throughput for non-urgent requests.
§Use Cases
- Some(true): Batch processing, non-interactive requests
- Some(false) or None: Real-time, interactive requests
§Trade-offs
- Background processing may have lower latency guarantees
- May be more cost-effective for bulk operations
- May have different rate limiting behavior
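A minimal sketch marking a non-interactive batch request for background processing (prompt text is illustrative):
use openai_tools::responses::request::Body;
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("Generate a weekly summary of these release notes.".to_string()),
    // Non-interactive batch job: background processing is acceptable.
    background: Some(true),
    ..Default::default()
};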
§conversation: Option<String>
Optional conversation ID for tracking
Identifier for grouping related requests as part of the same conversation or session. This helps with context management and analytics.
§Format
Typically a UUID or other unique identifier string.
§Examples
Some("conv_123e4567-e89b-12d3-a456-426614174000".to_string())
Some("user123_session456".to_string())
§previous_response_id: Option<String>
Optional ID of the previous response for context
References a previous response in the same conversation to maintain context and enable features like response chaining or follow-up handling.
§Use Cases
- Multi-turn conversations with context preservation
- Follow-up questions or clarifications
- Response refinement or iteration
§Examples
Some("resp_abc123def456".to_string())
§reasoning: Option<Reasoning>
Optional reasoning configuration
Controls how the model approaches complex reasoning tasks, including the effort level and format of reasoning explanations.
§Use Cases
- Complex problem-solving requiring deep analysis
- Mathematical or logical reasoning tasks
- When you need insight into the model’s reasoning process
§Examples
use openai_tools::responses::request::{Reasoning, ReasoningEffort, ReasoningSummary};
let reasoning = Reasoning {
    effort: Some(ReasoningEffort::High),
    summary: Some(ReasoningSummary::Detailed),
};
§safety_identifier: Option<String>
Optional safety identifier
Identifier for safety and content filtering configurations. Used to specify which safety policies should be applied to the request.
§Examples
Some("strict".to_string())
- Apply strict content filteringSome("moderate".to_string())
- Apply moderate content filteringSome("permissive".to_string())
- Apply permissive content filtering
§service_tier: Option<String>
Optional service tier specification
Specifies the service tier for the request, which may affect processing priority, rate limits, and pricing.
§Common Values
Some("default".to_string())
- Standard service tierSome("scale".to_string())
- High-throughput tierSome("premium".to_string())
- Premium service tier with enhanced features
§store: Option<bool>
Optional flag to store the conversation
When enabled, the conversation may be stored for future reference, training, or analytics purposes (subject to privacy policies).
§Privacy Considerations
- Some(true): Allow storage (check privacy policies)
- Some(false): Explicitly opt-out of storage
- None: Use default storage policy
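A combined sketch setting a service tier and explicitly opting out of storage (the tier string follows the values listed above; accepted values are ultimately defined by the API, and the prompt is illustrative):
use openai_tools::responses::request::Body;
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("Classify this support ticket: ...".to_string()),
    // Tier strings are illustrative; accepted values are defined by the API.
    service_tier: Some("default".to_string()),
    // Explicitly opt out of conversation storage.
    store: Some(false),
    ..Default::default()
};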
§stream: Option<bool>
Optional flag to enable streaming responses
When enabled, the response will be streamed back in chunks as it’s generated, allowing for real-time display of partial results.
§Use Cases
- Some(true): Real-time chat interfaces, live text generation
- Some(false): Batch processing, when you need the complete response
§Considerations
- Streaming responses require different handling in client code
- May affect some response features or formatting options
§stream_options: Option<StreamOptions>
Optional streaming configuration options
Additional options for controlling streaming response behavior, such as whether to include obfuscated placeholder content.
§Only Relevant When Streaming
This field is only meaningful when stream is Some(true).
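A minimal streaming sketch; constructing a StreamOptions value is omitted here since its fields are type-specific, so that field is simply left unset:
use openai_tools::responses::request::Body;
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("Tell me a short story.".to_string()),
    // Receive the response incrementally as it is generated.
    stream: Some(true),
    // stream_options could further tune chunk behavior; left unset here.
    ..Default::default()
};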
§top_logprobs: Option<usize>
Optional number of top log probabilities to include
Specifies how many of the most likely alternative tokens to include with their log probabilities for each generated token.
§Range
- Minimum: 0 (no log probabilities)
- Maximum: Model-dependent (typically 5-20)
§Use Cases
- Model analysis and debugging
- Confidence estimation
- Alternative response exploration
§Examples
- Some(1) - Include the top alternative for each token
- Some(5) - Include top 5 alternatives for detailed analysis
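A sketch requesting per-token alternatives, paired with the Include::LogprobsInOutput flag shown earlier (the prompt text is illustrative):
use openai_tools::responses::request::{Body, Include};
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("Is this review positive or negative? ...".to_string()),
    // Request the top 5 alternatives per token for confidence analysis.
    top_logprobs: Some(5),
    include: Some(vec![Include::LogprobsInOutput]),
    ..Default::default()
};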
§top_p: Option<f64>
Optional nucleus sampling parameter
Controls the randomness of the model’s responses by limiting the cumulative probability of considered tokens.
§Range
- 0.0 to 1.0
- Lower values (e.g., 0.1) make responses more focused and deterministic
- Higher values (e.g., 0.9) make responses more diverse and creative
§Default
If not specified, uses the model’s default value (typically around 1.0).
§Examples
- Some(0.1) - Very focused, deterministic responses
- Some(0.7) - Balanced creativity and focus
- Some(0.95) - High creativity and diversity
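A focused-sampling sketch (the prompt is illustrative):
use openai_tools::responses::request::Body;
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("Translate 'good morning' into French.".to_string()),
    // Nucleus sampling: only the smallest token set whose cumulative
    // probability reaches 0.1 is considered, keeping output focused.
    top_p: Some(0.1),
    ..Default::default()
};
OpenAI’s documentation generally recommends altering top_p or temperature, but not both at once.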
§truncation: Option<Truncation>
Optional truncation behavior configuration
Controls how the system handles inputs that exceed the maximum context length supported by the model.
§Options
- Some(Truncation::Auto) - Automatically truncate long inputs
- Some(Truncation::Disabled) - Return an error for long inputs
- None - Use system default behavior
§Use Cases
- Auto: When you want to handle long documents gracefully
- Disabled: When you need to ensure complete input processing
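A sketch opting into automatic truncation for long documents; this assumes Truncation is exported from openai_tools::responses::request alongside Include and Reasoning:
use openai_tools::responses::request::{Body, Truncation};
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("Summarize this very long document: ...".to_string()),
    // Truncate gracefully instead of returning an error on overflow.
    truncation: Some(Truncation::Auto),
    ..Default::default()
};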
Implementations§
impl Body
pub fn new(
    model: String,
    instructions: Option<String>,
    plain_text_input: Option<String>,
    messages_input: Option<Vec<Message>>,
    tools: Option<Vec<Tool>>,
    structured_output: Option<Format>,
    temperature: Option<f64>,
    max_output_tokens: Option<usize>,
    max_tool_calls: Option<usize>,
    metadata: Option<HashMap<String, Value>>,
    parallel_tool_calls: Option<bool>,
    include: Option<Vec<Include>>,
    background: Option<bool>,
    conversation: Option<String>,
    previous_response_id: Option<String>,
    reasoning: Option<Reasoning>,
    safety_identifier: Option<String>,
    service_tier: Option<String>,
    store: Option<bool>,
    stream: Option<bool>,
    stream_options: Option<StreamOptions>,
    top_logprobs: Option<usize>,
    top_p: Option<f64>,
    truncation: Option<Truncation>,
) -> Self
Constructs a new Body.
Trait Implementations§
impl Serialize for Body
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
    S: Serializer,
Custom serialization implementation for the request body
This implementation handles the conversion of either plain text input or messages input into the appropriate “input” field format required by the OpenAI API. It also conditionally includes optional fields like tools and text formatting.
§Errors
Returns a serialization error if neither plain_text_input nor messages_input is set, as one of them is required.
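A short sketch of what this means in practice, using serde_json (already implied by the Value metadata type); the exact JSON layout beyond the "input" key is an implementation detail:
use openai_tools::responses::request::Body;
let body = Body {
    model: "gpt-4".to_string(),
    plain_text_input: Some("Hello!".to_string()),
    ..Default::default()
};
// Succeeds because one input field is set; with neither set, this
// would return the serialization error described above.
let json = serde_json::to_string(&body).unwrap();
assert!(json.contains("\"input\""));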
Auto Trait Implementations§
impl Freeze for Body
impl RefUnwindSafe for Body
impl Send for Body
impl Sync for Body
impl Unpin for Body
impl UnwindSafe for Body
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
impl<T> IntoEither for T
impl<T> Pointable for T
impl<T> PolicyExt for T where T: ?Sized
impl<R, P> ReadPrimitive<R> for P