The OpenAI Responses API.
By default, this is the API used when creating a completion client. If you’d like to switch back to the regular Completions API, you can do so by calling the .completions_api()
function - see below for an example:
```rust
let openai_client = rig::providers::openai::Client::from_env();
let model = openai_client.completion_model("gpt-4o").completions_api();
```
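For reference, the Responses API that this client targets by default posts to the /v1/responses endpoint (rather than /v1/chat/completions) and takes an input array instead of messages. A minimal request body looks roughly like the following sketch, based on OpenAI’s API reference; the exact fields rig serializes may differ:

```json
{
  "model": "gpt-4o",
  "input": [
    { "role": "user", "content": "Hello!" }
  ]
}
```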
Modules§
- streaming - The streaming module for the OpenAI Responses API. Please see the openai_streaming or openai_streaming_with_tools examples for more practical usage.
Structs§
- AdditionalParameters - Additional parameters for the completion request type for OpenAI’s Responses API: https://platform.openai.com/docs/api-reference/responses/create. Intended to be derived from crate::completion::request::CompletionRequest.
- CompletionRequest - The completion request type for OpenAI’s Responses API: https://platform.openai.com/docs/api-reference/responses/create. Intended to be derived from crate::completion::request::CompletionRequest.
- CompletionResponse - The standard response format from OpenAI’s Responses API.
- IncompleteDetailsReason - Occasionally, when using OpenAI’s Responses API, you may get an incomplete response. This struct holds the reason why it happened.
- InputItem - An input item for CompletionRequest.
- InputTokensDetails - In-depth details on input tokens.
- OpenAIReasoning
- OutputFunctionCall - An OpenAI Responses API tool call. A call ID will be returned that must be used when creating a tool result to send back to OpenAI as a message input; otherwise, an error will be received.
- OutputMessage - An output message from OpenAI’s Responses API.
- OutputReasoning
- OutputTokensDetails - In-depth details on output tokens.
- Reasoning - Adds reasoning to a CompletionRequest.
- ResponseError - A response error from OpenAI’s Responses API.
- ResponsesCompletionModel - The completion model struct for OpenAI’s Responses API.
- ResponsesToolDefinition - The definition of a tool response, repurposed for OpenAI’s Responses API.
- ResponsesUsage - Token usage from the OpenAI Responses API. Generally shows the input tokens and output tokens (both with more in-depth details) as well as a total tokens field.
- StructuredOutputsInput - The inputs required for adding structured outputs.
- TextConfig - The model output format configuration. You can either have plain text by default, or attach a JSON schema for the purposes of structured outputs.
- ToolResult - A tool result.
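To illustrate the call-ID requirement on OutputFunctionCall and ToolResult: the Responses API returns a function_call output item, and the matching tool result must be sent back as a function_call_output input item carrying the same call_id. In raw JSON, the pair looks roughly like this (a sketch following OpenAI’s API reference; the first item is model output, the second is what you send back, and the tool name and values are made up):

```json
[
  {
    "type": "function_call",
    "call_id": "call_abc123",
    "name": "get_weather",
    "arguments": "{\"city\":\"Tokyo\"}"
  },
  {
    "type": "function_call_output",
    "call_id": "call_abc123",
    "output": "{\"temp_c\":21}"
  }
]
```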
Enums§
- AssistantContent - Text assistant content. Note that, in comparison to the Completions API, the text type is actually output_text rather than text.
- AssistantContentType - The type of assistant content.
- Include - Results to additionally include in the OpenAI Responses API. Note that most of these are currently unsupported, but have been added for completeness.
- InputContent - The type of content used in an InputItem. Additionally holds data for each type of input content.
- Message - An OpenAI Responses API message.
- OpenAIServiceTier - The billing service tier that will be used. Set to auto by default.
- Output - A currently non-exhaustive list of output types.
- OutputRole - The role of an output message.
- ReasoningEffort - The amount of reasoning effort that will be used by a given model.
- ReasoningSummary
- ReasoningSummaryLevel - The amount of effort that will go into a reasoning summary by a given model.
- ResponseObject - A response object as an enum (ensures type validation).
- ResponseStatus - The response status as an enum (ensures type validation).
- Role - Message roles. Used by the OpenAI Responses API to determine who created a given message.
- TextFormat - The text format (contained by TextConfig). You can either have plain text by default, or attach a JSON schema for the purposes of structured outputs.
- ToolResultContentType - The type of a tool result content item.
- ToolStatus - The status of a given tool.
- TruncationStrategy - The truncation strategy. When using auto, if the context of this response and previous ones exceeds the model’s context window size, the model will truncate the response to fit by dropping input items in the middle of the conversation. Otherwise, does nothing (disabled by default).
- UserContent - Different types of user content.
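Tying several of these together: TextConfig/TextFormat and TruncationStrategy correspond to the text and truncation fields of the Responses API request body. A structured-outputs request with auto truncation might serialize to something like the following sketch, based on OpenAI’s API reference (the schema contents are made up, and the exact output rig produces may differ):

```json
{
  "model": "gpt-4o",
  "input": [
    { "role": "user", "content": "Extract the person's name." }
  ],
  "truncation": "auto",
  "text": {
    "format": {
      "type": "json_schema",
      "name": "person",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": { "name": { "type": "string" } },
        "required": ["name"],
        "additionalProperties": false
      }
    }
  }
}
```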