Module responses_api

The OpenAI Responses API.

By default, when creating a completion client, this is the API that gets used.

If you’d like to switch back to the regular Completions API, you can do so by calling the .completions_api() method, as shown below:

let openai_client = rig::providers::openai::Client::from_env();
let model = openai_client.completion_model("gpt-4o").completions_api();
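
Since the Responses API is the default, a completion model created this way can be used as usual, for example through an agent. The snippet below is a rough sketch assuming rig’s agent builder and the Prompt trait, run inside an async context; the model name and prompt are purely illustrative:

use rig::completion::Prompt;

let openai_client = rig::providers::openai::Client::from_env();
// Build an agent; requests go through the Responses API by default.
let agent = openai_client
    .agent("gpt-4o")
    .preamble("You are a helpful assistant.")
    .build();
// Send a prompt and await the assistant's reply.
let answer = agent.prompt("Hello!").await?;
println!("{answer}");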

Modules§

streaming
The streaming module for the OpenAI Responses API. Please see the openai_streaming or openai_streaming_with_tools example for more practical usage.
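
As a rough sketch of how streaming is typically driven (the StreamingPrompt trait and stream_to_stdout helper are assumptions based on rig’s streaming examples, and their exact signatures may differ between versions, so prefer the examples mentioned above):

use rig::providers::openai;
use rig::streaming::{stream_to_stdout, StreamingPrompt};

let agent = openai::Client::from_env().agent("gpt-4o").build();
// Request a streamed response; chunks are produced via this streaming module.
let mut stream = agent.stream_prompt("Tell me a short story.").await?;
// Print chunks to stdout as they arrive (helper signature is an assumption; check the examples).
stream_to_stdout(&agent, &mut stream).await?;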

Structs§

AdditionalParameters
Additional parameters for the completion request type for OpenAI’s Responses API (https://platform.openai.com/docs/api-reference/responses/create). Intended to be derived from crate::completion::request::CompletionRequest.
CompletionRequest
The completion request type for OpenAI’s Responses API (https://platform.openai.com/docs/api-reference/responses/create). Intended to be derived from crate::completion::request::CompletionRequest.
CompletionResponse
The standard response format from OpenAI’s Responses API.
IncompleteDetailsReason
Occasionally, when using OpenAI’s Responses API, you may get an incomplete response. This struct holds the reason why that happened.
InputItem
An input item for CompletionRequest.
InputTokensDetails
In-depth details on input tokens.
OpenAIReasoning
OutputFunctionCall
An OpenAI Responses API tool call. A call ID is returned with the call and must be included in the tool result sent back to OpenAI as a message input; otherwise the API returns an error. See the sketch after this list.
OutputMessage
An output message from OpenAI’s Responses API.
OutputReasoning
OutputTokensDetails
In-depth details on output tokens.
Reasoning
Add reasoning to a CompletionRequest.
ResponseError
A response error from OpenAI’s Responses API.
ResponsesCompletionModel
The completion model struct for OpenAI’s Responses API.
ResponsesToolDefinition
The definition of a tool, repurposed for OpenAI’s Responses API.
ResponsesUsage
Token usage from the OpenAI Responses API. It generally reports input tokens and output tokens (each with more in-depth details) as well as a total token count.
StructuredOutputsInput
The inputs required for adding structured outputs.
TextConfig
The model output format configuration. You can either have plain text (the default) or attach a JSON schema for structured outputs.
ToolResult
A tool result.
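
To illustrate the call ID handshake described for OutputFunctionCall above, the JSON below mirrors OpenAI’s published Responses API wire format for a function call output item and the matching tool result input item. The values are made up for illustration, and this is raw JSON rather than this module’s Rust types:

use serde_json::json;

// An output item of type "function_call", as found in a CompletionResponse.
let tool_call = json!({
    "type": "function_call",
    "call_id": "call_abc123",      // must be echoed back in the tool result
    "name": "get_weather",
    "arguments": "{\"city\": \"Berlin\"}"
});

// The matching tool result, sent back as an input item on the next request.
let tool_result = json!({
    "type": "function_call_output",
    "call_id": "call_abc123",      // same call ID, otherwise the API returns an error
    "output": "{\"temperature_c\": 21}"
});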

Enums§

AssistantContent
Assistant text content. Note that, unlike in the Completions API, the text type here is output_text rather than text.
AssistantContentType
The type of assistant content.
Include
Additional results to include in an OpenAI Responses API response. Note that most of these are currently unsupported but have been added for completeness.
InputContent
The type of content used in an InputItem. Additionally holds data for each type of input content.
Message
An OpenAI Responses API message.
OpenAIServiceTier
The billing service tier to be used. Defaults to auto.
Output
A currently non-exhaustive list of output types.
OutputRole
The role of an output message.
ReasoningEffort
The amount of reasoning effort that will be used by a given model.
ReasoningSummary
ReasoningSummaryLevel
The amount of effort that will go into a reasoning summary by a given model.
ResponseObject
A response object, represented as an enum to ensure type validation.
ResponseStatus
The response status, represented as an enum to ensure type validation.
Role
Message roles. Used by the OpenAI Responses API to determine who created a given message.
TextFormat
The text format (contained by TextConfig). You can either have plain text (the default) or attach a JSON schema for structured outputs; see the sketch at the end of this page.
ToolResultContentType
The type of a tool result content item.
ToolStatus
The status of a given tool.
TruncationStrategy
The truncation strategy. With auto, if the context of this response and previous ones exceeds the model’s context window size, the model truncates the response to fit by dropping input items from the middle of the conversation. Otherwise nothing is dropped; truncation is disabled by default.
UserContent
Different types of user content.
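
As a rough illustration of the structured outputs configuration referred to by TextConfig and TextFormat: in OpenAI’s Responses API, a JSON schema is attached under the request’s text.format field. The raw JSON shape is sketched below with serde_json; the schema name and fields are invented for illustration, and this does not go through this module’s Rust types:

use serde_json::json;

// Request fragment that switches the output format from plain text to a strict JSON schema.
let text_config = json!({
    "format": {
        "type": "json_schema",
        "name": "weather_report",
        "strict": true,
        "schema": {
            "type": "object",
            "properties": {
                "city": { "type": "string" },
                "temperature_c": { "type": "number" }
            },
            "required": ["city", "temperature_c"],
            "additionalProperties": false
        }
    }
});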