Crate openai_openapi_types


Structs

AddUploadPartRequest
AdminApiKey
Represents an individual Admin API key in an org.
AdminApiKeyOwner
AdminApiKeyOwnerObject
The object type, which is always organization.user
AdminApiKeyOwnerRole
Always owner
AdminApiKeyOwnerType
Always user
ApiKeyList
ApproximateLocation
AssistantObject
Represents an assistant that can call the model and use tools.
AssistantObjectToolResources
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
AssistantObjectToolResourcesCodeInterpreter
AssistantObjectToolResourcesFileSearch
AssistantToolsCode
AssistantToolsFileSearch
AssistantToolsFileSearchFileSearch
Overrides for the file search tool.
AssistantToolsFileSearchTypeOnly
AssistantToolsFunction
AssistantsNamedToolChoice
Specifies a tool the model should use. Use to force the model to call a specific tool.
AssistantsNamedToolChoiceFunction
AuditLog
A log of a user action or configuration change within this organization.
AuditLogActor
The actor who performed the audit logged action.
AuditLogActorApiKey
The API Key used to perform the audit logged action.
AuditLogActorServiceAccount
The service account that performed the audit logged action.
AuditLogActorSession
The session in which the audit logged action was performed.
AuditLogActorUser
The user who performed the audit logged action.
AuditLogApiKeyCreated
The details for events with this type.
AuditLogApiKeyCreatedData
The payload used to create the API key.
AuditLogApiKeyDeleted
The details for events with this type.
AuditLogApiKeyUpdated
The details for events with this type.
AuditLogApiKeyUpdatedChangesRequested
The payload used to update the API key.
AuditLogCertificateCreated
The details for events with this type.
AuditLogCertificateDeleted
The details for events with this type.
AuditLogCertificateUpdated
The details for events with this type.
AuditLogCertificatesActivated
The details for events with this type.
AuditLogCertificatesActivatedCertificate
AuditLogCertificatesDeactivated
The details for events with this type.
AuditLogCertificatesDeactivatedCertificate
AuditLogCheckpointPermissionCreated
The project and fine-tuned model checkpoint that the checkpoint permission was created for.
AuditLogCheckpointPermissionCreatedData
The payload used to create the checkpoint permission.
AuditLogCheckpointPermissionDeleted
The details for events with this type.
AuditLogInviteAccepted
The details for events with this type.
AuditLogInviteDeleted
The details for events with this type.
AuditLogInviteSent
The details for events with this type.
AuditLogInviteSentData
The payload used to create the invite.
AuditLogLoginFailed
The details for events with this type.
AuditLogLogoutFailed
The details for events with this type.
AuditLogOrganizationUpdated
The details for events with this type.
AuditLogOrganizationUpdatedChangesRequested
The payload used to update the organization settings.
AuditLogProject
The project that the action was scoped to. Absent for actions not scoped to projects.
AuditLogProjectArchived
The details for events with this type.
AuditLogProjectCreated
The details for events with this type.
AuditLogProjectCreatedData
The payload used to create the project.
AuditLogProjectUpdated
The details for events with this type.
AuditLogProjectUpdatedChangesRequested
The payload used to update the project.
AuditLogRateLimitDeleted
The details for events with this type.
AuditLogRateLimitUpdated
The details for events with this type.
AuditLogRateLimitUpdatedChangesRequested
The payload used to update the rate limits.
AuditLogServiceAccountCreated
The details for events with this type.
AuditLogServiceAccountCreatedData
The payload used to create the service account.
AuditLogServiceAccountDeleted
The details for events with this type.
AuditLogServiceAccountUpdated
The details for events with this type.
AuditLogServiceAccountUpdatedChangesRequested
The payload used to update the service account.
AuditLogUserAdded
The details for events with this type.
AuditLogUserAddedData
The payload used to add the user to the project.
AuditLogUserDeleted
The details for events with this type.
AuditLogUserUpdated
The details for events with this type.
AuditLogUserUpdatedChangesRequested
The payload used to update the user.
AutoChunkingStrategyRequestParam
The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.
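For comparison, the same values spelled out as an explicit static chunking strategy would look roughly like the sketch below. It uses serde_json rather than this crate's chunking-strategy structs, and the field names are assumptions drawn from the public vector store documentation.

```rust
// Hedged sketch only: the auto strategy takes no parameters, while a static
// strategy restating the documented 800/400 defaults would look roughly like this.
use serde_json::json;

fn main() {
    let auto = json!({ "type": "auto" });
    let static_equivalent = json!({
        "type": "static",
        "static": {
            "max_chunk_size_tokens": 800, // default cited for the auto strategy
            "chunk_overlap_tokens": 400
        }
    });
    println!("{}\n{}", auto, static_equivalent);
}
```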
Batch
BatchError
BatchErrors
BatchErrorsObject
The object type, which is always list.
BatchRequestCounts
The request counts for different statuses within the batch.
BatchRequestInput
The per-line object of the batch input file
BatchRequestInputMethod
The HTTP method to be used for the request. Currently only POST is supported.
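For orientation, here is a minimal sketch of what one line of a batch input file looks like. It is built with serde_json rather than the crate's BatchRequestInput type; the field names (custom_id, method, url, body) follow the public batch documentation and are assumptions as far as this crate's field names go.

```rust
// Hedged sketch of a single batch input line; each request is one JSONL line.
use serde_json::json;

fn main() {
    let line = json!({
        "custom_id": "request-1",       // caller-chosen ID, echoed back in the output file
        "method": "POST",               // currently the only supported HTTP method
        "url": "/v1/chat/completions",  // endpoint the request is routed to
        "body": {
            "model": "gpt-4o-mini",
            "messages": [{ "role": "user", "content": "Hello" }]
        }
    });
    println!("{}", line);
}
```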
BatchRequestOutput
The per-line object of the batch output and error files
BatchRequestOutputError
For requests that failed with a non-HTTP error, this will contain more information on the cause of the failure.
BatchRequestOutputResponse
Certificate
Represents an individual certificate uploaded to the organization.
CertificateCertificateDetails
ChatCompletionDeleted
ChatCompletionFunctionCallOption
Specifying a particular function via {"name": "my_function"} forces the model to call that function.
ChatCompletionFunctions
ChatCompletionList
An object representing a list of Chat Completions.
ChatCompletionMessageList
An object representing a list of chat completion messages.
ChatCompletionMessageListDatum
ChatCompletionMessageToolCall
ChatCompletionMessageToolCallChunk
ChatCompletionMessageToolCallChunkFunction
ChatCompletionMessageToolCallChunkType
The type of the tool. Currently, only function is supported.
ChatCompletionMessageToolCallFunction
The function that the model called.
ChatCompletionNamedToolChoice
Specifies a tool the model should use. Use to force the model to call a specific function.
ChatCompletionNamedToolChoiceFunction
ChatCompletionRequestAssistantMessage
Messages sent by the model in response to user messages.
ChatCompletionRequestAssistantMessageAudio
Data about a previous audio response from the model. Learn more.
ChatCompletionRequestAssistantMessageFunctionCall
Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.
ChatCompletionRequestDeveloperMessage
Developer-provided instructions that the model should follow, regardless of messages sent by the user. With o1 models and newer, developer messages replace the previous system messages.
ChatCompletionRequestFunctionMessage
ChatCompletionRequestMessageContentPartAudio
Learn about audio inputs.
ChatCompletionRequestMessageContentPartAudioInputAudio
ChatCompletionRequestMessageContentPartFile
Learn about file inputs for text generation.
ChatCompletionRequestMessageContentPartFileFile
ChatCompletionRequestMessageContentPartImage
Learn about image inputs.
ChatCompletionRequestMessageContentPartImageImageUrl
ChatCompletionRequestMessageContentPartRefusal
ChatCompletionRequestMessageContentPartText
Learn about text inputs.
ChatCompletionRequestSystemMessage
Developer-provided instructions that the model should follow, regardless of messages sent by the user. With o1 models and newer, use developer messages for this purpose instead.
ChatCompletionRequestToolMessage
ChatCompletionRequestUserMessage
Messages sent by an end user, containing prompts or additional context information.
ChatCompletionResponseMessage
A chat completion message generated by the model.
ChatCompletionResponseMessageAnnotation
A URL citation when using web search.
ChatCompletionResponseMessageAnnotationUrlCitation
A URL citation when using web search.
ChatCompletionResponseMessageAudio
If the audio output modality is requested, this object contains data about the audio response from the model. Learn more.
ChatCompletionResponseMessageFunctionCall
Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.
ChatCompletionStreamOptions
Options for streaming response. Only set this when you set stream: true.
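As a rough illustration, a request that opts into streaming and sets stream options might carry a payload like the sketch below; serde_json is used instead of the crate's request types, and include_usage is an assumption drawn from the public chat completions documentation.

```rust
// Hedged sketch: stream_options is only meaningful when stream is true.
use serde_json::json;

fn main() {
    let request = json!({
        "model": "gpt-4o-mini",
        "stream": true,
        "stream_options": { "include_usage": true }, // assumed field name
        "messages": [{ "role": "user", "content": "Hi" }]
    });
    println!("{}", request);
}
```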
ChatCompletionStreamResponseDelta
A chat completion delta generated by streamed model responses.
ChatCompletionStreamResponseDeltaFunctionCall
Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.
ChatCompletionTokenLogprob
ChatCompletionTokenLogprobTopLogprob
ChatCompletionTool
Click
A click action.
CodeInterpreterFileOutput
The output of a code interpreter tool call that is a file.
CodeInterpreterFileOutputFile
CodeInterpreterTextOutput
The output of a code interpreter tool call that is text.
CodeInterpreterTool
A tool that runs Python code to help generate a response to a prompt.
CodeInterpreterToolAuto
Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
CodeInterpreterToolCall
A tool call to run code.
ComparisonFilter
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompleteUploadRequest
CompletionUsage
Usage statistics for the completion request.
CompletionUsageCompletionTokensDetails
Breakdown of tokens used in a completion.
CompletionUsagePromptTokensDetails
Breakdown of tokens used in the prompt.
CompoundFilter
Combine multiple filters using and or or.
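To make the two filter structs above concrete, here is a hedged sketch of the JSON shapes they model. It is built with serde_json instead of ComparisonFilter/CompoundFilter, and the field names ("type", "key", "value", "filters") are assumptions based on the public API documentation.

```rust
// Hedged sketch: one comparison filter, then a compound filter combining two.
use serde_json::json;

fn main() {
    // Compare the attribute "region" to the value "us" with the "eq" operation.
    let comparison = json!({ "type": "eq", "key": "region", "value": "us" });

    // Combine several filters with "and" (or "or").
    let compound = json!({
        "type": "and",
        "filters": [
            comparison,
            { "type": "gte", "key": "year", "value": 2023 }
        ]
    });
    println!("{}", compound);
}
```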
ComputerCallOutputItemParam
The output of a computer tool call.
ComputerCallSafetyCheckParam
A pending safety check for the computer call.
ComputerScreenshotImage
A computer screenshot image used with the computer use tool.
ComputerToolCall
A tool call to a computer use tool. See the computer use guide for more information.
ComputerToolCallOutput
The output of a computer tool call.
ComputerToolCallOutputResource
ComputerToolCallSafetyCheck
A pending safety check for the computer call.
ComputerUsePreviewTool
A tool that controls a virtual computer. Learn more about the computer tool.
ContainerFileCitationBody
A citation for a container file used to generate a model response.
ContainerFileListResource
ContainerFileResource
ContainerListResource
ContainerResource
ContainerResourceExpiresAfter
The container will expire after this time period. The anchor is the reference point for the expiration. The minutes is the number of minutes after the anchor before the container expires.
Coordinate
An x/y coordinate pair, e.g. { x: 100, y: 200 }.
CostsResult
The aggregated costs details of the specific time bucket.
CostsResultAmount
The monetary value in its associated currency.
CreateAssistantRequest
CreateAssistantRequestToolResources
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
CreateAssistantRequestToolResourcesCodeInterpreter
CreateAssistantRequestToolResourcesFileSearch0
CreateAssistantRequestToolResourcesFileSearch0VectorStore
CreateAssistantRequestToolResourcesFileSearch0VectorStoreChunkingStrategyAuto
The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.
CreateAssistantRequestToolResourcesFileSearch0VectorStoreChunkingStrategyStatic
CreateAssistantRequestToolResourcesFileSearch0VectorStoreChunkingStrategyStaticStatic
CreateAssistantRequestToolResourcesFileSearch1
CreateAssistantRequestToolResourcesFileSearch1VectorStore
CreateAssistantRequestToolResourcesFileSearch1VectorStoreChunkingStrategyAuto
The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.
CreateAssistantRequestToolResourcesFileSearch1VectorStoreChunkingStrategyStatic
CreateAssistantRequestToolResourcesFileSearch1VectorStoreChunkingStrategyStaticStatic
CreateChatCompletionRequest
CreateChatCompletionRequestAudio
Parameters for audio output. Required when audio output is requested with modalities: ["audio"]. Learn more.
CreateChatCompletionRequestWebSearchOptions
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
CreateChatCompletionRequestWebSearchOptionsUserLocation
Approximate location parameters for the search.
CreateChatCompletionResponse
Represents a chat completion response returned by the model, based on the provided input.
CreateChatCompletionResponseChoice
CreateChatCompletionResponseChoiceLogprobs
Log probability information for the choice.
CreateChatCompletionStreamResponse
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input. Learn more.
CreateChatCompletionStreamResponseChoice
CreateChatCompletionStreamResponseChoiceLogprobs
Log probability information for the choice.
CreateCompletionRequest
CreateCompletionResponse
Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).
CreateCompletionResponseChoice
CreateCompletionResponseChoiceLogprobs
CreateContainerBody
CreateContainerBodyExpiresAfter
Container expiration time in seconds relative to the ‘anchor’ time.
CreateContainerFileBody
CreateEmbeddingRequest
CreateEmbeddingResponse
CreateEmbeddingResponseUsage
The usage information for the request.
CreateEvalCompletionsRunDataSource
A CompletionsRunDataSource object describing a model sampling configuration.
CreateEvalCompletionsRunDataSourceInputMessagesItemReference
CreateEvalCompletionsRunDataSourceInputMessagesTemplate
CreateEvalCompletionsRunDataSourceSamplingParams
CreateEvalCustomDataSourceConfig
A CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs. This schema is used to define the shape of the data that will be:
CreateEvalItem0
CreateEvalJsonlRunDataSource
A JsonlRunDataSource object that specifies a JSONL file that matches the eval
CreateEvalLabelModelGrader
A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
CreateEvalLogsDataSourceConfig
A data source config which specifies the metadata property of your logs query. This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
CreateEvalRequest
CreateEvalResponsesRunDataSource
A ResponsesRunDataSource object describing a model sampling configuration.
CreateEvalResponsesRunDataSourceInputMessagesItemReference
CreateEvalResponsesRunDataSourceInputMessagesTemplate
CreateEvalResponsesRunDataSourceInputMessagesTemplateTemplate0
CreateEvalResponsesRunDataSourceSamplingParams
CreateEvalResponsesRunDataSourceSamplingParamsText
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
CreateEvalRunRequest
CreateEvalStoredCompletionsDataSourceConfig
Deprecated in favor of LogsDataSourceConfig.
CreateFileRequest
CreateFineTuningCheckpointPermissionRequest
CreateFineTuningJobRequest
CreateFineTuningJobRequestHyperparameters
The hyperparameters used for the fine-tuning job. This value is now deprecated in favor of method, and should be passed in under the method parameter.
CreateFineTuningJobRequestIntegration
CreateFineTuningJobRequestIntegrationWandb
The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc) to be associated with your run.
CreateImageEditRequest
CreateImageRequest
CreateImageVariationRequest
CreateMessageRequest
CreateMessageRequestAttachments
CreateModerationRequest
CreateModerationResponse
Represents whether a given text input is potentially harmful.
CreateModerationResponseResult
CreateModerationResponseResultCategories
A list of the categories, and whether they are flagged or not.
CreateModerationResponseResultCategoryAppliedInputTypes
A list of the categories along with the input type(s) that the score applies to.
CreateModerationResponseResultCategoryAppliedInputTypesHarassment
text
CreateModerationResponseResultCategoryAppliedInputTypesHarassmentThreatening
text
CreateModerationResponseResultCategoryAppliedInputTypesHate
text
CreateModerationResponseResultCategoryAppliedInputTypesHateThreatening
text
CreateModerationResponseResultCategoryAppliedInputTypesIllicit
text
CreateModerationResponseResultCategoryAppliedInputTypesIllicitViolent
text
CreateModerationResponseResultCategoryAppliedInputTypesSexualMinors
text
CreateModerationResponseResultCategoryScores
A list of the categories along with their scores as predicted by the model.
CreateResponse
CreateRunRequest
CreateRunRequestWithoutStream
CreateSpeechRequest
CreateThreadAndRunRequest
CreateThreadAndRunRequestToolResources
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
CreateThreadAndRunRequestToolResourcesCodeInterpreter
CreateThreadAndRunRequestToolResourcesFileSearch
CreateThreadAndRunRequestWithoutStream
CreateThreadAndRunRequestWithoutStreamToolResources
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
CreateThreadAndRunRequestWithoutStreamToolResourcesCodeInterpreter
CreateThreadAndRunRequestWithoutStreamToolResourcesFileSearch
CreateThreadRequest
Options to create a new thread. If no thread is provided when running a request, an empty thread will be created.
CreateThreadRequestToolResources
A set of resources that are made available to the assistant’s tools in this thread. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
CreateThreadRequestToolResourcesCodeInterpreter
CreateThreadRequestToolResourcesFileSearch0
CreateThreadRequestToolResourcesFileSearch0VectorStore
CreateThreadRequestToolResourcesFileSearch0VectorStoreChunkingStrategyAuto
The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.
CreateThreadRequestToolResourcesFileSearch0VectorStoreChunkingStrategyStatic
CreateThreadRequestToolResourcesFileSearch0VectorStoreChunkingStrategyStaticStatic
CreateThreadRequestToolResourcesFileSearch1
CreateThreadRequestToolResourcesFileSearch1VectorStore
CreateThreadRequestToolResourcesFileSearch1VectorStoreChunkingStrategyAuto
The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.
CreateThreadRequestToolResourcesFileSearch1VectorStoreChunkingStrategyStatic
CreateThreadRequestToolResourcesFileSearch1VectorStoreChunkingStrategyStaticStatic
CreateTranscriptionRequest
CreateTranscriptionResponseJson
Represents a transcription response returned by the model, based on the provided input.
CreateTranscriptionResponseJsonLogprob
CreateTranscriptionResponseVerboseJson
Represents a verbose JSON transcription response returned by the model, based on the provided input.
CreateTranslationRequest
CreateTranslationResponseJson
CreateTranslationResponseVerboseJson
CreateUploadRequest
CreateVectorStoreFileBatchRequest
CreateVectorStoreFileRequest
CreateVectorStoreRequest
DeleteAssistantResponse
DeleteCertificateResponse
DeleteFileResponse
DeleteFineTuningCheckpointPermissionResponse
DeleteMessageResponse
DeleteModelResponse
DeleteThreadResponse
DeleteVectorStoreFileResponse
DeleteVectorStoreResponse
DoneEvent
Occurs when a stream ends.
DoubleClick
A double click action.
Drag
A drag action.
EasyInputMessage
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
EasyInputMessageType
The type of the message input. Always message.
Embedding
Represents an embedding vector returned by the embedding endpoint.
Error
ErrorEvent
Occurs when an error occurs. This can happen due to an internal server error or a timeout.
ErrorResponse
Eval
An Eval object with a data source config and testing criteria. An Eval represents a task to be done for your LLM integration. Like:
EvalApiError
An object representing an error response from the Eval API.
EvalCustomDataSourceConfig
A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces. The response schema defines the shape of the data that will be:
EvalGraderLabelModel
EvalGraderPython
EvalGraderScoreModel
EvalGraderStringCheck
EvalGraderTextSimilarity
EvalItem
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
EvalItemContentOutputText
A text output from the model.
EvalItemType
The type of the message input. Always message.
EvalJsonlFileContentSource
EvalJsonlFileContentSourceContent
EvalJsonlFileIdSource
EvalList
An object representing a list of evals.
EvalLogsDataSourceConfig
A LogsDataSourceConfig which specifies the metadata property of your logs query. This is usually metadata like usecase=chatbot or prompt-version=v2, etc. The schema returned by this data source config is used to define what variables are available in your evals. item and sample are both defined when using this data source config.
EvalResponsesSource
An EvalResponsesSource object describing a run data source configuration.
EvalRun
A schema representing an evaluation run.
EvalRunList
An object representing a list of runs for an evaluation.
EvalRunOutputItem
A schema representing an evaluation run output item.
EvalRunOutputItemList
An object representing a list of output items for an evaluation run.
EvalRunOutputItemSample
A sample containing the input and output of the evaluation run.
EvalRunOutputItemSampleInput
An input message.
EvalRunOutputItemSampleOutput
EvalRunOutputItemSampleUsage
Token usage details for the sample.
EvalRunPerModelUsage
EvalRunPerTestingCriteriaResult
EvalRunResultCounts
Counters summarizing the outcomes of the evaluation run.
EvalStoredCompletionsDataSourceConfig
Deprecated in favor of LogsDataSourceConfig.
EvalStoredCompletionsSource
A StoredCompletionsRunDataSource configuration describing a set of filters
FileCitationBody
A citation to a file.
FilePath
A path to a file.
FileSearchRankingOptions
The ranking options for the file search. If not specified, the file search tool will use the auto ranker and a score_threshold of 0.
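A hedged sketch of what explicitly restating those defaults might look like; the field names (ranker, score_threshold) are assumptions taken from the public file search documentation, not verified against this struct.

```rust
// Hedged sketch: restating the documented ranking defaults explicitly.
use serde_json::json;

fn main() {
    let ranking_options = json!({
        "ranker": "auto",        // assumed name for the default ranker setting
        "score_threshold": 0.0   // the documented default threshold
    });
    println!("{}", ranking_options);
}
```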
FileSearchTool
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
FileSearchToolCall
The results of a file search tool call. See the file search guide for more information.
FileSearchToolCallResult
FineTuneChatCompletionRequestAssistantMessage
FineTuneChatRequestInput
The per-line training example of a fine-tuning input file for chat models using the supervised method.
FineTuneDpoHyperparameters
The hyperparameters used for the DPO fine-tuning job.
FineTuneDpoMethod
Configuration for the DPO fine-tuning method.
FineTuneMethod
The method used for fine-tuning.
FineTunePreferenceRequestInput
The per-line training example of a fine-tuning input file for chat models using the dpo method.
FineTunePreferenceRequestInputInput
FineTuneReinforcementHyperparameters
The hyperparameters used for the reinforcement fine-tuning job.
FineTuneReinforcementMethod
Configuration for the reinforcement fine-tuning method.
FineTuneReinforcementRequestInput
Per-line training example for reinforcement fine-tuning. Note that messages and tools are the only reserved keywords. Any other arbitrary key-value data can be included on training datapoints and will be available to reference during grading under the {{ item.XXX }} template variable.
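A hedged sketch of one reinforcement fine-tuning training line, built with serde_json rather than FineTuneReinforcementRequestInput: messages and tools are the reserved keys described above, and the extra key used here (reference_answer) is a made-up example of arbitrary per-item data.

```rust
// Hedged sketch: any extra key becomes available to graders as {{ item.<key> }}.
use serde_json::json;

fn main() {
    let example = json!({
        "messages": [
            { "role": "user", "content": "What is 2 + 2?" }
        ],
        // Arbitrary extra data, referenced during grading via {{ item.reference_answer }}.
        "reference_answer": "4"
    });
    println!("{}", example); // one JSONL line per training example
}
```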
FineTuneSupervisedHyperparameters
The hyperparameters used for the fine-tuning job.
FineTuneSupervisedMethod
Configuration for the supervised fine-tuning method.
FineTuningCheckpointPermission
The checkpoint.permission object represents a permission for a fine-tuned model checkpoint.
FineTuningIntegration
FineTuningIntegrationWandb
The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc) to be associated with your run.
FineTuningJob
The fine_tuning.job object represents a fine-tuning job that has been created through the API.
FineTuningJobCheckpoint
The fine_tuning.job.checkpoint object represents a model checkpoint for a fine-tuning job that is ready to use.
FineTuningJobCheckpointMetrics
Metrics at the step number during the fine-tuning job.
FineTuningJobError
For fine-tuning jobs that have failed, this will contain more information on the cause of the failure.
FineTuningJobEvent
Fine-tuning job event object
FineTuningJobHyperparameters
The hyperparameters used for the fine-tuning job. This value will only be returned when running supervised jobs.
FunctionCallOutputItemParam
The output of a function tool call.
FunctionObject
FunctionTool
Defines a function in your own code the model can choose to call. Learn more about function calling.
FunctionToolCall
A tool call to run a function. See the function calling guide for more information.
FunctionToolCallOutput
The output of a function tool call.
FunctionToolCallOutputResource
FunctionToolCallResource
GraderLabelModel
A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
GraderMulti
A MultiGrader object combines the output of multiple graders to produce a single score.
GraderPython
A PythonGrader object that runs a python script on the input.
GraderScoreModel
A ScoreModelGrader object that uses a model to assign a score to the input.
GraderStringCheck
A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
GraderTextSimilarity
A TextSimilarityGrader object which grades text based on similarity metrics.
Image
Represents the content or the URL of an image generated by the OpenAI API.
ImageGenTool
A tool that generates images using a model like gpt-image-1.
ImageGenToolCall
An image generation request made by the model.
ImageGenToolInputImageMask
Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).
ImagesResponse
The response from the image generation endpoint.
ImagesResponseUsage
For gpt-image-1 only, the token usage information for the image generation.
ImagesResponseUsageInputTokensDetails
The input tokens detailed information for the image generation.
InputAudio
An audio input to the model.
InputFileContent
A file input to the model.
InputImageContent
An image input to the model. Learn about image inputs.
InputMessage
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.
InputMessageResource
InputMessageType
The type of the message input. Always set to message.
InputTextContent
A text input to the model.
Invite
Represents an individual invite to the organization.
InviteDeleteResponse
InviteListResponse
InviteProjects
InviteRequest
InviteRequestProjects
ItemReferenceParam
An internal identifier for an item to reference.
KeyPress
A collection of keypresses the model would like to perform.
ListAssistantsResponse
ListAuditLogsResponse
ListBatchesResponse
ListCertificatesResponse
ListFilesResponse
ListFineTuningCheckpointPermissionResponse
ListFineTuningJobCheckpointsResponse
ListFineTuningJobEventsResponse
ListMessagesResponse
ListModelsResponse
ListPaginatedFineTuningJobsResponse
ListRunStepsResponse
ListRunsResponse
ListVectorStoreFilesResponse
ListVectorStoresResponse
LocalShellExecAction
Execute a shell command on the server.
LocalShellTool
A tool that allows the model to execute shell commands in a local environment.
LocalShellToolCall
A tool call to run a command on the local shell.
LocalShellToolCallOutput
The output of a local shell tool call.
LogProb
The log probability of a token.
LogProbProperties
A log probability object.
McpApprovalRequest
A request for human approval of a tool invocation.
McpApprovalResponse
A response to an MCP approval request.
McpApprovalResponseResource
A response to an MCP approval request.
McpListTools
A list of tools available on an MCP server.
McpListToolsTool
A tool available on an MCP server.
McpTool
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
McpToolAllowedTools1
A filter object to specify which tools are allowed.
McpToolCall
An invocation of a tool on an MCP server.
McpToolRequireApproval0
McpToolRequireApproval0Always
A list of tools that always require approval.
McpToolRequireApproval0Never
A list of tools that never require approval.
MessageContentImageFileObject
References an image File in the content of a message.
MessageContentImageFileObjectImageFile
MessageContentImageUrlObject
References an image URL in the content of a message.
MessageContentImageUrlObjectImageUrl
MessageContentRefusalObject
The refusal content generated by the assistant.
MessageContentTextAnnotationsFileCitationObject
A citation within the message that points to a specific quote from a specific File associated with the assistant or the message. Generated when the assistant uses the “file_search” tool to search files.
MessageContentTextAnnotationsFileCitationObjectFileCitation
MessageContentTextAnnotationsFilePathObject
A URL for the file that’s generated when the assistant uses the code_interpreter tool to generate a file.
MessageContentTextAnnotationsFilePathObjectFilePath
MessageContentTextObject
The text content that is part of a message.
MessageContentTextObjectText
MessageDeltaContentImageFileObject
References an image File in the content of a message.
MessageDeltaContentImageFileObjectImageFile
MessageDeltaContentImageUrlObject
References an image URL in the content of a message.
MessageDeltaContentImageUrlObjectImageUrl
MessageDeltaContentRefusalObject
The refusal content that is part of a message.
MessageDeltaContentTextAnnotationsFileCitationObject
A citation within the message that points to a specific quote from a specific File associated with the assistant or the message. Generated when the assistant uses the “file_search” tool to search files.
MessageDeltaContentTextAnnotationsFileCitationObjectFileCitation
MessageDeltaContentTextAnnotationsFilePathObject
A URL for the file that’s generated when the assistant uses the code_interpreter tool to generate a file.
MessageDeltaContentTextAnnotationsFilePathObjectFilePath
MessageDeltaContentTextObject
The text content that is part of a message.
MessageDeltaContentTextObjectText
MessageDeltaObject
Represents a message delta, i.e. any changed fields on a message during streaming.
MessageDeltaObjectDelta
The delta containing the fields that have changed on the Message.
MessageObject
Represents a message within a thread.
MessageObjectAttachments
MessageObjectIncompleteDetails
On an incomplete message, details about why the message is incomplete.
MessageRequestContentTextObject
The text content that is part of a message.
MessageStreamEventThreadMessageCompleted
Occurs when a message is completed.
MessageStreamEventThreadMessageCreated
Occurs when a message is created.
MessageStreamEventThreadMessageDelta
Occurs when parts of a Message are being streamed.
MessageStreamEventThreadMessageInProgress
Occurs when a message moves to an in_progress state.
MessageStreamEventThreadMessageIncomplete
Occurs when a message ends before it is completed.
Model
Describes an OpenAI model offering that can be used with the API.
ModelResponseProperties
ModerationImageUrlInput
An object describing an image to classify.
ModerationImageUrlInputImageUrl
Contains either an image URL or a data URL for a base64 encoded image.
ModerationTextInput
An object describing text to classify.
ModifyAssistantRequest
ModifyAssistantRequestToolResources
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
ModifyAssistantRequestToolResourcesCodeInterpreter
ModifyAssistantRequestToolResourcesFileSearch
ModifyCertificateRequest
ModifyMessageRequest
ModifyRunRequest
ModifyThreadRequest
ModifyThreadRequestToolResources
A set of resources that are made available to the assistant’s tools in this thread. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
ModifyThreadRequestToolResourcesCodeInterpreter
ModifyThreadRequestToolResourcesFileSearch
Move
A mouse move action.
OpenAiFile
The File object represents a document that has been uploaded to OpenAI.
OtherChunkingStrategyResponseParam
This is returned when the chunking strategy is unknown. Typically, this is because the file was indexed before the chunking_strategy concept was introduced in the API.
OutputAudio
An audio output from the model.
OutputMessage
An output message from the model.
OutputTextContent
A text output from the model.
PredictionContent
Static predicted output content, such as the content of a text file that is being regenerated.
Project
Represents an individual project.
ProjectApiKey
Represents an individual API key in a project.
ProjectApiKeyDeleteResponse
ProjectApiKeyListResponse
ProjectApiKeyOwner
ProjectCreateRequest
ProjectListResponse
ProjectRateLimit
Represents a project rate limit config.
ProjectRateLimitListResponse
ProjectRateLimitUpdateRequest
ProjectServiceAccount
Represents an individual service account in a project.
ProjectServiceAccountApiKey
ProjectServiceAccountCreateRequest
ProjectServiceAccountCreateResponse
ProjectServiceAccountDeleteResponse
ProjectServiceAccountListResponse
ProjectUpdateRequest
ProjectUser
Represents an individual user in a project.
ProjectUserCreateRequest
ProjectUserDeleteResponse
ProjectUserListResponse
ProjectUserUpdateRequest
RankingOptions
RealtimeClientEventConversationItemCreate
Add a new Item to the Conversation’s context, including messages, function calls, and function call responses. This event can be used both to populate a “history” of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.
RealtimeClientEventConversationItemDelete
Send this event when you want to remove any item from the conversation history. The server will respond with a conversation.item.deleted event, unless the item does not exist in the conversation history, in which case the server will respond with an error.
RealtimeClientEventConversationItemRetrieve
Send this event when you want to retrieve the server’s representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD. The server will respond with a conversation.item.retrieved event, unless the item does not exist in the conversation history, in which case the server will respond with an error.
RealtimeClientEventConversationItemTruncate
Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server’s understanding of the audio with the client’s playback.
RealtimeClientEventInputAudioBufferAppend
Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. In Server VAD mode, the audio buffer is used to detect speech and the server will decide when to commit. When Server VAD is disabled, you must commit the audio buffer manually.
RealtimeClientEventInputAudioBufferClear
Send this event to clear the audio bytes in the buffer. The server will respond with an input_audio_buffer.cleared event.
RealtimeClientEventInputAudioBufferCommit
Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event; the server will commit the audio buffer automatically.
RealtimeClientEventOutputAudioBufferClear
WebRTC Only: Emit to cut off the current audio response. This will trigger the server to stop generating audio and emit an output_audio_buffer.cleared event. This event should be preceded by a response.cancel client event to stop the generation of the current response. Learn more.
RealtimeClientEventResponseCancel
Send this event to cancel an in-progress response. The server will respond with a response.cancelled event or an error if there is no response to cancel.
RealtimeClientEventResponseCreate
This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.
RealtimeClientEventSessionUpdate
Send this event to update the session’s default configuration. The client may send this event at any time to update any field, except for voice. However, note that once a session has been initialized with a particular model, it can’t be changed to another model using session.update.
RealtimeClientEventTranscriptionSessionUpdate
Send this event to update a transcription session.
RealtimeConnectParams
RealtimeConversationItem
The item to add to the conversation.
RealtimeConversationItemContent
RealtimeConversationItemObject
Identifier for the API object being returned - always realtime.item.
RealtimeConversationItemWithReference
The item to add to the conversation.
RealtimeConversationItemWithReferenceObject
Identifier for the API object being returned - always realtime.item.
RealtimeResponse
The response resource.
RealtimeResponseCreateParams
Create a new Realtime response with these parameters
RealtimeResponseCreateParamsTool
RealtimeResponseCreateParamsToolType
The type of the tool, i.e. function.
RealtimeResponseObject
The object type, must be realtime.response.
RealtimeResponseStatusDetails
Additional details about the status.
RealtimeResponseStatusDetailsError
A description of the error that caused the response to fail, populated when the status is failed.
RealtimeResponseUsage
Usage statistics for the Response; this will correspond to billing. A Realtime API session will maintain a conversation context and append new Items to the Conversation, so output from previous turns (text and audio tokens) becomes the input for later turns.
RealtimeResponseUsageInputTokenDetails
Details about the input tokens used in the Response.
RealtimeResponseUsageOutputTokenDetails
Details about the output tokens used in the Response.
RealtimeServerEventConversationCreated
Returned when a conversation is created. Emitted right after session creation.
RealtimeServerEventConversationCreatedConversation
The conversation resource.
RealtimeServerEventConversationCreatedConversationObject
The object type, must be realtime.conversation.
RealtimeServerEventConversationItemCreated
Returned when a conversation item is created. There are several scenarios that produce this event:
RealtimeServerEventConversationItemDeleted
Returned when an item in the conversation is deleted by the client with a conversation.item.delete event. This event is used to synchronize the server’s understanding of the conversation history with the client’s view.
RealtimeServerEventConversationItemInputAudioTranscriptionCompleted
This event is the output of audio transcription for user audio written to the user audio buffer. Transcription begins when the input audio buffer is committed by the client or server (in server_vad mode). Transcription runs asynchronously with Response creation, so this event may come before or after the Response events.
RealtimeServerEventConversationItemInputAudioTranscriptionDelta
Returned when the text value of an input audio transcription content part is updated.
RealtimeServerEventConversationItemInputAudioTranscriptionFailed
Returned when input audio transcription is configured, and a transcription request for a user message failed. These events are separate from other error events so that the client can identify the related Item.
RealtimeServerEventConversationItemInputAudioTranscriptionFailedError
Details of the transcription error.
RealtimeServerEventConversationItemRetrieved
Returned when a conversation item is retrieved with conversation.item.retrieve.
RealtimeServerEventConversationItemTruncated
Returned when an earlier assistant audio message item is truncated by the client with a conversation.item.truncate event. This event is used to synchronize the server’s understanding of the audio with the client’s playback.
RealtimeServerEventError
Returned when an error occurs, which could be a client problem or a server problem. Most errors are recoverable and the session will stay open; we recommend that implementors monitor and log error messages by default.
RealtimeServerEventErrorError
Details of the error.
RealtimeServerEventInputAudioBufferCleared
Returned when the input audio buffer is cleared by the client with an input_audio_buffer.clear event.
RealtimeServerEventInputAudioBufferCommitted
Returned when an input audio buffer is committed, either by the client or automatically in server VAD mode. The item_id property is the ID of the user message item that will be created; thus a conversation.item.created event will also be sent to the client.
RealtimeServerEventInputAudioBufferSpeechStarted
Sent by the server when in server_vad mode to indicate that speech has been detected in the audio buffer. This can happen any time audio is added to the buffer (unless speech is already detected). The client may want to use this event to interrupt audio playback or provide visual feedback to the user.
RealtimeServerEventInputAudioBufferSpeechStopped
Returned in server_vad mode when the server detects the end of speech in the audio buffer. The server will also send a conversation.item.created event with the user message item that is created from the audio buffer.
RealtimeServerEventOutputAudioBufferCleared
WebRTC Only: Emitted when the output audio buffer is cleared. This happens either in VAD mode when the user has interrupted (input_audio_buffer.speech_started), or when the client has emitted the output_audio_buffer.clear event to manually cut off the current audio response. Learn more.
RealtimeServerEventOutputAudioBufferStarted
WebRTC Only: Emitted when the server begins streaming audio to the client. This event is emitted after an audio content part has been added (response.content_part.added) to the response. Learn more.
RealtimeServerEventOutputAudioBufferStopped
WebRTC Only: Emitted when the output audio buffer has been completely drained on the server, and no more audio is forthcoming. This event is emitted after the full response data has been sent to the client (response.done). Learn more.
RealtimeServerEventRateLimitsUpdated
Emitted at the beginning of a Response to indicate the updated rate limits. When a Response is created, some tokens will be “reserved” for the output tokens; the rate limits shown here reflect that reservation, which is then adjusted accordingly once the Response is completed.
RealtimeServerEventRateLimitsUpdatedRateLimits
RealtimeServerEventResponseAudioDelta
Returned when the model-generated audio is updated.
RealtimeServerEventResponseAudioDone
Returned when the model-generated audio is done. Also emitted when a Response is interrupted, incomplete, or cancelled.
RealtimeServerEventResponseAudioTranscriptDelta
Returned when the model-generated transcription of audio output is updated.
RealtimeServerEventResponseAudioTranscriptDone
Returned when the model-generated transcription of audio output is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
RealtimeServerEventResponseContentPartAdded
Returned when a new content part is added to an assistant message item during response generation.
RealtimeServerEventResponseContentPartAddedPart
The content part that was added.
RealtimeServerEventResponseContentPartDone
Returned when a content part is done streaming in an assistant message item. Also emitted when a Response is interrupted, incomplete, or cancelled.
RealtimeServerEventResponseContentPartDonePart
The content part that is done.
RealtimeServerEventResponseCreated
Returned when a new Response is created. The first event of response creation, where the response is in an initial state of in_progress.
RealtimeServerEventResponseDone
Returned when a Response is done streaming. Always emitted, no matter the final state. The Response object included in the response.done event will include all output Items in the Response but will omit the raw audio data.
RealtimeServerEventResponseFunctionCallArgumentsDelta
Returned when the model-generated function call arguments are updated.
RealtimeServerEventResponseFunctionCallArgumentsDone
Returned when the model-generated function call arguments are done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
RealtimeServerEventResponseOutputItemAdded
Returned when a new Item is created during Response generation.
RealtimeServerEventResponseOutputItemDone
Returned when an Item is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
RealtimeServerEventResponseTextDelta
Returned when the text value of a “text” content part is updated.
RealtimeServerEventResponseTextDone
Returned when the text value of a “text” content part is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
RealtimeServerEventSessionCreated
Returned when a Session is created. Emitted automatically when a new connection is established as the first server event. This event will contain the default Session configuration.
RealtimeServerEventSessionUpdated
Returned when a session is updated with a session.update event, unless there is an error.
RealtimeServerEventTranscriptionSessionUpdated
Returned when a transcription session is updated with a transcription_session.update event, unless there is an error.
RealtimeSession
Realtime session object configuration.
RealtimeSessionCreateRequest
Realtime session object configuration.
RealtimeSessionCreateRequestClientSecret
Configuration options for the generated client secret.
RealtimeSessionCreateRequestClientSecretExpiresAt
Configuration for the ephemeral token expiration.
RealtimeSessionCreateRequestInputAudioNoiseReduction
Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
RealtimeSessionCreateRequestInputAudioTranscription
Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.
RealtimeSessionCreateRequestTool
RealtimeSessionCreateRequestToolType
The type of the tool, i.e. function.
RealtimeSessionCreateRequestTracing1
Granular configuration for tracing.
RealtimeSessionCreateRequestTurnDetection
Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech. Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with “uhhm”, the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
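To make the two modes described above concrete, here is a hedged sketch of the two turn-detection shapes, written as raw JSON with serde_json; the exact field names (threshold, silence_duration_ms, eagerness) are assumptions taken from the public Realtime documentation, not verified against this struct.

```rust
// Hedged sketch of the two turn-detection configurations.
use serde_json::json;

fn main() {
    // Server VAD: speech start/end detected from audio volume.
    let server_vad = json!({
        "type": "server_vad",
        "threshold": 0.5,            // assumed field name
        "silence_duration_ms": 500   // assumed field name
    });

    // Semantic VAD: a turn-detection model estimates whether the user has finished.
    let semantic_vad = json!({ "type": "semantic_vad", "eagerness": "auto" });

    // Setting the whole field to null disables turn detection; the client
    // must then trigger model responses manually.
    println!("{}\n{}", server_vad, semantic_vad);
}
```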
RealtimeSessionCreateResponse
A new Realtime session configuration, with an ephemeral key. Default TTL for keys is one minute.
RealtimeSessionCreateResponseClientSecret
Ephemeral key returned by the API.
RealtimeSessionCreateResponseInputAudioTranscription
Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through Whisper and should be treated as rough guidance rather than the representation understood by the model.
RealtimeSessionCreateResponseTool
RealtimeSessionCreateResponseToolType
The type of the tool, i.e. function.
RealtimeSessionCreateResponseTracing1
Granular configuration for tracing.
RealtimeSessionCreateResponseTurnDetection
Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
RealtimeSessionInputAudioNoiseReduction
Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
RealtimeSessionInputAudioTranscription
Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.
RealtimeSessionTool
RealtimeSessionToolType
The type of the tool, i.e. function.
RealtimeSessionTracing1
Granular configuration for tracing.
RealtimeSessionTurnDetection
Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech. Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with “uhhm”, the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
RealtimeTranscriptionSessionCreateRequest
Realtime transcription session object configuration.
RealtimeTranscriptionSessionCreateRequestClientSecret
Configuration options for the generated client secret.
RealtimeTranscriptionSessionCreateRequestClientSecretExpiresAt
Configuration for the ephemeral token expiration.
RealtimeTranscriptionSessionCreateRequestInputAudioNoiseReduction
Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
RealtimeTranscriptionSessionCreateRequestInputAudioTranscription
Configuration for input audio transcription. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.
RealtimeTranscriptionSessionCreateRequestTurnDetection
Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech. Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with “uhhm”, the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
RealtimeTranscriptionSessionCreateResponse
A new Realtime transcription session configuration.
RealtimeTranscriptionSessionCreateResponseClientSecret
Ephemeral key returned by the API. Only present when the session is created on the server via REST API.
RealtimeTranscriptionSessionCreateResponseInputAudioTranscription
Configuration of the transcription model.
RealtimeTranscriptionSessionCreateResponseTurnDetection
Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Reasoning
o-series models only
ReasoningItem
A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.
ReasoningItemSummary
RefusalContent
A refusal from the model.
Response
ResponseAudioDeltaEvent
Emitted when there is a partial audio response.
ResponseAudioDoneEvent
Emitted when the audio response is complete.
ResponseAudioTranscriptDeltaEvent
Emitted when there is a partial transcript of audio.
ResponseAudioTranscriptDoneEvent
Emitted when the full audio transcript is completed.
ResponseCodeInterpreterCallCodeDeltaEvent
Emitted when a partial code snippet is added by the code interpreter.
ResponseCodeInterpreterCallCodeDoneEvent
Emitted when code snippet output is finalized by the code interpreter.
ResponseCodeInterpreterCallCompletedEvent
Emitted when the code interpreter call is completed.
ResponseCodeInterpreterCallInProgressEvent
Emitted when a code interpreter call is in progress.
ResponseCodeInterpreterCallInterpretingEvent
Emitted when the code interpreter is actively interpreting the code snippet.
ResponseCompletedEvent
Emitted when the model response is complete.
ResponseContentPartAddedEvent
Emitted when a new content part is added.
ResponseContentPartDoneEvent
Emitted when a content part is done.
ResponseCreatedEvent
An event that is emitted when a response is created.
ResponseError
An error object returned when the model fails to generate a Response.
ResponseErrorEvent
Emitted when an error occurs.
ResponseFailedEvent
An event that is emitted when a response fails.
ResponseFileSearchCallCompletedEvent
Emitted when a file search call is completed (results found).
ResponseFileSearchCallInProgressEvent
Emitted when a file search call is initiated.
ResponseFileSearchCallSearchingEvent
Emitted when a file search is currently searching.
ResponseFormatJsonObject
JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.
ResponseFormatJsonSchema
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
ResponseFormatJsonSchemaJsonSchema
Structured Outputs configuration options, including a JSON Schema.
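A minimal sketch of the response_format payload these two types describe, assuming serde_json; the weather_report schema is invented for illustration, and the name/strict/schema layout follows the public Structured Outputs docs rather than this crate's exact serialization.

```rust
use serde_json::json;

fn main() {
    // `json_schema` response format with a strict schema (Structured Outputs).
    let response_format = json!({
        "type": "json_schema",
        "json_schema": {
            "name": "weather_report",
            "strict": true,
            "schema": {
                "type": "object",
                "properties": {
                    "city": { "type": "string" },
                    "temperature_c": { "type": "number" }
                },
                "required": ["city", "temperature_c"],
                "additionalProperties": false
            }
        }
    });
    println!("{response_format}");
}
```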
ResponseFormatText
Default response format. Used to generate text responses.
ResponseFunctionCallArgumentsDeltaEvent
Emitted when there is a partial function-call arguments delta.
ResponseFunctionCallArgumentsDoneEvent
Emitted when function-call arguments are finalized.
ResponseImageGenCallCompletedEvent
Emitted when an image generation tool call has completed and the final image is available.
ResponseImageGenCallGeneratingEvent
Emitted when an image generation tool call is actively generating an image (intermediate state).
ResponseImageGenCallInProgressEvent
Emitted when an image generation tool call is in progress.
ResponseImageGenCallPartialImageEvent
Emitted when a partial image is available during image generation streaming.
ResponseInProgressEvent
Emitted when the response is in progress.
ResponseIncompleteDetails
Details about why the response is incomplete.
ResponseIncompleteEvent
An event that is emitted when a response finishes as incomplete.
ResponseItemList
A list of Response items.
ResponseMcpCallArgumentsDeltaEvent
Emitted when there is a delta (partial update) to the arguments of an MCP tool call.
ResponseMcpCallArgumentsDoneEvent
Emitted when the arguments for an MCP tool call are finalized.
ResponseMcpCallCompletedEvent
Emitted when an MCP tool call has completed successfully.
ResponseMcpCallFailedEvent
Emitted when an MCP tool call has failed.
ResponseMcpCallInProgressEvent
Emitted when an MCP tool call is in progress.
ResponseMcpListToolsCompletedEvent
Emitted when the list of available MCP tools has been successfully retrieved.
ResponseMcpListToolsFailedEvent
Emitted when the attempt to list available MCP tools has failed.
ResponseMcpListToolsInProgressEvent
Emitted when the system is in the process of retrieving the list of available MCP tools.
ResponseOutputItemAddedEvent
Emitted when a new output item is added.
ResponseOutputItemDoneEvent
Emitted when an output item is marked done.
ResponseOutputTextAnnotationAddedEvent
Emitted when an annotation is added to output text content.
ResponseProperties
ResponsePropertiesText
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
ResponseQueuedEvent
Emitted when a response is queued and waiting to be processed.
ResponseReasoningDeltaEvent
Emitted when there is a delta (partial update) to the reasoning content.
ResponseReasoningDoneEvent
Emitted when the reasoning content is finalized for an item.
ResponseReasoningSummaryDeltaEvent
Emitted when there is a delta (partial update) to the reasoning summary content.
ResponseReasoningSummaryDoneEvent
Emitted when the reasoning summary content is finalized for an item.
ResponseReasoningSummaryPartAddedEvent
Emitted when a new reasoning summary part is added.
ResponseReasoningSummaryPartAddedEventPart
The summary part that was added.
ResponseReasoningSummaryPartDoneEvent
Emitted when a reasoning summary part is completed.
ResponseReasoningSummaryPartDoneEventPart
The completed summary part.
ResponseReasoningSummaryTextDeltaEvent
Emitted when a delta is added to a reasoning summary text.
ResponseReasoningSummaryTextDoneEvent
Emitted when a reasoning summary text is completed.
ResponseRefusalDeltaEvent
Emitted when there is a partial refusal text.
ResponseRefusalDoneEvent
Emitted when refusal text is finalized.
ResponseTextDeltaEvent
Emitted when there is an additional text delta.
ResponseTextDoneEvent
Emitted when text content is finalized.
ResponseUsage
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
ResponseUsageInputTokensDetails
A detailed breakdown of the input tokens.
ResponseUsageOutputTokensDetails
A detailed breakdown of the output tokens.
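A hedged example of the usage shape these three types model, built with serde_json; the numbers are made up and the field names follow the public Responses API.

```rust
use serde_json::json;

fn main() {
    // Token accounting for a single response: inputs, outputs, and the
    // cached/reasoning breakdowns described above.
    let usage = json!({
        "input_tokens": 36,
        "input_tokens_details": { "cached_tokens": 0 },
        "output_tokens": 87,
        "output_tokens_details": { "reasoning_tokens": 0 },
        "total_tokens": 123
    });
    println!("{usage}");
}
```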
ResponseWebSearchCallCompletedEvent
Emitted when a web search call is completed.
ResponseWebSearchCallInProgressEvent
Emitted when a web search call is initiated.
ResponseWebSearchCallSearchingEvent
Emitted when a web search call is executing.
RunCompletionUsage
Usage statistics related to the run. This value will be null if the run is not in a terminal state (i.e. in_progress, queued, etc.).
RunGraderRequest
RunGraderResponse
RunGraderResponseMetadata
RunGraderResponseMetadataErrors
RunObject
Represents an execution run on a thread.
RunObjectIncompleteDetails
Details on why the run is incomplete. Will be null if the run is not incomplete.
RunObjectLastError
The last error associated with this run. Will be null if there are no errors.
RunObjectRequiredAction
Details on the action required to continue the run. Will be null if no action is required.
RunObjectRequiredActionSubmitToolOutputs
Details on the tool outputs needed for this run to continue.
RunStepCompletionUsage
Usage statistics related to the run step. This value will be null while the run step’s status is in_progress.
RunStepDeltaObject
Represents a run step delta i.e. any changed fields on a run step during streaming.
RunStepDeltaObjectDelta
The delta containing the fields that have changed on the run step.
RunStepDeltaStepDetailsMessageCreationObject
Details of the message creation by the run step.
RunStepDeltaStepDetailsMessageCreationObjectMessageCreation
RunStepDeltaStepDetailsToolCallsCodeObject
Details of the Code Interpreter tool call the run step was involved in.
RunStepDeltaStepDetailsToolCallsCodeObjectCodeInterpreter
The Code Interpreter tool call definition.
RunStepDeltaStepDetailsToolCallsCodeOutputImageObject
RunStepDeltaStepDetailsToolCallsCodeOutputImageObjectImage
RunStepDeltaStepDetailsToolCallsCodeOutputLogsObject
Text output from the Code Interpreter tool call as part of a run step.
RunStepDeltaStepDetailsToolCallsFileSearchObject
RunStepDeltaStepDetailsToolCallsFunctionObject
RunStepDeltaStepDetailsToolCallsFunctionObjectFunction
The definition of the function that was called.
RunStepDeltaStepDetailsToolCallsObject
Details of the tool call.
RunStepDetailsMessageCreationObject
Details of the message creation by the run step.
RunStepDetailsMessageCreationObjectMessageCreation
RunStepDetailsToolCallsCodeObject
Details of the Code Interpreter tool call the run step was involved in.
RunStepDetailsToolCallsCodeObjectCodeInterpreter
The Code Interpreter tool call definition.
RunStepDetailsToolCallsCodeOutputImageObject
RunStepDetailsToolCallsCodeOutputImageObjectImage
RunStepDetailsToolCallsCodeOutputLogsObject
Text output from the Code Interpreter tool call as part of a run step.
RunStepDetailsToolCallsFileSearchObject
RunStepDetailsToolCallsFileSearchObjectFileSearch
For now, this is always going to be an empty object.
RunStepDetailsToolCallsFileSearchRankingOptionsObject
The ranking options for the file search.
RunStepDetailsToolCallsFileSearchResultObject
A result instance of the file search.
RunStepDetailsToolCallsFileSearchResultObjectContent
RunStepDetailsToolCallsFileSearchResultObjectContentType
The type of the content.
RunStepDetailsToolCallsFunctionObject
RunStepDetailsToolCallsFunctionObjectFunction
The definition of the function that was called.
RunStepDetailsToolCallsObject
Details of the tool call.
RunStepObject
Represents a step in execution of a run.
RunStepObjectLastError
The last error associated with this run step. Will be null if there are no errors.
RunStepStreamEventThreadRunStepCancelled
Occurs when a run step is cancelled.
RunStepStreamEventThreadRunStepCompleted
Occurs when a run step is completed.
RunStepStreamEventThreadRunStepCreated
Occurs when a run step is created.
RunStepStreamEventThreadRunStepDelta
Occurs when parts of a run step are being streamed.
RunStepStreamEventThreadRunStepExpired
Occurs when a run step expires.
RunStepStreamEventThreadRunStepFailed
Occurs when a run step fails.
RunStepStreamEventThreadRunStepInProgress
Occurs when a run step moves to an in_progress state.
RunStreamEventThreadRunCancelled
Occurs when a run is cancelled.
RunStreamEventThreadRunCancelling
Occurs when a run moves to a cancelling status.
RunStreamEventThreadRunCompleted
Occurs when a run is completed.
RunStreamEventThreadRunCreated
Occurs when a new run is created.
RunStreamEventThreadRunExpired
Occurs when a run expires.
RunStreamEventThreadRunFailed
Occurs when a run fails.
RunStreamEventThreadRunInProgress
Occurs when a run moves to an in_progress status.
RunStreamEventThreadRunIncomplete
Occurs when a run ends with status incomplete.
RunStreamEventThreadRunQueued
Occurs when a run moves to a queued status.
RunStreamEventThreadRunRequiresAction
Occurs when a run moves to a requires_action status.
RunToolCallObject
Tool call objects
RunToolCallObjectFunction
The function definition.
Screenshot
A screenshot action.
Scroll
A scroll action.
StaticChunkingStrategy
StaticChunkingStrategyRequestParam
Customize your own chunking strategy by setting chunk size and chunk overlap.
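A small sketch of a static chunking strategy request, assuming serde_json; the max_chunk_size_tokens and chunk_overlap_tokens names follow the public vector-store docs and are assumptions about this crate's wire format.

```rust
use serde_json::json;

fn main() {
    // Fixed-size chunks with a token overlap between neighbouring chunks.
    let chunking_strategy = json!({
        "type": "static",
        "static": {
            "max_chunk_size_tokens": 800,
            "chunk_overlap_tokens": 400
        }
    });
    println!("{chunking_strategy}");
}
```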
StaticChunkingStrategyResponseParam
SubmitToolOutputsRunRequest
SubmitToolOutputsRunRequestToolOutputs
SubmitToolOutputsRunRequestWithoutStream
SubmitToolOutputsRunRequestWithoutStreamToolOutputs
TextResponseFormatJsonSchema
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
ThreadObject
Represents a thread that contains messages.
ThreadObjectToolResources
A set of resources that are made available to the assistant’s tools in this thread. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
ThreadObjectToolResourcesCodeInterpreter
ThreadObjectToolResourcesFileSearch
ThreadStreamEventThreadCreated
Occurs when a new thread is created.
ToggleCertificatesRequest
ToolChoiceFunction
Use this option to force the model to call a specific function.
ToolChoiceTypes
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
TopLogProb
The top log probability of a token.
TranscriptTextDeltaEvent
Emitted when there is an additional text delta. This is also the first event emitted when the transcription starts. Only emitted when you create a transcription with the Stream parameter set to true.
TranscriptTextDeltaEventLogprob
TranscriptTextDoneEvent
Emitted when the transcription is complete. Contains the complete transcription text. Only emitted when you create a transcription with the Stream parameter set to true.
TranscriptTextDoneEventLogprob
TranscriptionSegment
TranscriptionWord
TruncationObject
Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.
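For illustration, a truncation strategy that keeps only the most recent messages, sketched with serde_json; the last_messages layout follows the public Assistants API and is an assumption here.

```rust
use serde_json::json;

fn main() {
    // Keep only the 10 most recent messages when building the run's context.
    let truncation_strategy = json!({
        "type": "last_messages",
        "last_messages": 10
    });
    println!("{truncation_strategy}");
}
```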
Type
An action to type in text.
UpdateVectorStoreFileAttributesRequest
UpdateVectorStoreRequest
Upload
The Upload object can accept byte chunks in the form of Parts.
UploadCertificateRequest
UploadPart
The upload Part represents a chunk of bytes we can add to an Upload object.
UrlCitationBody
A citation for a web resource used to generate a model response.
UsageAudioSpeechesResult
The aggregated audio speeches usage details of the specific time bucket.
UsageAudioTranscriptionsResult
The aggregated audio transcriptions usage details of the specific time bucket.
UsageCodeInterpreterSessionsResult
The aggregated code interpreter sessions usage details of the specific time bucket.
UsageCompletionsResult
The aggregated completions usage details of the specific time bucket.
UsageEmbeddingsResult
The aggregated embeddings usage details of the specific time bucket.
UsageImagesResult
The aggregated images usage details of the specific time bucket.
UsageModerationsResult
The aggregated moderations usage details of the specific time bucket.
UsageResponse
UsageTimeBucket
UsageVectorStoresResult
The aggregated vector stores usage details of the specific time bucket.
User
Represents an individual user within an organization.
UserDeleteResponse
UserListResponse
UserRoleUpdateRequest
VadConfig
ValidateGraderRequest
ValidateGraderResponse
VectorStoreExpirationAfter
The expiration policy for a vector store.
VectorStoreFileBatchObject
A batch of files attached to a vector store.
VectorStoreFileBatchObjectFileCounts
VectorStoreFileContentResponse
Represents the parsed content of a vector store file.
VectorStoreFileContentResponseDatum
VectorStoreFileObject
A list of files attached to a vector store.
VectorStoreFileObjectLastError
The last error associated with this vector store file. Will be null if there are no errors.
VectorStoreObject
A vector store is a collection of processed files that can be used by the file_search tool.
VectorStoreObjectFileCounts
VectorStoreSearchRequest
VectorStoreSearchRequestRankingOptions
Ranking options for search.
VectorStoreSearchResultContentObject
VectorStoreSearchResultItem
VectorStoreSearchResultsPage
Wait
A wait action.
WebSearchLocation
Approximate location parameters for the search.
WebSearchPreviewTool
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
WebSearchToolCall
The results of a web search tool call. See the web search guide for more information.

Enums§

Annotation
AssistantStreamEvent
Represents an event emitted when streaming a Run.
AssistantSupportedModels
AssistantTool
AssistantsApiResponseFormatOption
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
AssistantsApiToolChoiceOption
Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
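The variants described above, sketched as JSON values with serde_json; my_function is a placeholder name.

```rust
use serde_json::json;

fn main() {
    // The three string forms plus the object form that forces a specific tool.
    let none = json!("none");
    let auto = json!("auto");
    let required = json!("required");
    let specific = json!({ "type": "function", "function": { "name": "my_function" } });
    println!("{none} {auto} {required} {specific}");
}
```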
AssistantsNamedToolChoiceType
The type of the tool. If type is function, the function name must be set.
AudioResponseFormat
The format of the output, in one of these options: json, text, srt, verbose_json, or vtt. For gpt-4o-transcribe and gpt-4o-mini-transcribe, the only supported format is json.
AuditLogActorApiKeyType
The type of API key. Can be either user or service_account.
AuditLogActorType
The type of actor. Is either session or api_key.
AuditLogEventType
The event type.
BatchStatus
The current status of the batch.
CertificateObject
The object type.
ChatCompletionModality
ChatCompletionRequestAssistantMessageContent
The contents of the assistant message. Required unless tool_calls or function_call is specified.
ChatCompletionRequestAssistantMessageContentPart
ChatCompletionRequestDeveloperMessageContent
The contents of the developer message.
ChatCompletionRequestMessage
ChatCompletionRequestMessageContentPartAudioInputAudioFormat
The format of the encoded audio data. Currently supports “wav” and “mp3”.
ChatCompletionRequestMessageContentPartImageImageUrlDetail
Specifies the detail level of the image. Learn more in the Vision guide.
ChatCompletionRequestSystemMessageContent
The contents of the system message.
ChatCompletionRequestSystemMessageContentPart
ChatCompletionRequestToolMessageContent
The contents of the tool message.
ChatCompletionRequestToolMessageContentPart
ChatCompletionRequestUserMessageContent
The contents of the user message.
ChatCompletionRequestUserMessageContentPart
ChatCompletionRole
The role of the author of a message
ChatCompletionStreamResponseDeltaRole
The role of the author of this message.
ChatCompletionToolChoiceOption
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
ChatModel
ChunkingStrategyRequestParam
The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. Only applicable if file_ids is non-empty.
ChunkingStrategyResponse
The strategy used to chunk the file.
ClickButton
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
CodeInterpreterToolCallStatus
The status of the code interpreter tool call.
CodeInterpreterToolContainer
The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code.
CodeInterpreterToolOutput
The output of a code interpreter tool.
ComparisonFilterType
Specifies the comparison operator: eq, ne, gt, gte, lt, lte.
ComparisonFilterValue
The value to compare against the attribute key; supports string, number, or boolean types.
CompoundFilterFilter
CompoundFilterType
Type of operation: and or or.
ComputerAction
ComputerCallOutputItemParamStatus
The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.
ComputerToolCallOutputStatus
The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.
ComputerToolCallStatus
The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
ComputerUsePreviewToolEnvironment
The type of computer environment to control.
ContainerResourceExpiresAfterAnchor
The reference point for the expiration.
Content
Multi-modal input and output contents.
CreateAssistantRequestToolResourcesFileSearch
CreateAssistantRequestToolResourcesFileSearch0VectorStoreChunkingStrategy
The chunking strategy used to chunk the file(s). If not set, will use the auto strategy.
CreateAssistantRequestToolResourcesFileSearch1VectorStoreChunkingStrategy
The chunking strategy used to chunk the file(s). If not set, will use the auto strategy.
CreateChatCompletionRequestAudioFormat
Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16.
CreateChatCompletionRequestFunctionCall
Deprecated in favor of tool_choice.
CreateChatCompletionRequestPrediction
Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time. This is most common when you are regenerating a file with only minor changes to most of the content.
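A hedged sketch of a Predicted Output payload using serde_json; the type/content layout follows the public chat completions docs, and the content string is an invented example.

```rust
use serde_json::json;

fn main() {
    // Supply the expected content so unchanged spans can be returned quickly.
    let prediction = json!({
        "type": "content",
        "content": "fn add(a: i32, b: i32) -> i32 { a + b }"
    });
    println!("{prediction}");
}
```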
CreateChatCompletionRequestResponseFormat
An object specifying the format that the model must output.
CreateChatCompletionResponseChoiceFinishReason
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.
CreateChatCompletionStreamResponseChoiceFinishReason
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.
CreateCompletionRequestPrompt
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
CreateCompletionResponseChoiceFinishReason
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, or content_filter if content was omitted due to a flag from our content filters.
CreateContainerBodyExpiresAfterAnchor
Time anchor for the expiration time. Currently only ‘last_active_at’ is supported.
CreateEmbeddingRequestEncodingFormat
The format to return the embeddings in. Can be either float or base64.
CreateEmbeddingRequestInput
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for all embedding models), cannot be an empty string, and any array must be 2048 dimensions or less. Example Python code for counting tokens. In addition to the per-input token limit, all embedding models enforce a maximum of 300,000 tokens summed across all inputs in a single request.
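The three accepted input shapes, sketched with serde_json; the strings and token IDs are invented examples.

```rust
use serde_json::json;

fn main() {
    let single = json!("The food was delicious.");          // one string
    let batch = json!(["First input.", "Second input."]);   // array of strings
    let tokens = json!([[1212, 318, 257, 1332]]);            // array of token arrays
    println!("{single}\n{batch}\n{tokens}");
}
```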
CreateEvalCompletionsRunDataSourceInputMessages
Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e. item.input_trajectory), or a template with variable references to the item namespace.
CreateEvalCompletionsRunDataSourceInputMessagesTemplateTemplate
CreateEvalCompletionsRunDataSourceSamplingParamsResponseFormat
An object specifying the format that the model must output.
CreateEvalCompletionsRunDataSourceSource
Determines what populates the item namespace in this run’s data source.
CreateEvalItem
A chat message that makes up the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.
CreateEvalJsonlRunDataSourceSource
Determines what populates the item namespace in the data source.
CreateEvalRequestDataSourceConfig
The configuration for the data source used for the evaluation runs. Dictates the schema of the data used in the evaluation.
CreateEvalRequestTestingCriteria
CreateEvalResponsesRunDataSourceInputMessages
Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e. item.input_trajectory), or a template with variable references to the item namespace.
CreateEvalResponsesRunDataSourceInputMessagesTemplateTemplate
CreateEvalResponsesRunDataSourceSource
Determines what populates the item namespace in this run’s data source.
CreateEvalRunRequestDataSource
Details about the run’s data source.
CreateFineTuningJobRequestHyperparametersBatchSize
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
CreateFineTuningJobRequestHyperparametersLearningRateMultiplier
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
CreateFineTuningJobRequestHyperparametersNEpochs
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
CreateFineTuningJobRequestIntegrationType
The type of integration to enable. Currently, only “wandb” (Weights and Biases) is supported.
CreateImageEditRequestBackground
Allows setting transparency for the background of the generated image(s). This parameter is only supported for gpt-image-1. Must be one of transparent, opaque, or auto (default value). When auto is used, the model will automatically determine the best background for the image.
CreateImageEditRequestImage
The image(s) to edit. Must be a supported image file or an array of images.
CreateImageEditRequestQuality
The quality of the image that will be generated. high, medium and low are only supported for gpt-image-1. dall-e-2 only supports standard quality. Defaults to auto.
CreateImageEditRequestResponseFormat
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. This parameter is only supported for dall-e-2, as gpt-image-1 will always return base64-encoded images.
CreateImageEditRequestSize
The size of the generated images. Must be one of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default value) for gpt-image-1, and one of 256x256, 512x512, or 1024x1024 for dall-e-2.
CreateImageRequestBackground
Allows setting transparency for the background of the generated image(s). This parameter is only supported for gpt-image-1. Must be one of transparent, opaque, or auto (default value). When auto is used, the model will automatically determine the best background for the image.
CreateImageRequestModeration
Control the content-moderation level for images generated by gpt-image-1. Must be either low for less restrictive filtering or auto (default value).
CreateImageRequestOutputFormat
The format in which the generated images are returned. This parameter is only supported for gpt-image-1. Must be one of png, jpeg, or webp.
CreateImageRequestQuality
The quality of the image that will be generated.
CreateImageRequestResponseFormat
The format in which generated images with dall-e-2 and dall-e-3 are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. This parameter isn’t supported for gpt-image-1 which will always return base64-encoded images.
CreateImageRequestSize
The size of the generated images. Must be one of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default value) for gpt-image-1, one of 256x256, 512x512, or 1024x1024 for dall-e-2, and one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3.
CreateImageRequestStyle
The style of the generated images. This parameter is only supported for dall-e-3. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images.
CreateImageVariationRequestResponseFormat
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated.
CreateImageVariationRequestSize
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
CreateMessageRequestAttachmentsTool
CreateMessageRequestContent
CreateMessageRequestContentArray
CreateMessageRequestRole
The role of the entity that is creating the message. Allowed values include:
CreateModerationRequestInput
Input (or inputs) to classify. Can be a single string, an array of strings, or an array of multi-modal input objects similar to other models.
CreateModerationRequestInput2
CreateModerationResponseResultCategoryAppliedInputTypesSelfHarm
CreateModerationResponseResultCategoryAppliedInputTypesSelfHarmInstructions
CreateModerationResponseResultCategoryAppliedInputTypesSelfHarmIntent
CreateModerationResponseResultCategoryAppliedInputTypesSexual
CreateModerationResponseResultCategoryAppliedInputTypesViolence
CreateModerationResponseResultCategoryAppliedInputTypesViolenceGraphic
CreateResponseInput
Text, image, or file inputs to the model, used to generate a response.
CreateSpeechRequestResponseFormat
The format to return audio in. Supported formats are mp3, opus, aac, flac, wav, and pcm.
CreateThreadRequestToolResourcesFileSearch
CreateThreadRequestToolResourcesFileSearch0VectorStoreChunkingStrategy
The chunking strategy used to chunk the file(s). If not set, will use the auto strategy.
CreateThreadRequestToolResourcesFileSearch1VectorStoreChunkingStrategy
The chunking strategy used to chunk the file(s). If not set, will use the auto strategy.
CreateTranscriptionRequestTimestampGranularities
CreateTranscriptionResponseStreamEvent
CreateTranslationRequestResponseFormat
The format of the output, in one of these options: json, text, srt, verbose_json, or vtt.
CreateUploadRequestPurpose
The intended purpose of the uploaded file.
EasyInputMessageContent
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
EasyInputMessageRole
The role of the message input. One of user, assistant, system, or developer.
EvalDataSourceConfig
Configuration of data sources used in runs of the evaluation.
EvalItemContent
Text inputs to the model - can contain template strings.
EvalItemRole
The role of the message input. One of user, assistant, system, or developer.
EvalRunDataSource
Information about the run’s data source.
EvalTestingCriteria
FilePurpose
The intended purpose of the uploaded file. One of: assistants (used in the Assistants API), batch (used in the Batch API), fine-tune (used for fine-tuning), vision (images used for vision fine-tuning), user_data (flexible file type for any purpose), or evals (used for eval data sets).
FileSearchRanker
The ranker to use for the file search. If not specified will use the auto ranker.
FileSearchToolCallStatus
The status of the file search tool call. One of in_progress, searching, incomplete, or failed.
Filters
FineTuneChatRequestInputMessages
FineTuneDpoHyperparametersBatchSize
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
FineTuneDpoHyperparametersBeta
The beta value for the DPO method. A higher beta value will increase the weight of the penalty between the policy and reference model.
FineTuneDpoHyperparametersLearningRateMultiplier
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
FineTuneDpoHyperparametersNEpochs
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
FineTuneMethodType
The type of method. Is either supervised, dpo, or reinforcement.
FineTunePreferenceRequestInputInputMessages
FineTunePreferenceRequestInputNonPreferredOutput
FineTunePreferenceRequestInputPreferredOutput
FineTuneReinforcementHyperparametersBatchSize
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
FineTuneReinforcementHyperparametersComputeMultiplier
Multiplier on amount of compute used for exploring search space during training.
FineTuneReinforcementHyperparametersEvalInterval
The number of training steps between evaluation runs.
FineTuneReinforcementHyperparametersEvalSamples
Number of evaluation samples to generate per training step.
FineTuneReinforcementHyperparametersLearningRateMultiplier
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
FineTuneReinforcementHyperparametersNEpochs
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
FineTuneReinforcementHyperparametersReasoningEffort
Level of reasoning effort.
FineTuneReinforcementMethodGrader
The grader used for the fine-tuning job.
FineTuneReinforcementRequestInputMessages
FineTuneSupervisedHyperparametersBatchSize
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
FineTuneSupervisedHyperparametersLearningRateMultiplier
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
FineTuneSupervisedHyperparametersNEpochs
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
FineTuningJobEventLevel
The log level of the event.
FineTuningJobEventType
The type of event.
FineTuningJobHyperparametersBatchSize
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
FineTuningJobHyperparametersLearningRateMultiplier
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
FineTuningJobHyperparametersNEpochs
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
FineTuningJobIntegration
FineTuningJobStatus
The current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled.
FunctionCallOutputItemParamStatus
The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
FunctionToolCallOutputStatus
The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
FunctionToolCallStatus
The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
GraderMultiGraders
GraderStringCheckOperation
The string check operation to perform. One of eq, ne, like, or ilike.
GraderTextSimilarityEvaluationMetric
The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
ImageGenToolBackground
Background type for the generated image. One of transparent, opaque, or auto. Default: auto.
ImageGenToolCallStatus
The status of the image generation call.
ImageGenToolModel
The image generation model to use. Default: gpt-image-1.
ImageGenToolModeration
Moderation level for the generated image. Default: auto.
ImageGenToolOutputFormat
The output format of the generated image. One of png, webp, or jpeg. Default: png.
ImageGenToolQuality
The quality of the generated image. One of low, medium, high, or auto. Default: auto.
ImageGenToolSize
The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.
Includable
Specify additional output data to include in the model response. Currently supported values are:
InputAudioFormat
The format of the audio data. Currently supported formats are mp3 and wav.
InputContent
InputImageContentDetail
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
InputItem
InputMessageRole
The role of the message input. One of user, system, or developer.
InputMessageStatus
The status of item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
InviteProjectsRole
Project membership role
InviteRequestProjectsRole
Project membership role
InviteRequestRole
owner or reader
InviteRole
owner or reader
InviteStatus
accepted, expired, or pending
Item
Content item used to generate a response.
ItemReferenceParamType
The type of item to reference. Always item_reference.
ItemResource
Content item used to generate a response.
LocalShellToolCallOutputStatus
The status of the item. One of in_progress, completed, or incomplete.
LocalShellToolCallStatus
The status of the local shell call.
McpToolAllowedTools
List of allowed tool names or a filter object.
McpToolRequireApproval
Specify which of the MCP server’s tools require approval.
MessageContent
MessageContentDelta
MessageContentImageFileObjectImageFileDetail
Specifies the detail level of the image if specified by the user. low uses fewer tokens, you can opt in to high resolution using high.
MessageContentImageUrlObjectImageUrlDetail
Specifies the detail level of the image. low uses fewer tokens, you can opt in to high resolution using high. Default value is auto
MessageDeltaContentImageFileObjectImageFileDetail
Specifies the detail level of the image if specified by the user. low uses fewer tokens, you can opt in to high resolution using high.
MessageDeltaContentImageUrlObjectImageUrlDetail
Specifies the detail level of the image. low uses fewer tokens, you can opt in to high resolution using high.
MessageDeltaObjectDeltaRole
The entity that produced the message. One of user or assistant.
MessageObjectAttachmentsTool
MessageObjectIncompleteDetailsReason
The reason the message is incomplete.
MessageObjectRole
The entity that produced the message. One of user or assistant.
MessageObjectStatus
The status of the message, which can be either in_progress, incomplete, or completed.
MessageStreamEvent
ModelIds
ModelIdsResponses
ModelIdsShared
ModifyAssistantRequestModel
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
OpenAiFilePurpose
The intended purpose of the file. Supported values are assistants, assistants_output, batch, batch_output, fine-tune, fine-tune-results and vision.
OpenAiFileStatus
Deprecated. The current status of the file, which can be either uploaded, processed, or error.
OutputContent
OutputItem
OutputMessageStatus
The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.
PredictionContentContent
The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly.
ProjectApiKeyOwnerType
user or service_account
ProjectServiceAccountRole
owner or member
ProjectStatus
active or archived
ProjectUserCreateRequestRole
owner or member
ProjectUserRole
owner or member
ProjectUserUpdateRequestRole
owner or member
RankingOptionsRanker
The ranker to use for the file search.
RealtimeClientEvent
A realtime client event.
RealtimeConversationItemContentType
The content type (input_text, input_audio, item_reference, text).
RealtimeConversationItemRole
The role of the message sender (user, assistant, system), only applicable for message items.
RealtimeConversationItemStatus
The status of the item (completed, incomplete). These have no effect on the conversation, but are accepted for consistency with the conversation.item.created event.
RealtimeConversationItemType
The type of the item (message, function_call, function_call_output).
RealtimeConversationItemWithReferenceRole
The role of the message sender (user, assistant, system), only applicable for message items.
RealtimeConversationItemWithReferenceStatus
The status of the item (completed, incomplete). These have no effect on the conversation, but are accepted for consistency with the conversation.item.created event.
RealtimeConversationItemWithReferenceType
The type of the item (message, function_call, function_call_output, item_reference).
RealtimeResponseCreateParamsConversation
Controls which conversation the response is added to. Currently supports auto and none, with auto as the default value. The auto value means that the contents of the response will be added to the default conversation. Set this to none to create an out-of-band response which will not add items to the default conversation.
RealtimeResponseCreateParamsMaxResponseOutputTokens
Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.
RealtimeResponseCreateParamsModality
RealtimeResponseCreateParamsOutputAudioFormat
The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw.
RealtimeResponseMaxOutputTokens
Maximum number of output tokens for a single assistant response, inclusive of tool calls, that was used in this response.
RealtimeResponseModality
RealtimeResponseOutputAudioFormat
The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw.
RealtimeResponseStatus
The final status of the response (completed, cancelled, failed, or incomplete).
RealtimeResponseStatusDetailsReason
The reason the Response did not complete. For a cancelled Response, one of turn_detected (the server VAD detected a new start of speech) or client_cancelled (the client sent a cancel event). For an incomplete Response, one of max_output_tokens or content_filter (the server-side safety filter activated and cut off the response).
RealtimeResponseStatusDetailsType
The type of error that caused the response to fail, corresponding with the status field (completed, cancelled, incomplete, failed).
RealtimeServerEvent
A realtime server event.
RealtimeServerEventRateLimitsUpdatedRateLimitsName
The name of the rate limit (requests, tokens).
RealtimeServerEventResponseContentPartAddedPartType
The content type (“text”, “audio”).
RealtimeServerEventResponseContentPartDonePartType
The content type (“text”, “audio”).
RealtimeSessionCreateRequestClientSecretExpiresAtAnchor
The anchor point for the ephemeral token expiration. Only created_at is currently supported.
RealtimeSessionCreateRequestInputAudioFormat
The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate, single channel (mono), and little-endian byte order.
RealtimeSessionCreateRequestInputAudioNoiseReductionType
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
RealtimeSessionCreateRequestMaxResponseOutputTokens
Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.
RealtimeSessionCreateRequestModality
RealtimeSessionCreateRequestModel
The Realtime model used for this session.
RealtimeSessionCreateRequestOutputAudioFormat
The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, output audio is sampled at a rate of 24kHz.
RealtimeSessionCreateRequestTracing
Configuration options for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
RealtimeSessionCreateRequestTurnDetectionEagerness
Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium.
RealtimeSessionCreateRequestTurnDetectionType
Type of turn detection.
RealtimeSessionCreateResponseMaxResponseOutputTokens
Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.
RealtimeSessionCreateResponseModality
RealtimeSessionCreateResponseTracing
Configuration options for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
RealtimeSessionInputAudioFormat
The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate, single channel (mono), and little-endian byte order.
RealtimeSessionInputAudioNoiseReductionType
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
RealtimeSessionMaxResponseOutputTokens
Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.
RealtimeSessionModality
RealtimeSessionModel
The Realtime model used for this session.
RealtimeSessionOutputAudioFormat
The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, output audio is sampled at a rate of 24kHz.
RealtimeSessionTracing
Configuration options for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
RealtimeSessionTurnDetectionEagerness
Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium.
RealtimeSessionTurnDetectionType
Type of turn detection.
RealtimeTranscriptionSessionCreateRequestClientSecretExpiresAtAnchor
The anchor point for the ephemeral token expiration. Only created_at is currently supported.
RealtimeTranscriptionSessionCreateRequestInputAudioFormat
The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate, single channel (mono), and little-endian byte order.
RealtimeTranscriptionSessionCreateRequestInputAudioNoiseReductionType
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
RealtimeTranscriptionSessionCreateRequestInputAudioTranscriptionModel
The model to use for transcription, current options are gpt-4o-transcribe, gpt-4o-mini-transcribe, and whisper-1.
RealtimeTranscriptionSessionCreateRequestModality
RealtimeTranscriptionSessionCreateRequestTurnDetectionEagerness
Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium.
RealtimeTranscriptionSessionCreateRequestTurnDetectionType
Type of turn detection.
RealtimeTranscriptionSessionCreateResponseInputAudioTranscriptionModel
The model to use for transcription. Can be gpt-4o-transcribe, gpt-4o-mini-transcribe, or whisper-1.
RealtimeTranscriptionSessionCreateResponseModality
ReasoningEffort
o-series models only
ReasoningGenerateSummary
Deprecated: use summary instead.
ReasoningItemStatus
The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
ReasoningSummary
A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.
ResponseErrorCode
The error code for the response.
ResponseIncompleteDetailsReason
The reason why the response is incomplete.
ResponseModality
ResponsePropertiesToolChoice
How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.
ResponsePropertiesTruncation
The truncation strategy to use for the model response.
ResponseStatus
The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.
ResponseStreamEvent
RunGraderRequestGrader
The grader used for the fine-tuning job.
RunObjectIncompleteDetailsReason
The reason why the run is incomplete. This will point to which specific token limit was reached over the course of the run.
RunObjectLastErrorCode
One of server_error, rate_limit_exceeded, or invalid_prompt.
RunStatus
The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.
RunStepDeltaObjectDeltaStepDetails
The details of the run step.
RunStepDeltaStepDetailsToolCall
RunStepDeltaStepDetailsToolCallsCodeObjectCodeInterpreterOutputs
RunStepDetailsToolCall
RunStepDetailsToolCallsCodeObjectCodeInterpreterOutputs
RunStepObjectLastErrorCode
One of server_error or rate_limit_exceeded.
RunStepObjectStatus
The status of the run step, which can be either in_progress, cancelled, failed, completed, or expired.
RunStepObjectStepDetails
The details of the run step.
RunStepObjectType
The type of run step, which can be either message_creation or tool_calls.
RunStepStreamEvent
RunStreamEvent
ServiceTier
Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:
StopConfiguration
Not supported with latest reasoning models o3 and o4-mini.
TextAnnotation
TextAnnotationDelta
TextResponseFormatConfiguration
An object specifying the format that the model must output.
ThreadStreamEvent
Tool
A tool that can be used to generate a response.
ToolChoiceOptions
Controls which (if any) tool is called by the model.
ToolChoiceTypesType
The type of hosted tool the model should use. Learn more about built-in tools.
TranscriptionInclude
TruncationObjectType
The truncation strategy to use for the thread. The default is auto. If set to last_messages, the thread will be truncated to the n most recent messages in the thread. When set to auto, messages in the middle of the thread will be dropped to fit the context length of the model, max_prompt_tokens.
UploadStatus
The status of the Upload.
UsageTimeBucketResult
UserRole
owner or reader
UserRoleUpdateRequestRole
owner or reader
VadConfigType
Must be set to server_vad to enable manual chunking using server-side VAD.
ValidateGraderRequestGrader
The grader used for the fine-tuning job.
ValidateGraderResponseGrader
The grader used for the fine-tuning job.
VectorStoreFileAttribute
VectorStoreFileBatchObjectStatus
The status of the vector store files batch, which can be either in_progress, completed, cancelled or failed.
VectorStoreFileObjectLastErrorCode
One of server_error or rate_limit_exceeded.
VectorStoreFileObjectStatus
The status of the vector store file, which can be either in_progress, completed, cancelled, or failed. The status completed indicates that the vector store file is ready for use.
VectorStoreObjectStatus
The status of the vector store, which can be either expired, in_progress, or completed. A status of completed indicates that the vector store is ready for use.
VectorStoreSearchRequestFilters
A filter to apply based on file attributes.
VectorStoreSearchRequestQuery
A query string for a search
VectorStoreSearchRequestRankingOptionsRanker
VectorStoreSearchResultContentObjectType
The type of content.
VoiceIdsShared
WebSearchContextSize
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
WebSearchPreviewToolSearchContextSize
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
WebSearchPreviewToolType
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
WebSearchToolCallStatus
The status of the web search tool call.

Type Aliases§

ChatCompletionMessageToolCalls
The tool calls generated by the model, such as function calls.
ChatCompletionModalities
Output types that you would like the model to generate for this request. Most models are capable of generating text, which is the default:
CreateModelResponseProperties
FunctionParameters
The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.
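A minimal example of such a parameters schema, built with serde_json; the location property is invented for illustration.

```rust
use serde_json::json;

fn main() {
    // A single required string argument, with extra keys disallowed.
    let parameters = json!({
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and country, e.g. Paris, France"
            }
        },
        "required": ["location"],
        "additionalProperties": false
    });
    println!("{parameters}");
}
```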
InputMessageContentList
A list of one or many input items to the model, containing different content types.
Metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
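A short serde_json sketch of a metadata object; the keys and values are invented examples.

```rust
use serde_json::json;

fn main() {
    // Up to 16 string key/value pairs attached to an object.
    let metadata = json!({
        "customer_id": "cus_12345",
        "batch": "nightly-2025-01-01"
    });
    println!("{metadata}");
}
```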
ParallelToolCalls
Whether to enable parallel function calling during tool use.
ResponseFormatJsonSchemaSchema
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
ResponseModalities
Output types that you would like the model to generate. Most models are capable of generating text, which is the default:
TranscriptionChunkingStrategy
Controls how the audio is cut into chunks. When set to "auto", the server first normalizes loudness and then uses voice activity detection (VAD) to choose boundaries. A server_vad object can be provided to tweak VAD detection parameters manually. If unset, the audio is transcribed as a single block.
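Both accepted forms, sketched with serde_json; the server_vad parameter names follow the public transcription docs and are assumptions here.

```rust
use serde_json::json;

fn main() {
    // Either the literal string "auto" ...
    let auto = json!("auto");

    // ... or a `server_vad` object to tune detection manually.
    let manual = json!({
        "type": "server_vad",
        "threshold": 0.5,
        "prefix_padding_ms": 300,
        "silence_duration_ms": 500
    });
    println!("{auto}\n{manual}");
}
```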
VectorStoreFileAttributes
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.