openai-client-base 0.12.0

Auto-generated Rust client for the OpenAI API

# RealtimeResponse

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**id** | Option<**String**> | The unique ID of the response; it will look like `resp_1234`. | [optional]
**object** | Option<**String**> | The object type, must be `realtime.response`. | [optional]
**status** | Option<**String**> | The final status of the response (`completed`, `cancelled`, `failed`, `incomplete`, or `in_progress`).  | [optional]
**status_details** | Option<[**models::RealtimeResponseStatusDetails**](RealtimeResponse_status_details.md)> |  | [optional]
**output** | Option<[**Vec<models::RealtimeConversationItem>**](RealtimeConversationItem.md)> | The list of output items generated by the response. | [optional]
**metadata** | Option<**std::collections::HashMap<String, String>**> | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.  Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.  | [optional]
**audio** | Option<[**models::RealtimeResponseAudio**](RealtimeResponse_audio.md)> |  | [optional]
**usage** | Option<[**models::RealtimeResponseUsage**](RealtimeResponse_usage.md)> |  | [optional]
**conversation_id** | Option<**String**> | Which conversation the response is added to, determined by the `conversation` field in the `response.create` event. If `auto`, the response will be added to the default conversation and the value of `conversation_id` will be an id like `conv_1234`. If `none`, the response will not be added to any conversation and the value of `conversation_id` will be `null`. If responses are being triggered automatically by VAD, the response will be added to the default conversation.  | [optional]
**output_modalities** | Option<**Vec<String>**> | The set of modalities the model used to respond; currently the only possible values are `["audio"]` and `["text"]`. Audio output always includes a text transcript. Setting the output modality to `text` will disable audio output from the model.  | [optional]
**max_output_tokens** | Option<[**models::RealtimeBetaResponseMaxOutputTokens**](RealtimeBetaResponse_max_output_tokens.md)> |  | [optional]
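Since every property in the table above is an `Option`, working with a `RealtimeResponse` means building and reading optional fields. The sketch below uses a hypothetical, trimmed-down mirror of the generated struct (only a few of the fields, and without the serde attributes the real generated model carries) to show the typical construction and access pattern; field names follow the table, but the exact generated definition may differ.

```rust
use std::collections::HashMap;

// Hypothetical, trimmed-down mirror of the generated model:
// every field is optional, matching the Option<...> types in the table.
#[derive(Debug, Default)]
pub struct RealtimeResponse {
    pub id: Option<String>,
    pub object: Option<String>,
    pub status: Option<String>,
    pub conversation_id: Option<String>,
    pub output_modalities: Option<Vec<String>>,
    pub metadata: Option<HashMap<String, String>>,
}

// Build an example response; fields not listed fall back to None
// via the Default derive.
pub fn build_example() -> RealtimeResponse {
    let mut metadata = HashMap::new();
    metadata.insert("session".to_string(), "demo".to_string());

    RealtimeResponse {
        id: Some("resp_1234".to_string()),
        object: Some("realtime.response".to_string()),
        status: Some("completed".to_string()),
        conversation_id: Some("conv_1234".to_string()),
        output_modalities: Some(vec!["audio".to_string()]),
        metadata: Some(metadata),
    }
}

fn main() {
    let resp = build_example();

    // Pattern-match optional fields instead of unwrapping blindly,
    // since the server may omit any of them.
    if let Some(status) = &resp.status {
        println!("status = {status}");
    }
    if let Some(modalities) = &resp.output_modalities {
        println!("modalities = {modalities:?}");
    }
}
```

In real code the generated struct derives serde traits, so a `RealtimeResponse` would normally arrive by deserializing a server event payload rather than being constructed by hand; the `Option` handling shown here applies either way.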

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)