openai-client-base 0.12.0

Auto-generated Rust client for the OpenAI API
# AudioTranscription

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**model** | Option<**String**> | The model to use for transcription. Current options are `whisper-1`, `gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `gpt-4o-transcribe`, and `gpt-4o-transcribe-diarize`. Use `gpt-4o-transcribe-diarize` when you need diarization with speaker labels.  | [optional]
**language** | Option<**String**> | The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format (e.g. `en`) will improve accuracy and latency.  | [optional]
**prompt** | Option<**String**> | An optional text to guide the model's style or continue a previous audio segment. For `whisper-1`, the [prompt is a list of keywords](/docs/guides/speech-to-text#prompting). For `gpt-4o-transcribe` models (excluding `gpt-4o-transcribe-diarize`), the prompt is a free text string, for example "expect words related to technology".  | [optional]
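
As a sketch, a request body with these fields might be built as follows. The struct below is a hypothetical local mirror of the generated `AudioTranscription` model (the real type and its module path come from the generated crate); the field names and types follow the table above.

```rust
// Hypothetical mirror of the generated `AudioTranscription` model;
// in the real crate the type lives in the generated models module.
#[derive(Debug, Clone, PartialEq)]
pub struct AudioTranscription {
    pub model: Option<String>,
    pub language: Option<String>,
    pub prompt: Option<String>,
}

fn main() {
    // All fields are optional; populate only what the request needs.
    let params = AudioTranscription {
        model: Some("gpt-4o-transcribe".to_string()),
        // ISO-639-1 code; supplying it improves accuracy and latency.
        language: Some("en".to_string()),
        // Free-text prompt (non-diarize gpt-4o-transcribe models).
        prompt: Some("expect words related to technology".to_string()),
    };
    println!("{:?}", params);
}
```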

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)