pub struct Audio {
pub model: Model,
pub prompt: Option<String>,
pub response_format: Option<Format>,
pub temperature: Option<f32>,
pub language: Option<Language>,
}
Given an audio file, the model will return a transcription of the audio, or a translation of the audio into English.
Fields
model: Model
prompt: Option<String>
response_format: Option<Format>
temperature: Option<f32>
language: Option<Language>
Implementations
impl Audio
pub fn model(self, model: Model) -> Self
ID of the model to use. Only whisper-1 is currently available.
pub fn prompt(self, content: &str) -> Self
An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.
pub fn response_format(self, response_format: Format) -> Self
The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
pub fn temperature(self, temperature: f32) -> Self
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
pub fn language(self, language: Language) -> Self
The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.
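The setters above chain on an existing Audio value, since each consumes self and returns Self. A minimal sketch of configuring a request follows; the enum variant names Model::Whisper1, Format::VerboseJson, and Language::English are assumptions about the crate's enums rather than verified names, and use paths are omitted.

// Sketch only: enum variant names below are assumed, not taken from the crate.
fn build_request() -> Audio {
    Audio {
        model: Model::Whisper1, // assumed variant for the whisper-1 model
        prompt: None,
        response_format: None,
        temperature: None,
        language: None,
    }
    .prompt("Continues a previous segment about quarterly results.")
    .response_format(Format::VerboseJson) // assumed variant name
    .temperature(0.2)
    .language(Language::English) // assumed variant name
}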
pub async fn transcription(
    &self,
    file_name: String,
    bytes: Vec<u8>,
) -> Result<AudioResponse, Box<dyn Error>>
Sends a transcription request to OpenAI. The audio file to transcribe must be in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
Arguments
file_name - Audio file name
bytes - Bytes vector of the file
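A hedged usage sketch, assuming an async runtime (such as tokio) is already set up and using a placeholder file path:

// Sketch: read a local audio file and send it for transcription.
// "meeting.m4a" is a placeholder path; any supported format works
// (mp3, mp4, mpeg, mpga, m4a, wav, webm).
async fn transcribe(audio: &Audio) -> Result<AudioResponse, Box<dyn std::error::Error>> {
    let bytes = std::fs::read("meeting.m4a")?;
    audio.transcription("meeting.m4a".to_string(), bytes).await
}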
pub async fn translation(
    &self,
    file_name: String,
    bytes: Vec<u8>,
) -> Result<AudioResponse, Box<dyn Error>>
Translates audio into English. The audio file to translate must be in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
Arguments
file_name - Audio file name
bytes - Bytes vector of the file
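A matching sketch for translation, under the same assumptions (placeholder file name, async runtime already in place):

// Sketch: translate a non-English recording into English text.
// "interview_fr.wav" is a placeholder file name.
async fn translate(audio: &Audio) -> Result<AudioResponse, Box<dyn std::error::Error>> {
    let bytes = std::fs::read("interview_fr.wav")?;
    audio.translation("interview_fr.wav".to_string(), bytes).await
}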