pub struct Speech { /* private fields */ }
The Speech service allows you to convert text into synthesized speech and get a list of supported voices for a region by using a REST API.
Source: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-text-to-speech
Implementations
impl Speech
pub fn new_transcription(display_name: String) -> Transcription
pub fn format(self, f: MicrosoftOutputFormat) -> Self
pub fn ssml(self, ssml: SSML) -> Self
pub async fn voice_list() -> Result<Vec<Voice>, Box<dyn Error>>
Get a full list of voices for a specific region or endpoint. Prefix the voices list endpoint with a region to get the voices for that region. The region is preconfigured in your config.yml.
Voices and styles in preview are only available in three service regions: East US, West Europe, and Southeast Asia.
Source: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-text-to-speech
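A minimal sketch of calling voice_list, assuming a Tokio runtime and that Voice derives Debug; your_crate is a placeholder for whatever crate exports this Speech type:

```rust
use your_crate::Speech; // placeholder path; import `Speech` from the actual crate

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Fetches the voices for the region configured in config.yml.
    let voices = Speech::voice_list().await?;
    println!("{} voices available", voices.len());
    for voice in &voices {
        // Printing with `{:?}` assumes `Voice` derives `Debug`.
        println!("{voice:?}");
    }
    Ok(())
}
```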
pub async fn health_check() -> Result<ServiceHealth, Box<dyn Error>>
Health status provides insights about the overall health of the service and sub-components.
Supported by the v3.1 API only.
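A minimal sketch of a pre-flight health check, assuming a v3.1 endpoint is configured and that ServiceHealth derives Debug; the crate path is again a placeholder:

```rust
use your_crate::Speech; // placeholder path; import `Speech` from the actual crate

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Returns the overall service health and the health of its sub-components.
    let health = Speech::health_check().await?;
    println!("{health:?}"); // assumes `ServiceHealth` derives `Debug`
    Ok(())
}
```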
pub async fn text_to_speech(self) -> Result<Vec<u8>, Box<dyn Error>>
The text-to-speech REST API supports neural text-to-speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A Speech resource key for the endpoint or region that you plan to use is required.
Source: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-text-to-speech
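A minimal end-to-end sketch under stated assumptions: this page does not show how a Speech value is constructed, so Speech::default() is hypothetical, as are the SSML::from constructor and the output-format variant name. Only the ssml, format, and text_to_speech calls come from the signatures above.

```rust
use std::fs;
use your_crate::{MicrosoftOutputFormat, Speech, SSML}; // placeholder path

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // `SSML::from(..)` is an assumed constructor for illustration only.
    let ssml = SSML::from("<speak version=\"1.0\" xml:lang=\"en-US\">Hello</speak>");

    // `Speech::default()` is a hypothetical constructor, and the output-format
    // variant is illustrative; check the crate for the real names.
    let audio: Vec<u8> = Speech::default()
        .ssml(ssml)
        .format(MicrosoftOutputFormat::Audio16Khz32KBitrateMonoMp3)
        .text_to_speech()
        .await?;

    // The returned bytes are the synthesized audio in the requested format.
    fs::write("hello.mp3", &audio)?;
    Ok(())
}
```

The builder methods take self by value and return Self, so the request can be configured in a single chain before awaiting text_to_speech.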