#[non_exhaustive]
pub struct SpeechToTextConfig {
    pub speech_model_variant: SpeechModelVariant,
    pub model: String,
    pub phrase_sets: Vec<String>,
    pub audio_encoding: AudioEncoding,
    pub sample_rate_hertz: i32,
    pub language_code: String,
    pub enable_word_info: bool,
    pub use_timeout_based_endpointing: bool,
    /* private fields */
}

Available on crate features conversation-profiles or conversations only.
Configures speech transcription for ConversationProfile.
Fields (Non-exhaustive)

This struct is marked as non-exhaustive: it cannot be constructed outside the defining crate with Struct { .. } literal syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.
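Because the struct is non-exhaustive, instances are built with new() and the chained set_* methods shown under Implementations, rather than with a struct literal. A minimal sketch (the values are illustrative, not defaults):

use google_cloud_dialogflow_v2::model::SpeechToTextConfig;
let config = SpeechToTextConfig::new()
    .set_language_code("en-US")
    .set_enable_word_info(true);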
speech_model_variant: SpeechModelVariant

The speech model used in speech to text.
SPEECH_MODEL_VARIANT_UNSPECIFIED and USE_BEST_AVAILABLE are treated as USE_ENHANCED. The variant can be overridden per request in AnalyzeContentRequest and StreamingAnalyzeContentRequest. If an enhanced model variant is specified but no enhanced version of the specified model exists for the language, an error is returned.
model: String

Which Speech model to select. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, Dialogflow auto-selects one based on the other parameters in the SpeechToTextConfig and the Agent settings. If the enhanced speech model is enabled for the agent and an enhanced version of the specified model does not exist for the language, the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details. If you specify a model, the following typically perform best (a hedged sketch follows this list):
- phone_call (best for Agent Assist and telephony)
- latest_short (best for Dialogflow non-telephony)
- command_and_search
Leave this field unspecified to use Agent Speech settings for model selection.
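This excerpt does not show a setter for model, but the crate's builders follow a uniform set_<field> pattern, so a set_model method accepting Into<String> is assumed in this sketch; verify the signature against the crate docs before relying on it.

use google_cloud_dialogflow_v2::model::SpeechToTextConfig;
// Assumption: `set_model` exists and follows the same builder pattern
// as the setters documented under Implementations below.
let telephony = SpeechToTextConfig::new().set_model("phone_call");
let non_telephony = SpeechToTextConfig::new().set_model("latest_short");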
phrase_sets: Vec<String>

List of names of Cloud Speech phrase sets that are used for transcription. For phrase set limitations, refer to the Cloud Speech API quotas and limits.
audio_encoding: AudioEncoding

Audio encoding of the audio content to process.
sample_rate_hertz: i32

Sample rate (in Hertz) of the audio content sent in the query. Refer to the Cloud Speech API documentation for more details.
language_code: String

The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. If not specified, the default language configured on the ConversationProfile is used.
enable_word_info: bool

If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn’t return any word-level information.
use_timeout_based_endpointing: bool

Use timeout-based endpointing, interpreting endpointer sensitivity as seconds of timeout value.
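Putting the fields together, here is a hedged sketch of a configuration for 8 kHz mu-law telephony audio, using only the setters documented below; the specific values (Mulaw, 8000, en-US) are illustrative choices for a phone-call scenario, not defaults.

use google_cloud_dialogflow_v2::model::{AudioEncoding, SpeechModelVariant, SpeechToTextConfig};
// Illustrative telephony setup: enhanced model variant, 8 kHz mu-law
// audio, word-level timing info enabled.
let config = SpeechToTextConfig::new()
    .set_speech_model_variant(SpeechModelVariant::UseEnhanced)
    .set_audio_encoding(AudioEncoding::Mulaw)
    .set_sample_rate_hertz(8000)
    .set_language_code("en-US")
    .set_enable_word_info(true);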
Implementations

impl SpeechToTextConfig

pub fn new() -> Self

pub fn set_speech_model_variant<T: Into<SpeechModelVariant>>(self, v: T) -> Self
Sets the value of speech_model_variant.
Example

use google_cloud_dialogflow_v2::model::{SpeechModelVariant, SpeechToTextConfig};
let x0 = SpeechToTextConfig::new().set_speech_model_variant(SpeechModelVariant::UseBestAvailable);
let x1 = SpeechToTextConfig::new().set_speech_model_variant(SpeechModelVariant::UseStandard);
let x2 = SpeechToTextConfig::new().set_speech_model_variant(SpeechModelVariant::UseEnhanced);
pub fn set_phrase_sets<T, V>(self, v: T) -> Self
Sets the value of phrase_sets.
Example

use google_cloud_dialogflow_v2::model::SpeechToTextConfig;
let x = SpeechToTextConfig::new().set_phrase_sets(["a", "b", "c"]);
pub fn set_audio_encoding<T: Into<AudioEncoding>>(self, v: T) -> Self
Sets the value of audio_encoding.
Example

use google_cloud_dialogflow_v2::model::{AudioEncoding, SpeechToTextConfig};
let x0 = SpeechToTextConfig::new().set_audio_encoding(AudioEncoding::Linear16);
let x1 = SpeechToTextConfig::new().set_audio_encoding(AudioEncoding::Flac);
let x2 = SpeechToTextConfig::new().set_audio_encoding(AudioEncoding::Mulaw);
pub fn set_sample_rate_hertz<T: Into<i32>>(self, v: T) -> Self
Sets the value of sample_rate_hertz.
Example

use google_cloud_dialogflow_v2::model::SpeechToTextConfig;
// 16000 Hz is a common rate for wideband audio; phone audio is typically 8000 Hz.
let x = SpeechToTextConfig::new().set_sample_rate_hertz(16000);
pub fn set_language_code<T: Into<String>>(self, v: T) -> Self
Sets the value of language_code.
Example

use google_cloud_dialogflow_v2::model::SpeechToTextConfig;
// A BCP-47 language tag, e.g. "en-US".
let x = SpeechToTextConfig::new().set_language_code("en-US");
pub fn set_enable_word_info<T: Into<bool>>(self, v: T) -> Self
Sets the value of enable_word_info.
Example

use google_cloud_dialogflow_v2::model::SpeechToTextConfig;
let x = SpeechToTextConfig::new().set_enable_word_info(true);
pub fn set_use_timeout_based_endpointing<T: Into<bool>>(self, v: T) -> Self
Sets the value of use_timeout_based_endpointing.
Example

use google_cloud_dialogflow_v2::model::SpeechToTextConfig;
let x = SpeechToTextConfig::new().set_use_timeout_based_endpointing(true);

Trait Implementations
impl Clone for SpeechToTextConfig

fn clone(&self) -> SpeechToTextConfig

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.