#[non_exhaustive]
pub struct InputAudioConfig {
    pub audio_encoding: AudioEncoding,
    pub sample_rate_hertz: i32,
    pub language_code: String,
    pub enable_word_info: bool,
    pub phrase_hints: Vec<String>,
    pub speech_contexts: Vec<SpeechContext>,
    pub model: String,
    pub model_variant: SpeechModelVariant,
    pub single_utterance: bool,
    pub disable_no_speech_recognized_event: bool,
    pub enable_automatic_punctuation: bool,
    pub phrase_sets: Vec<String>,
    pub opt_out_conformer_model_migration: bool,
    /* private fields */
}
Instructs the speech recognizer how to process the audio content.
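For orientation, here is a minimal sketch of a typical configuration built by chaining the setters documented below; the concrete values (LINEAR16 at 16 kHz, en-US) are illustrative, not defaults:

use google_cloud_dialogflow_v2::model::{AudioEncoding, InputAudioConfig};

// Illustrative values for the three Required fields, plus automatic punctuation.
let config = InputAudioConfig::new()
    .set_audio_encoding(AudioEncoding::Linear16)
    .set_sample_rate_hertz(16000)
    .set_language_code("en-US")
    .set_enable_automatic_punctuation(true);
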
Fields (Non-exhaustive)

This struct is marked as non-exhaustive: additional fields may be added in future versions. It cannot be constructed in external crates using the InputAudioConfig { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.

audio_encoding: AudioEncoding
Required. Audio encoding of the audio content to process.

sample_rate_hertz: i32
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

language_code: String
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

enable_word_info: bool
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn’t return any word-level information.
phrase_hints: Vec<String>
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

speech_contexts: Vec<SpeechContext>
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
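If you are migrating off the deprecated phrase_hints, the same phrases can be carried in a single SpeechContext. A hedged sketch, assuming SpeechContext exposes a generated set_phrases setter (not shown in this excerpt):

use google_cloud_dialogflow_v2::model::{InputAudioConfig, SpeechContext};

// Deprecated form:
let legacy = InputAudioConfig::new().set_phrase_hints(["weather", "forecast"]);
// Preferred form; set_phrases is assumed from the generated-builder pattern.
let migrated = InputAudioConfig::new().set_speech_contexts([
    SpeechContext::default().set_phrases(["weather", "forecast"]),
]);
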
model: String
Optional. Which Speech model to select for the given request. For more information, see Speech models.

model_variant: SpeechModelVariant
Which variant of the Speech model to use.

single_utterance: bool
If false (default), recognition does not cease until the client closes the stream.

If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio’s voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.

Note: This setting is relevant only for streaming methods.

Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
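For example, a one-shot voice-command setup over a streaming method might look like this sketch (values illustrative):

use google_cloud_dialogflow_v2::model::{AudioEncoding, InputAudioConfig};

// Recognition will stop on its own once the single utterance ends.
let cfg = InputAudioConfig::new()
    .set_audio_encoding(AudioEncoding::Linear16)
    .set_sample_rate_hertz(16000)
    .set_language_code("en-US")
    .set_single_utterance(true);
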
disable_no_speech_recognized_event: bool
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn’t return any result, a NO_SPEECH_RECOGNIZED event is triggered to the Dialogflow agent.

enable_automatic_punctuation: bool
Enable automatic punctuation option at the speech backend.

phrase_sets: Vec<String>
A collection of phrase set resources to use for speech adaptation.
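Entries are full phrase set resource names. Assuming the Cloud Speech adaptation format projects/<project>/locations/<location>/phraseSets/<phrase set>, an illustrative call (the project and phrase set IDs here are hypothetical):

// Hypothetical resource name, for illustration only.
let x = InputAudioConfig::new()
    .set_phrase_sets(["projects/my-project/locations/global/phraseSets/my-phrase-set"]);
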
opt_out_conformer_model_migration: bool
If true, the request will opt out of the STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to Dialogflow ES Speech model migration.
Implementations

impl InputAudioConfig

pub fn new() -> Self

pub fn set_audio_encoding<T: Into<AudioEncoding>>(self, v: T) -> Self

Sets the value of audio_encoding.
Example
use google_cloud_dialogflow_v2::model::AudioEncoding;
let x0 = InputAudioConfig::new().set_audio_encoding(AudioEncoding::Linear16);
let x1 = InputAudioConfig::new().set_audio_encoding(AudioEncoding::Flac);
let x2 = InputAudioConfig::new().set_audio_encoding(AudioEncoding::Mulaw);

pub fn set_sample_rate_hertz<T: Into<i32>>(self, v: T) -> Self
Sets the value of sample_rate_hertz.
Example
let x = InputAudioConfig::new().set_sample_rate_hertz(42);

pub fn set_language_code<T: Into<String>>(self, v: T) -> Self
Sets the value of language_code.
Example
let x = InputAudioConfig::new().set_language_code("example");

pub fn set_enable_word_info<T: Into<bool>>(self, v: T) -> Self
Sets the value of enable_word_info.
Example
let x = InputAudioConfig::new().set_enable_word_info(true);

pub fn set_phrase_hints<T, V>(self, v: T) -> Self

Deprecated: use set_speech_contexts instead.
Sets the value of phrase_hints.
Example
let x = InputAudioConfig::new().set_phrase_hints(["a", "b", "c"]);

pub fn set_speech_contexts<T, V>(self, v: T) -> Self
Sets the value of speech_contexts.
Example
use google_cloud_dialogflow_v2::model::SpeechContext;
let x = InputAudioConfig::new()
    .set_speech_contexts([
        SpeechContext::default() /* use setters */,
        SpeechContext::default() /* use (different) setters */,
    ]);
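A setter for the model field is not shown above; assuming it follows the same generated pattern as the other String setters, it would presumably look like:

pub fn set_model<T: Into<String>>(self, v: T) -> Self

Example (the model name is illustrative):
let x = InputAudioConfig::new().set_model("phone_call");
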
pub fn set_model_variant<T: Into<SpeechModelVariant>>(self, v: T) -> Self
Sets the value of model_variant.
Example
use google_cloud_dialogflow_v2::model::SpeechModelVariant;
let x0 = InputAudioConfig::new().set_model_variant(SpeechModelVariant::UseBestAvailable);
let x1 = InputAudioConfig::new().set_model_variant(SpeechModelVariant::UseStandard);
let x2 = InputAudioConfig::new().set_model_variant(SpeechModelVariant::UseEnhanced);

pub fn set_single_utterance<T: Into<bool>>(self, v: T) -> Self
Sets the value of single_utterance.
Example
let x = InputAudioConfig::new().set_single_utterance(true);

pub fn set_disable_no_speech_recognized_event<T: Into<bool>>(self, v: T) -> Self
Sets the value of disable_no_speech_recognized_event.
Example
let x = InputAudioConfig::new().set_disable_no_speech_recognized_event(true);

pub fn set_enable_automatic_punctuation<T: Into<bool>>(self, v: T) -> Self
Sets the value of enable_automatic_punctuation.
Example
let x = InputAudioConfig::new().set_enable_automatic_punctuation(true);

pub fn set_phrase_sets<T, V>(self, v: T) -> Self
Sets the value of phrase_sets.
Example
let x = InputAudioConfig::new().set_phrase_sets(["a", "b", "c"]);

pub fn set_opt_out_conformer_model_migration<T: Into<bool>>(self, v: T) -> Self
Sets the value of opt_out_conformer_model_migration.
Example
let x = InputAudioConfig::new().set_opt_out_conformer_model_migration(true);

Trait Implementations

impl Clone for InputAudioConfig

fn clone(&self) -> InputAudioConfig

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.
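
Because every setter consumes self and returns Self, Clone is useful for deriving variants from a shared base configuration. A small sketch using only the setters shown above:

use google_cloud_dialogflow_v2::model::{AudioEncoding, InputAudioConfig};

let base = InputAudioConfig::new()
    .set_audio_encoding(AudioEncoding::Linear16)
    .set_sample_rate_hertz(16000)
    .set_language_code("en-US");
// Clone the base and override only the language.
let french = base.clone().set_language_code("fr-FR");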