Struct google_speech1::RecognitionConfig
Provides information to the recognizer that specifies how to process the request.
This type is not used in any activity; it is only used as part of another schema.
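Every field is an Option, so a config is typically built by filling in only what a request needs and taking the rest from Default. A minimal sketch, with illustrative values (none of them prescribed by the crate):

```rust
use google_speech1::RecognitionConfig;

fn main() {
    // Mono, 16 kHz LINEAR16 audio in US English; every other field stays None.
    let config = RecognitionConfig {
        language_code: Some("en-US".to_string()),
        encoding: Some("LINEAR16".to_string()),
        sample_rate_hertz: Some(16000),
        ..Default::default()
    };
    println!("{:?}", config);
}
```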
Fields
language_code: Option<String>
Required The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.
audio_channel_count: Option<i32>
Optional The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16 and FLAC are 1-8. Valid values for OGG_OPUS are '1'-'254'. Valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is only 1. If 0 or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel set enable_separate_recognition_per_channel to 'true'.
encoding: Option<String>
Encoding of audio data sent in all RecognitionAudio messages. This field is optional for FLAC and WAV audio files and required for all other audio formats. For details, see AudioEncoding.
enable_automatic_punctuation: Option<bool>
Optional If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default 'false' value does not add punctuation to result hypotheses. Note: This is currently offered as an experimental service, complimentary to all users. In the future this may be exclusively available as a premium feature.
enable_separate_recognition_per_channel: Option<bool>
This needs to be set to true explicitly and audio_channel_count > 1 to get each channel recognized separately. The recognition result will contain a channel_tag field to state which channel that result belongs to. If this is not true, we will only recognize the first channel. The request is billed cumulatively for all channels recognized: audio_channel_count multiplied by the length of the audio.
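A sketch of a multi-channel setup under the rules above, with illustrative values: both audio_channel_count > 1 and enable_separate_recognition_per_channel must be set, and the request is billed per recognized channel.

```rust
use google_speech1::RecognitionConfig;

fn main() {
    // Stereo LINEAR16 audio, each channel recognized independently.
    // Results carry a channel_tag naming the channel they came from,
    // and billing is audio length multiplied by the channel count.
    let config = RecognitionConfig {
        language_code: Some("en-US".to_string()),
        encoding: Some("LINEAR16".to_string()),
        sample_rate_hertz: Some(16000),
        audio_channel_count: Some(2), // LINEAR16 allows 1-8
        enable_separate_recognition_per_channel: Some(true),
        ..Default::default()
    };
    println!("{:?}", config);
}
```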
enable_word_time_offsets: Option<bool>
Optional If true, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If false, no word-level time offset information is returned. The default is false.
max_alternatives: Option<i32>
Optional Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one.
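As a sketch with illustrative values, max_alternatives pairs naturally with enable_word_time_offsets when comparing competing hypotheses:

```rust
use google_speech1::RecognitionConfig;

fn main() {
    // Request up to three hypotheses per result, each with word timestamps.
    let config = RecognitionConfig {
        language_code: Some("en-US".to_string()),
        max_alternatives: Some(3),            // valid range is 0-30
        enable_word_time_offsets: Some(true), // per-word start/end offsets
        ..Default::default()
    };
    println!("{:?}", config);
}
```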
use_enhanced: Option<bool>
Optional Set to true to use an enhanced model for speech recognition. If use_enhanced is set to true and the model field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio. If use_enhanced is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
sample_rate_hertz: Option<i32>
Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see AudioEncoding.
profanity_filter: Option<bool>
Optional If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out.
model: Option<String>
Optional Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.
Model | Description
command_and_search | Best for short queries such as voice commands or voice search.
phone_call | Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate).
video | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16 kHz or greater sampling rate. This is a premium model that costs more than the standard rate.
default | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16 kHz or greater sampling rate.
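A sketch tying model to use_enhanced as described above; the model name comes from the table, the remaining values are illustrative:

```rust
use google_speech1::RecognitionConfig;

fn main() {
    // Prefer the enhanced phone_call model; if no enhanced variant exists,
    // the standard version of the model is used instead.
    let config = RecognitionConfig {
        language_code: Some("en-US".to_string()),
        model: Some("phone_call".to_string()),
        use_enhanced: Some(true),
        sample_rate_hertz: Some(8000), // phone audio is typically 8 kHz
        ..Default::default()
    };
    println!("{:?}", config);
}
```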
speech_contexts: Option<Vec<SpeechContext>>
Optional array of SpeechContext. A means to provide context to assist the speech recognition. For more information, see Phrase Hints.
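A sketch of attaching phrase hints, assuming the v1 SpeechContext schema with its single phrases field (the phrase values are made up):

```rust
use google_speech1::{RecognitionConfig, SpeechContext};

fn main() {
    // Bias recognition toward expected domain vocabulary.
    let hints = SpeechContext {
        phrases: Some(vec!["sourdough".to_string(), "levain".to_string()]),
        ..Default::default()
    };
    let config = RecognitionConfig {
        language_code: Some("en-US".to_string()),
        speech_contexts: Some(vec![hints]),
        ..Default::default()
    };
    println!("{:?}", config);
}
```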
metadata: Option<RecognitionMetadata>
Optional Metadata regarding this request.
Trait Implementations
impl Part for RecognitionConfig
impl Default for RecognitionConfig
fn default() -> RecognitionConfig
impl Clone for RecognitionConfig
fn clone(&self) -> RecognitionConfig
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for RecognitionConfig
impl Serialize for RecognitionConfig
fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where
    __S: Serializer,
impl<'de> Deserialize<'de> for RecognitionConfig
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where
    __D: Deserializer<'de>,
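Since the type implements Serialize and Deserialize, a JSON round trip is straightforward; this sketch assumes serde_json is available as a dependency:

```rust
use google_speech1::RecognitionConfig;

fn main() {
    let config = RecognitionConfig {
        language_code: Some("en-US".to_string()),
        ..Default::default()
    };
    // Round-trip through JSON; how unset Option fields appear (omitted or
    // null) depends on the crate's serde attributes.
    let json = serde_json::to_string(&config).expect("serialize failed");
    let back: RecognitionConfig = serde_json::from_str(&json).expect("deserialize failed");
    println!("{}\n{:?}", json, back);
}
```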
Auto Trait Implementations
impl Send for RecognitionConfig
impl Unpin for RecognitionConfig
impl Sync for RecognitionConfig
impl UnwindSafe for RecognitionConfig
impl RefUnwindSafe for RecognitionConfig
Blanket Implementations
impl<T> ToOwned for T where
    T: Clone,
type Owned = T
The resulting type after obtaining ownership.
fn to_owned(&self) -> T
fn clone_into(&self, target: &mut T)
impl<T> From<T> for T
impl<T, U> Into<U> for T where
    U: From<T>,
impl<T, U> TryFrom<U> for T where
    U: Into<T>,
type Error = Infallible
The type returned in the event of a conversion error.
fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,
type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Borrow<T> for T where
    T: ?Sized,
impl<T> Any for T where
    T: 'static + ?Sized,
impl<T> Typeable for T where
    T: Any,
impl<T> DeserializeOwned for T where
    T: for<'de> Deserialize<'de>,