Struct nvidia_riva::nvidia::riva::asr::RecognitionConfig
pub struct RecognitionConfig {
pub encoding: i32,
pub sample_rate_hertz: i32,
pub language_code: String,
pub max_alternatives: i32,
pub profanity_filter: bool,
pub speech_contexts: Vec<SpeechContext>,
pub audio_channel_count: i32,
pub enable_word_time_offsets: bool,
pub enable_automatic_punctuation: bool,
pub enable_separate_recognition_per_channel: bool,
pub model: String,
pub verbatim_transcripts: bool,
pub diarization_config: Option<SpeakerDiarizationConfig>,
pub custom_configuration: HashMap<String, String>,
}
Provides information to the recognizer that specifies how to process the request.
Fields
encoding: i32
The encoding of the audio data sent in the request.
All encodings support only 1 channel (mono) audio.
sample_rate_hertz: i32
The sample rate in hertz (Hz) of the audio data sent in the RecognizeRequest or StreamingRecognizeRequest messages. The Riva server will automatically down-sample or up-sample the audio to match the ASR acoustic model's sample rate. Sample rates below 8 kHz will not produce meaningful output.
language_code: String
Required. The language of the supplied audio as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: “en-US”.
max_alternatives: i32
Maximum number of recognition hypotheses to be returned; specifically, the maximum number of SpeechRecognizeAlternative messages within each SpeechRecognizeResult. The server may return fewer than max_alternatives. If omitted, at most one alternative is returned.
profanity_filter: bool
A custom field that enables profanity filtering for the generated transcripts. If set to true, the server filters out profanities, replacing all but the initial character in each filtered word with asterisks. For example, "x**". If set to false or omitted, profanities are not filtered out. The default is false.
speech_contexts: Vec<SpeechContext>
Array of SpeechContext. A means to provide context to assist the speech recognition. For more information, see the SpeechContext section.
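A sketch of populating speech_contexts to bias recognition toward domain vocabulary. A local stand-in struct is used so the example compiles without the nvidia_riva crate; the field names (phrases, boost) follow the Riva proto but are assumptions here.

```rust
// Local stand-in for SpeechContext so this sketch is self-contained;
// `phrases` and `boost` mirror the Riva proto fields (assumed names).
#[derive(Debug, Default, Clone)]
struct SpeechContext {
    phrases: Vec<String>,
    boost: f32,
}

// Build a context that boosts a list of domain-specific phrases.
fn domain_contexts() -> Vec<SpeechContext> {
    vec![SpeechContext {
        phrases: vec!["Riva".to_string(), "TensorRT".to_string()],
        boost: 4.0, // positive boost favors these phrases (assumed scale)
    }]
}

fn main() {
    let contexts = domain_contexts();
    assert_eq!(contexts[0].phrases.len(), 2);
    println!("{:?}", contexts);
}
```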
audio_channel_count: i32
The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16 and FLAC are 1-8. Valid values for OGG_OPUS are 1-254. The only valid value for MULAW, AMR, AMR_WB, and SPEEX_WITH_HEADER_BYTE is 1. If 0 or omitted, defaults to one channel (mono). Note: only the first channel is recognized by default. To perform independent recognition on each channel, set enable_separate_recognition_per_channel to true.
enable_word_time_offsets: bool
If true, the top result includes a list of words with start and end time offsets (timestamps) and confidence scores for those words. If false, no word-level time offset information is returned. The default is false.
enable_automatic_punctuation: bool
If ‘true’, adds punctuation to recognition result hypotheses. The default ‘false’ value does not add punctuation to result hypotheses.
enable_separate_recognition_per_channel: bool
This must be set to true explicitly, with audio_channel_count > 1, to get each channel recognized separately. The recognition result will contain a channel_tag field stating which channel the result belongs to. If this is not true, only the first channel is recognized. The request is billed cumulatively for all channels recognized: audio_channel_count multiplied by the length of the audio.
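The two multi-channel fields interact: per-channel recognition requires both the flag and a channel count above one. A minimal sketch of that precondition, using a local stand-in struct so it compiles without the nvidia_riva crate:

```rust
// Local stand-in for the two multi-channel fields of RecognitionConfig.
#[derive(Debug, Default)]
struct ChannelConfig {
    audio_channel_count: i32,
    enable_separate_recognition_per_channel: bool,
}

// Per the docs above, separate per-channel recognition requires BOTH
// audio_channel_count > 1 AND the flag set to true.
fn recognizes_channels_separately(c: &ChannelConfig) -> bool {
    c.audio_channel_count > 1 && c.enable_separate_recognition_per_channel
}

fn main() {
    let stereo = ChannelConfig {
        audio_channel_count: 2,
        enable_separate_recognition_per_channel: true,
    };
    assert!(recognizes_channels_separately(&stereo));

    // The flag alone is not enough: a defaulted (mono) channel count
    // still falls back to first-channel-only recognition.
    let mono = ChannelConfig {
        enable_separate_recognition_per_channel: true,
        ..Default::default()
    };
    assert!(!recognizes_channels_separately(&mono));
}
```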
model: String
Which model to select for the given request.
If empty, Riva will select the right model based on the other RecognitionConfig parameters.
The model should correspond to the name passed to riva-build with the --name argument.
verbatim_transcripts: bool
The verbatim_transcripts flag enables or disables inverse text normalization. true returns exactly what was said, with no normalization applied; false applies inverse text normalization, and is the default.
diarization_config: Option<SpeakerDiarizationConfig>
Config to enable speaker diarization and set additional parameters. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
custom_configuration: HashMap<String, String>
Custom fields for passing request-level configuration options to plugins used in the model pipeline.
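Since the prost-generated struct derives Default, a typical config can be built with struct-update syntax, leaving unset fields at their protobuf defaults. A sketch using a local stand-in that mirrors a subset of the fields above, so it compiles without the nvidia_riva crate:

```rust
// Local stand-in mirroring a subset of RecognitionConfig's fields,
// so this sketch is self-contained.
#[derive(Debug, Default, Clone, PartialEq)]
struct RecognitionConfig {
    encoding: i32,
    sample_rate_hertz: i32,
    language_code: String,
    max_alternatives: i32,
    enable_automatic_punctuation: bool,
}

// Build a typical mono 16 kHz English config; fields not listed keep
// their protobuf defaults via struct-update syntax.
fn build_config() -> RecognitionConfig {
    RecognitionConfig {
        sample_rate_hertz: 16_000,
        language_code: "en-US".to_string(),
        max_alternatives: 1,
        enable_automatic_punctuation: true,
        ..Default::default()
    }
}

fn main() {
    let config = build_config();
    // encoding stays at 0, the proto default for an unset enum field.
    assert_eq!(config.encoding, 0);
    println!("{:?}", config);
}
```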
Implementations
impl RecognitionConfig
pub fn encoding(&self) -> AudioEncoding
Returns the enum value of encoding, or the default if the field is set to an invalid enum value.
pub fn set_encoding(&mut self, value: AudioEncoding)
Sets encoding to the provided enum value.
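These accessors follow the usual prost pattern: the raw field is an i32 on the wire, and the generated getter falls back to the enum's default variant for unknown values. A self-contained sketch of that pattern (the enum variants here are illustrative, not the actual AudioEncoding variants):

```rust
// Illustrative enum standing in for AudioEncoding; variant names are
// assumptions, not the crate's actual variants.
#[derive(Debug, Clone, Copy, PartialEq)]
enum AudioEncoding {
    EncodingUnspecified = 0,
    Linear16 = 1,
    Flac = 2,
}

impl AudioEncoding {
    fn from_i32(value: i32) -> Option<AudioEncoding> {
        match value {
            0 => Some(AudioEncoding::EncodingUnspecified),
            1 => Some(AudioEncoding::Linear16),
            2 => Some(AudioEncoding::Flac),
            _ => None,
        }
    }
}

struct Config {
    encoding: i32, // raw wire value, as in the generated struct
}

impl Config {
    // Mirrors the generated `encoding()` accessor: invalid wire values
    // fall back to the default variant instead of erroring.
    fn encoding(&self) -> AudioEncoding {
        AudioEncoding::from_i32(self.encoding)
            .unwrap_or(AudioEncoding::EncodingUnspecified)
    }

    // Mirrors `set_encoding`: stores the enum as its i32 discriminant.
    fn set_encoding(&mut self, value: AudioEncoding) {
        self.encoding = value as i32;
    }
}

fn main() {
    let mut c = Config { encoding: 99 }; // not a valid variant
    assert_eq!(c.encoding(), AudioEncoding::EncodingUnspecified);
    c.set_encoding(AudioEncoding::Linear16);
    assert_eq!(c.encoding(), AudioEncoding::Linear16);
}
```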
Trait Implementations
impl Clone for RecognitionConfig
fn clone(&self) -> RecognitionConfig
fn clone_from(&mut self, source: &Self)
impl Debug for RecognitionConfig
impl Default for RecognitionConfig
impl Message for RecognitionConfig
fn encoded_len(&self) -> usize
fn encode<B>(&self, buf: &mut B) -> Result<(), EncodeError> where B: BufMut, Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited<B>(&self, buf: &mut B) -> Result<(), EncodeError> where B: BufMut, Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode<B>(buf: B) -> Result<Self, DecodeError> where B: Buf, Self: Default
fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError> where B: Buf, Self: Default
fn merge<B>(&mut self, buf: B) -> Result<(), DecodeError> where B: Buf, Self: Sized
fn merge_length_delimited<B>(&mut self, buf: B) -> Result<(), DecodeError> where B: Buf, Self: Sized
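The length-delimited variants above frame the message by prefixing its bytes with the encoded length as a protobuf varint, which is what lets consecutive messages be read back from one stream. A dependency-free sketch of that framing (hand-rolled varint, not prost's actual implementation):

```rust
// Encode `n` as a protobuf varint: 7 bits per byte, high bit set on
// all bytes except the last.
fn write_varint(mut n: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80);
    }
}

// Decode a varint from the front of `buf`; returns (value, bytes read).
fn read_varint(buf: &[u8]) -> (u64, usize) {
    let (mut n, mut shift, mut i) = (0u64, 0, 0);
    loop {
        let b = buf[i];
        n |= u64::from(b & 0x7f) << shift;
        i += 1;
        if b & 0x80 == 0 {
            break;
        }
        shift += 7;
    }
    (n, i)
}

// Length-delimited framing: varint length prefix, then the payload —
// the shape produced by encode_length_delimited.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    write_varint(payload.len() as u64, &mut out);
    out.extend_from_slice(payload);
    out
}

fn main() {
    let framed = frame(&[1, 2, 3]);
    let (len, header) = read_varint(&framed);
    assert_eq!(len, 3);
    assert_eq!(&framed[header..], &[1, 2, 3]);
}
```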
impl PartialEq<RecognitionConfig> for RecognitionConfig
fn eq(&self, other: &RecognitionConfig) -> bool
Tests self and other values for equality; used by ==.
impl StructuralPartialEq for RecognitionConfig
Auto Trait Implementations
impl RefUnwindSafe for RecognitionConfig
impl Send for RecognitionConfig
impl Sync for RecognitionConfig
impl Unpin for RecognitionConfig
impl UnwindSafe for RecognitionConfig
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps T in a tonic::Request.