#[non_exhaustive]
pub struct StreamingRecognizeResponse {
    pub results: Vec<StreamingRecognitionResult>,
    pub speech_event_type: SpeechEventType,
    pub speech_event_offset: Option<Duration>,
    pub metadata: Option<RecognitionResponseMetadata>,
    /* private fields */
}
StreamingRecognizeResponse is the only message returned to the client by
StreamingRecognize. A series of zero or more StreamingRecognizeResponse
messages is streamed back to the client. If there is no recognizable
audio, no messages are streamed back to the client.
Here are some examples of StreamingRecognizeResponses that might
be returned while processing audio:

1. results { alternatives { transcript: "tube" } stability: 0.01 }
2. results { alternatives { transcript: "to be a" } stability: 0.01 }
3. results { alternatives { transcript: "to be" } stability: 0.9 } results { alternatives { transcript: " or not to be" } stability: 0.01 }
4. results { alternatives { transcript: "to be or not to be" confidence: 0.92 } alternatives { transcript: "to bee or not to bee" } is_final: true }
5. results { alternatives { transcript: " that's" } stability: 0.01 }
6. results { alternatives { transcript: " that is" } stability: 0.9 } results { alternatives { transcript: " the question" } stability: 0.01 }
7. results { alternatives { transcript: " that is the question" confidence: 0.98 } alternatives { transcript: " that was the question" } is_final: true }
Notes:

- Only two of the above responses, #4 and #7, contain final results; they are indicated by is_final: true. Concatenating these together generates the full transcript: "to be or not to be that is the question".
- The others contain interim results. #3 and #6 contain two interim results: the first portion has a high stability and is less likely to change; the second portion has a low stability and is very likely to change. A UI designer might choose to show only high-stability results.
- The specific stability and confidence values shown above are only for illustrative purposes. Actual values may vary.
- In each response, only one of these fields will be set: error, speech_event_type, or one or more (repeated) results.
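The concatenation rule from the first note can be sketched in plain Rust. The `Alternative` and `RecognitionResult` structs below are simplified hypothetical stand-ins for the generated message types, not the crate's actual API:

```rust
// Simplified stand-ins for the generated message types (hypothetical,
// for illustration only; not the crate's actual API).
struct Alternative {
    transcript: String,
}

struct RecognitionResult {
    alternatives: Vec<Alternative>,
    is_final: bool,
}

/// Concatenate the top alternative of every `is_final: true` result,
/// in arrival order, to recover the full transcript.
fn full_transcript(results: &[RecognitionResult]) -> String {
    results
        .iter()
        .filter(|r| r.is_final)
        .filter_map(|r| r.alternatives.first())
        .map(|a| a.transcript.as_str())
        .collect()
}

fn main() {
    // Mirrors the final results of responses #4 and #7 above.
    let results = vec![
        RecognitionResult {
            alternatives: vec![Alternative { transcript: "to be or not to be".into() }],
            is_final: true,
        },
        RecognitionResult {
            alternatives: vec![Alternative { transcript: " that is the question".into() }],
            is_final: true,
        },
    ];
    println!("{}", full_transcript(&results));
    // prints "to be or not to be that is the question"
}
```

Interim results are skipped entirely here; a real client would typically display them provisionally and overwrite them as results settle.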
Fields (Non-exhaustive)

This struct is marked as non-exhaustive: it cannot be constructed outside of its defining crate using the StreamingRecognizeResponse { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.

results: Vec<StreamingRecognitionResult>

This repeated list contains zero or more results that correspond to
consecutive portions of the audio currently being processed. It contains
zero or one is_final=true result (the newly settled portion), followed by
zero or more is_final=false results (the interim results).
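The shape described above (at most one newly settled result at the head, interim results after it) can be split apart with a small helper. `RecognitionResult` here is a simplified hypothetical stand-in for `StreamingRecognitionResult`, not the crate's actual type:

```rust
// Simplified hypothetical stand-in for StreamingRecognitionResult.
struct RecognitionResult {
    transcript: String,
    is_final: bool,
}

/// Split a response's `results` into the newly settled portion (at most
/// one `is_final: true` result at the front) and the interim tail.
fn split_results(
    results: &[RecognitionResult],
) -> (Option<&RecognitionResult>, &[RecognitionResult]) {
    match results.split_first() {
        Some((head, tail)) if head.is_final => (Some(head), tail),
        _ => (None, results),
    }
}

fn main() {
    // One settled result followed by one interim result.
    let results = vec![
        RecognitionResult { transcript: "to be".into(), is_final: true },
        RecognitionResult { transcript: " or not to be".into(), is_final: false },
    ];
    let (settled, interim) = split_results(&results);
    assert_eq!(settled.map(|r| r.transcript.as_str()), Some("to be"));
    assert_eq!(interim.len(), 1);
}
```

A client would typically append the settled portion to its committed transcript and redraw only the interim tail.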
speech_event_type: SpeechEventType

Indicates the type of speech event.

speech_event_offset: Option<Duration>

Time offset between the beginning of the audio and event emission.

metadata: Option<RecognitionResponseMetadata>

Metadata about the recognition.
Implementations

impl StreamingRecognizeResponse

pub fn new() -> Self

pub fn set_results<T, V>(self, v: T) -> Self

Sets the value of results.
pub fn set_speech_event_type<T: Into<SpeechEventType>>(self, v: T) -> Self
Sets the value of speech_event_type.
Example

use google_cloud_speech_v2::model::StreamingRecognizeResponse;
use google_cloud_speech_v2::model::streaming_recognize_response::SpeechEventType;
let x0 = StreamingRecognizeResponse::new().set_speech_event_type(SpeechEventType::EndOfSingleUtterance);
let x1 = StreamingRecognizeResponse::new().set_speech_event_type(SpeechEventType::SpeechActivityBegin);
let x2 = StreamingRecognizeResponse::new().set_speech_event_type(SpeechEventType::SpeechActivityEnd);

pub fn set_speech_event_offset<T>(self, v: T) -> Self
Sets the value of speech_event_offset.
Example

use google_cloud_speech_v2::model::StreamingRecognizeResponse;
use wkt::Duration;
let x = StreamingRecognizeResponse::new().set_speech_event_offset(Duration::default() /* use setters */);

pub fn set_or_clear_speech_event_offset<T>(self, v: Option<T>) -> Self
Sets or clears the value of speech_event_offset.
Example

use google_cloud_speech_v2::model::StreamingRecognizeResponse;
use wkt::Duration;
let x = StreamingRecognizeResponse::new().set_or_clear_speech_event_offset(Some(Duration::default() /* use setters */));
let x = StreamingRecognizeResponse::new().set_or_clear_speech_event_offset(None::<Duration>);

pub fn set_metadata<T>(self, v: T) -> Self
where
    T: Into<RecognitionResponseMetadata>,

Sets the value of metadata.

pub fn set_or_clear_metadata<T>(self, v: Option<T>) -> Self
where
    T: Into<RecognitionResponseMetadata>,

Sets or clears the value of metadata.
Example

use google_cloud_speech_v2::model::StreamingRecognizeResponse;
use google_cloud_speech_v2::model::RecognitionResponseMetadata;
let x = StreamingRecognizeResponse::new().set_or_clear_metadata(Some(RecognitionResponseMetadata::default() /* use setters */));
let x = StreamingRecognizeResponse::new().set_or_clear_metadata(None::<RecognitionResponseMetadata>);

Trait Implementations
impl Clone for StreamingRecognizeResponse

fn clone(&self) -> StreamingRecognizeResponse

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.