pub struct OnlineStream { /* private fields */ }
Context state for streaming speech recognition.
You can run VAD first if you want to reduce compute utilization, but feeding a constant stream of audio into this is perfectly reasonable. Decoding is incremental, with constant latency.
Created by Model::online_stream.
Implementations

impl OnlineStream
pub unsafe fn flush_buffers(&mut self)
Flush extant buffers (feature frames) and signal that no further inputs will be made available.
Safety
Do not call OnlineStream::accept_waveform after calling this function.
That restriction makes it rather less useful, so your mileage may vary. I have not observed any problems calling it anyway, so long as an intervening call to OnlineStream::reset exists:
unsafe { s.flush_buffers() };
s.decode();
s.reset();
s.accept_waveform(16000, &[ ... ]);

Regardless, the upstream docs state not to call OnlineStream::accept_waveform afterwards, so do so at your own risk.
pub fn accept_waveform(&mut self, sample_rate: usize, samples: &[f32])
Accept input audio samples normalized to (-1, 1) and buffer the computed feature frames.
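Since accept_waveform expects f32 samples normalized to (-1, 1), raw 16-bit PCM needs a conversion first. A minimal, self-contained sketch (the function name is mine, not part of this crate):

```rust
/// Convert signed 16-bit PCM to f32 samples in (-1, 1),
/// as accept_waveform expects.
fn pcm16_to_f32(pcm: &[i16]) -> Vec<f32> {
    pcm.iter().map(|&s| s as f32 / 32768.0).collect()
}

fn main() {
    let pcm: [i16; 3] = [0, 16384, -32768];
    let samples = pcm16_to_f32(&pcm);
    assert_eq!(samples, vec![0.0, 0.5, -1.0]);
}
```

The resulting Vec<f32> can be passed directly as the `samples` argument.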
pub unsafe fn decode_unchecked(&mut self)
Decode an unspecified number of feature frames.
It is a logic error to call this function when OnlineStream::is_ready returns false.
Safety
Ensure OnlineStream::is_ready returns true. It is probably not ever worth eliding the check, but hey, you do you.
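The checked pattern is cheap; a sketch, assuming an OnlineStream value `s` already exists:

```rust
// Drain every currently-available feature frame.
while s.is_ready() {
    // SAFETY: is_ready() returned true immediately above.
    unsafe { s.decode_unchecked() };
}
```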
pub fn decode_batch<I: IntoIterator<Item = Q>, Q: DerefMut<Target = Self>>(
    streams: I,
)
Decode all available feature frames in the provided iterator of streams concurrently, in a shared concurrency context.

This batches all operations together, and is thus superior to calling OnlineStream::decode on every OnlineStream in separate threads (though it is not invalid to do so, if desired). The batching introduces a small amount of synchronization overhead in exchange for much better compute utilization.
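A sketch of batched decoding across two streams. Here `model`, `samples_a`, and `samples_b` are hypothetical, and Model::online_stream's exact return type may differ:

```rust
// Create streams from one model and feed each its own audio.
let mut s1 = model.online_stream();
let mut s2 = model.online_stream();
s1.accept_waveform(16000, &samples_a);
s2.accept_waveform(16000, &samples_b);

// One batched call instead of one decode() per stream per thread.
// &mut OnlineStream satisfies DerefMut<Target = OnlineStream>.
OnlineStream::decode_batch([&mut s1, &mut s2]);
```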
pub fn result(&self) -> Result<String>
Returns recognition state since the last call to OnlineStream::reset.
pub fn result_with<F: FnOnce(Cow<'_, str>) -> R, R>(&self, f: F) -> Result<R>
Like OnlineStream::result, but passes the recognition state since the last call to OnlineStream::reset to f as a Cow<'_, str>.
pub fn is_endpoint(&self) -> bool
Returns true if an endpoint has been detected.
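Putting is_endpoint together with result and reset, a segment-at-a-time loop might look like this (`s` and `next_chunk` are hypothetical):

```rust
// Decode what is ready, then cut a segment whenever an endpoint is detected.
while let Some(chunk) = next_chunk() {
    s.accept_waveform(16000, &chunk);
    while s.is_ready() {
        s.decode();
    }
    if s.is_endpoint() {
        let text = s.result()?; // recognition state since the last reset
        println!("segment: {text}");
        s.reset();              // start a fresh segment
    }
}
```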
pub fn sample_rate(&self) -> usize
Returns the native sample rate.
pub fn chunk_size(&self) -> usize
Returns the chunk size at the native sample rate.
The stream becomes ready for decoding once this many samples have been accepted.
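Together with sample_rate, this suggests a straightforward feeding loop. A sketch where `s` and `audio` are hypothetical, and `audio` is assumed to already be a &[f32] at the native rate:

```rust
let rate = s.sample_rate();
for frame in audio.chunks(s.chunk_size()) {
    s.accept_waveform(rate, frame);
    // Each full chunk makes the stream ready for decoding.
    while s.is_ready() {
        s.decode();
    }
}
```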