Trait oboe::AudioStreamSafe

pub trait AudioStreamSafe: AudioStreamBase {
    // Required methods
    fn get_state(&self) -> StreamState;
    fn set_buffer_size_in_frames(
        &mut self,
        _requested_frames: i32
    ) -> Result<i32>;
    fn get_xrun_count(&self) -> Result<i32>;
    fn is_xrun_count_supported(&self) -> bool;
    fn get_frames_per_burst(&mut self) -> i32;
    fn get_bytes_per_sample(&mut self) -> i32;
    fn calculate_latency_millis(&mut self) -> Result<f64>;
    fn get_timestamp(&mut self, clock_id: i32) -> Result<FrameTimestamp>;
    fn get_audio_api(&self) -> AudioApi;
    fn get_available_frames(&mut self) -> Result<i32>;

    // Provided methods
    fn get_bytes_per_frame(&mut self) -> i32 { ... }
    fn uses_aaudio(&self) -> bool { ... }
}

Safe base trait for an Oboe audio stream.

Required Methods§


fn get_state(&self) -> StreamState

Query the current state, e.g. StreamState::Pausing.


fn set_buffer_size_in_frames(&mut self, _requested_frames: i32) -> Result<i32>

This can be used to adjust the latency of the buffer by changing the threshold where blocking will occur. By combining this with AudioStreamSafe::get_xrun_count, the latency can be tuned at run-time for each device.

This cannot be set higher than AudioStreamBase::get_buffer_capacity_in_frames.
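The tuning loop described above can be sketched as a pure function. This is an illustrative helper, not part of the oboe crate: in real code, `current_size`, `frames_per_burst`, and `capacity` would come from the stream via set_buffer_size_in_frames, get_frames_per_burst, and get_buffer_capacity_in_frames, and the XRun counts from get_xrun_count.

```rust
/// Hypothetical tuning step: grow the buffer by one burst whenever new
/// XRuns were observed since the last check, never exceeding capacity.
fn tune_buffer_size(
    current_size: i32,
    frames_per_burst: i32,
    capacity: i32,
    previous_xruns: i32,
    current_xruns: i32,
) -> i32 {
    if current_xruns > previous_xruns {
        // More XRuns since the last check: trade latency for stability.
        (current_size + frames_per_burst).min(capacity)
    } else {
        current_size
    }
}

fn main() {
    // 2 bursts of 192 frames, capacity 1536, one new XRun observed.
    let next = tune_buffer_size(384, 192, 1536, 0, 1);
    println!("{next}"); // prints 576
}
```

Growing by whole bursts keeps the buffer size aligned with the endpoint's transfer size, which is the usual granularity for latency/stability trade-offs.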


fn get_xrun_count(&self) -> Result<i32>

An XRun is an Underrun or an Overrun. During playing, an underrun will occur if the stream is not written in time and the system runs out of valid data. During recording, an overrun will occur if the stream is not read in time and there is no place to put the incoming data so it is discarded.

An underrun or overrun can cause an audible “pop” or “glitch”.


fn is_xrun_count_supported(&self) -> bool

Returns true if XRun counts are supported on the stream.


fn get_frames_per_burst(&mut self) -> i32

Query the number of frames that are read or written by the endpoint at one time.


fn get_bytes_per_sample(&mut self) -> i32

Get the number of bytes per sample. This is calculated using the sample format. For example, a stream using 16-bit integer samples will have 2 bytes per sample.

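The mapping from sample format to sample size is plain arithmetic. The enum below is a standalone illustration, not the crate's own AudioFormat type:

```rust
/// Illustrative sample formats; the oboe crate exposes its own
/// format enum, this one exists only for the sketch.
#[derive(Clone, Copy)]
enum SampleFormat {
    I16,
    I24,
    I32,
    F32,
}

/// Bytes occupied by one sample of the given format.
fn bytes_per_sample(format: SampleFormat) -> i32 {
    match format {
        SampleFormat::I16 => 2,
        SampleFormat::I24 => 3,
        SampleFormat::I32 | SampleFormat::F32 => 4,
    }
}

fn main() {
    println!("{}", bytes_per_sample(SampleFormat::I16)); // prints 2
}
```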


fn calculate_latency_millis(&mut self) -> Result<f64>

Calculate the latency of a stream based on get_timestamp().

Output latency is the time it takes for a given frame to travel from the app to some type of digital-to-analog converter. If the DAC is external, for example in a USB interface or a TV connected by HDMI, then there may be additional latency that the Android device is unaware of.

Input latency is the time it takes for a given frame to travel from an analog-to-digital converter (ADC) to the app.

Note that the latency of an OUTPUT stream will increase abruptly when you write data to it and then decrease slowly over time as the data is consumed.

The latency of an INPUT stream will decrease abruptly when you read data from it and then increase slowly over time as more data arrives.

The latency of an OUTPUT stream is generally higher than the INPUT latency because an app generally tries to keep the OUTPUT buffer full and the INPUT buffer empty.
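For an output stream, the arithmetic behind such an estimate can be sketched as follows. This is a simplified standalone function, not the crate's implementation; the timestamp inputs would come from get_timestamp() and the sample rate from the stream's configuration:

```rust
/// Estimate output latency in milliseconds from a recent timestamp.
/// `frames_written` is the app's write position; `ts_frame` is the
/// frame index that reached the DAC at time `ts_time_nanos`.
fn estimate_output_latency_ms(
    frames_written: i64,
    ts_frame: i64,
    ts_time_nanos: i64,
    now_nanos: i64,
    sample_rate: f64,
) -> f64 {
    // Time at which the most recently written frame will play: the
    // timestamped frame's play time plus the duration of the frames
    // queued after it.
    let frames_ahead = (frames_written - ts_frame) as f64;
    let play_time_nanos = ts_time_nanos as f64 + frames_ahead / sample_rate * 1e9;
    // Latency is how far in the future that play time lies.
    (play_time_nanos - now_nanos as f64) / 1e6
}

fn main() {
    // 4800 frames queued ahead of the timestamped frame at 48 kHz
    // is 100 ms, observed at the exact timestamp instant.
    let ms = estimate_output_latency_ms(10_000, 5_200, 0, 0, 48_000.0);
    println!("{ms}"); // prints 100
}
```

This also shows why output latency jumps after a write (frames_ahead grows at once) and then decays as the DAC consumes frames.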


fn get_timestamp(&mut self, clock_id: i32) -> Result<FrameTimestamp>

Get the estimated time that the frame at frame_position entered or left the audio processing pipeline.

This can be used to coordinate events and interactions with the external environment, and to estimate the latency of an audio stream. An example of usage can be found in the hello-oboe sample (search for “calculate_current_output_latency_millis”).

The time is based on the implementation’s best effort, using whatever knowledge is available to the system, but cannot account for any delay unknown to the implementation.

The clock_id parameter selects the type of clock to use, e.g. CLOCK_MONOTONIC. Returns a FrameTimestamp containing the position and time at which a particular audio frame entered or left the audio processing pipeline, or an error if the operation failed.


fn get_audio_api(&self) -> AudioApi

Get the underlying audio API which the stream uses.


fn get_available_frames(&mut self) -> Result<i32>

Returns the number of frames of data currently in the buffer.

Provided Methods§


fn get_bytes_per_frame(&mut self) -> i32

Get the number of bytes in each audio frame. This is calculated using the channel count and the sample format. For example, a 2 channel floating point stream will have 2 * 4 = 8 bytes per frame.
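That calculation is simply the product of channel count and sample size; a minimal sketch (the provided method presumably derives both factors from the stream's configuration):

```rust
/// Sketch of the frame-size calculation: channel count times
/// bytes per sample (4 for f32, 2 for i16, and so on).
fn bytes_per_frame(channel_count: i32, bytes_per_sample: i32) -> i32 {
    channel_count * bytes_per_sample
}

fn main() {
    // Stereo 32-bit float: 2 * 4 = 8 bytes per frame.
    println!("{}", bytes_per_frame(2, 4)); // prints 8
}
```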


fn uses_aaudio(&self) -> bool

Returns true if the underlying audio API is AAudio.

Implementors§


impl<T: RawAudioStream + RawAudioStreamBase> AudioStreamSafe for T