Trait BaseAudioContext

pub trait BaseAudioContext {
    // Provided methods
    fn decode_audio_data_sync<R: Read + Send + Sync + 'static>(
        &self,
        input: R,
    ) -> Result<AudioBuffer, Box<dyn Error + Send + Sync>> { ... }
    fn decode_audio_data<R: Read + Send + Sync + 'static>(
        &self,
        input: R,
    ) -> impl Future<Output = Result<AudioBuffer, Box<dyn Error + Send + Sync>>> + Send + 'static { ... }
    fn create_buffer(
        &self,
        number_of_channels: usize,
        length: usize,
        sample_rate: f32,
    ) -> AudioBuffer { ... }
    fn create_analyser(&self) -> AnalyserNode { ... }
    fn create_biquad_filter(&self) -> BiquadFilterNode { ... }
    fn create_buffer_source(&self) -> AudioBufferSourceNode { ... }
    fn create_constant_source(&self) -> ConstantSourceNode { ... }
    fn create_convolver(&self) -> ConvolverNode { ... }
    fn create_channel_merger(&self, number_of_inputs: usize) -> ChannelMergerNode { ... }
    fn create_channel_splitter(&self, number_of_outputs: usize) -> ChannelSplitterNode { ... }
    fn create_delay(&self, max_delay_time: f64) -> DelayNode { ... }
    fn create_dynamics_compressor(&self) -> DynamicsCompressorNode { ... }
    fn create_gain(&self) -> GainNode { ... }
    fn create_iir_filter(&self, feedforward: Vec<f64>, feedback: Vec<f64>) -> IIRFilterNode { ... }
    fn create_oscillator(&self) -> OscillatorNode { ... }
    fn create_panner(&self) -> PannerNode { ... }
    fn create_periodic_wave(&self, options: PeriodicWaveOptions) -> PeriodicWave { ... }
    fn create_script_processor(
        &self,
        buffer_size: usize,
        number_of_input_channels: usize,
        number_of_output_channels: usize,
    ) -> ScriptProcessorNode { ... }
    fn create_stereo_panner(&self) -> StereoPannerNode { ... }
    fn create_wave_shaper(&self) -> WaveShaperNode { ... }
    fn destination(&self) -> AudioDestinationNode { ... }
    fn listener(&self) -> AudioListener { ... }
    fn sample_rate(&self) -> f32 { ... }
    fn state(&self) -> AudioContextState { ... }
    fn current_time(&self) -> f64 { ... }
    fn create_audio_param(
        &self,
        opts: AudioParamDescriptor,
        dest: &AudioContextRegistration,
    ) -> (AudioParam, AudioParamId) { ... }
    fn set_onstatechange<F: FnMut(Event) + Send + 'static>(&self, callback: F) { ... }
    fn clear_onstatechange(&self) { ... }
}

The interface representing an audio-processing graph built from audio modules linked together, each represented by an AudioNode.

An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding.

Provided Methods


fn decode_audio_data_sync<R: Read + Send + Sync + 'static>(
    &self,
    input: R,
) -> Result<AudioBuffer, Box<dyn Error + Send + Sync>>

Decode an AudioBuffer from a given input stream.

The current implementation can decode FLAC, Opus, PCM, Vorbis, and WAV.

In addition to the official spec, the input parameter can be any byte stream (not just an array). This means you can decode audio data from a file, a network stream, an in-memory buffer, or any other std::io::Read implementer. The data is buffered internally, so you should not wrap the source in a BufReader.

This function operates synchronously, which may be undesirable on the control thread. The example shows how to avoid this. See also the async method Self::decode_audio_data.

Errors

This method returns an error when reading the input fails (IO), the media type cannot be determined (MIME sniffing), or decoding fails.

Usage
use std::io::Cursor;
use web_audio_api::context::{BaseAudioContext, OfflineAudioContext};

let input = Cursor::new(vec![0; 32]); // or a File, TcpStream, ...

let context = OfflineAudioContext::new(2, 44_100, 44_100.);
let handle = std::thread::spawn(move || context.decode_audio_data_sync(input));

// do other things

// await result from the decoder thread
let decode_buffer_result = handle.join();
Examples

The following example shows how to use a thread pool for audio buffer decoding:

cargo run --release --example decode_multithreaded


fn decode_audio_data<R: Read + Send + Sync + 'static>(
    &self,
    input: R,
) -> impl Future<Output = Result<AudioBuffer, Box<dyn Error + Send + Sync>>> + Send + 'static

Decode an AudioBuffer from a given input stream.

The current implementation can decode FLAC, Opus, PCM, Vorbis, and WAV.

In addition to the official spec, the input parameter can be any byte stream (not just an array). This means you can decode audio data from a file, a network stream, an in-memory buffer, or any other std::io::Read implementer. The data is buffered internally, so you should not wrap the source in a BufReader.

Warning: the current implementation still uses blocking IO, so it is best to use Tokio's spawn_blocking to run the decoding on a thread dedicated to blocking operations. See also the synchronous method Self::decode_audio_data_sync.

Errors

This method returns an error when reading the input fails (IO), the media type cannot be determined (MIME sniffing), or decoding fails.


fn create_buffer(&self, number_of_channels: usize, length: usize, sample_rate: f32) -> AudioBuffer

Create a new "in-memory" AudioBuffer with the given number of channels, length (i.e. number of samples per channel) and sample rate.

Note: In most cases you will want the sample rate to match the current audio context sample rate.

fn create_analyser(&self) -> AnalyserNode

Creates an AnalyserNode

fn create_biquad_filter(&self) -> BiquadFilterNode

Creates a BiquadFilterNode, which implements a second-order filter


fn create_buffer_source(&self) -> AudioBufferSourceNode

Creates an AudioBufferSourceNode

fn create_constant_source(&self) -> ConstantSourceNode

Creates a ConstantSourceNode, a source representing a constant value

fn create_convolver(&self) -> ConvolverNode

Creates a ConvolverNode, a processing node which applies linear convolution


fn create_channel_merger(&self, number_of_inputs: usize) -> ChannelMergerNode

Creates a ChannelMergerNode


fn create_channel_splitter(&self, number_of_outputs: usize) -> ChannelSplitterNode

Creates a ChannelSplitterNode


fn create_delay(&self, max_delay_time: f64) -> DelayNode

Creates a DelayNode, delaying the audio signal


fn create_dynamics_compressor(&self) -> DynamicsCompressorNode

Creates a DynamicsCompressorNode, compressing the audio signal

fn create_gain(&self) -> GainNode

Creates a GainNode, to control audio volume


fn create_iir_filter(&self, feedforward: Vec<f64>, feedback: Vec<f64>) -> IIRFilterNode

Creates an IIRFilterNode

Arguments
  • feedforward - An array of the feedforward (numerator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20
  • feedback - An array of the feedback (denominator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20

fn create_oscillator(&self) -> OscillatorNode

Creates an OscillatorNode, a source representing a periodic waveform.


fn create_panner(&self) -> PannerNode

Creates a PannerNode


fn create_periodic_wave(&self, options: PeriodicWaveOptions) -> PeriodicWave

Creates a periodic wave

Please note that this constructor deviates slightly from the spec by requiring a single argument with the periodic wave options.


fn create_script_processor(
    &self,
    buffer_size: usize,
    number_of_input_channels: usize,
    number_of_output_channels: usize,
) -> ScriptProcessorNode

Creates a ScriptProcessorNode for custom audio processing (deprecated).

Panics

This function panics if:

  • buffer_size is not 256, 512, 1024, 2048, 4096, 8192, or 16384
  • the number of input and output channels are both zero
  • either of the channel counts exceeds crate::MAX_CHANNELS
fn create_stereo_panner(&self) -> StereoPannerNode

Creates a StereoPannerNode to pan a stereo output


fn create_wave_shaper(&self) -> WaveShaperNode

Creates a WaveShaperNode


fn destination(&self) -> AudioDestinationNode

Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.


fn listener(&self) -> AudioListener

Returns the AudioListener which is used for 3D spatialization


fn sample_rate(&self) -> f32

The sample rate (in sample-frames per second) at which the AudioContext handles audio.


fn state(&self) -> AudioContextState

Returns the state of the current context


fn current_time(&self) -> f64

This is the time in seconds of the sample frame immediately following the last sample-frame in the block of audio most recently processed by the context’s rendering graph.


fn create_audio_param(
    &self,
    opts: AudioParamDescriptor,
    dest: &AudioContextRegistration,
) -> (AudioParam, AudioParamId)

Create an AudioParam.

Call this inside the register closure when setting up your AudioNode


fn set_onstatechange<F: FnMut(Event) + Send + 'static>(&self, callback: F)

Register a callback to run when the state of the AudioContext has changed

Only a single event handler is active at any time. Calling this method multiple times will override the previous event handler.


fn clear_onstatechange(&self)

Unset the callback to run when the state of the AudioContext has changed

Dyn Compatibility

This trait is not dyn compatible.

In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.

Implementors