pub trait AsBaseAudioContext {
fn base(&self) -> &BaseAudioContext;
fn decode_audio_data<R: Read + Send + 'static>(
        &self,
        input: R
    ) -> Result<AudioBuffer, Box<dyn Error + Send + Sync>> { ... }
fn create_buffer(
        &self,
        number_of_channels: usize,
        length: usize,
        sample_rate: SampleRate
    ) -> AudioBuffer { ... }
fn create_oscillator(&self) -> OscillatorNode { ... }
fn create_stereo_panner(&self) -> StereoPannerNode { ... }
fn create_gain(&self) -> GainNode { ... }
fn create_constant_source(&self) -> ConstantSourceNode { ... }
fn create_iir_filter(
        &self,
        feedforward: Vec<f64>,
        feedback: Vec<f64>
    ) -> IirFilterNode { ... }
fn create_delay(&self, max_delay_time: f64) -> DelayNode { ... }
fn create_biquad_filter(&self) -> BiquadFilterNode { ... }
fn create_wave_shaper(&self) -> WaveShaperNode { ... }
fn create_channel_splitter(
        &self,
        number_of_outputs: u32
    ) -> ChannelSplitterNode { ... }
fn create_channel_merger(&self, number_of_inputs: u32) -> ChannelMergerNode { ... }
fn create_media_stream_source<M: MediaStream>(
        &self,
        media: M
    ) -> MediaStreamAudioSourceNode { ... }
fn create_media_element_source(
        &self,
        media: MediaElement
    ) -> MediaElementAudioSourceNode { ... }
fn create_buffer_source(&self) -> AudioBufferSourceNode { ... }
fn create_panner(&self) -> PannerNode { ... }
fn create_analyser(&self) -> AnalyserNode { ... }
fn create_periodic_wave(
        &self,
        options: Option<PeriodicWaveOptions>
    ) -> PeriodicWave { ... }
fn create_audio_param(
        &self,
        opts: AudioParamOptions,
        dest: &AudioNodeId
    ) -> (AudioParam, AudioParamId) { ... }
fn destination(&self) -> AudioDestinationNode { ... }
fn listener(&self) -> AudioListener { ... }
fn sample_rate(&self) -> f32 { ... }
fn sample_rate_raw(&self) -> SampleRate { ... }
fn current_time(&self) -> f64 { ... }
}

Retrieve the BaseAudioContext from the concrete AudioContext

Required methods

Retrieves the BaseAudioContext associated with the concrete AudioContext.

Provided methods

Decode an AudioBuffer from a given input stream.

The current implementation can decode FLAC, Opus, PCM, Vorbis, and Wav.

In addition to the official spec, the input parameter can be any byte stream (not just an array). This means you can decode audio data from a file, network stream, or in-memory buffer, or any other std::io::Read implementor. The data is buffered internally, so you should not wrap the source in a BufReader.

This function operates synchronously, which may be undesirable on the control thread. The example shows how to avoid this.

Errors

This method returns an Error in various cases (IO, mime sniffing, decoding).

Example
use std::io::Cursor;
use web_audio_api::SampleRate;
use web_audio_api::context::{AsBaseAudioContext, OfflineAudioContext};

let input = Cursor::new(vec![0; 32]); // or a File, TcpStream, ...

let context = OfflineAudioContext::new(2, 44_100, SampleRate(44_100));
let handle = std::thread::spawn(move || context.decode_audio_data(input));

// do other things

// await result from the decoder thread
let decode_buffer_result = handle.join();

Create a new “in-memory” AudioBuffer with the given number of channels, length (i.e. number of samples per channel) and sample rate.

Note: In most cases you will want the sample rate to match the current audio context sample rate.
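As a sketch of this, reusing the OfflineAudioContext from the decoding example above, a buffer matching the context's own sample rate can be created like so:

```rust
use web_audio_api::SampleRate;
use web_audio_api::context::{AsBaseAudioContext, OfflineAudioContext};

let context = OfflineAudioContext::new(2, 44_100, SampleRate(44_100));

// 2 channels, 1 second worth of samples, at the context's own (raw) sample rate
let buffer = context.create_buffer(2, 44_100, context.sample_rate_raw());
```

Passing `sample_rate_raw()` back into `create_buffer` avoids a sample-rate mismatch between the buffer and the context.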

Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.
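A minimal usage sketch; the `connect` and `start` calls (and the `web_audio_api::node` import path) are assumptions based on the usual AudioNode API of this crate, not part of this listing:

```rust
use web_audio_api::SampleRate;
use web_audio_api::context::{AsBaseAudioContext, OfflineAudioContext};
use web_audio_api::node::AudioNode; // assumed path for the connect() method

let context = OfflineAudioContext::new(2, 44_100, SampleRate(44_100));

let osc = context.create_oscillator();
// route the tone to the context's destination (the audio-rendering device)
osc.connect(&context.destination());
osc.start(); // source nodes produce no sound until started
```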

Creates a StereoPannerNode to pan a stereo output.

Creates a GainNode to control audio volume.

Creates a ConstantSourceNode, a source representing a constant value.

Creates an IirFilterNode

Arguments
  • feedforward - An array of the feedforward (numerator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20
  • feedback - An array of the feedback (denominator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20
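For illustration, here is a two-tap averaging (lowpass-like) filter; the coefficient values are illustrative and not taken from the crate docs:

```rust
use web_audio_api::SampleRate;
use web_audio_api::context::{AsBaseAudioContext, OfflineAudioContext};

let context = OfflineAudioContext::new(2, 44_100, SampleRate(44_100));

// y[n] = 0.5 * x[n] + 0.5 * x[n - 1]; a feedback of [1.0] means no recursion
let feedforward = vec![0.5, 0.5];
let feedback = vec![1.0];
let filter = context.create_iir_filter(feedforward, feedback);
```

Both coefficient arrays are capped at 20 entries, per the argument descriptions above.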

Creates a DelayNode, delaying the audio signal

Creates a BiquadFilterNode, which implements a second-order filter.

Creates a WaveShaperNode

Creates a ChannelSplitterNode

Creates a ChannelMergerNode

Creates a MediaStreamAudioSourceNode from a MediaStream.

Creates a MediaElementAudioSourceNode from a MediaElement

Note: do not forget to start() the node.

Creates an AudioBufferSourceNode

Note: do not forget to start() the node.
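A sketch of the typical pattern; `set_buffer` and `start` are assumptions about the node's API (check the AudioBufferSourceNode docs), the rest uses only methods from this trait:

```rust
use web_audio_api::SampleRate;
use web_audio_api::context::{AsBaseAudioContext, OfflineAudioContext};

let context = OfflineAudioContext::new(2, 44_100, SampleRate(44_100));

// an empty one-second, two-channel buffer to play back
let buffer = context.create_buffer(2, 44_100, context.sample_rate_raw());

let src = context.create_buffer_source();
src.set_buffer(buffer); // assumed setter; see the node's own docs
src.start(); // easy to forget: nothing plays until start() is called
```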

Creates a PannerNode

Creates an AnalyserNode.

Creates a periodic wave

Create an AudioParam.

Call this inside the register closure when setting up your AudioNode.

Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.

Returns the AudioListener which is used for 3D spatialization

The sample rate (in sample-frames per second) at which the AudioContext handles audio.

The raw sample rate of the AudioContext (which has more precision than the float sample_rate() value).

This is the time in seconds of the sample frame immediately following the last sample-frame in the block of audio most recently processed by the context’s rendering graph.
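Putting the three accessors together on an OfflineAudioContext (all methods here appear in the listing above):

```rust
use web_audio_api::SampleRate;
use web_audio_api::context::{AsBaseAudioContext, OfflineAudioContext};

let context = OfflineAudioContext::new(2, 44_100, SampleRate(44_100));

let rate: f32 = context.sample_rate();           // convenience float value
let raw: SampleRate = context.sample_rate_raw(); // exact value, more precision
let now: f64 = context.current_time();           // seconds, just past the last rendered block
```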

Implementors