Struct OfflineAudioContext

pub struct OfflineAudioContext { /* private fields */ }

The OfflineAudioContext doesn’t render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.
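
A minimal usage sketch (not taken verbatim from the crate documentation): build a small graph on the context and render it into an AudioBuffer.

use web_audio_api::context::BaseAudioContext;
use web_audio_api::context::OfflineAudioContext;
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

// two output channels, one second of audio at 44.1 kHz
let mut context = OfflineAudioContext::new(2, 44_100, 44_100.);

// a simple graph: oscillator -> destination
let mut osc = context.create_oscillator();
osc.connect(&context.destination());
osc.start();

// render as fast as possible instead of playing to the audio hardware
let buffer = context.start_rendering_sync();
assert_eq!(buffer.number_of_channels(), 2);
assert_eq!(buffer.length(), 44_100);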

Implementations§

impl OfflineAudioContext

pub fn new(number_of_channels: usize, length: usize, sample_rate: f32) -> Self

Creates an OfflineAudioContext instance

§Arguments
  • number_of_channels - number of output channels to render
  • length - length of the rendered audio buffer in sample frames
  • sample_rate - output sample rate, in Hz
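§Example usage

For example (a sketch), a mono context holding 512 sample frames at 44.1 kHz:

use web_audio_api::context::OfflineAudioContext;

let context = OfflineAudioContext::new(1, 512, 44_100.);
assert_eq!(context.length(), 512);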

pub fn start_rendering_sync(&mut self) -> AudioBuffer

Given the current connections and scheduled changes, starts rendering audio.

This function blocks the current thread and returns the rendered AudioBuffer synchronously.

This method will only adhere to scheduled suspensions via Self::suspend_sync and will ignore those provided via Self::suspend.

§Panics

Panics if this method is called multiple times.
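
§Example usage

A minimal sketch: rendering an empty graph still yields a buffer of the requested size.

use web_audio_api::context::OfflineAudioContext;

let mut context = OfflineAudioContext::new(2, 128, 44_100.);

// nothing is connected, so the rendered buffer contains silence
let buffer = context.start_rendering_sync();
assert_eq!(buffer.number_of_channels(), 2);
assert_eq!(buffer.length(), 128);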

pub async fn start_rendering(&self) -> AudioBuffer

Given the current connections and scheduled changes, starts rendering audio.

Rendering is purely CPU bound and contains no await points, so calling this method will block the executor until completion or until the context is suspended.

This method will only adhere to scheduled suspensions via Self::suspend and will ignore those provided via Self::suspend_sync.

§Panics

Panics if this method is called multiple times.
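
§Example usage

A sketch driving the rendering future with the futures crate's block_on (any executor works):

use futures::executor;
use web_audio_api::context::OfflineAudioContext;

let context = OfflineAudioContext::new(1, 512, 44_100.);

// blocks the current thread until rendering has completed
let buffer = executor::block_on(context.start_rendering());
assert_eq!(buffer.length(), 512);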

pub fn length(&self) -> usize

Returns the length of the audio buffer to be rendered, in sample frames.

pub async fn suspend(&self, suspend_time: f64)

Schedules a suspension of the time progression in the audio context at the specified time and returns a future that resolves once the suspension takes effect.

The specified time is quantized and rounded up to the render quantum size.

§Panics

Panics if the quantized frame number

  • is negative or
  • is less than or equal to the current time or
  • is greater than or equal to the total render duration or
  • is scheduled by another suspend for the same time
§Example usage
use futures::{executor, join};
use futures::FutureExt as _;
use std::sync::Arc;

use web_audio_api::context::BaseAudioContext;
use web_audio_api::context::OfflineAudioContext;
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

let context = Arc::new(OfflineAudioContext::new(1, 512, 44_100.));
let context_clone = Arc::clone(&context);

let suspend_promise = context.suspend(128. / 44_100.).then(|_| async move {
    let mut src = context_clone.create_constant_source();
    src.connect(&context_clone.destination());
    src.start();
    context_clone.resume().await;
});

let render_promise = context.start_rendering();

let buffer = executor::block_on(async move { join!(suspend_promise, render_promise).1 });
assert_eq!(buffer.number_of_channels(), 1);
assert_eq!(buffer.length(), 512);

pub fn suspend_sync<F: FnOnce(&mut Self) + Send + Sync + 'static>(&mut self, suspend_time: f64, callback: F)

Schedules a suspension of the time progression in the audio context at the specified time and runs a callback.

This is a synchronous version of Self::suspend that runs the provided callback at the given suspend_time. The rendering resumes automatically after the callback has run, so there is no resume_sync method.

The specified time is quantized and rounded up to the render quantum size.

§Panics

Panics if the quantized frame number

  • is negative or
  • is less than or equal to the current time or
  • is greater than or equal to the total render duration or
  • is scheduled by another suspend for the same time
§Example usage
use web_audio_api::context::BaseAudioContext;
use web_audio_api::context::OfflineAudioContext;
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

let mut context = OfflineAudioContext::new(1, 512, 44_100.);

context.suspend_sync(128. / 44_100., |context| {
    let mut src = context.create_constant_source();
    src.connect(&context.destination());
    src.start();
});

let buffer = context.start_rendering_sync();
assert_eq!(buffer.number_of_channels(), 1);
assert_eq!(buffer.length(), 512);

pub async fn resume(&self)

Resumes the progression of the OfflineAudioContext's current_time when it has been suspended.

§Panics

Panics when the context is closed or rendering has not started

pub fn set_oncomplete<F: FnOnce(OfflineAudioCompletionEvent) + Send + 'static>(&self, callback: F)

Register a callback to run when the rendering has completed

Only a single event handler is active at any time. Calling this method multiple times will override the previous event handler.
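
§Example usage

A sketch: register a handler before rendering; it only logs here and does not inspect the OfflineAudioCompletionEvent it receives.

use web_audio_api::context::OfflineAudioContext;

let mut context = OfflineAudioContext::new(1, 512, 44_100.);

// the handler fires once, after rendering has finished
context.set_oncomplete(|_event| println!("rendering complete"));

let _buffer = context.start_rendering_sync();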

pub fn clear_oncomplete(&self)

Unset the callback to run when the rendering has completed

Trait Implementations§

impl BaseAudioContext for OfflineAudioContext

fn decode_audio_data_sync<R: Read + Send + Sync + 'static>(&self, input: R) -> Result<AudioBuffer, Box<dyn Error + Send + Sync>>

Decode an AudioBuffer from a given input stream. Read more
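
For instance (a sketch; the WAV file path is illustrative and assumed to exist):

use std::fs::File;
use web_audio_api::context::BaseAudioContext;
use web_audio_api::context::OfflineAudioContext;

let context = OfflineAudioContext::new(2, 44_100, 44_100.);

// a std::fs::File satisfies the Read + Send + Sync + 'static bound
let file = File::open("samples/sample.wav").expect("file not found");
let audio_buffer = context.decode_audio_data_sync(file).expect("decoding failed");
println!("decoded {} frames", audio_buffer.length());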

fn decode_audio_data<R: Read + Send + Sync + 'static>(&self, input: R) -> impl Future<Output = Result<AudioBuffer, Box<dyn Error + Send + Sync>>> + Send + 'static

Decode an AudioBuffer from a given input stream. Read more

fn create_buffer(&self, number_of_channels: usize, length: usize, sample_rate: f32) -> AudioBuffer

Create a new “in-memory” AudioBuffer with the given number of channels, length (i.e. number of samples per channel) and sample rate. Read more
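
For example (a sketch): allocate a one-second stereo buffer at the context's sample rate.

use web_audio_api::context::BaseAudioContext;
use web_audio_api::context::OfflineAudioContext;

let context = OfflineAudioContext::new(2, 44_100, 44_100.);

// one second of silence, ready to be filled with samples
let buffer = context.create_buffer(2, 44_100, 44_100.);
assert_eq!(buffer.number_of_channels(), 2);
assert_eq!(buffer.length(), 44_100);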

fn create_analyser(&self) -> AnalyserNode

Creates an AnalyserNode

fn create_biquad_filter(&self) -> BiquadFilterNode

Creates a BiquadFilterNode which implements a second-order filter

fn create_buffer_source(&self) -> AudioBufferSourceNode

Creates an AudioBufferSourceNode

fn create_constant_source(&self) -> ConstantSourceNode

Creates a ConstantSourceNode, a source representing a constant value

fn create_convolver(&self) -> ConvolverNode

Creates a ConvolverNode, a processing node which applies linear convolution

fn create_channel_merger(&self, number_of_inputs: usize) -> ChannelMergerNode

Creates a ChannelMergerNode

fn create_channel_splitter(&self, number_of_outputs: usize) -> ChannelSplitterNode

Creates a ChannelSplitterNode

fn create_delay(&self, max_delay_time: f64) -> DelayNode

Creates a DelayNode, delaying the audio signal

fn create_dynamics_compressor(&self) -> DynamicsCompressorNode

Creates a DynamicsCompressorNode, compressing the audio signal

fn create_gain(&self) -> GainNode

Creates a GainNode to control audio volume

fn create_iir_filter(&self, feedforward: Vec<f64>, feedback: Vec<f64>) -> IIRFilterNode

Creates an IIRFilterNode Read more

fn create_oscillator(&self) -> OscillatorNode

Creates an OscillatorNode, a source representing a periodic waveform.

fn create_panner(&self) -> PannerNode

Creates a PannerNode

fn create_periodic_wave(&self, options: PeriodicWaveOptions) -> PeriodicWave

Creates a periodic wave Read more

fn create_script_processor(&self, buffer_size: usize, number_of_input_channels: usize, number_of_output_channels: usize) -> ScriptProcessorNode

Creates a ScriptProcessorNode for custom audio processing (deprecated). Read more

fn create_stereo_panner(&self) -> StereoPannerNode

Creates a StereoPannerNode to pan a stereo output

fn create_wave_shaper(&self) -> WaveShaperNode

Creates a WaveShaperNode

fn destination(&self) -> AudioDestinationNode

Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.

fn listener(&self) -> AudioListener

Returns the AudioListener which is used for 3D spatialization

fn sample_rate(&self) -> f32

The sample rate (in sample-frames per second) at which the AudioContext handles audio.

fn state(&self) -> AudioContextState

Returns the state of the current context

fn current_time(&self) -> f64

This is the time in seconds of the sample frame immediately following the last sample-frame in the block of audio most recently processed by the context’s rendering graph.

fn create_audio_param(&self, opts: AudioParamDescriptor, dest: &AudioContextRegistration) -> (AudioParam, AudioParamId)

Create an AudioParam. Read more

fn set_onstatechange<F: FnMut(Event) + Send + 'static>(&self, callback: F)

Register callback to run when the state of the AudioContext has changed Read more

fn clear_onstatechange(&self)

Unset the callback to run when the state of the AudioContext has changed

impl Debug for OfflineAudioContext

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<S> FromSample<S> for S

fn from_sample_(s: S) -> S

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<F, T> IntoSample<T> for F
where T: FromSample<F>,

fn into_sample(self) -> T

impl<T, U> ToSample<U> for T
where U: FromSample<T>,

fn to_sample_(self) -> U

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<S, T> Duplex<S> for T
where T: FromSample<S> + ToSample<S>,