pub struct OfflineAudioContext { /* private fields */ }

The OfflineAudioContext doesn’t render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.
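A minimal end-to-end sketch of offline rendering, assuming this page documents the web_audio_api crate (module paths web_audio_api::context and web_audio_api::node), that the blocking render method described below is named start_rendering, and that the sample rate is accepted as a plain number; adjust names and paths to the crate version at hand.

use web_audio_api::context::{BaseAudioContext, OfflineAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

// Render 3 seconds of stereo audio at 44.1 kHz into an in-memory buffer.
let mut context = OfflineAudioContext::new(2, 3 * 44_100, 44_100.);

// A trivial graph: a default (440 Hz sine) oscillator routed to the destination.
let osc = context.create_oscillator();
osc.connect(&context.destination());
osc.start();

// Blocks until the whole buffer has been rendered, then returns it.
let buffer = context.start_rendering();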

Implementations

Creates an OfflineAudioContext instance

Arguments
  • channels - number of output channels to render
  • length - length of the audio buffer to render, in sample frames
  • sample_rate - output sample rate, in sample-frames per second
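A hedged constructor sketch that matches the argument list above; the plain numeric sample rate is an assumption (some crate versions wrap it in a dedicated type).

// 1 output channel, a 44_100-frame buffer, rendered at 44.1 kHz:
// one second of mono audio.
let context = OfflineAudioContext::new(1, 44_100, 44_100.);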

Given the current connections and scheduled changes, starts rendering audio.

This function blocks the current thread and returns the rendered AudioBuffer synchronously. An async version is currently not implemented.
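Because the call blocks, a long offline render can be pushed onto its own thread. A sketch under two assumptions: the blocking method is named start_rendering, and OfflineAudioContext and the returned AudioBuffer are Send so they can cross the thread boundary.

use std::thread;
use web_audio_api::context::OfflineAudioContext;

let mut context = OfflineAudioContext::new(2, 10 * 44_100, 44_100.);
// ... build the graph and schedule parameter automation here ...

// Run the blocking render off the current thread and join on the result.
let handle = thread::spawn(move || context.start_rendering());
let buffer = handle.join().expect("rendering thread panicked");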

Returns the length of the rendering audio buffer, in sample frames
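A small check of this getter, assuming it is exposed as length() and reports sample frames.

use web_audio_api::context::OfflineAudioContext;

let context = OfflineAudioContext::new(1, 512, 44_100.);
assert_eq!(context.length(), 512); // equals the constructor's length argument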

Trait Implementations

Returns the BaseAudioContext concrete type associated with this AudioContext
Construct a new pair of AudioNode and AudioProcessor. Read more
Decode an AudioBuffer from a given input stream. Read more
Create a new “in-memory” AudioBuffer with the given number of channels, length (i.e. number of samples per channel) and sample rate. Read more
Creates an AnalyserNode
Creates a BiquadFilterNode, which implements a second-order filter
Creates an AudioBufferSourceNode
Creates a ConstantSourceNode, a source representing a constant value
Creates a ConvolverNode, a processing node which applies linear convolution
Creates a ChannelMergerNode
Creates a ChannelSplitterNode
Creates a DelayNode, delaying the audio signal
Creates a DynamicsCompressorNode, compressing the audio signal
Creates a GainNode, to control audio volume
Creates an IirFilterNode. Read more
Creates an OscillatorNode, a source representing a periodic waveform.
Creates a PannerNode
Creates a periodic wave. Read more
Creates a StereoPannerNode to pan a stereo output
Creates a WaveShaperNode
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device. Read more
Returns the AudioListener which is used for 3D spatialization
The sample rate (in sample-frames per second) at which the AudioContext handles audio.
Returns the state of the current context
This is the time in seconds of the sample frame immediately following the last sample-frame in the block of audio most recently processed by the context’s rendering graph. Read more
Create an AudioParam. Read more
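A hedged sketch that exercises a few of the BaseAudioContext factory methods listed above on an OfflineAudioContext. The snake_case method names (create_oscillator, create_gain, frequency(), gain(), set_value) mirror the Web Audio API and are assumed to match this crate; start_rendering is likewise an assumed name for the blocking render described earlier.

use web_audio_api::context::{BaseAudioContext, OfflineAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

let mut context = OfflineAudioContext::new(2, 44_100, 44_100.);

// A 220 Hz oscillator, attenuated by a gain node, routed to the destination.
let osc = context.create_oscillator();
let gain = context.create_gain();
osc.connect(&gain);
gain.connect(&context.destination());

osc.frequency().set_value(220.); // frequency is an AudioParam
gain.gain().set_value(0.5);      // gain is an AudioParam
osc.start();

// One second of stereo output, rendered as fast as possible.
let buffer = context.start_rendering();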

Auto Trait Implementations

Blanket Implementations

Gets the TypeId of self. Read more
Immutably borrows from an owned value. Read more
Mutably borrows from an owned value. Read more

Returns the argument unchanged.

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

The type returned in the event of a conversion error.
Performs the conversion.
The type returned in the event of a conversion error.
Performs the conversion.