Struct web_audio_api::context::OfflineAudioContext

pub struct OfflineAudioContext { /* private fields */ }

The OfflineAudioContext doesn’t render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.
Implementations

impl OfflineAudioContext

pub fn new(number_of_channels: usize, length: usize, sample_rate: f32) -> Self

Creates an OfflineAudioContext instance.

Arguments

- number_of_channels - number of output channels to render
- length - length of the rendering audio buffer
- sample_rate - output sample rate
pub fn start_rendering_sync(self) -> AudioBuffer

Given the current connections and scheduled changes, starts rendering audio.

This function blocks the current thread and returns the rendered AudioBuffer synchronously. An async version is currently not implemented.
Trait Implementations

impl BaseAudioContext for OfflineAudioContext

fn base(&self) -> &ConcreteBaseAudioContext

Returns the BaseAudioContext concrete type associated with this AudioContext.
fn register<T: AudioNode, F: FnOnce(AudioContextRegistration) -> (T, Box<dyn AudioProcessor>)>(
    &self,
    f: F
) -> T
fn decode_audio_data_sync<R: Read + Send + Sync + 'static>(
    &self,
    input: R
) -> Result<AudioBuffer, Box<dyn Error + Send + Sync>>
Decode an AudioBuffer from a given input stream.
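For example, decoding a local file (the file name here is purely illustrative; any readable audio stream the crate can decode works):

```rust
use std::fs::File;
use web_audio_api::context::{BaseAudioContext, OfflineAudioContext};

fn main() {
    let context = OfflineAudioContext::new(2, 44_100, 44_100.0);

    // "sample.wav" is an illustrative path, not a file shipped with the crate
    let file = File::open("sample.wav").expect("file not found");
    let buffer = context
        .decode_audio_data_sync(file)
        .expect("decoding failed");

    println!(
        "decoded {} channel(s), {} frames at {} Hz",
        buffer.number_of_channels(),
        buffer.length(),
        buffer.sample_rate()
    );
}
```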
fn create_buffer(
    &self,
    number_of_channels: usize,
    length: usize,
    sample_rate: f32
) -> AudioBuffer
Create a new “in-memory” AudioBuffer with the given number of channels, length (i.e. number of samples per channel) and sample rate.
fn create_analyser(&self) -> AnalyserNode
Creates an AnalyserNode.
fn create_biquad_filter(&self) -> BiquadFilterNode
Creates a BiquadFilterNode which implements a second order filter.
fn create_buffer_source(&self) -> AudioBufferSourceNode
Creates an AudioBufferSourceNode.
fn create_constant_source(&self) -> ConstantSourceNode
Creates a ConstantSourceNode, a source representing a constant value.
fn create_convolver(&self) -> ConvolverNode
Creates a ConvolverNode, a processing node which applies linear convolution.
fn create_channel_merger(&self, number_of_inputs: usize) -> ChannelMergerNode
Creates a ChannelMergerNode.
fn create_channel_splitter(
    &self,
    number_of_outputs: usize
) -> ChannelSplitterNode
Creates a ChannelSplitterNode.
fn create_delay(&self, max_delay_time: f64) -> DelayNode
Creates a DelayNode, delaying the audio signal.
fn create_dynamics_compressor(&self) -> DynamicsCompressorNode
Creates a DynamicsCompressorNode, compressing the audio signal.
fn create_gain(&self) -> GainNode
Creates a GainNode, to control audio volume.
fn create_iir_filter(
    &self,
    feedforward: Vec<f64>,
    feedback: Vec<f64>
) -> IIRFilterNode
Creates an IIRFilterNode from explicit feedforward and feedback coefficients.
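As a sketch, a first-order filter can be described by its difference-equation coefficients; the values below are illustrative, not derived for a particular cutoff frequency:

```rust
use web_audio_api::context::{BaseAudioContext, OfflineAudioContext};
use web_audio_api::node::AudioNode;

fn main() {
    let context = OfflineAudioContext::new(2, 44_100, 44_100.0);

    // y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
    // feedforward = [b0, b1], feedback = [a0, a1] (illustrative values)
    let feedforward = vec![0.1, 0.1];
    let feedback = vec![1.0, -0.8];

    let iir = context.create_iir_filter(feedforward, feedback);
    iir.connect(&context.destination());
}
```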
fn create_oscillator(&self) -> OscillatorNode
Creates an OscillatorNode, a source representing a periodic waveform.
fn create_panner(&self) -> PannerNode
Creates a PannerNode.
fn create_periodic_wave(&self, options: PeriodicWaveOptions) -> PeriodicWave
Creates a periodic wave.
fn create_stereo_panner(&self) -> StereoPannerNode
Creates a StereoPannerNode to pan a stereo output.
fn create_wave_shaper(&self) -> WaveShaperNode
Creates a WaveShaperNode.
fn destination(&self) -> AudioDestinationNode
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
fn listener(&self) -> AudioListener
Returns the AudioListener which is used for 3D spatialization.
fn sample_rate(&self) -> f32
The sample rate (in sample-frames per second) at which the AudioContext handles audio.
fn state(&self) -> AudioContextState
Returns the state of the current context.
fn current_time(&self) -> f64
This is the time in seconds of the sample frame immediately following the last sample-frame in the block of audio most recently processed by the context’s rendering graph.
fn create_audio_param(
    &self,
    opts: AudioParamDescriptor,
    dest: &AudioContextRegistration
) -> (AudioParam, AudioParamId)
Create an AudioParam.