Struct web_audio_api::context::AudioContext
pub struct AudioContext { /* private fields */ }
This interface represents an audio graph whose AudioDestinationNode is routed to a real-time output device that produces a signal directed at the user.
Implementations
impl AudioContext
pub fn new(options: Option<AudioContextOptions>) -> Self
Creates and returns a new AudioContext object. This will play live audio on the default output device.
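A minimal construction sketch (import paths follow this crate's module layout; check them against your version):

use web_audio_api::context::{AudioContext, BaseAudioContext};

// No options: use the default output device with backend-chosen settings.
let context = AudioContext::new(None);
println!("running at {} Hz", context.sample_rate());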
pub fn suspend_sync(&self)
Suspends the progression of time in the audio context.
This will temporarily halt audio hardware access, reducing CPU/battery usage in the process.
This function operates synchronously and might block the current thread. An async version is currently not implemented.
Panics
Will panic if:
- The audio device is not available
- A BackendSpecificError is returned by the backend
pub fn resume_sync(&self)
Resumes the progression of time in an audio context that has previously been suspended/paused.
This function operates synchronously and might block the current thread. An async version is currently not implemented.
Panics
Will panic if:
- The audio device is not available
- A BackendSpecificError is returned by the backend
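A sketch of a suspend/resume cycle; the sleep only stands in for application work, and both calls may block while the backend reconfigures the device:

use std::{thread, time::Duration};
use web_audio_api::context::AudioContext;

let context = AudioContext::new(None);

// Halt audio hardware access while the application is idle...
context.suspend_sync();
thread::sleep(Duration::from_secs(1));

// ...then pick up where the context left off.
context.resume_sync();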
pub fn close_sync(&self)
Closes the AudioContext, releasing the system resources being used. This will not automatically release all AudioContext-created objects, but will suspend the progression of the currentTime and stop processing audio data.
This function operates synchronously and might block the current thread. An async version is currently not implemented.
Panics
Will panic when this function is called multiple times.
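A sketch of shutting a context down once it is no longer needed (a second close_sync call on the same context panics):

use web_audio_api::context::AudioContext;

let context = AudioContext::new(None);
// ... build and run an audio graph ...

// Release the output device and stop processing.
context.close_sync();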
Trait Implementations
impl BaseAudioContext for AudioContext
fn base(&self) -> &ConcreteBaseAudioContext
Retrieves the ConcreteBaseAudioContext associated with this AudioContext.
fn decode_audio_data_sync<R: Read + Send + Sync + 'static>(
    &self,
    input: R
) -> Result<AudioBuffer, Box<dyn Error + Send + Sync>>
Decodes an AudioBuffer from a given input stream.
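A sketch of decoding a file synchronously; "sample.wav" is a placeholder path and the length accessor on the resulting AudioBuffer is assumed:

use std::fs::File;
use web_audio_api::context::{AudioContext, BaseAudioContext};

let context = AudioContext::new(None);

// Any input implementing Read + Send + Sync + 'static works here.
let file = File::open("sample.wav").unwrap();
let buffer = context
    .decode_audio_data_sync(file)
    .expect("decoding failed");
println!("decoded {} frames", buffer.length());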
fn create_buffer(
    &self,
    number_of_channels: usize,
    length: usize,
    sample_rate: SampleRate
) -> AudioBuffer
Creates a new “in-memory” AudioBuffer with the given number of channels, length (i.e. number of samples per channel), and sample rate.
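A sketch allocating one second of stereo silence, reusing the context's raw sample rate so no SampleRate value has to be constructed by hand (the number_of_channels accessor on the buffer is assumed):

use web_audio_api::context::{AudioContext, BaseAudioContext};

let context = AudioContext::new(None);

// length is in sample frames per channel, so one second's worth here.
let length = context.sample_rate() as usize;
let buffer = context.create_buffer(2, length, context.sample_rate_raw());
assert_eq!(buffer.number_of_channels(), 2);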
fn create_analyser(&self) -> AnalyserNode
Creates an AnalyserNode
fn create_biquad_filter(&self) -> BiquadFilterNode
Creates a BiquadFilterNode, which implements a second-order filter
fn create_buffer_source(&self) -> AudioBufferSourceNode
Creates an AudioBufferSourceNode
fn create_constant_source(&self) -> ConstantSourceNode
Creates a ConstantSourceNode, a source representing a constant value
fn create_channel_merger(&self, number_of_inputs: u32) -> ChannelMergerNode
Creates a ChannelMergerNode
fn create_channel_splitter(&self, number_of_outputs: u32) -> ChannelSplitterNode
Creates a ChannelSplitterNode
fn create_delay(&self, max_delay_time: f64) -> DelayNode
Creates a DelayNode, which delays the audio signal
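A sketch of a 250 ms delay line in front of the destination; the delay_time() param accessor is assumed:

use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::AudioNode;

let context = AudioContext::new(None);

// Reserve up to 1 s of delay, then dial in 250 ms via the AudioParam.
let delay = context.create_delay(1.0);
delay.delay_time().set_value(0.25);
delay.connect(&context.destination());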
fn create_gain(&self) -> GainNode
Creates a GainNode to control audio volume
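A sketch of a gain stage attenuating everything routed through it; the gain() param accessor is assumed:

use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::AudioNode;

let context = AudioContext::new(None);

// Halve the amplitude of whatever is connected into this node.
let gain = context.create_gain();
gain.gain().set_value(0.5);
gain.connect(&context.destination());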
fn create_iir_filter(
    &self,
    feedforward: Vec<f64>,
    feedback: Vec<f64>
) -> IIRFilterNode
Creates an IIRFilterNode
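A sketch with illustrative coefficients for a simple first-order low-pass section (values are not tuned for a specific cutoff):

use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::AudioNode;

let context = AudioContext::new(None);

// H(z) = (0.1 + 0.1 z^-1) / (1 - 0.8 z^-1), unity gain at DC.
let feedforward = vec![0.1, 0.1];
let feedback = vec![1.0, -0.8];
let filter = context.create_iir_filter(feedforward, feedback);
filter.connect(&context.destination());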
fn create_media_stream_source<M: MediaStream>(
    &self,
    media: M
) -> MediaStreamAudioSourceNode
Creates a MediaStreamAudioSourceNode from a MediaStream
fn create_media_stream_destination(&self) -> MediaStreamAudioDestinationNode
Creates a MediaStreamAudioDestinationNode
fn create_oscillator(&self) -> OscillatorNode
Creates an OscillatorNode, a source representing a periodic waveform.
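A sketch of a 440 Hz tone routed straight to the output; depending on the crate version, start may require a scheduled-source trait to be in scope:

use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::AudioNode;

let context = AudioContext::new(None);

// Sine wave at concert pitch, connected directly to the output device.
let osc = context.create_oscillator();
osc.frequency().set_value(440.0);
osc.connect(&context.destination());
osc.start();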
fn create_panner(&self) -> PannerNode
Creates a PannerNode
fn create_periodic_wave(&self, options: PeriodicWaveOptions) -> PeriodicWave
Creates a periodic wave
fn create_stereo_panner(&self) -> StereoPannerNode
Creates a StereoPannerNode to pan a stereo output
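A sketch panning the output halfway to the left; the pan() param accessor (-1.0 = full left, 1.0 = full right) is assumed:

use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::AudioNode;

let context = AudioContext::new(None);

let panner = context.create_stereo_panner();
panner.pan().set_value(-0.5);
panner.connect(&context.destination());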
fn create_wave_shaper(&self) -> WaveShaperNode
Creates a WaveShaperNode
fn create_audio_param(
    &self,
    opts: AudioParamDescriptor,
    dest: &AudioNodeId
) -> (AudioParam, AudioParamId)
Creates an AudioParam.
fn destination(&self) -> AudioDestinationNode
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
fn listener(&self) -> AudioListener
Returns the AudioListener, which is used for 3D spatialization.
fn sample_rate(&self) -> f32
The sample rate (in sample-frames per second) at which the AudioContext handles audio.
fn sample_rate_raw(&self) -> SampleRate
The raw sample rate of the AudioContext (which has more precision than the float sample_rate() value).
fn current_time(&self) -> f64
This is the time in seconds of the sample frame immediately following the last sample-frame in the block of audio most recently processed by the context’s rendering graph.
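A sketch observing the context clock; current_time only advances in whole render quanta as blocks are processed:

use std::{thread, time::Duration};
use web_audio_api::context::{AudioContext, BaseAudioContext};

let context = AudioContext::new(None);

let t0 = context.current_time();
thread::sleep(Duration::from_millis(100));
let t1 = context.current_time();
assert!(t1 >= t0);
println!("advanced {:.3} s at {} Hz", t1 - t0, context.sample_rate());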
Auto Trait Implementations
impl RefUnwindSafe for AudioContext
impl !Send for AudioContext
impl !Sync for AudioContext
impl Unpin for AudioContext
impl UnwindSafe for AudioContext
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.