Struct web_audio_api::context::AudioContext
pub struct AudioContext { /* private fields */ }
This interface represents an audio graph whose AudioDestinationNode is routed to a real-time output device that produces a signal directed at the user.
Implementations
impl AudioContext
pub fn new(options: AudioContextOptions) -> Self
Creates and returns a new AudioContext object.
This will play live audio on the default output device.
use web_audio_api::context::{AudioContext, AudioContextOptions};

// Request a sample rate of 44.1 kHz and default latency (buffer size 128, if available)
let opts = AudioContextOptions {
    sample_rate: Some(44100.),
    ..AudioContextOptions::default()
};

// Setup the audio context that will emit to your speakers
let context = AudioContext::new(opts);

// Alternatively, use the default constructor to get the best settings for your hardware
// let context = AudioContext::default();
Panics
The AudioContext constructor will panic when an invalid sinkId is provided in the AudioContextOptions. In a future version, a try_new constructor will be introduced that never panics.
pub fn base_latency(&self) -> f64
This represents the number of seconds of processing latency incurred by the AudioContext passing the audio from the AudioDestinationNode to the audio subsystem.
pub fn output_latency(&self) -> f64
An estimate, in seconds, of the audio output latency, i.e., the interval between the time the UA requests the host system to play a buffer and the time at which the first sample in the buffer is actually processed by the audio output device.
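For example, both figures can be read off a freshly created context; the actual values depend on the host audio system:

use web_audio_api::context::AudioContext;

let context = AudioContext::default();
// Latency introduced by the context itself vs. by the output device
println!("base latency:   {} s", context.base_latency());
println!("output latency: {} s", context.output_latency());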
pub fn sink_id(&self) -> String
Identifier of the current audio output device. The initial value is "" (the empty string), which means the default audio output device.
pub fn set_sink_id_sync(&self, sink_id: String) -> Result<(), Box<dyn Error>>
Update the current audio output device.
The provided sink_id string must match a device name returned by enumerate_devices.
Supplying "none" for the sink_id will process the audio graph without playing through an audio output device.
This function operates synchronously and might block the current thread. An async version is currently not implemented.
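A minimal sketch, assuming a context built with the default constructor; a real device name obtained from device enumeration could be passed instead of "none":

use web_audio_api::context::AudioContext;

let context = AudioContext::default();
// Keep rendering the graph, but do not play it on any output device
context.set_sink_id_sync("none".into()).unwrap();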
pub fn suspend_sync(&self)
Suspends the progression of time in the audio context.
This will temporarily halt audio hardware access, reducing CPU/battery usage in the process.
This function operates synchronously and might block the current thread. An async version is currently not implemented.
Panics
Will panic if:
- The audio device is not available
- A BackendSpecificError occurs
pub fn resume_sync(&self)
Resumes the progression of time in an audio context that has previously been suspended/paused.
This function operates synchronously and might block the current thread. An async version is currently not implemented.
Panics
Will panic if:
- The audio device is not available
- A BackendSpecificError occurs
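A minimal sketch of pausing and resuming rendering, assuming the default output device is available; current_time comes from the BaseAudioContext trait documented below:

use web_audio_api::context::{AudioContext, BaseAudioContext};

let context = AudioContext::default();
// Halt rendering; currentTime stops progressing while suspended
context.suspend_sync();
let paused_at = context.current_time();
// ... later, restart the flow of time and audio processing
context.resume_sync();
assert!(context.current_time() >= paused_at);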
pub fn close_sync(&self)
Closes the AudioContext, releasing the system resources being used.
This will not automatically release all AudioContext-created objects, but will suspend the progression of currentTime and stop processing audio data.
This function operates synchronously and might block the current thread. An async version is currently not implemented.
Panics
Will panic when this function is called multiple times
pub fn create_media_stream_source<M: MediaStream>(
    &self,
    media: M
) -> MediaStreamAudioSourceNode
Creates a MediaStreamAudioSourceNode from a MediaStream
pub fn create_media_stream_destination(&self) -> MediaStreamAudioDestinationNode
Creates a MediaStreamAudioDestinationNode
pub fn create_media_element_source(
    &self,
    media_element: &mut MediaElement
) -> MediaElementAudioSourceNode
Creates a MediaElementAudioSourceNode from a MediaElement
pub fn render_capacity(&self) -> &AudioRenderCapacity
Returns an AudioRenderCapacity instance associated with an AudioContext.
Trait Implementations
impl BaseAudioContext for AudioContext
fn base(&self) -> &ConcreteBaseAudioContext
Returns the BaseAudioContext concrete type associated with this AudioContext
fn register<T: AudioNode, F: FnOnce(AudioContextRegistration) -> (T, Box<dyn AudioProcessor>)>(
    &self,
    f: F
) -> T
fn decode_audio_data_sync<R: Read + Send + Sync + 'static>(
    &self,
    input: R
) -> Result<AudioBuffer, Box<dyn Error + Send + Sync>>
Decode an AudioBuffer from a given input stream.
fn create_buffer(
    &self,
    number_of_channels: usize,
    length: usize,
    sample_rate: f32
) -> AudioBuffer
Creates an AudioBuffer with the given number of channels, length (i.e. number of samples per channel) and sample rate.
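As an illustration, a short sketch using both methods; "sample.wav" is a placeholder path and error handling is reduced to unwrap():

use std::fs::File;
use web_audio_api::context::{AudioContext, BaseAudioContext};

let context = AudioContext::default();

// An empty stereo buffer holding one second of audio at 44.1 kHz
let buffer = context.create_buffer(2, 44100, 44100.);
assert_eq!(buffer.number_of_channels(), 2);

// Decode an audio file into an AudioBuffer
let file = File::open("sample.wav").unwrap();
let decoded = context.decode_audio_data_sync(file).unwrap();
println!("decoded {} frames per channel", decoded.length());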
fn create_analyser(&self) -> AnalyserNode
Creates an AnalyserNode
fn create_biquad_filter(&self) -> BiquadFilterNode
Creates a BiquadFilterNode which implements a second order filter
fn create_buffer_source(&self) -> AudioBufferSourceNode
Creates an AudioBufferSourceNode
fn create_constant_source(&self) -> ConstantSourceNode
Creates a ConstantSourceNode, a source representing a constant value
fn create_convolver(&self) -> ConvolverNode
Creates a ConvolverNode, a processing node which applies linear convolution
fn create_channel_merger(&self, number_of_inputs: usize) -> ChannelMergerNode
Creates a ChannelMergerNode
fn create_channel_splitter(
    &self,
    number_of_outputs: usize
) -> ChannelSplitterNode
Creates a ChannelSplitterNode
fn create_delay(&self, max_delay_time: f64) -> DelayNode
Creates a DelayNode, delaying the audio signal
fn create_dynamics_compressor(&self) -> DynamicsCompressorNode
Creates a DynamicsCompressorNode, compressing the audio signal
fn create_gain(&self) -> GainNode
Creates a GainNode, to control audio volume
fn create_iir_filter(
    &self,
    feedforward: Vec<f64>,
    feedback: Vec<f64>
) -> IIRFilterNode
Creates an IIRFilterNode
fn create_oscillator(&self) -> OscillatorNode
Creates an OscillatorNode, a source representing a periodic waveform.
fn create_panner(&self) -> PannerNode
Creates a PannerNode
fn create_periodic_wave(&self, options: PeriodicWaveOptions) -> PeriodicWave
fn create_stereo_panner(&self) -> StereoPannerNode
Creates a StereoPannerNode to pan a stereo output
fn create_wave_shaper(&self) -> WaveShaperNode
Creates a WaveShaperNode
fn destination(&self) -> AudioDestinationNode
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
fn listener(&self) -> AudioListener
Returns the AudioListener which is used for 3D spatialization
fn sample_rate(&self) -> f32
The sample rate (in sample-frames per second) at which the AudioContext handles audio.
fn state(&self) -> AudioContextState
fn current_time(&self) -> f64
fn create_audio_param(
    &self,
    opts: AudioParamDescriptor,
    dest: &AudioContextRegistration
) -> (AudioParam, AudioParamId)
Creates an AudioParam.
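To see how these factory methods combine, here is a hedged sketch of a small graph (an oscillator attenuated by a gain node and routed to the speakers); the connect, start, frequency and gain calls are assumed from the crate's node module (AudioNode and AudioScheduledSourceNode traits):

use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

let context = AudioContext::default();

// 440 Hz oscillator, at half volume, played on the default output device
let mut osc = context.create_oscillator();
osc.frequency().set_value(440.);

let gain = context.create_gain();
gain.gain().set_value(0.5);

osc.connect(&gain);
gain.connect(&context.destination());
osc.start();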