pub struct AudioContext { /* private fields */ }
This interface represents an audio graph whose AudioDestinationNode
is routed to a real-time
output device that produces a signal directed at the user.
Implementations§
impl AudioContext
pub fn new(options: AudioContextOptions) -> Self
Creates and returns a new AudioContext object.
This will play live audio on the default output device.
use web_audio_api::context::{AudioContext, AudioContextOptions};
// Request a sample rate of 44.1 kHz and default latency (buffer size 128, if available)
let opts = AudioContextOptions {
sample_rate: Some(44100.),
..AudioContextOptions::default()
};
// Setup the audio context that will emit to your speakers
let context = AudioContext::new(opts);
// Alternatively, use the default constructor to get the best settings for your hardware
// let context = AudioContext::default();
§Panics
The AudioContext constructor will panic when an invalid sinkId is provided in the AudioContextOptions.
In a future version, a try_new constructor will be introduced that never panics.
pub fn base_latency(&self) -> f64
This represents the number of seconds of processing latency incurred by the AudioContext passing the audio from the AudioDestinationNode to the audio subsystem.
pub fn output_latency(&self) -> f64
The estimation in seconds of audio output latency, i.e., the interval between the time the UA requests the host system to play a buffer and the time at which the first sample in the buffer is actually processed by the audio output device.
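For example, both latencies can be inspected on a freshly created context (a minimal sketch; the reported values depend on the host audio backend):
use web_audio_api::context::AudioContext;

let context = AudioContext::default();
// Processing latency of the context itself, in seconds
println!("base latency: {} s", context.base_latency());
// Estimated latency of the underlying output device, in seconds
println!("output latency: {} s", context.output_latency());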
pub fn sink_id(&self) -> String
The identifier of the current audio output device.
The initial value is "", which means the default audio output device.
pub fn render_capacity(&self) -> AudioRenderCapacity
Returns an AudioRenderCapacity instance associated with this AudioContext.
pub fn set_sink_id_sync(&self, sink_id: String) -> Result<(), Box<dyn Error>>
Update the current audio output device.
The provided sink_id string must match a device name returned by enumerate_devices_sync.
Supplying "none" for the sink_id will process the audio graph without playing through an audio output device.
This function operates synchronously and might block the current thread. An async version is currently not implemented.
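As a minimal sketch, the documented "none" value can be used to keep rendering the graph without any physical output device (error handling is left to the caller):
use web_audio_api::context::AudioContext;

let context = AudioContext::default();
// Keep processing the audio graph, but do not play it on any output device
context.set_sink_id_sync("none".to_string()).expect("failed to update the output device");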
pub fn set_onsinkchange<F: FnMut(Event) + Send + 'static>(&self, callback: F)
Register a callback to run when the audio sink has changed.
Only a single event handler is active at any time. Calling this method multiple times will override the previous event handler.
pub fn clear_onsinkchange(&self)
Unset the callback to run when the audio sink has changed
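A short sketch of registering and later removing a sink-change handler (the callback body is illustrative only):
use web_audio_api::context::AudioContext;

let context = AudioContext::default();
// Log every change of the audio output device; a later call would replace this handler
context.set_onsinkchange(|_event| println!("audio sink changed"));
// Stop listening for sink changes
context.clear_onsinkchange();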
pub async fn suspend(&self)
Suspends the progression of time in the audio context.
This will temporarily halt audio hardware access, reducing CPU/battery usage in the process.
§Panics
Will panic if:
- The audio device is not available
- A BackendSpecificError occurs
pub async fn resume(&self)
Resumes the progression of time in an audio context that has previously been suspended/paused.
§Panics
Will panic if:
- The audio device is not available
- A BackendSpecificError occurs
pub async fn close(&self)
Closes the AudioContext, releasing the system resources being used.
This will not automatically release all AudioContext-created objects, but will suspend the progression of the currentTime and stop processing audio data.
§Panics
Will panic when this function is called multiple times
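Taken together, the async lifecycle methods can be called from any async runtime; a minimal sketch (the surrounding async fn is an assumption of this example):
use web_audio_api::context::AudioContext;

async fn pause_resume_quit(context: &AudioContext) {
    // Halt audio hardware access, e.g. while the application is in the background
    context.suspend().await;
    // Re-enable audio rendering
    context.resume().await;
    // Release the system resources for good; call this only once
    context.close().await;
}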
pub fn suspend_sync(&self)
Suspends the progression of time in the audio context.
This will temporarily halt audio hardware access, reducing CPU/battery usage in the process.
This function operates synchronously and blocks the current thread until the audio thread has stopped processing.
§Panics
Will panic if:
- The audio device is not available
- A BackendSpecificError occurs
pub fn resume_sync(&self)
Resumes the progression of time in an audio context that has previously been suspended/paused.
This function operates synchronously and blocks the current thread until the audio thread has started processing again.
§Panics
Will panic if:
- The audio device is not available
- A BackendSpecificError occurs
pub fn close_sync(&self)
Closes the AudioContext, releasing the system resources being used.
This will not automatically release all AudioContext-created objects, but will suspend the progression of the currentTime and stop processing audio data.
This function operates synchronously and blocks the current thread until the audio thread has stopped processing.
§Panics
Will panic when this function is called multiple times
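The synchronous counterparts block the calling thread, which keeps the control flow straightforward in non-async code; a minimal sketch:
use web_audio_api::context::AudioContext;

let context = AudioContext::default();
// Blocks until the audio thread has stopped processing
context.suspend_sync();
// Blocks until the audio thread is processing again
context.resume_sync();
// Release the audio resources; calling this a second time would panic
context.close_sync();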
pub fn create_media_stream_source(&self, media: &MediaStream) -> MediaStreamAudioSourceNode
Creates a MediaStreamAudioSourceNode from a MediaStream.
pub fn create_media_stream_destination(&self) -> MediaStreamAudioDestinationNode
Creates a MediaStreamAudioDestinationNode
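A sketch of capturing graph output as a MediaStream instead of playing it on the speakers; the stream() accessor name on the destination node is an assumption here:
use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

let context = AudioContext::default();
// Route a test tone into the media-stream destination instead of the speakers
let dest = context.create_media_stream_destination();
let osc = context.create_oscillator();
osc.connect(&dest);
osc.start();
// The captured stream could now feed e.g. a MediaStreamAudioSourceNode
// (the stream() accessor name is an assumption of this sketch)
let _stream = dest.stream();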
pub fn create_media_stream_track_source(&self, media: &MediaStreamTrack) -> MediaStreamTrackAudioSourceNode
Creates a MediaStreamTrackAudioSourceNode from a MediaStreamTrack.
pub fn create_media_element_source(&self, media_element: &mut MediaElement) -> MediaElementAudioSourceNode
Creates a MediaElementAudioSourceNode from a MediaElement.
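A sketch of wiring a MediaElement into the graph; the MediaElement::new constructor taking a file path and returning a Result, the play() method, and the example path are all assumptions of this sketch:
use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::AudioNode;
use web_audio_api::MediaElement;

let context = AudioContext::default();
// Assumed constructor: load a media file from disk (path is illustrative)
let mut media = MediaElement::new("samples/track.ogg").expect("failed to open media file");
// Feed the element's audio into the graph and play it on the context's output
let src = context.create_media_element_source(&mut media);
src.connect(&context.destination());
// play() is assumed to mirror the HTMLMediaElement API
media.play();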
Trait Implementations§
impl BaseAudioContext for AudioContext
fn decode_audio_data_sync<R: Read + Send + Sync + 'static>(&self, input: R) -> Result<AudioBuffer, Box<dyn Error + Send + Sync>>
Decodes an AudioBuffer from a given input stream.
fn decode_audio_data<R: Read + Send + Sync + 'static>(&self, input: R) -> impl Future<Output = Result<AudioBuffer, Box<dyn Error + Send + Sync>>> + Send + 'static
Decodes an AudioBuffer from a given input stream.
fn create_buffer(&self, number_of_channels: usize, length: usize, sample_rate: f32) -> AudioBuffer
Creates an AudioBuffer with the given number of channels, length (i.e. number of samples per channel) and sample rate.
fn create_analyser(&self) -> AnalyserNode
Creates an AnalyserNode
fn create_biquad_filter(&self) -> BiquadFilterNode
Creates a BiquadFilterNode which implements a second order filter
fn create_buffer_source(&self) -> AudioBufferSourceNode
Creates an AudioBufferSourceNode
fn create_constant_source(&self) -> ConstantSourceNode
Creates a ConstantSourceNode, a source representing a constant value
fn create_convolver(&self) -> ConvolverNode
Creates a ConvolverNode, a processing node which applies linear convolution
fn create_channel_merger(&self, number_of_inputs: usize) -> ChannelMergerNode
Creates a ChannelMergerNode
fn create_channel_splitter(&self, number_of_outputs: usize) -> ChannelSplitterNode
Creates a ChannelSplitterNode
fn create_delay(&self, max_delay_time: f64) -> DelayNode
Creates a DelayNode, delaying the audio signal
fn create_dynamics_compressor(&self) -> DynamicsCompressorNode
Creates a DynamicsCompressorNode, compressing the audio signal
fn create_gain(&self) -> GainNode
Creates a GainNode, to control audio volume
fn create_iir_filter(&self, feedforward: Vec<f64>, feedback: Vec<f64>) -> IIRFilterNode
Creates an IIRFilterNode
fn create_oscillator(&self) -> OscillatorNode
Creates an OscillatorNode, a source representing a periodic waveform
fn create_panner(&self) -> PannerNode
Creates a PannerNode
fn create_periodic_wave(&self, options: PeriodicWaveOptions) -> PeriodicWave
Creates a PeriodicWave
fn create_script_processor(&self, buffer_size: usize, number_of_input_channels: usize, number_of_output_channels: usize) -> ScriptProcessorNode
Creates a ScriptProcessorNode for custom audio processing (deprecated)
fn create_stereo_panner(&self) -> StereoPannerNode
Creates a StereoPannerNode to pan a stereo output
fn create_wave_shaper(&self) -> WaveShaperNode
Creates a WaveShaperNode
fn destination(&self) -> AudioDestinationNode
Returns the AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
fn listener(&self) -> AudioListener
Returns the AudioListener which is used for 3D spatialization
fn sample_rate(&self) -> f32
The sample rate (in sample-frames per second) at which the AudioContext handles audio.
fn state(&self) -> AudioContextState
Returns the current state of the audio context.
fn current_time(&self) -> f64
Returns the current time of the audio context, in seconds.
fn create_audio_param(&self, opts: AudioParamDescriptor, dest: &AudioContextRegistration) -> (AudioParam, AudioParamId)
Creates an AudioParam.
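Since AudioContext implements BaseAudioContext, the factory methods above can be combined into a small rendering graph; a minimal sketch playing a quiet 440 Hz tone through a gain stage on the default output:
use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

let context = AudioContext::default();

// Oscillator -> gain -> speakers
let osc = context.create_oscillator();
let gain = context.create_gain();
osc.connect(&gain);
gain.connect(&context.destination());

// Lower the volume, set the pitch, and start the source
gain.gain().set_value(0.25);
osc.frequency().set_value(440.);
osc.start();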