pub struct AudioContext { /* private fields */ }

This interface represents an audio graph whose AudioDestinationNode is routed to a real-time output device that produces a signal directed at the user.

Implementations

Creates and returns a new AudioContext object.

This will play live audio on the default output device.

use web_audio_api::context::{AudioContext, AudioContextOptions};

// Request a sample rate of 44.1 kHz and default latency (buffer size 128, if available)
let opts = AudioContextOptions {
    sample_rate: Some(44100.),
    ..AudioContextOptions::default()
};

// Setup the audio context that will emit to your speakers
let context = AudioContext::new(opts);

// Alternatively, use the default constructor to get the best settings for your hardware
// let context = AudioContext::default();
Panics

The AudioContext constructor will panic when an invalid sink_id is provided in the AudioContextOptions. In a future version, a try_new constructor will be introduced that never panics.

This represents the number of seconds of processing latency incurred by the AudioContext passing the audio from the AudioDestinationNode to the audio subsystem.

The estimation in seconds of audio output latency, i.e., the interval between the time the UA requests the host system to play a buffer and the time at which the first sample in the buffer is actually processed by the audio output device.
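Both values can be queried at any time. A minimal sketch, assuming the getters on AudioContext are named base_latency and output_latency and return seconds as f64:

use web_audio_api::context::AudioContext;

let context = AudioContext::default();

// Processing latency incurred by the context itself
println!("base latency:   {} s", context.base_latency());
// Estimated latency of the audio output path
println!("output latency: {} s", context.output_latency());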

Identifier of the current audio output device.

The initial value is "", which means the default audio output device.

Update the current audio output device.

The provided sink_id string must match a device name returned by enumerate_devices.

Supplying "none" for the sink_id will process the audio graph without playing through an audio output device.

This function operates synchronously and might block the current thread. An async version is currently not implemented.
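A short sketch of inspecting and updating the sink, assuming sink_id returns the current identifier as a String and set_sink_id_sync accepts one; any concrete device name would have to come from enumerate_devices:

use web_audio_api::context::AudioContext;

let context = AudioContext::default();

// "" denotes the default audio output device
println!("current sink: {:?}", context.sink_id());

// Keep processing the graph, but do not play it on any output device.
// (Depending on the crate version this call may return a Result; ignored here.)
let _ = context.set_sink_id_sync("none".into());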

Suspends the progression of time in the audio context.

This will temporarily halt audio hardware access, reducing CPU/battery usage in the process.

This function operates synchronously and might block the current thread. An async version is currently not implemented.

Panics

Will panic if:

  • The audio device is not available
  • A backend-specific error occurs (BackendSpecificError)

Resumes the progression of time in an audio context that has previously been suspended/paused.

This function operates synchronously and might block the current thread. An async version is currently not implemented.

Panics

Will panic if:

  • The audio device is not available
  • A backend-specific error occurs (BackendSpecificError)
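A sketch of pausing and later resuming a running context; a real application would typically tie these calls to user interaction:

use std::{thread, time::Duration};
use web_audio_api::context::AudioContext;

let context = AudioContext::default();

// Halt audio hardware access; currentTime stops progressing
context.suspend_sync();

thread::sleep(Duration::from_millis(500));

// Reopen the output stream and let currentTime progress again
context.resume_sync();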

Closes the AudioContext, releasing the system resources being used.

This will not automatically release all AudioContext-created objects, but it will suspend the progression of currentTime and stop processing audio data.

This function operates synchronously and might block the current thread. An async version is currently not implemented.

Panics

Will panic when this function is called multiple times.
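Closing is a one-way operation, so it is typically the last call made on the context. A minimal sketch:

use web_audio_api::context::AudioContext;

let context = AudioContext::default();

// Release the output stream; currentTime stops and no further audio is processed
context.close_sync();

// Calling close_sync a second time here would panic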

Returns an AudioRenderCapacity instance associated with an AudioContext.
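Obtaining the handle is straightforward; how the capacity metrics are then polled or subscribed to is described in the AudioRenderCapacity documentation:

use web_audio_api::context::AudioContext;

let context = AudioContext::default();

// Handle for monitoring how much of the available render budget the graph uses
let capacity = context.render_capacity();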

Trait Implementations

Returns the BaseAudioContext concrete type associated with this AudioContext
Construct a new pair of AudioNode and AudioProcessor
Decode an AudioBuffer from a given input stream.
Create a new “in-memory” AudioBuffer with the given number of channels, length (i.e. number of samples per channel) and sample rate.
Creates an AnalyserNode
Creates a BiquadFilterNode which implements a second-order filter
Creates an AudioBufferSourceNode
Creates a ConstantSourceNode, a source representing a constant value
Creates a ConvolverNode, a processing node which applies linear convolution
Creates a ChannelMergerNode
Creates a ChannelSplitterNode
Creates a DelayNode, delaying the audio signal
Creates a DynamicsCompressorNode, compressing the audio signal
Creates a GainNode, to control audio volume
Creates an IirFilterNode
Creates an OscillatorNode, a source representing a periodic waveform.
Creates a PannerNode
Creates a periodic wave
Creates a StereoPannerNode to pan a stereo output
Creates a WaveShaperNode
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
Returns the AudioListener which is used for 3D spatialization
The sample rate (in sample-frames per second) at which the AudioContext handles audio.
Returns the state of the current context
This is the time in seconds of the sample frame immediately following the last sample-frame in the block of audio most recently processed by the context’s rendering graph.
Create an AudioParam.
Returns the “default value” for a type.
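A minimal sketch tying several of these trait methods together: create a source, scale it with a gain node, and route it to the context's destination. The BaseAudioContext and node traits must be in scope for the methods to resolve; the exact trait bounds may differ slightly between crate versions:

use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

let context = AudioContext::default();

// A 440 Hz sine wave (the oscillator defaults)
let mut osc = context.create_oscillator();

// Scale the signal down before it reaches the speakers
let gain = context.create_gain();
gain.gain().set_value(0.5);

// osc -> gain -> destination (the real output device)
osc.connect(&gain);
gain.connect(&context.destination());

osc.start();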

Auto Trait Implementations

Blanket Implementations

Gets the TypeId of self.
Immutably borrows from an owned value.
Mutably borrows from an owned value.

Returns the argument unchanged.

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

The type returned in the event of a conversion error.
Performs the conversion.
The type returned in the event of a conversion error.
Performs the conversion.