Crate web_audio_api

§Rust Web Audio API


A pure Rust implementation of the Web Audio API, for use in non-browser contexts.

§About the Web Audio API

The Web Audio API (MDN docs) provides a powerful and versatile system for controlling audio on the Web, allowing developers to choose audio sources, add effects to audio, create audio visualizations, apply spatial effects (such as panning) and much more.

Our Rust implementation decouples the Web Audio API from the Web. You can now use it in desktop apps, command line utilities, headless execution, etc.

§Example usage

use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

// set up the audio context with optimized settings for your hardware
let context = AudioContext::default();

// for background music, read from local file
let file = std::fs::File::open("samples/major-scale.ogg").unwrap();
let buffer = context.decode_audio_data_sync(file).unwrap();

// set up an AudioBufferSourceNode
let mut src = context.create_buffer_source();
src.set_buffer(buffer);
src.set_loop(true);

// create a biquad filter
let biquad = context.create_biquad_filter();
biquad.frequency().set_value(125.);

// connect the audio nodes
src.connect(&biquad);
biquad.connect(&context.destination());

// play the buffer
src.start();

// enjoy listening
std::thread::sleep(std::time::Duration::from_secs(4));
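Beyond static graphs, nodes can be shaped over time through AudioParam automation. A minimal sketch of a gain fade-in, assuming the `OscillatorNode`, `GainNode`, and `AudioParam` automation methods (`set_value_at_time`, `linear_ramp_to_value_at_time`) described in the crate docs:

```rust
use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

let context = AudioContext::default();

// oscillator -> gain -> destination
let mut osc = context.create_oscillator();
let gain = context.create_gain();
osc.connect(&gain);
gain.connect(&context.destination());

// fade in from silence to full volume over two seconds
let now = context.current_time();
gain.gain().set_value_at_time(0., now);
gain.gain().linear_ramp_to_value_at_time(1., now + 2.);

osc.start();
std::thread::sleep(std::time::Duration::from_secs(3));
```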

Check out the docs for more info.

§Spec compliance

We have tried to stick to the official W3C spec as closely as possible, but some deviations could not be avoided:

  • naming: snake_case instead of camelCase
  • getter/setter methods instead of exposed attributes
  • introduced some namespacing
  • inheritance is modelled with traits
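To illustrate these deviations, here is how setting an oscillator's frequency translates from the browser API to this crate; a sketch assuming the `OscillatorNode` API from the crate docs:

```rust
use web_audio_api::context::{AudioContext, BaseAudioContext};

let context = AudioContext::default();
let osc = context.create_oscillator();

// JavaScript (browser): osc.frequency.value = 440;
// Rust: names become snake_case, exposed attributes become getter/setter
// methods, and an AudioParam value is set through set_value()
osc.frequency().set_value(440.);
```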

§Bindings

We provide NodeJS bindings to this library over at https://github.com/ircam-ismm/node-web-audio-api so you can use this library by simply writing native NodeJS code.

This enables us to run the official WebAudioAPI test harness and track our spec compliance score.

§Audio backends

By default, the cpal library is used for cross platform audio I/O.

We offer experimental support for the cubeb backend via the cubeb feature flag. Please note that cmake must be installed locally in order to build cubeb.

Feature flag      Backends
cpal (default)    ALSA, WASAPI, CoreAudio, Oboe (Android)
cpal-jack         JACK
cpal-asio         ASIO, see https://github.com/rustaudio/cpal#asio-on-windows
cubeb             PulseAudio, AudioUnit, WASAPI, OpenSL, AAudio, sndio, Sun, OSS

§Notes for Linux users

Using the library on Linux with the ALSA backend might lead to unexpected crackling sound with the default render size (i.e. 128 frames). In such cases, a simple workaround is to pass the AudioContextLatencyCategory::Playback latency hint when creating the audio context, which will increase the render size to 1024 frames:

use web_audio_api::context::{AudioContext, AudioContextLatencyCategory, AudioContextOptions};

let audio_context = AudioContext::new(AudioContextOptions {
    latency_hint: AudioContextLatencyCategory::Playback,
    ..AudioContextOptions::default()
});
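The latency tradeoff behind this workaround is simple arithmetic: a render quantum of N frames at sample rate sr adds N / sr seconds of buffering. A standalone sketch (plain Rust, no library calls) comparing the default and Playback render sizes at 44.1 kHz:

```rust
// Buffering latency added by one render quantum, in milliseconds.
fn quantum_latency_ms(frames: u32, sample_rate: u32) -> f64 {
    frames as f64 / sample_rate as f64 * 1000.0
}

fn main() {
    // default render size vs. the render size picked by the Playback hint
    println!("128 frames  -> {:.1} ms", quantum_latency_ms(128, 44_100));
    println!("1024 frames -> {:.1} ms", quantum_latency_ms(1024, 44_100));
}
```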

For real-time and interactive applications where low latency is crucial, you should instead rely on the JACK backend provided by cpal. To that end you will need a running JACK server and build your application with the cpal-jack feature, e.g. cargo run --release --features "cpal-jack" --example microphone.

§Targeting the browser

We can go full circle and pipe the Rust WebAudio output back into the browser via cpal’s wasm-bindgen backend. Check out an example WASM project. Warning: experimental!

§Contributing

web-audio-api-rs welcomes contribution from everyone in the form of suggestions, bug reports, pull requests, and feedback. 💛

If you need ideas for contribution, there are several ways to get started:

  • Try out some of our examples (located in the examples/ directory) and start building your own audio graphs
  • Found a bug or have a feature request? Submit an issue!
  • Issues labeled with good first issue are relatively easy starter issues.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in web-audio-api-rs by you shall be licensed as MIT, without any additional terms or conditions.

§License

This project is licensed under the MIT license.

§Acknowledgements

The IR files used for HRTF spatialization are part of the LISTEN database created by the EAC team from Ircam.

Modules§

context
The BaseAudioContext interface and the AudioContext and OfflineAudioContext types
media_devices
Primitives of the MediaDevices API
media_recorder
Primitives of the Media Recorder API
media_streams
Primitives of the Media Capture and Streams API
node
The AudioNode interface and concrete types
worklet
User-defined audio nodes and processors

Structs§

AudioBuffer
Memory-resident audio asset, basically a matrix of channels * samples
AudioBufferOptions
Options for constructing an AudioBuffer
AudioListener
Represents the position and orientation of the person listening to the audio scene
AudioParam
AudioParam controls an individual aspect of an AudioNode’s functionality, such as volume.
AudioParamDescriptor
Options for constructing an AudioParam
AudioProcessingEvent
The AudioProcessingEvent interface
AudioRenderCapacity
Provider for rendering performance metrics
AudioRenderCapacityEvent
Performance metrics of the rendering thread
AudioRenderCapacityOptions
Options for constructing an AudioRenderCapacity
ErrorEvent
The ErrorEvent interface
Event
The Event interface
MediaElement
Shim of the <audio> element which allows you to efficiently play and seek audio from disk
MessagePort
One of the two ports of a message channel
OfflineAudioCompletionEvent
The OfflineAudioCompletionEvent interface
PeriodicWave
PeriodicWave represents an arbitrary periodic waveform to be used with an OscillatorNode.
PeriodicWaveOptions
Options for constructing a PeriodicWave

Enums§

AutomationRate
Precision of AudioParam value calculation per render quantum

Constants§

MAX_CHANNELS
Maximum number of channels for audio processing