A high-level API for processing and synthesizing audio.

Example

use std::fs::File;
use web_audio_api::context::{BaseAudioContext, AudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

let context = AudioContext::new(None);

// create an audio buffer from a given file
let file = File::open("samples/sample.wav").unwrap();
let buffer = context.decode_audio_data_sync(file).unwrap();

// play the buffer at given volume
let volume = context.create_gain();
volume.connect(&context.destination());
volume.gain().set_value(0.5);

let buffer_source = context.create_buffer_source();
buffer_source.connect(&volume);
buffer_source.set_buffer(buffer);

// create oscillator branch
let osc = context.create_oscillator();
osc.connect(&context.destination());

// start the sources
buffer_source.start();
osc.start();

// enjoy listening
std::thread::sleep(std::time::Duration::from_secs(4));
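The example above sets the gain to 0.5, which is roughly -6 dB. As a minimal sketch of the relationship between decibels and linear gain (plain Rust, independent of this crate; the helper names are illustrative):

```rust
// Convert a level in decibels to a linear gain factor: gain = 10^(dB / 20).
fn db_to_gain(db: f32) -> f32 {
    10f32.powf(db / 20.0)
}

// Inverse: convert a linear gain factor back to decibels.
fn gain_to_db(gain: f32) -> f32 {
    20.0 * gain.log10()
}

fn main() {
    // A gain of 0.5 is about -6.02 dB.
    println!("0.5 as dB: {:.2}", gain_to_db(0.5));
    println!("-6.02 dB as gain: {:.3}", db_to_gain(-6.02));
}
```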

Modules

buffer - General purpose audio signal data structures

context - The BaseAudioContext interface and the AudioContext and OfflineAudioContext types

media - Convenience abstractions that are not part of the WebAudio API (media decoding, microphone)

node - The AudioNode interface and concrete types

param - AudioParam interface

periodic_wave - PeriodicWave interface

render - Primitives related to audio graph rendering
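As background on param automation (this behavior is defined by the WebAudio specification, not something specific to this crate), a linear ramp between two scheduled values interpolates as v(t) = v0 + (v1 - v0) * (t - t0) / (t1 - t0). A minimal sketch in plain Rust:

```rust
// Linear automation ramp as defined in the WebAudio spec:
// v(t) = v0 + (v1 - v0) * (t - t0) / (t1 - t0), for t0 <= t <= t1.
fn linear_ramp(v0: f32, v1: f32, t0: f64, t1: f64, t: f64) -> f32 {
    let k = ((t - t0) / (t1 - t0)).clamp(0.0, 1.0) as f32;
    v0 + (v1 - v0) * k
}

fn main() {
    // Ramping a value from 0.0 to 1.0 over one second, sampled halfway.
    println!("{}", linear_ramp(0.0, 1.0, 0.0, 1.0, 0.5)); // 0.5
}
```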

Structs

AudioListener - Represents the position and orientation of the person listening to the audio scene

SampleRate - Number of samples processed per second (Hertz) for a single channel of audio
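The sample rate ties time in seconds to discrete sample frames. A small sketch of that conversion (plain Rust; the helper names are hypothetical, not part of this crate's API):

```rust
// Convert a time in seconds to the nearest sample-frame index at a given rate.
fn seconds_to_frames(seconds: f64, sample_rate: f64) -> u64 {
    (seconds * sample_rate).round() as u64
}

// Convert a frame index back to a time in seconds.
fn frames_to_seconds(frames: u64, sample_rate: f64) -> f64 {
    frames as f64 / sample_rate
}

fn main() {
    // At 44_100 Hz, 2.5 seconds correspond to 110_250 frames.
    println!("{}", seconds_to_frames(2.5, 44_100.0));
}
```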

Constants

MAX_CHANNELS - Maximum number of channels for audio processing

RENDER_QUANTUM_SIZE - Render quantum size; the audio graph is rendered in blocks of RENDER_QUANTUM_SIZE samples. See https://webaudio.github.io/web-audio-api/#render-quantum
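The WebAudio specification fixes the render quantum at 128 sample frames, so at 44 100 Hz each quantum covers about 2.9 ms. A quick sketch of that arithmetic (assuming the spec value of 128):

```rust
// The WebAudio spec renders audio in fixed blocks of 128 sample frames.
const RENDER_QUANTUM_SIZE: usize = 128;

// Duration of one render quantum in milliseconds at the given sample rate.
fn quantum_duration_ms(sample_rate: f64) -> f64 {
    RENDER_QUANTUM_SIZE as f64 / sample_rate * 1000.0
}

// Number of quanta needed to cover `frames` sample frames (rounded up).
fn quanta_for_frames(frames: usize) -> usize {
    (frames + RENDER_QUANTUM_SIZE - 1) / RENDER_QUANTUM_SIZE
}

fn main() {
    println!("{:.3} ms per quantum at 44.1 kHz", quantum_duration_ms(44_100.0));
    println!("{} quanta for one second of audio", quanta_for_frames(44_100));
}
```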