axonml-audio 0.6.2

Audio processing utilities for the Axonml ML framework

Overview

axonml-audio provides audio signal-processing and dataset utilities for the AxonML framework: waveform transforms (spectrograms, MFCC, resampling, time stretch, pitch shift, augmentation), synthetic datasets, and an AudioSeq2SeqDataset for noise-reduction-style source/target pairs. FFT-based transforms use rustfft (O(n log n)).

Features

  • Resampling — Resample with linear interpolation between arbitrary sample rates.
  • Mel spectrogram — MelSpectrogram::new(sample_rate) (defaults: n_fft=2048, hop=512, n_mels=128) or with_params; rustfft-backed.
  • MFCC — MFCC::new(sample_rate, n_mfcc) for cepstral features.
  • Time stretching — TimeStretch::new(rate) changes duration; preserves shape when rate = 1.0.
  • Pitch shifting — PitchShift::new(semitones).
  • Noise augmentation — AddNoise::new(snr_db) with a configurable signal-to-noise ratio.
  • Audio normalization — NormalizeAudio::new() peak-normalizes so the maximum absolute amplitude is 1.
  • Silence trimming — TrimSilence::new(threshold_db) or TrimSilence::default_threshold().
  • Classification datasets — generic AudioClassificationDataset plus synthetic SyntheticCommandDataset (small/medium/large), SyntheticMusicDataset (small/medium), SyntheticSpeakerDataset (small/medium). Labels are class-index tensors of shape [1] (CrossEntropyLoss-compatible).
  • Sequence-to-sequence datasets — AudioSeq2SeqDataset for source/target waveform pairs, with a noise_reduction_task constructor.

Modules

Module Description
transforms Resample, MelSpectrogram, MFCC, TimeStretch, PitchShift, AddNoise, NormalizeAudio, TrimSilence (all implement axonml_data::Transform)
datasets AudioClassificationDataset, SyntheticCommandDataset, SyntheticMusicDataset, SyntheticSpeakerDataset, AudioSeq2SeqDataset

Usage

Add to your Cargo.toml:

[dependencies]
axonml-audio = "0.6.2"

Loading Audio Datasets

use axonml_audio::prelude::*;

// Synthetic command dataset (e.g., "yes"/"no"/"stop")
let dataset = SyntheticCommandDataset::small();   // 100 samples,  10 classes, 16 kHz, 0.5 s
let dataset = SyntheticCommandDataset::medium();  // 1000 samples, 10 classes
let dataset = SyntheticCommandDataset::large();   // 10000 samples, 35 classes

// Music genre / speaker presets
let music   = SyntheticMusicDataset::small();     // multiple genres
let speakers = SyntheticSpeakerDataset::small();

let (waveform, label) = dataset.get(0).unwrap();
// waveform: [n_samples] float; label: [1] class index

Mel Spectrogram

use axonml_audio::MelSpectrogram;
use axonml_data::Transform;

let mel = MelSpectrogram::new(16000);                       // defaults: n_fft=2048, hop=512, n_mels=128
let mel = MelSpectrogram::with_params(16000, 512, 256, 40); // custom

let spectrogram = mel.apply(&waveform);
assert_eq!(spectrogram.shape()[0], 40); // n_mels
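Mel filterbanks space bands uniformly on the mel scale, which is roughly linear below ~1 kHz and logarithmic above. A standalone sketch of the conversion (using the common HTK formula; axonml-audio's exact formula is not documented here, and `mel_centers` is an illustrative helper, not a crate API):

```rust
// Hz <-> mel conversion (HTK formula), the usual basis for mel filterbanks.
fn hz_to_mel(hz: f32) -> f32 {
    2595.0 * (1.0 + hz / 700.0).log10()
}

fn mel_to_hz(mel: f32) -> f32 {
    700.0 * (10f32.powf(mel / 2595.0) - 1.0)
}

/// Center frequencies (Hz) of `n_mels` bands spanning 0..=f_max,
/// spaced uniformly on the mel scale.
fn mel_centers(n_mels: usize, f_max: f32) -> Vec<f32> {
    let mel_max = hz_to_mel(f_max);
    (1..=n_mels)
        .map(|i| mel_to_hz(mel_max * i as f32 / (n_mels + 1) as f32))
        .collect()
}

fn main() {
    let centers = mel_centers(40, 8000.0); // 40 bands up to the 16 kHz Nyquist
    assert_eq!(centers.len(), 40);
    // Mel spacing packs bands densely at low frequencies:
    assert!(centers[1] - centers[0] < centers[39] - centers[38]);
}
```

This is why n_mels (e.g. 40 or 128 above) trades frequency resolution against feature size.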

MFCC Feature Extraction

use axonml_audio::MFCC;
use axonml_data::Transform;

let mfcc = MFCC::new(16000, 13);
let coefficients = mfcc.apply(&waveform);
assert_eq!(coefficients.shape()[0], 13);
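Conventionally, MFCCs are the first n_mfcc coefficients of a DCT-II applied to log mel-band energies. This standalone sketch shows that final step only (names are illustrative, not axonml-audio internals):

```rust
use std::f32::consts::PI;

// DCT-II over a log-mel frame, keeping the first `n_out` coefficients.
fn dct2(input: &[f32], n_out: usize) -> Vec<f32> {
    let n = input.len() as f32;
    (0..n_out)
        .map(|k| {
            input
                .iter()
                .enumerate()
                .map(|(i, &x)| x * (PI * k as f32 * (i as f32 + 0.5) / n).cos())
                .sum()
        })
        .collect()
}

fn main() {
    let log_mel = vec![1.0f32; 40]; // a flat log-mel frame
    let mfcc = dct2(&log_mel, 13);
    assert_eq!(mfcc.len(), 13);
    // For a flat input, all energy lands in coefficient 0:
    assert!((mfcc[0] - 40.0).abs() < 1e-3);
    assert!(mfcc[1].abs() < 1e-3);
}
```

The DCT decorrelates the mel bands, which is why a handful of coefficients (13 here) captures most of the spectral envelope.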

Audio Resampling

use axonml_audio::Resample;
use axonml_data::Transform;

let resample = Resample::new(22050, 16000);
let resampled = resample.apply(&waveform);
// Output length = floor(input_len * new_freq / orig_freq)
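A linear-interpolation resampler matching the documented output length can be sketched as follows (illustrative only, not the crate's implementation):

```rust
// Resample by linear interpolation; output length is
// floor(input_len * new_freq / orig_freq), as documented.
fn resample_linear(input: &[f32], orig_freq: u32, new_freq: u32) -> Vec<f32> {
    let out_len = input.len() * new_freq as usize / orig_freq as usize;
    let ratio = orig_freq as f32 / new_freq as f32;
    (0..out_len)
        .map(|i| {
            let pos = i as f32 * ratio;      // fractional position in the input
            let idx = pos as usize;
            let frac = pos - idx as f32;
            let a = input[idx];
            let b = *input.get(idx + 1).unwrap_or(&a); // clamp at the edge
            a + frac * (b - a)
        })
        .collect()
}

fn main() {
    let wave: Vec<f32> = (0..22050).map(|i| (i as f32 * 0.01).sin()).collect();
    let out = resample_linear(&wave, 22050, 16000);
    assert_eq!(out.len(), 16000); // 22050 * 16000 / 22050
}
```

Linear interpolation is cheap but offers no anti-aliasing; when downsampling real recordings, a low-pass filter beforehand avoids aliasing artifacts.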

Audio Augmentation

use axonml_audio::{AddNoise, TimeStretch, PitchShift};
use axonml_data::Transform;

let noisy     = AddNoise::new(20.0).apply(&waveform);  // SNR in dB
let stretched = TimeStretch::new(1.5).apply(&waveform);
let shifted   = PitchShift::new(2.0).apply(&waveform); // semitones
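How these parameters translate into signal math (a sketch of the standard conventions; `noise_scale` and `semitones_to_ratio` are illustrative helpers, not crate APIs):

```rust
// AddNoise(snr_db): noise is scaled so signal_power / noise_power = 10^(snr/10).
// Returns the amplitude scale for unit-power noise.
fn noise_scale(signal: &[f32], snr_db: f32) -> f32 {
    let signal_power: f32 =
        signal.iter().map(|x| x * x).sum::<f32>() / signal.len() as f32;
    (signal_power / 10f32.powf(snr_db / 10.0)).sqrt()
}

// PitchShift(semitones): a shift of s semitones is a frequency ratio 2^(s/12).
fn semitones_to_ratio(semitones: f32) -> f32 {
    2f32.powf(semitones / 12.0)
}

fn main() {
    let signal = vec![1.0f32; 1000]; // unit-power "signal"
    let scale = noise_scale(&signal, 20.0);
    assert!((scale - 0.1).abs() < 1e-6); // 20 dB SNR => noise at 1/10 amplitude
    assert!((semitones_to_ratio(12.0) - 2.0).abs() < 1e-6); // +12 st = one octave
}
```

So AddNoise::new(20.0) above mixes in relatively quiet noise, while PitchShift::new(2.0) raises pitch by a factor of 2^(2/12) ≈ 1.122.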

Normalization and Trimming

use axonml_audio::{NormalizeAudio, TrimSilence};
use axonml_data::Transform;

let normalized = NormalizeAudio::new().apply(&waveform);      // peak to 1.0
let trimmed    = TrimSilence::new(-60.0).apply(&waveform);    // threshold in dB
let trimmed    = TrimSilence::default_threshold().apply(&waveform);
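Both operations reduce to simple per-sample math. A sketch against the documented behavior (peak scaled to 1.0; samples below threshold_db treated as silence), with illustrative helper names:

```rust
// Scale so the maximum absolute amplitude becomes 1.0 (no-op on silence).
fn peak_normalize(samples: &mut [f32]) {
    let peak = samples.iter().fold(0.0f32, |m, x| m.max(x.abs()));
    if peak > 0.0 {
        for s in samples.iter_mut() {
            *s /= peak;
        }
    }
}

// Amplitude gate for a dB threshold relative to full scale:
// -60 dB corresponds to an amplitude of 10^(-60/20) = 0.001.
fn above_threshold(sample: f32, threshold_db: f32) -> bool {
    sample.abs() > 10f32.powf(threshold_db / 20.0)
}

fn main() {
    let mut wave = vec![0.0, 0.25, -0.5, 0.1];
    peak_normalize(&mut wave);
    assert!((wave[2] + 1.0).abs() < 1e-6); // the peak sample is now -1.0

    assert!(above_threshold(0.5, -60.0));     // clearly audible
    assert!(!above_threshold(0.0005, -60.0)); // below the -60 dB gate
}
```

Normalizing before trimming is the usual order, since the dB threshold is relative to full scale.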

Full Audio Processing Pipeline

use axonml_audio::prelude::*;
use axonml_data::DataLoader;

let dataset = SyntheticCommandDataset::medium();
let loader  = DataLoader::new(dataset, 32).shuffle(true);

let resample  = Resample::new(16000, 8000);
let normalize = NormalizeAudio::new();
let mel       = MelSpectrogram::with_params(8000, 256, 128, 40);

for batch in loader.iter() {
    // batch.data: [B, n_samples]   batch.targets: [B, 1]
    // Apply per-sample transforms inside the training loop, or pre-apply with MapDataset.
}

Sequence-to-Sequence Audio Tasks

use axonml_audio::AudioSeq2SeqDataset;

// Noise reduction (noisy -> clean pairs)
let dataset = AudioSeq2SeqDataset::noise_reduction_task(
    100,    // num_samples
    16000,  // sample_rate
    0.5,    // duration (seconds)
);

let (noisy, clean) = dataset.get(0).unwrap();
assert_eq!(noisy.shape(), clean.shape());

// Or bring your own source/target waveforms:
let ds = AudioSeq2SeqDataset::new(sources, targets);

Tests

cargo test -p axonml-audio

License

Licensed under either of:

at your option.


Last updated: 2026-04-16 (v0.6.1)