//! # Dhvani — Core Audio Engine
//!
//! Dhvani (ध्वनि, Sanskrit: sound, resonance) provides shared audio processing
//! primitives for the AGNOS ecosystem. It eliminates duplicate implementations
//! across [shruti](https://github.com/MacCracken/shruti) (DAW),
//! [jalwa](https://github.com/MacCracken/jalwa) (media player),
//! [aethersafta](https://github.com/MacCracken/aethersafta) (compositor),
//! and [tarang](https://crates.io/crates/tarang) (media framework).
//!
//! Every downstream consumer gets the same audio math — buffers, DSP,
//! analysis, MIDI, metering, and an RT-safe audio graph.
//!
//! # Modules
//!
//! | Module | Purpose |
//! |--------|---------|
//! | [`buffer`] | Audio buffers, mixing, resampling (linear + sinc), format conversion |
//! | [`clock`] | Sample-accurate transport clock, tempo, beats, PTS, A/V sync |
//! | [`ffi`] | C-compatible FFI for AudioBuffer operations |
//! | [`dsp`] | Biquad filters, parametric EQ, compressor, limiter, reverb, delay, de-esser, panner *(feature: `dsp`)* |
//! | [`analysis`] | FFT, STFT spectrograms, EBU R128 loudness, dynamics, chromagram, onset detection *(feature: `analysis`)* |
//! | [`midi`] | MIDI 1.0/2.0 events, clips, translation, voice management, routing *(feature: `midi`)* |
//! | [`graph`] | RT-safe audio graph with topological execution and double-buffered plan swap *(feature: `graph`)* |
//! | [`meter`] | Lock-free peak metering via atomics (no mutex) *(feature: `graph`)* |
//! | [`synthesis`] | Synthesis engines: subtractive, FM, additive, wavetable, granular, physical, drum, vocoder *(feature: `synthesis`)* |
//! | [`voice_synth`] | Voice synthesis: glottal source, formant, phoneme, prosody, vocal tract *(feature: `voice`)* |
//! | [`creature`] | Creature/animal vocal synthesis: species-specific voice models, call patterns *(feature: `creature`)* |
//! | [`sampler`] | Sample playback engine: key/velocity zones, loop modes, time-stretching *(feature: `sampler`)* |
//! | [`acoustics`] | Room acoustics integration via goonj: convolution reverb, FDN, ambisonics, room presets *(feature: `acoustics`)* |
//! | [`capture`] | PipeWire capture/output, ring-buffer recording *(feature: `pipewire`)* |
//!
//! # Quick Start
//!
//! ```rust
//! use dhvani::buffer::{AudioBuffer, mix};
//! use dhvani::dsp::{self, ParametricEq, EqBandConfig, BandType, Compressor, CompressorParams};
//! use dhvani::analysis;
//!
//! // Create and mix buffers
//! let vocals = AudioBuffer::from_interleaved(vec![0.5; 4096], 2, 44100).unwrap();
//! let drums = AudioBuffer::from_interleaved(vec![0.3; 4096], 2, 44100).unwrap();
//! let mut mixed = mix(&[&vocals, &drums]).unwrap();
//!
//! // 3-band parametric EQ
//! let mut eq = ParametricEq::new(vec![
//! EqBandConfig::new(BandType::HighPass, 80.0, 0.0, 0.707, true),
//! EqBandConfig::new(BandType::Peaking, 3000.0, 3.0, 1.5, true),
//! EqBandConfig::new(BandType::HighShelf, 10000.0, -2.0, 0.707, true),
//! ], 44100, 2);
//! eq.process(&mut mixed);
//!
//! // Compress and normalize
//! let mut comp = Compressor::new(CompressorParams::new()
//! .with_threshold(-18.0).with_ratio(4.0).with_attack(10.0).with_release(100.0)
//! .with_makeup_gain(3.0).with_knee(6.0),
//! 44100).unwrap();
//! comp.process(&mut mixed);
//! dsp::normalize(&mut mixed, 0.95);
//!
//! println!("Peak: {:.2}, LUFS: {:.1}", mixed.peak(), analysis::loudness_lufs(&mixed));
//! ```
//!
//! # Guide
//!
//! ## Step 1: Create and manipulate buffers
//!
//! [`AudioBuffer`] is the core type. All audio is f32 interleaved internally.
//!
//! ```rust
//! use dhvani::buffer::{AudioBuffer, mix, resample_linear};
//! use dhvani::buffer::convert::{i16_to_f32, mono_to_stereo};
//!
//! // From raw samples
//! let buf = AudioBuffer::from_interleaved(vec![0.5; 2048], 2, 44100).unwrap();
//! assert_eq!(buf.channels(), 2);
//! assert_eq!(buf.frames(), 1024);
//!
//! // Format conversion
//! let i16_data: Vec<i16> = vec![16384; 1024];
//! let f32_data = i16_to_f32(&i16_data);
//!
//! // Mono to stereo
//! let mono = AudioBuffer::from_interleaved(vec![0.5; 1024], 1, 44100).unwrap();
//! let stereo = mono_to_stereo(&mono).unwrap();
//!
//! // Resample
//! let resampled = resample_linear(&buf, 48000).unwrap();
//! ```
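//!
//! The integer-to-float conversion above follows the usual PCM convention. As a
//! sketch (assuming the common i16 scaling of dividing by 32768; check the
//! `convert` docs for the exact behavior at the boundaries):
//!
//! ```rust
//! // Hypothetical standalone version of the i16 -> f32 sample mapping.
//! fn i16_sample_to_f32(s: i16) -> f32 {
//!     s as f32 / 32768.0
//! }
//!
//! assert_eq!(i16_sample_to_f32(16384), 0.5);   // matches the 0.5 samples above
//! assert_eq!(i16_sample_to_f32(-32768), -1.0); // full-scale negative
//! ```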
//!
//! ## Step 2: Apply DSP effects
//!
//! All effects operate on [`AudioBuffer`] in-place.
//! Stateful effects (EQ, reverb, compressor) have `process()` methods.
//! Stateless operations (gate, limiter, normalize) are free functions.
//!
//! ```rust
//! use dhvani::buffer::AudioBuffer;
//! use dhvani::dsp::{self, BiquadFilter, FilterType, Reverb, ReverbParams, StereoPanner};
//!
//! let mut buf = AudioBuffer::from_interleaved(vec![0.5; 4096], 2, 44100).unwrap();
//!
//! // Biquad low-pass filter
//! let mut lp = BiquadFilter::new(FilterType::LowPass, 5000.0, 0.707, 44100, 2);
//! lp.process(&mut buf);
//!
//! // Reverb
//! let mut reverb = Reverb::new(ReverbParams::new().with_room_size(0.6).with_damping(0.4).with_mix(0.3), 44100).unwrap();
//! reverb.process(&mut buf);
//!
//! // Panning
//! let panner = StereoPanner::new(0.3); // slightly right
//! panner.process(&mut buf);
//!
//! // Gate and normalize
//! dsp::noise_gate(&mut buf, 0.01);
//! dsp::normalize(&mut buf, 0.95);
//! ```
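//!
//! The Q of 0.707 above gives a maximally flat (Butterworth) low-pass response.
//! For intuition, here is a sketch of the classic RBJ Audio EQ Cookbook
//! low-pass coefficient math; this is illustrative only, not necessarily what
//! `BiquadFilter` computes internally:
//!
//! ```rust
//! // RBJ cookbook low-pass biquad coefficients (b: feedforward, a: feedback).
//! fn lowpass_coeffs(freq: f32, q: f32, sample_rate: f32) -> ([f32; 3], [f32; 3]) {
//!     let w0 = 2.0 * std::f32::consts::PI * freq / sample_rate;
//!     let alpha = w0.sin() / (2.0 * q);
//!     let b = [(1.0 - w0.cos()) / 2.0, 1.0 - w0.cos(), (1.0 - w0.cos()) / 2.0];
//!     let a = [1.0 + alpha, -2.0 * w0.cos(), 1.0 - alpha];
//!     (b, a)
//! }
//!
//! let (b, a) = lowpass_coeffs(5000.0, 0.707, 44100.0);
//! // A low-pass has unity gain at DC: sum(b) / sum(a) == 1.
//! let dc_gain = b.iter().sum::<f32>() / a.iter().sum::<f32>();
//! assert!((dc_gain - 1.0).abs() < 1e-4);
//! ```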
//!
//! ## Step 3: Analyze audio
//!
//! Analysis functions are non-destructive — they read the buffer without modifying it.
//!
//! ```rust
//! use dhvani::buffer::AudioBuffer;
//! use dhvani::analysis::{self, spectrum_fft, analyze_dynamics, measure_r128, chromagram, detect_onsets};
//!
//! let buf = AudioBuffer::from_interleaved(
//! (0..44100).map(|i| (2.0 * std::f32::consts::PI * 440.0 * i as f32 / 44100.0).sin()).collect(),
//! 1, 44100,
//! ).unwrap();
//!
//! // FFT spectrum (radix-2, O(n log n))
//! let spec = spectrum_fft(&buf, 4096).unwrap();
//! println!("Dominant freq: {:?} Hz", spec.dominant_frequency());
//!
//! // Dynamics (true peak, crest factor, dynamic range)
//! let dyn_ = analyze_dynamics(&buf);
//! println!("True peak: {:.2} dB, Crest: {:.1} dB", dyn_.max_true_peak_db(), dyn_.mean_crest_factor_db());
//!
//! // EBU R128 loudness (K-weighted, gated)
//! let r128 = measure_r128(&buf).unwrap();
//! println!("Integrated: {:.1} LUFS", r128.integrated_lufs());
//!
//! // Chromagram (pitch class detection)
//! let chroma = chromagram(&buf, 4096).unwrap();
//! println!("Dominant pitch: {}", chroma.dominant_name());
//!
//! // Onset detection
//! let onsets = detect_onsets(&buf, 2048, 512, 0.3).unwrap();
//! println!("Found {} onsets", onsets.count());
//! ```
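//!
//! As a point of reference, crest factor is just the peak-to-RMS ratio in
//! decibels. A minimal sketch (assuming that definition; `analyze_dynamics`
//! additionally measures true peak, which requires oversampling, and reports
//! per-window statistics):
//!
//! ```rust
//! // Crest factor in dB: 20 * log10(peak / rms). A pure sine is about 3.01 dB.
//! fn crest_factor_db(samples: &[f32]) -> f32 {
//!     let peak = samples.iter().fold(0.0f32, |m, s| m.max(s.abs()));
//!     let rms = (samples.iter().map(|s| s * s).sum::<f32>() / samples.len() as f32).sqrt();
//!     20.0 * (peak / rms).log10()
//! }
//!
//! let sine: Vec<f32> = (0..1000)
//!     .map(|i| (2.0 * std::f32::consts::PI * i as f32 / 1000.0).sin())
//!     .collect();
//! assert!((crest_factor_db(&sine) - 3.01).abs() < 0.05);
//! ```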
//!
//! ## Step 4: Work with MIDI
//!
//! ```rust
//! use dhvani::midi::{MidiClip, NoteEvent, MidiEvent};
//! use dhvani::midi::voice::{VoiceManager, VoiceStealMode};
//!
//! // Create a clip with notes
//! let mut clip = MidiClip::new("melody", 0, 44100);
//! clip.add_note(0, 22050, 60, 100, 0); // C4
//! clip.add_note(22050, 22050, 64, 90, 0); // E4
//!
//! // Query notes at a position
//! let active = clip.notes_at(11025);
//! assert_eq!(active.len(), 1);
//!
//! // Voice management for polyphonic synths
//! let mut voices = VoiceManager::new(16, VoiceStealMode::Oldest);
//! let slot = voices.note_on(60, 100, 0).unwrap();
//! println!("Voice {} playing {:.1} Hz", slot, voices.voice(slot).unwrap().frequency());
//! ```
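//!
//! The frequency a voice reports follows the standard equal-temperament MIDI
//! mapping (A4 = note 69 = 440 Hz), shown here for reference; whether
//! `VoiceManager` supports alternate tunings is a question for its own docs:
//!
//! ```rust
//! // f = 440 * 2^((n - 69) / 12)
//! fn midi_note_to_hz(note: u8) -> f32 {
//!     440.0 * 2f32.powf((note as f32 - 69.0) / 12.0)
//! }
//!
//! assert_eq!(midi_note_to_hz(69), 440.0);
//! assert!((midi_note_to_hz(60) - 261.63).abs() < 0.01); // middle C
//! ```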
//!
//! ## Step 5: Build an audio graph
//!
//! ```rust,no_run
//! use dhvani::graph::{Graph, GraphProcessor, NodeId, AudioNode};
//! use dhvani::buffer::AudioBuffer;
//!
//! // Define a custom node
//! struct ToneGenerator { freq: f32, phase: f64 }
//! impl AudioNode for ToneGenerator {
//! fn name(&self) -> &str { "tone" }
//! fn num_inputs(&self) -> usize { 0 }
//! fn num_outputs(&self) -> usize { 1 }
//! fn process(&mut self, _inputs: &[&AudioBuffer], output: &mut AudioBuffer) {
//! for s in output.samples_mut() {
//! *s = (self.phase as f32).sin() * 0.5;
//! self.phase += 2.0 * std::f64::consts::PI * self.freq as f64 / 44100.0;
//! }
//! }
//! }
//!
//! // Build and compile graph
//! let mut graph = Graph::new();
//! let tone_id = NodeId::next();
//! graph.add_node(tone_id, Box::new(ToneGenerator { freq: 440.0, phase: 0.0 }));
//! let plan = graph.compile().unwrap();
//!
//! // Process on RT thread
//! let mut processor = GraphProcessor::new(2, 44100, 1024);
//! let handle = processor.swap_handle();
//! handle.swap(plan);
//! let output = processor.process(); // returns Option<&AudioBuffer>
//! ```
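//!
//! "Topological execution" means `compile()` orders nodes so that each node
//! runs only after all of its inputs have produced output. A sketch of the
//! idea using Kahn's algorithm (illustrative; the node and edge types here are
//! hypothetical, not the `graph` module's API):
//!
//! ```rust
//! // Kahn's algorithm: repeatedly emit nodes with no unprocessed inputs.
//! fn topo_order(num_nodes: usize, edges: &[(usize, usize)]) -> Option<Vec<usize>> {
//!     let mut in_degree = vec![0usize; num_nodes];
//!     for &(_, to) in edges { in_degree[to] += 1; }
//!     let mut ready: Vec<usize> =
//!         (0..num_nodes).filter(|&n| in_degree[n] == 0).collect();
//!     let mut order = Vec::with_capacity(num_nodes);
//!     while let Some(n) = ready.pop() {
//!         order.push(n);
//!         for &(from, to) in edges {
//!             if from == n {
//!                 in_degree[to] -= 1;
//!                 if in_degree[to] == 0 { ready.push(to); }
//!             }
//!         }
//!     }
//!     // Emitting fewer than num_nodes means the graph contains a cycle.
//!     (order.len() == num_nodes).then_some(order)
//! }
//!
//! // source(0) -> filter(1) -> sink(2): each node runs after its input.
//! let order = topo_order(3, &[(0, 1), (1, 2)]).unwrap();
//! assert_eq!(order, vec![0, 1, 2]);
//! ```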
//!
//! # Error Handling
//!
//! All fallible operations return [`Result<T, NadaError>`](NadaError).
//!
//! ```rust
//! use dhvani::buffer::AudioBuffer;
//! use dhvani::NadaError;
//!
//! match AudioBuffer::from_interleaved(vec![], 0, 44100) {
//! Ok(_) => unreachable!(),
//! Err(NadaError::InvalidChannels(0)) => println!("zero channels rejected"),
//! Err(e) => println!("other error: {e}"),
//! }
//! ```
//!
//! # Cargo Features
//!
//! | Feature | Default | Description |
//! |---------|---------|-------------|
//! | `dsp` | Yes | DSP effects (EQ, compressor, limiter, reverb, delay, de-esser, panner, oscillator, LFO, envelope) |
//! | `analysis` | Yes | Audio analysis (FFT, STFT, R128, dynamics, chromagram, onsets). Implies `dsp` |
//! | `midi` | Yes | MIDI 1.0/2.0 events, voice management, routing, translation |
//! | `graph` | Yes | RT-safe audio graph and lock-free metering |
//! | `simd` | Yes | SSE2/AVX2 (x86_64) and NEON (aarch64) acceleration |
//! | `synthesis` | No | Synthesis engines via [`naad`](https://crates.io/crates/naad): subtractive, FM, additive, wavetable, granular, physical modeling, drum, vocoder |
//! | `voice` | No | Voice synthesis via [`svara`](https://crates.io/crates/svara): glottal source, formant, phoneme, prosody, vocal tract. Implies `synthesis` |
//! | `creature` | No | Creature/animal vocals via [`prani`](https://crates.io/crates/prani): species voice models, call patterns, non-human tracts. Implies `synthesis` |
//! | `sampler` | No | Sample playback via [`nidhi`](https://crates.io/crates/nidhi): key/velocity zones, loop modes, SFZ/SF2 import |
//! | `pipewire` | No | PipeWire audio capture/output backend (Linux only) |
//! | `acoustics` | No | Room acoustics via [`goonj`](https://crates.io/crates/goonj): convolution reverb from IRs, FDN, ambisonics, room presets. Implies `analysis` |
//! | `full` | No | All features including synthesis, voice, acoustics, and PipeWire |
//!
//! Core-only build (buffers, mixing, resampling, clock — no DSP/MIDI/analysis):
//! ```toml
//! dhvani = { version = "0.20", default-features = false }
//! ```
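//!
//! To pull in a specific subset instead, combine `default-features = false`
//! with an explicit feature list (an illustrative selection; any feature names
//! from the table above can go here):
//! ```toml
//! dhvani = { version = "0.20", default-features = false, features = ["dsp", "midi"] }
//! ```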
// Core (always available)
pub mod buffer;
pub mod clock;
pub mod ffi;

// Feature-gated modules
#[cfg(feature = "dsp")] pub mod dsp;
#[cfg(feature = "analysis")] pub mod analysis;
#[cfg(feature = "midi")] pub mod midi;
#[cfg(feature = "graph")] pub mod graph;
#[cfg(feature = "graph")] pub mod meter;
#[cfg(feature = "synthesis")] pub mod synthesis;
#[cfg(feature = "voice")] pub mod voice_synth;
#[cfg(feature = "creature")] pub mod creature;
#[cfg(feature = "sampler")] pub mod sampler;
#[cfg(feature = "acoustics")] pub mod acoustics;
#[cfg(feature = "pipewire")] pub mod capture;

mod error; // module name assumed; NadaError is re-exported at the root
pub use error::NadaError;

/// Result type alias for dhvani operations.
pub type Result<T> = std::result::Result<T, NadaError>;

// Re-export primary types for convenience.
pub use buffer::AudioBuffer;
pub use clock::AudioClock;
#[cfg(feature = "midi")]
pub use midi::MidiEvent;

// Compile-time assertions: core public types are Send + Sync.
const _: () = {
    const fn ok<T: Send + Sync>() {}
    ok::<AudioBuffer>();
    ok::<AudioClock>();
};