//! # eegdino-rs
//!
//! Rust inference crate for the
//! [EEG-DINO](https://github.com/miraclefish/EEG-DINO) foundation model,
//! built on the [Burn](https://burn.dev) ML framework.
//!
//! EEG-DINO learns robust EEG representations via hierarchical self-distillation
//! on 9 000+ hours of EEG data. This crate provides a faithful port of the
//! encoder architecture with verified numerical parity (NRMSE < 1e-6) against
//! the original PyTorch implementation.
//!
//! ## Model sizes
//!
//! | Variant | Params | d_model | Heads | Layers | FFN dim |
//! |---------|--------|---------|-------|--------|---------|
//! | Small | 4.6 M | 200 | 8 | 12 | 512 |
//! | Medium | 33 M | 512 | 16 | 16 | 1 024 |
//! | Large | 201 M | 1 024 | 16 | 24 | 2 048 |
//!
//! ## Quick start (builder)
//!
//! ```rust,ignore
//! use eegdino_rs::prelude::*;
//! use burn::backend::NdArray;
//!
//! type B = NdArray;
//!
//! let encoder = EegDinoEncoder::<B>::builder()
//!     .weights("weights/eeg_dino_small.safetensors")
//!     .size(ModelSize::Small)
//!     .device(Default::default())
//!     .build()?;
//!
//! let signal = vec![0.0f32; 19 * 2000];
//! let result = encoder.encode_raw(&signal, 1, 19, 2000)?;
//! // result.shape == [1, 191, 200]
//! ```
//!
//! ## Batch encoding
//!
//! ```rust,ignore
//! let signals: Vec<Vec<f32>> = load_recordings();
//! // Single batched forward pass (fastest):
//! let result = encoder.encode_batch(&signals, 19, 2000)?;
//! // Or one-by-one:
//! let results = encoder.encode_many(&signals, 19, 2000);
//! ```
//!
//! ## Backends
//!
//! | Feature | Backend | Notes |
//! |---------|---------|-------|
//! | `ndarray` (default) | CPU | Multi-threaded via Rayon + SIMD |
//! | `blas-accelerate` | CPU + Accelerate | Recommended on Apple Silicon |
//! | `wgpu` | GPU | Metal (macOS) / Vulkan (Linux) |
//! | `wgpu-f16` | GPU f16 | Half-precision; roughly half the memory |
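//!
//! Switching backends only changes the type alias; the encoder API is
//! identical. A sketch, assuming the `wgpu` feature is enabled and using
//! Burn's `Wgpu` backend type:
//!
//! ```rust,ignore
//! use burn::backend::Wgpu;
//!
//! // Same builder API as the CPU quick start; only the backend differs.
//! type B = Wgpu;
//!
//! let encoder = EegDinoEncoder::<B>::builder()
//!     .weights("weights/eeg_dino_small.safetensors")
//!     .size(ModelSize::Small)
//!     .device(Default::default())
//!     .build()?;
//! ```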
pub mod prelude;

// NOTE: the module paths below are reconstructed placeholders; the original
// re-export paths were missing. Adjust them to the crate's actual layout.
pub use encoder::{EegDinoEncoder, EEGEncoder};
pub use classifier::ClassificationModel;
pub use config::ModelSize;
/// Configure the Rayon thread pool. Call once before model use.