cbramod-rs
Pure-Rust inference for the CBraMod (Criss-Cross Brain Model) EEG foundation model, built on Burn 0.20.
CBraMod is a compact (~4M params) foundation model pretrained on the Temple University Hospital EEG Corpus using masked patch reconstruction. It uses criss-cross attention to separately model spatial (channel) and temporal (patch) dependencies.
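The criss-cross split comes down to how the `[B, C, N, D]` feature tensor is viewed before each attention pass: S-attention folds the patch axis into the batch so attention runs across channels, while T-attention folds the channel axis so attention runs across patches. A minimal index-math sketch in plain Rust (no Burn types; the real model does this with tensor reshapes):

```rust
// View the same flat [B, C, N, D] buffer as per-(batch, patch) channel
// sequences (S-attention) or per-(batch, channel) patch sequences (T-attention).

fn idx(b: usize, c: usize, n: usize, d: usize, (cs, ns, ds): (usize, usize, usize)) -> usize {
    ((b * cs + c) * ns + n) * ds + d
}

/// One S-attention sequence: all C channels at a fixed (batch, patch) slot.
fn s_sequence(x: &[f32], shape: (usize, usize, usize, usize), b: usize, n: usize) -> Vec<Vec<f32>> {
    let (_, cs, ns, ds) = shape;
    (0..cs)
        .map(|c| (0..ds).map(|d| x[idx(b, c, n, d, (cs, ns, ds))]).collect())
        .collect()
}

/// One T-attention sequence: all N patches at a fixed (batch, channel) slot.
fn t_sequence(x: &[f32], shape: (usize, usize, usize, usize), b: usize, c: usize) -> Vec<Vec<f32>> {
    let (_, cs, ns, ds) = shape;
    (0..ns)
        .map(|n| (0..ds).map(|d| x[idx(b, c, n, d, (cs, ns, ds))]).collect())
        .collect()
}

fn main() {
    let (b, c, n, d) = (1, 2, 3, 4);
    let x: Vec<f32> = (0..b * c * n * d).map(|i| i as f32).collect();
    let s = s_sequence(&x, (b, c, n, d), 0, 0); // C rows of length D
    let t = t_sequence(&x, (b, c, n, d), 0, 0); // N rows of length D
    assert_eq!((s.len(), s[0].len()), (c, d));
    assert_eq!((t.len(), t[0].len()), (n, d));
    println!("S-attn seq len = {}, T-attn seq len = {}", s.len(), t.len());
}
```

Because each pass attends over only one axis, attention cost scales with C² + N² rather than (C·N)², which is what keeps the model compact.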
Architecture
```
EEG [B, C, T]
 │
 ├─ Rearrange to patches: [B, C, N, P]
 │
 ├─ Patch Embedding
 │    ├─ Conv2d pipeline (3 layers) → time-domain features
 │    ├─ FFT → abs → Linear → spectral features
 │    └─ Depthwise Conv2d → positional encoding
 │    → [B, C, N, d_model=200]
 │
 ├─ Criss-Cross Transformer (12 layers)
 │    ├─ S-Attention: self-attention across channels (spatial)
 │    └─ T-Attention: self-attention across patches (temporal)
 │    → [B, C, N, d_model]
 │
 ├─ Linear projection → [B, C, N, emb_dim]
 │
 └─ Flatten + Linear → [B, n_outputs]
```
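The first rearrangement is just a reshape of the time axis into N non-overlapping patches of P samples each (T = N·P). A minimal sketch in plain Rust, with the batch dimension elided for brevity:

```rust
// Split the time axis of a [C, T] recording into patches: [C, T] -> [C, N, P].
fn to_patches(eeg: &[Vec<f32>], patch_len: usize) -> Vec<Vec<Vec<f32>>> {
    eeg.iter()
        .map(|channel| {
            channel
                .chunks_exact(patch_len) // drops a trailing partial patch, if any
                .map(|p| p.to_vec())
                .collect()
        })
        .collect()
}

fn main() {
    // 2 channels, 8 samples, patch length 4 -> 2 patches per channel.
    let eeg = vec![(0..8).map(|i| i as f32).collect::<Vec<_>>(); 2];
    let patches = to_patches(&eeg, 4);
    assert_eq!(patches.len(), 2);       // C
    assert_eq!(patches[0].len(), 2);    // N = T / P
    assert_eq!(patches[0][0].len(), 4); // P
    println!(
        "{} channels x {} patches x {} samples",
        patches.len(),
        patches[0].len(),
        patches[0][0].len()
    );
}
```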
Quick Start
```rust
use cbramod_rs::CBraMod;

let model = CBraMod::new(&config, &device); // config/device setup elided
let output = model.forward(input);          // [B, n_outputs]
```
Build
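A plain Cargo release build should be all that is needed (assuming no non-default features):

```shell
cargo build --release
```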
Numerical Parity
Maximum absolute difference between the Python reference outputs and this crate's outputs: < 3×10⁻⁶, i.e. at the f32 precision limit.
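Parity at that tolerance can be verified with a simple max-absolute-difference comparison between the two implementations' outputs. A sketch (the logit values below are hypothetical placeholders, not real model outputs):

```rust
// Report the largest element-wise absolute gap between two output vectors.
fn max_abs_diff(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y).abs()).fold(0.0, f32::max)
}

fn main() {
    // Hypothetical logits from the PyTorch reference and from this crate.
    let py = [0.123_456f32, -1.987_654, 3.141_592];
    let rs = [0.123_457f32, -1.987_653, 3.141_590];
    let diff = max_abs_diff(&py, &rs);
    assert!(diff < 3e-6, "outputs diverged: {diff}");
    println!("max |py - rs| = {diff:e}");
}
```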
Pretrained Weights
Available on HuggingFace.
Citation
Author
License
Apache-2.0