# irithyll-core
Standalone streaming ML engine for no_std + alloc targets. Train and deploy
gradient boosted trees on anything with a heap allocator — microcontrollers,
WASM, embedded Linux, or full desktops.
## What's in the box
- SGBT ensemble — streaming gradient boosted trees with 10 variants (distributional, MoE, multiclass, quantile, bagged, parallel, ARF, adaptive)
- Hoeffding trees — statistically sound split decisions via Hoeffding bound
- Histogram binning — uniform, categorical, quantile sketch, k-means, SIMD
- Drift detection — ADWIN, Page-Hinkley, DDM with automatic tree replacement
- Reservoir computing — NG-RC (time-delay polynomial) and ESN (cycle reservoir) with RLS readout
- State space models — selective SSM with diagonal A, ZOH discretization, input-dependent gating
- Spiking neural networks — `SpikeNetFixed` with Q1.14 integer LIF neurons, e-prop learning, delta encoding (64 neurons in 22KB)
- Loss functions — squared, logistic, Huber, softmax, expectile, quantile
- Packed inference — 12-byte f32 nodes (66ns predict on Cortex-M0+) and 8-byte int16 nodes (integer-only traversal, zero float ops)
- Zero-copy views — `EnsembleView::from_bytes(&[u8])`, no allocation after validation
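The split rule behind the Hoeffding trees above can be sketched in a few lines. This is a generic illustration of the Hoeffding bound, not irithyll-core's actual API; `hoeffding_bound` and `should_split` are hypothetical names:

```rust
/// Hoeffding bound: with probability 1 - delta, the true mean of a
/// random variable with range `r` lies within `epsilon` of the mean
/// observed over `n` samples.
fn hoeffding_bound(r: f64, delta: f64, n: u64) -> f64 {
    ((r * r * (1.0 / delta).ln()) / (2.0 * n as f64)).sqrt()
}

/// A Hoeffding tree splits a leaf once the gap between the best and
/// second-best candidate split's gain exceeds the bound, so the choice
/// made on a sample stream matches the batch choice with high probability.
fn should_split(best_gain: f64, second_gain: f64, r: f64, delta: f64, n: u64) -> bool {
    best_gain - second_gain > hoeffding_bound(r, delta, n)
}

fn main() {
    // Gain range 1.0, 1% failure probability, 1000 samples observed.
    let eps = hoeffding_bound(1.0, 0.01, 1000);
    println!("epsilon = {eps:.4}");
    println!("split? {}", should_split(0.30, 0.22, 1.0, 0.01, 1000));
}
```

The bound shrinks as `1/sqrt(n)`, which is why streaming trees can commit to splits early without revisiting old samples.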
## Feature flags
| Feature | Default | What it enables |
|---|---|---|
| `alloc` | No | Training: histograms, trees, ensembles, drift detection, reservoir, SSM, SNN |
| `std` | No | Implies `alloc`. HashMap-based named features, SIMD runtime detection |
| `serde` | No | Serialize/deserialize configs and model state |
| `parallel` | No | Rayon-based parallel tree training |
| `kmeans-binning` | No | K-means histogram binning strategy |
| `simd` | No | AVX2 histogram acceleration (requires `std`) |
Without any features, irithyll-core provides packed inference only — it runs on
bare metal with zero dependencies beyond `libm`.
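To make the zero-dependency inference path concrete, here is a sketch of a 12-byte packed node and its allocation-free traversal. The field layout and leaf convention are assumptions for illustration, not irithyll-core's actual binary format:

```rust
/// Illustrative 12-byte packed tree node (the crate's real layout may
/// differ). Leaves are marked with feature == u16::MAX, and the right
/// child is stored adjacent to the left, so one index covers both.
#[repr(C)]
#[derive(Clone, Copy)]
struct Node {
    feature: u16,   // u16::MAX marks a leaf
    left: u16,      // index of left child; right child is left + 1
    threshold: f32, // split threshold (unused in leaves)
    value: f32,     // leaf prediction (unused in internal nodes)
}

const LEAF: u16 = u16::MAX;

/// Allocation-free traversal: walk from the root until a leaf is hit.
/// No heap, no recursion — suitable for bare-metal targets.
fn predict(nodes: &[Node], x: &[f32]) -> f32 {
    let mut i = 0usize;
    loop {
        let n = nodes[i];
        if n.feature == LEAF {
            return n.value;
        }
        i = if x[n.feature as usize] < n.threshold {
            n.left as usize
        } else {
            n.left as usize + 1
        };
    }
}

fn main() {
    // A depth-1 stump: x[0] < 0.5 ? -1.0 : 1.0
    let nodes = [
        Node { feature: 0, left: 1, threshold: 0.5, value: 0.0 },
        Node { feature: LEAF, left: 0, threshold: 0.0, value: -1.0 },
        Node { feature: LEAF, left: 0, threshold: 0.0, value: 1.0 },
    ];
    assert_eq!(core::mem::size_of::<Node>(), 12);
    println!("{}", predict(&nodes, &[0.3]));
    println!("{}", predict(&nodes, &[0.9]));
}
```

Storing siblings adjacently halves the child-index storage, which is how a node fits in 12 bytes with a full f32 threshold and value.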
## Quick start
Training (requires the `alloc` feature):

```rust
use irithyll_core::{SGBT, SGBTConfig, SampleRef, SquaredLoss};

let config = SGBTConfig::builder()
    .n_steps(32)
    .learning_rate(0.1)
    .build()
    .unwrap();

let mut model = SGBT::with_loss(config, SquaredLoss);

// Train one sample at a time
let sample = SampleRef::new(&[0.5, 1.2, -0.3], 1.0);
model.train_one(&sample);
let prediction = model.predict(&sample);
```
Inference on embedded (no features needed):

```rust
use irithyll_core::EnsembleView;

// Load packed binary exported from a trained model
let packed_bytes: &[u8] = include_bytes!("model.bin");
let view = EnsembleView::from_bytes(packed_bytes).unwrap();
let prediction = view.predict(&[0.5, 1.2, -0.3]);
```
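The Q1.14 integer LIF update mentioned in the feature list can also be sketched without any crate APIs. This is a generic fixed-point leaky integrate-and-fire step, not `SpikeNetFixed`'s internals; all names and constants here are illustrative:

```rust
/// Q1.14 fixed point stores x as round(x * 2^14) in an i16, so the
/// whole neuron update runs on integer ops only — no float hardware.
const FRAC_BITS: u32 = 14;
const ONE: i16 = 1 << FRAC_BITS; // 1.0 in Q1.14

/// Convert a float to Q1.14 (for setting up constants off-device).
fn q14(x: f32) -> i16 {
    (x * ONE as f32).round() as i16
}

/// One leaky integrate-and-fire step: v <- decay*v + input, emitting a
/// spike and resetting to 0 when v crosses the threshold.
fn lif_step(v: &mut i16, input: i16, decay: i16, threshold: i16) -> bool {
    // Widen to i32 for the multiply, then shift back down to Q1.14.
    let leaked = ((*v as i32 * decay as i32) >> FRAC_BITS) as i16;
    let next = leaked.saturating_add(input);
    if next >= threshold {
        *v = 0;
        true
    } else {
        *v = next;
        false
    }
}

fn main() {
    let (decay, threshold, input) = (q14(0.9), q14(0.5), q14(0.2));
    let mut v: i16 = 0;
    for t in 0..6 {
        let fired = lif_step(&mut v, input, decay, threshold);
        println!("t={t} v={:.3} fired={fired}", v as f32 / ONE as f32);
    }
}
```

With a 2-byte membrane potential and integer-only arithmetic per step, a budget like 64 neurons in 22KB (state plus weights plus code) becomes plausible on a Cortex-M class part.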
## License
MIT OR Apache-2.0