Core types and inference engine for irithyll streaming ML.
irithyll-core is the #![no_std] foundation shared by the full irithyll
crate and embedded targets. It provides loss functions, the streaming
Observation trait, a compact 12-byte packed node format, and branch-free
ensemble traversal for deploying trained models on bare metal
(Cortex-M0+, 32 KB flash).
The crate has a hard dependency boundary: only libm is mandatory. All
dynamic-allocation paths (histogram binning, tree construction, drift
detectors, neural architectures) gate on the alloc feature.
§Feature Flags
| Feature | Description |
|---|---|
| alloc | Enables dynamic-allocation types: trees, ensembles, drift, neural |
| std | Enables alloc plus standard I/O (required for the full irithyll crate) |
| serde | Derives Serialize/Deserialize on config and snapshot types |
| kmeans-binning | K-means histogram binning strategy (requires alloc) |
| parallel | Rayon-parallel tree training (requires alloc) |
| simd | AVX2-accelerated histogram accumulation (requires std) |
| simd-avx2 | Explicit AVX2 SIMD intrinsics (requires std) |
| embedded-bench | Cortex-M semihosting bench helpers |
| iai-bench | iai-callgrind regression benchmarks |
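For a bare-metal target, a consumer would typically disable default features and rely only on the no_std core. A hypothetical Cargo.toml sketch (the version number and the set of default features are assumptions, not taken from the crate):

```toml
[dependencies]
# Bare-metal: no_std core only, pulling in just libm.
irithyll-core = { version = "0.1", default-features = false }

# On hosted targets, opt back into the allocation-gated types instead:
# irithyll-core = { version = "0.1", features = ["alloc"] }
```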
§Embedded Inference
Train a model with the full irithyll crate, then export:
```rust
use irithyll_core::{EnsembleView, FormatError};

fn main() -> Result<(), FormatError> {
    // packed_bytes is a &[u8] from irithyll::export_embedded()
    let packed_bytes: &[u8] = &[];
    let view = EnsembleView::from_bytes(packed_bytes)?;
    let _prediction = view.predict(&[1.0f32, 2.0, 3.0]);
    Ok(())
}
```

The packed format stores each node in 12 bytes (5 nodes per 64-byte cache line).
Traversal is branch-free: child selection uses cmov/csel-equivalent
conditionals that avoid pipeline stalls on Cortex-M.
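To illustrate the idea, here is a hypothetical sketch of a 12-byte node layout and branchless descent. The actual field layout is defined by the packed module; the field names, leaf encoding, and child-indexing scheme below are all assumptions for illustration:

```rust
/// Hypothetical 12-byte node layout (4 + 4 + 2 + 2 bytes); the real
/// `irithyll_core::packed::PackedNode` layout may differ.
#[derive(Clone, Copy)]
#[repr(C)]
struct PackedNode {
    threshold: f32,  // split threshold; reused as the output value at leaves
    left_child: u32, // index of the left child; right child is left_child + 1
    feature: u16,    // feature index to compare against the threshold
    flags: u16,      // bit 0 marks a leaf
}

const LEAF: u16 = 1;

/// Branch-free descent: the comparison yields 0 or 1, which is added to
/// the left-child index, so the compiler can emit cmov/csel instead of
/// a conditional jump.
fn predict(nodes: &[PackedNode], features: &[f32]) -> f32 {
    let mut idx = 0usize;
    while nodes[idx].flags & LEAF == 0 {
        let node = nodes[idx];
        let go_right = (features[node.feature as usize] > node.threshold) as usize;
        idx = node.left_child as usize + go_right;
    }
    nodes[idx].threshold
}

fn main() {
    // Tiny two-leaf tree: x[0] <= 0.5 -> 10.0, x[0] > 0.5 -> 20.0.
    let nodes = [
        PackedNode { threshold: 0.5, left_child: 1, feature: 0, flags: 0 },
        PackedNode { threshold: 10.0, left_child: 0, feature: 0, flags: LEAF },
        PackedNode { threshold: 20.0, left_child: 0, feature: 0, flags: LEAF },
    ];
    assert_eq!(core::mem::size_of::<PackedNode>(), 12);
    assert_eq!(predict(&nodes, &[0.3]), 10.0);
    assert_eq!(predict(&nodes, &[0.9]), 20.0);
}
```

With `#[repr(C)]` this struct is exactly 12 bytes (alignment 4), and the arithmetic child selection keeps the hot traversal loop free of data-dependent branches.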
Re-exports§
- pub use error::FormatError;
- pub use packed::EnsembleHeader;
- pub use packed::PackedNode;
- pub use packed::TreeEntry;
- pub use packed_i16::PackedNodeI16;
- pub use packed_i16::QuantizedEnsembleHeader;
- pub use view::EnsembleView;
- pub use view_i16::QuantizedEnsembleView;
- pub use loss::Loss;
- pub use loss::LossType;
- pub use sample::Sample; alloc
- pub use sample::Observation;
- pub use sample::SampleRef;
- pub use drift::DriftSignal;
- pub use drift::DriftDetector; alloc
- pub use drift::DriftDetectorState; alloc
- pub use error::ConfigError; alloc
- pub use error::IrithyllError; alloc
- pub use error::Result; alloc
Modules§
- attention
alloc - Unified streaming linear attention engine.
- continual
alloc - Continual learning strategies for streaming neural models.
- drift
- Concept drift detection algorithms.
- ensemble
alloc - SGBT ensemble orchestrator – the core boosting loop.
- error
- Error types for irithyll-core.
- feature
alloc - Feature type declarations for streaming tree construction.
- histogram
alloc - Histogram-based feature binning for streaming tree construction.
- learner
alloc - Unified streaming learner trait for polymorphic model composition.
- loss
- Loss functions for gradient boosting.
- lstm
alloc - sLSTM (stabilized LSTM) cell with exponential gating and log-domain stabilization.
- math
- Platform-agnostic f64 math operations.
- mgrade
alloc - mGRADE (Minimal Recurrent Gating with Delay Convolutions) core cells.
- packed
- 12-byte packed node format and ensemble binary layout.
- packed_i16
- 8-byte quantized packed node format for integer-only inference.
- quantize
- f64 → f32 quantization utilities for packed export.
- reservoir
alloc - Reservoir computing primitives for streaming temporal models.
- rng
- Deterministic xorshift64 PRNG for reproducible initialization.
- sample
- Core observation trait and zero-copy sample types.
- simd
- SIMD-accelerated math primitives for neural forward passes.
- snn
alloc - Spiking Neural Networks with online e-prop learning.
- ssm
alloc - State Space Models for streaming temporal feature extraction.
- streaming_primitives
- Shared streaming primitives for irithyll-core model building blocks.
- traverse
- Branch-free tree traversal for packed nodes.
- traverse_i16
- Branch-free tree traversal for quantized i16 packed nodes.
- tree
alloc - Streaming decision trees with Hoeffding-bound split decisions.
- turbo_quant
alloc - TurboQuant multi-mode weight quantization with randomized Hadamard rotation.
- view
- Zero-copy, zero-alloc inference view over a packed ensemble binary.
- view_i16
- Zero-copy, zero-alloc inference view over a quantized (int16) ensemble binary.
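The rng module names the classic xorshift64 generator, which is simple enough to sketch. Only the algorithm itself (shift constants 13, 7, 17) is standard; the struct and method names below are assumptions, not the crate's actual API:

```rust
/// Minimal xorshift64 step; not the concrete `irithyll_core::rng` API.
struct XorShift64 {
    state: u64,
}

impl XorShift64 {
    /// The state must be nonzero: zero is the one fixed point of xorshift,
    /// so a zero seed is remapped to an arbitrary nonzero constant.
    fn new(seed: u64) -> Self {
        Self { state: if seed == 0 { 0x9E37_79B9_7F4A_7C15 } else { seed } }
    }

    fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
}

fn main() {
    // Same seed -> identical stream, which is what makes weight
    // initialization reproducible across runs and targets.
    let (mut a, mut b) = (XorShift64::new(42), XorShift64::new(42));
    for _ in 0..4 {
        assert_eq!(a.next_u64(), b.next_u64());
    }
}
```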