Crate irithyll_core


Core types and inference engine for irithyll streaming ML.

irithyll-core is the #![no_std] foundation shared by the full irithyll crate and embedded targets. It provides loss functions, the streaming Observation trait, a compact 12-byte packed node format, and branch-free ensemble traversal for deploying trained models on bare metal (Cortex-M0+, 32 KB flash).

The crate has a hard dependency boundary: only libm is mandatory. All dynamic-allocation paths (histogram binning, tree construction, drift detectors, neural architectures) gate on the alloc feature.
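The gating pattern described above can be sketched as follows. This is an illustrative example of the standard `#[cfg(feature = "alloc")]` mechanism, not irithyll-core's actual code; the item names here are hypothetical:

```rust
// Sketch of feature-gating alloc-dependent items in a no_std crate.
// `squared_error` and `GrowingTree` are illustrative names, not
// irithyll_core's real API.

// Always available: depends only on `core` (plus libm for transcendentals).
pub fn squared_error(pred: f64, target: f64) -> f64 {
    let d = pred - target;
    d * d
}

// Compiled only when the `alloc` feature is enabled.
#[cfg(feature = "alloc")]
pub mod tree {
    extern crate alloc;
    use alloc::vec::Vec;

    pub struct GrowingTree {
        // Dynamic allocation: unavailable on a pure no_std target.
        pub nodes: Vec<u32>,
    }
}

fn main() {
    println!("{}", squared_error(3.0, 1.0)); // prints 4
}
```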

§Feature Flags

| Feature | Description |
|---------|-------------|
| `alloc` | Enables dynamic-allocation types: trees, ensembles, drift, neural |
| `std` | Enables `alloc` plus standard I/O (required for the full `irithyll` crate) |
| `serde` | Derives `Serialize`/`Deserialize` on config and snapshot types |
| `kmeans-binning` | K-means histogram binning strategy (requires `alloc`) |
| `parallel` | Rayon-parallel tree training (requires `alloc`) |
| `simd` | AVX2 histogram accumulation acceleration (requires `std`) |
| `simd-avx2` | Explicit AVX2 SIMD intrinsics (requires `std`) |
| `embedded-bench` | Cortex-M semihosting bench helpers |
| `iai-bench` | iai-callgrind regression bench source |

§Embedded Inference

Train a model with the full irithyll crate, then export:

use irithyll_core::{EnsembleView, FormatError};

fn main() -> Result<(), FormatError> {
    // `packed_bytes` is a &[u8] produced by irithyll::export_embedded()
    // in the training pipeline (an empty slice here, for illustration).
    let packed_bytes: &[u8] = &[];
    let view = EnsembleView::from_bytes(packed_bytes)?;
    let _prediction = view.predict(&[1.0f32, 2.0, 3.0]);
    Ok(())
}

The packed format stores each node in 12 bytes (5 nodes per cache line). Traversal is branch-free: child selection uses cmov/csel-equivalent conditionals that avoid pipeline stalls on Cortex-M.
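The branch-free child selection described above can be sketched as follows. This is a minimal illustration of the technique, not irithyll-core's actual node layout: the `Node` fields and `predict` helper are assumptions, and only the child-select step (not the leaf check) is branchless here:

```rust
// Illustrative sketch of branch-free descent over a packed node array.
// The field layout is hypothetical, not irithyll_core's 12-byte format.
#[derive(Clone, Copy)]
struct Node {
    feature: u16,   // index into the input feature vector
    threshold: f32, // split threshold
    left: u16,      // index of left child; right child is left + 1
    is_leaf: u16,   // 1 if this node is a leaf
    value: f32,     // leaf prediction (unused on internal nodes)
}

fn predict(nodes: &[Node], features: &[f32]) -> f32 {
    let mut idx = 0usize;
    loop {
        let n = nodes[idx];
        if n.is_leaf == 1 {
            return n.value;
        }
        // Branchless child select: the comparison yields 0 or 1, which the
        // compiler can lower to csel/cmov instead of a conditional jump.
        let go_right = (features[n.feature as usize] > n.threshold) as usize;
        idx = n.left as usize + go_right;
    }
}

fn main() {
    // Tiny hand-built tree: root splits on feature 0 at 0.5.
    let nodes = [
        Node { feature: 0, threshold: 0.5, left: 1, is_leaf: 0, value: 0.0 },
        Node { feature: 0, threshold: 0.0, left: 0, is_leaf: 1, value: -1.0 }, // left leaf
        Node { feature: 0, threshold: 0.0, left: 0, is_leaf: 1, value: 1.0 },  // right leaf
    ];
    println!("{}", predict(&nodes, &[0.9])); // prints 1 (right leaf)
}
```

Laying sibling leaves adjacently (`left` and `left + 1`) is what makes the index arithmetic work; the real packed format may encode children differently.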

§Re-exports

pub use error::FormatError;
pub use packed::EnsembleHeader;
pub use packed::PackedNode;
pub use packed::TreeEntry;
pub use packed_i16::PackedNodeI16;
pub use packed_i16::QuantizedEnsembleHeader;
pub use view::EnsembleView;
pub use view_i16::QuantizedEnsembleView;
pub use loss::Loss;
pub use loss::LossType;
pub use sample::Sample;  // alloc
pub use sample::Observation;
pub use sample::SampleRef;
pub use drift::DriftSignal;
pub use drift::DriftDetector;  // alloc
pub use drift::DriftDetectorState;  // alloc
pub use error::ConfigError;  // alloc
pub use error::IrithyllError;  // alloc
pub use error::Result;  // alloc

§Modules

attention (alloc)
Unified streaming linear attention engine.
continual (alloc)
Continual learning strategies for streaming neural models.
drift
Concept drift detection algorithms.
ensemble (alloc)
SGBT ensemble orchestrator – the core boosting loop.
error
Error types for irithyll-core.
feature (alloc)
Feature type declarations for streaming tree construction.
histogram (alloc)
Histogram-based feature binning for streaming tree construction.
learner (alloc)
Unified streaming learner trait for polymorphic model composition.
loss
Loss functions for gradient boosting.
lstm (alloc)
sLSTM (stabilized LSTM) cell with exponential gating and log-domain stabilization.
math
Platform-agnostic f64 math operations.
mgrade (alloc)
mGRADE (Minimal Recurrent Gating with Delay Convolutions) core cells.
packed
12-byte packed node format and ensemble binary layout.
packed_i16
8-byte quantized packed node format for integer-only inference.
quantize
f64 → f32 quantization utilities for packed export.
reservoir (alloc)
Reservoir computing primitives for streaming temporal models.
rng
Deterministic xorshift64 PRNG for reproducible initialization.
sample
Core observation trait and zero-copy sample types.
simd
SIMD-accelerated math primitives for neural forward passes.
snn (alloc)
Spiking Neural Networks with online e-prop learning.
ssm (alloc)
State Space Models for streaming temporal feature extraction.
streaming_primitives
Shared streaming primitives for irithyll-core model building blocks.
traverse
Branch-free tree traversal for packed nodes.
traverse_i16
Branch-free tree traversal for quantized i16 packed nodes.
tree (alloc)
Streaming decision trees with Hoeffding-bound split decisions.
turbo_quant (alloc)
TurboQuant multi-mode weight quantization with randomized Hadamard rotation.
view
Zero-copy, zero-alloc inference view over a packed ensemble binary.
view_i16
Zero-copy, zero-alloc inference view over a quantized (int16) ensemble binary.