
Trait StreamingLearner
pub trait StreamingLearner: Send + Sync {
    // Required methods
    fn train_one(&mut self, features: &[f64], target: f64, weight: f64);
    fn predict(&self, features: &[f64]) -> f64;
    fn n_samples_seen(&self) -> u64;
    fn reset(&mut self);

    // Provided methods
    fn train(&mut self, features: &[f64], target: f64) { ... }
    fn predict_batch(&self, feature_matrix: &[&[f64]]) -> Vec<f64> { ... }
    fn diagnostics_array(&self) -> [f64; 5] { ... }
    fn adjust_config(&mut self, _lr_multiplier: f64, _lambda_delta: f64) { ... }
    fn apply_structural_change(&mut self, _depth_delta: i32, _steps_delta: i32) { ... }
    fn replacement_count(&self) -> u64 { ... }
}

Object-safe trait for any streaming (online) machine learning model.

All methods use &self or &mut self with concrete return types, ensuring the trait can be used behind Box<dyn StreamingLearner> for runtime-polymorphic stacking ensembles.

The Send + Sync supertraits allow learners to be shared across threads (e.g., for parallel prediction in async pipelines).
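The object-safe shape described above can be sketched with a trimmed-down version of the trait. Everything below (the two-method trait, `MeanLearner`, `ensemble_predict`) is an illustrative stand-in, not the crate's actual code:

```rust
// Trimmed illustration of the object-safe shape: concrete return types and
// no generic methods, so `Box<dyn StreamingLearner>` is usable.
pub trait StreamingLearner: Send + Sync {
    fn train_one(&mut self, features: &[f64], target: f64, weight: f64);
    fn predict(&self, features: &[f64]) -> f64;
}

// A trivial learner that tracks a weighted running mean of the target.
#[derive(Default)]
pub struct MeanLearner {
    sum: f64,
    weight: f64,
}

impl StreamingLearner for MeanLearner {
    fn train_one(&mut self, _features: &[f64], target: f64, weight: f64) {
        self.sum += weight * target;
        self.weight += weight;
    }
    fn predict(&self, _features: &[f64]) -> f64 {
        if self.weight > 0.0 { self.sum / self.weight } else { 0.0 }
    }
}

// Runtime-polymorphic ensemble: average predictions from boxed learners.
pub fn ensemble_predict(learners: &[Box<dyn StreamingLearner>], x: &[f64]) -> f64 {
    let total: f64 = learners.iter().map(|l| l.predict(x)).sum();
    total / learners.len() as f64
}
```

Because every method takes `&self` or `&mut self` and returns a concrete type, the heterogeneous `Vec<Box<dyn StreamingLearner>>` pattern used by stacking ensembles compiles without generics at the call site.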

§Required Methods

| Method | Purpose |
|---|---|
| train_one | Ingest a single weighted observation |
| predict | Produce a prediction for a feature vector |
| n_samples_seen | Total observations ingested so far |
| reset | Clear all learned state, returning to a fresh model |

§Default Methods

| Method | Purpose |
|---|---|
| train | Convenience wrapper calling train_one with unit weight |
| predict_batch | Map predict over a slice of feature vectors |
| diagnostics_array | Raw diagnostic signals for adaptive tuning (all zeros by default) |
| adjust_config | Apply smooth LR/lambda adjustments (no-op by default) |
| apply_structural_change | Apply depth/steps changes at replacement boundaries (no-op by default) |
| replacement_count | Total internal model replacements (0 by default) |

§Required Methods

fn train_one(&mut self, features: &[f64], target: f64, weight: f64)

Train on a single observation with explicit sample weight.

This is the fundamental training primitive. All streaming models must support weighted incremental updates – even if the weight is simply used to scale gradient contributions.

§Arguments
  • features – feature vector for this observation
  • target – target value (regression) or class label (classification)
  • weight – sample weight (1.0 for uniform weighting)
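One way to honor the weight contract is to scale the gradient step directly. The `TinySgd` regressor below is a hypothetical single-feature sketch, not the crate's linear model:

```rust
/// Hypothetical single-feature SGD regressor showing how a sample weight
/// can scale the gradient contribution inside `train_one`.
pub struct TinySgd {
    pub coef: f64,
    pub bias: f64,
    pub lr: f64,
}

impl TinySgd {
    pub fn train_one(&mut self, features: &[f64], target: f64, weight: f64) {
        let pred = self.coef * features[0] + self.bias;
        // Squared-error gradient, scaled by the sample weight:
        // weight = 1.0 is a normal step, weight = 0.0 is a no-op.
        let grad = weight * (pred - target);
        self.coef -= self.lr * grad * features[0];
        self.bias -= self.lr * grad;
    }
}
```

Under this scheme a zero-weight observation leaves the model untouched, and a weight of 2.0 has the same effect as ingesting the observation twice with unit weight at the current parameters.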

fn predict(&self, features: &[f64]) -> f64

Predict the target for the given feature vector.

Returns the raw model output (no loss transform applied). For SGBT this is the sum of tree predictions; for linear models this is the dot product plus bias.


fn n_samples_seen(&self) -> u64

Total number of observations trained on since creation or last reset.


fn reset(&mut self)

Reset the model to its initial (untrained) state.

After calling reset(), the model should behave identically to a freshly constructed instance with the same configuration. In particular, n_samples_seen() must return 0.
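The contract lends itself to a simple property check. The learner below is a minimal hypothetical stand-in whose only purpose is to illustrate "reset restores the freshly constructed state":

```rust
/// Hypothetical learner illustrating the `reset` contract: after `reset`,
/// all learned state matches a freshly constructed instance.
#[derive(Default, PartialEq, Debug)]
pub struct CountingLearner {
    n_seen: u64,
    running_sum: f64,
}

impl CountingLearner {
    pub fn train_one(&mut self, _features: &[f64], target: f64, weight: f64) {
        self.n_seen += 1;
        self.running_sum += weight * target;
    }
    pub fn n_samples_seen(&self) -> u64 {
        self.n_seen
    }
    pub fn reset(&mut self) {
        // Overwrite every piece of learned state with its initial value;
        // configuration fields (if any) would be preserved instead.
        *self = CountingLearner::default();
    }
}
```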

§Provided Methods

fn train(&mut self, features: &[f64], target: f64)

Train on a single observation with unit weight.

Convenience wrapper around train_one that passes weight = 1.0. This is the most common training call in practice.


fn predict_batch(&self, feature_matrix: &[&[f64]]) -> Vec<f64>

Predict for each row in a feature matrix.

Returns a Vec<f64> with one prediction per input row. The default implementation simply maps predict over the slices; concrete implementations may override this for SIMD or batch-optimized prediction paths.

§Arguments
  • feature_matrix – each element is a feature vector (one row)
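The described default is a plain map over rows. This free-function sketch (not the crate's exact method body) takes the single-row predictor as a closure so it can stand alone:

```rust
/// Sketch of the documented default: one `predict` call per row.
/// `predict` stands in for any single-row prediction function.
pub fn predict_batch(
    predict: impl Fn(&[f64]) -> f64,
    feature_matrix: &[&[f64]],
) -> Vec<f64> {
    feature_matrix.iter().map(|row| predict(row)).collect()
}
```

An override would keep the same signature and semantics but could, for example, gather the rows into a contiguous buffer for SIMD evaluation.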

fn diagnostics_array(&self) -> [f64; 5]

Raw diagnostic signals for adaptive tuning.

Returns [residual_alignment, reg_sensitivity, depth_sufficiency, effective_dof, uncertainty]. These five signals drive the diagnostic adaptor in the auto-builder pipeline.

Default: all zeros (model does not provide diagnostics). Models with internal diagnostic caches (e.g. SGBT, DistributionalSGBT) override this to return real computed values.


fn adjust_config(&mut self, _lr_multiplier: f64, _lambda_delta: f64)

Apply smooth learning rate and regularization adjustments.

  • lr_multiplier – scales the current learning rate (1.0 = no change, 0.99 = 1% decrease, 1.01 = 1% increase).
  • lambda_delta – added to the L2 regularization parameter (0.0 = no change, positive = increase, negative = decrease).

Default: no-op. Override for models with adjustable hyperparameters (e.g. SGBT, DistributionalSGBT).
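The multiplicative-LR / additive-lambda split can be sketched as follows. `TunableConfig` and the clamp at zero are assumptions for illustration, not the crate's actual fields or policy:

```rust
/// Hypothetical adjustable config illustrating the documented semantics:
/// the learning rate scales multiplicatively, lambda shifts additively.
pub struct TunableConfig {
    pub lr: f64,
    pub lambda: f64,
}

impl TunableConfig {
    pub fn adjust_config(&mut self, lr_multiplier: f64, lambda_delta: f64) {
        self.lr *= lr_multiplier;
        // Assumed policy: clamp at zero so a negative delta cannot
        // drive the L2 regularization parameter negative.
        self.lambda = (self.lambda + lambda_delta).max(0.0);
    }
}
```

Repeated calls compose naturally: applying `lr_multiplier = 0.99` twice scales the learning rate by 0.99², which is what makes these adjustments "smooth" relative to the discrete structural changes below.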


fn apply_structural_change(&mut self, _depth_delta: i32, _steps_delta: i32)

Apply structural changes at model replacement boundaries.

  • depth_delta – adjust maximum tree depth (+1, -1, or 0).
  • steps_delta – adjust number of ensemble steps (+2, -2, or 0).

Structural changes take effect on the next tree replacement, not immediately. Default: no-op for models without structural config.


fn replacement_count(&self) -> u64

Total number of internal model replacements (e.g. tree replacements triggered by drift detection or max-tree-samples).

External callers (e.g. the auto-builder) use this to detect when a structural boundary has occurred and apply queued structural changes. Default: 0 for models without replacement semantics.
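The caller-side protocol described above can be sketched as a polling loop. `FakeLearner` and `maybe_apply` are hypothetical stand-ins for the real learner and auto-builder:

```rust
/// Hypothetical learner exposing the replacement-count protocol.
#[derive(Default)]
pub struct FakeLearner {
    replacements: u64,
    pub depth: i32,
}

impl FakeLearner {
    pub fn replacement_count(&self) -> u64 {
        self.replacements
    }
    pub fn apply_structural_change(&mut self, depth_delta: i32, _steps_delta: i32) {
        self.depth += depth_delta;
    }
    /// Stands in for a drift-triggered internal tree replacement.
    pub fn simulate_replacement(&mut self) {
        self.replacements += 1;
    }
}

/// Caller-side pattern: apply a queued structural change only when the
/// replacement counter has advanced since the last check.
pub fn maybe_apply(
    learner: &mut FakeLearner,
    last_seen: &mut u64,
    queued_depth_delta: i32,
) -> bool {
    let now = learner.replacement_count();
    if now > *last_seen {
        *last_seen = now;
        learner.apply_structural_change(queued_depth_delta, 0);
        true
    } else {
        false
    }
}
```

Tracking the last-seen counter rather than a boolean flag means the caller also behaves correctly if several replacements happen between polls.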

§Implementors

impl StreamingLearner for StreamingAttentionModel
impl StreamingLearner for AutoTuner
impl StreamingLearner for ContinualLearner
impl StreamingLearner for irithyll::ensemble::adaptive_forest::AdaptiveRandomForest
impl StreamingLearner for irithyll::ensemble::distributional::DistributionalSGBT
impl StreamingLearner for irithyll::ensemble::moe_distributional::MoEDistributionalSGBT
impl StreamingLearner for irithyll::ensemble::stacked::StackedEnsemble
impl StreamingLearner for irithyll_core::ensemble::adaptive_forest::AdaptiveRandomForest
impl StreamingLearner for irithyll_core::ensemble::distributional::DistributionalSGBT
impl StreamingLearner for irithyll_core::ensemble::moe_distributional::MoEDistributionalSGBT
impl StreamingLearner for irithyll_core::ensemble::stacked::StackedEnsemble
impl StreamingLearner for StreamingKAN
impl StreamingLearner for ClassificationWrapper
impl StreamingLearner for KRLS
impl StreamingLearner for StreamingLinearModel
impl StreamingLearner for MondrianForest
impl StreamingLearner for BernoulliNB
impl StreamingLearner for MultinomialNB
impl StreamingLearner for GaussianNB
impl StreamingLearner for LocallyWeightedRegression
impl StreamingLearner for RecursiveLeastSquares
impl StreamingLearner for StreamingPolynomialRegression
impl StreamingLearner for NeuralMoE
impl StreamingLearner for Pipeline
impl StreamingLearner for EchoStateNetwork
impl StreamingLearner for NextGenRC
impl StreamingLearner for SpikeNet
impl StreamingLearner for StreamingMamba
impl StreamingLearner for HoeffdingTreeClassifier
impl StreamingLearner for HoltWinters
impl StreamingLearner for SNARIMAX

    StreamingLearner implementation for SNARIMAX. The features parameter maps to exogenous inputs, and target maps to the observed time series value. Sample weight is accepted but currently unused (all observations are weighted equally in the SGD update).

impl StreamingLearner for StreamingTTT
impl<L> StreamingLearner for irithyll_core::ensemble::adaptive::AdaptiveSGBT<L> where L: Loss
impl<L> StreamingLearner for irithyll_core::ensemble::bagged::BaggedSGBT<L> where L: Loss + Clone
impl<L> StreamingLearner for irithyll_core::ensemble::moe::MoESGBT<L> where L: Loss
impl<L: Loss + Clone> StreamingLearner for irithyll::ensemble::bagged::BaggedSGBT<L>
impl<L: Loss> StreamingLearner for irithyll::ensemble::adaptive::AdaptiveSGBT<L>
impl<L: Loss> StreamingLearner for irithyll::ensemble::moe::MoESGBT<L>
impl<L: Loss> StreamingLearner for SGBTLearner<L>