pub trait StreamingLearner: Send + Sync {
// Required methods
fn train_one(&mut self, features: &[f64], target: f64, weight: f64);
fn predict(&self, features: &[f64]) -> f64;
fn n_samples_seen(&self) -> u64;
fn reset(&mut self);
// Provided methods
fn train(&mut self, features: &[f64], target: f64) { ... }
fn predict_batch(&self, feature_matrix: &[&[f64]]) -> Vec<f64> { ... }
fn diagnostics_array(&self) -> [f64; 5] { ... }
fn adjust_config(&mut self, _lr_multiplier: f64, _lambda_delta: f64) { ... }
fn apply_structural_change(&mut self, _depth_delta: i32, _steps_delta: i32) { ... }
fn replacement_count(&self) -> u64 { ... }
}
Object-safe trait for any streaming (online) machine learning model.
All methods use &self or &mut self with concrete return types,
ensuring the trait can be used behind Box<dyn StreamingLearner> for
runtime-polymorphic stacking ensembles.
The Send + Sync supertraits allow learners to be shared across threads
(e.g., for parallel prediction in async pipelines).
§Required Methods
| Method | Purpose |
|---|---|
| `train_one` | Ingest a single weighted observation |
| `predict` | Produce a prediction for a feature vector |
| `n_samples_seen` | Total observations ingested so far |
| `reset` | Clear all learned state, returning to a fresh model |
§Default Methods
| Method | Purpose |
|---|---|
| `train` | Convenience wrapper calling `train_one` with unit weight |
| `predict_batch` | Map `predict` over a slice of feature vectors |
| `diagnostics_array` | Raw diagnostic signals for adaptive tuning (all zeros by default) |
| `adjust_config` | Apply smooth LR/lambda adjustments (no-op by default) |
| `apply_structural_change` | Apply depth/steps changes at replacement boundaries (no-op by default) |
| `replacement_count` | Total internal model replacements (0 by default) |
Required Methods§
fn train_one(&mut self, features: &[f64], target: f64, weight: f64)
Train on a single observation with explicit sample weight.
This is the fundamental training primitive. All streaming models must support weighted incremental updates – even if the weight is simply used to scale gradient contributions.
§Arguments
- `features` – feature vector for this observation
- `target` – target value (regression) or class label (classification)
- `weight` – sample weight (1.0 for uniform weighting)
fn predict(&self, features: &[f64]) -> f64
Predict the target for the given feature vector.
Returns the raw model output (no loss transform applied). For SGBT this is the sum of tree predictions; for linear models this is the dot product plus bias.
fn n_samples_seen(&self) -> u64
Total number of observations trained on since creation or last reset.
Provided Methods§
fn train(&mut self, features: &[f64], target: f64)
Train on a single observation with unit weight.
Convenience wrapper around train_one that passes
weight = 1.0. This is the most common training call in practice.
fn predict_batch(&self, feature_matrix: &[&[f64]]) -> Vec<f64>
Predict for each row in a feature matrix.
Returns a Vec<f64> with one prediction per input row. The default
implementation simply maps predict over the slices;
concrete implementations may override this for SIMD or batch-optimized
prediction paths.
§Arguments
- `feature_matrix` – each element is a feature vector (one row)
fn diagnostics_array(&self) -> [f64; 5]
Raw diagnostic signals for adaptive tuning.
Returns [residual_alignment, reg_sensitivity, depth_sufficiency, effective_dof, uncertainty]. These five signals drive the
diagnostic adaptor in the auto-builder pipeline.
Default: all zeros (model does not provide diagnostics). Models with internal diagnostic caches (e.g. SGBT, DistributionalSGBT) override this to return real computed values.
fn adjust_config(&mut self, _lr_multiplier: f64, _lambda_delta: f64)
Apply smooth learning rate and regularization adjustments.
- `lr_multiplier` – scales the current learning rate (1.0 = no change, 0.99 = 1% decrease, 1.01 = 1% increase).
- `lambda_delta` – added to the L2 regularization parameter (0.0 = no change, positive = increase, negative = decrease).
Default: no-op. Override for models with adjustable hyperparameters (e.g. SGBT, DistributionalSGBT).
fn apply_structural_change(&mut self, _depth_delta: i32, _steps_delta: i32)
Apply structural changes at model replacement boundaries.
- `depth_delta` – adjust maximum tree depth (+1, -1, or 0).
- `steps_delta` – adjust number of ensemble steps (+2, -2, or 0).
Structural changes take effect on the next tree replacement, not immediately. Default: no-op for models without structural config.
fn replacement_count(&self) -> u64
Total number of internal model replacements (e.g. tree replacements triggered by drift detection or max-tree-samples).
External callers (e.g. the auto-builder) use this to detect when a structural boundary has occurred and apply queued structural changes. Default: 0 for models without replacement semantics.
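One way an external caller might use this counter is to apply a queued structural change exactly once per boundary. A hypothetical sketch (the `StructuralQueue` type and its fields are not part of the crate):

```rust
/// Hypothetical controller: holds one pending (depth_delta, steps_delta)
/// pair and releases it only when replacement_count() advances.
struct StructuralQueue {
    last_seen: u64,
    pending: Option<(i32, i32)>,
}

impl StructuralQueue {
    /// Returns the queued change if a new replacement boundary occurred.
    fn maybe_apply(&mut self, current_count: u64) -> Option<(i32, i32)> {
        if current_count > self.last_seen {
            self.last_seen = current_count;
            self.pending.take() // hand out the change at most once
        } else {
            None
        }
    }
}

fn main() {
    let mut queue = StructuralQueue { last_seen: 0, pending: Some((1, -2)) };
    println!("{:?}", queue.maybe_apply(0)); // no boundary yet: None
    println!("{:?}", queue.maybe_apply(1)); // boundary crossed: Some((1, -2))
    println!("{:?}", queue.maybe_apply(1)); // already applied: None
}
```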
Implementors§
impl StreamingLearner for StreamingAttentionModel
impl StreamingLearner for AutoTuner
impl StreamingLearner for ContinualLearner
impl StreamingLearner for irithyll::ensemble::adaptive_forest::AdaptiveRandomForest
impl StreamingLearner for irithyll::ensemble::distributional::DistributionalSGBT
impl StreamingLearner for irithyll::ensemble::moe_distributional::MoEDistributionalSGBT
impl StreamingLearner for irithyll::ensemble::stacked::StackedEnsemble
impl StreamingLearner for irithyll_core::ensemble::adaptive_forest::AdaptiveRandomForest
impl StreamingLearner for irithyll_core::ensemble::distributional::DistributionalSGBT
impl StreamingLearner for irithyll_core::ensemble::moe_distributional::MoEDistributionalSGBT
impl StreamingLearner for irithyll_core::ensemble::stacked::StackedEnsemble
impl StreamingLearner for StreamingKAN
impl StreamingLearner for ClassificationWrapper
impl StreamingLearner for KRLS
impl StreamingLearner for StreamingLinearModel
impl StreamingLearner for MondrianForest
impl StreamingLearner for BernoulliNB
impl StreamingLearner for MultinomialNB
impl StreamingLearner for GaussianNB
impl StreamingLearner for LocallyWeightedRegression
impl StreamingLearner for RecursiveLeastSquares
impl StreamingLearner for StreamingPolynomialRegression
impl StreamingLearner for NeuralMoE
impl StreamingLearner for Pipeline
impl StreamingLearner for EchoStateNetwork
impl StreamingLearner for NextGenRC
impl StreamingLearner for SpikeNet
impl StreamingLearner for StreamingMamba
impl StreamingLearner for HoeffdingTreeClassifier
impl StreamingLearner for HoltWinters
impl StreamingLearner for SNARIMAX
StreamingLearner implementation for SNARIMAX.
The features parameter maps to exogenous inputs, and target maps to
the observed time series value. Sample weight is accepted but currently
unused (all observations are weighted equally in the SGD update).