```rust
pub trait StreamingLearner: Send + Sync {
    // Required methods
    fn train_one(&mut self, features: &[f64], target: f64, weight: f64);
    fn predict(&self, features: &[f64]) -> f64;
    fn n_samples_seen(&self) -> u64;
    fn reset(&mut self);

    // Provided methods
    fn train(&mut self, features: &[f64], target: f64) { ... }
    fn predict_batch(&self, feature_matrix: &[&[f64]]) -> Vec<f64> { ... }
    fn diagnostics_array(&self) -> [f64; 5] { ... }
    fn adjust_config(&mut self, _lr_multiplier: f64, _lambda_delta: f64) { ... }
    fn apply_structural_change(&mut self, _depth_delta: i32, _steps_delta: i32) { ... }
    fn replacement_count(&self) -> u64 { ... }
    fn check_proactive_prune(&mut self) -> bool { ... }
    fn set_prune_half_life(&mut self, _hl: usize) { ... }
    fn readout_weights(&self) -> Option<&[f64]> { ... }
    fn tree_structure(&self) -> Vec<(usize, usize, f64, f64, u64)> { ... }
}
```
Available on `alloc` only.
Object-safe trait for any streaming (online) machine learning model.
All methods use &self or &mut self with concrete return types,
ensuring the trait can be used behind Box<dyn StreamingLearner> for
runtime-polymorphic stacking ensembles.
The Send + Sync supertraits allow learners to be shared across threads
(e.g., for parallel prediction in async pipelines).
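The object-safety claim can be exercised with a minimal sketch. `RunningMean` is a hypothetical toy learner (not part of this crate), and the trait is re-declared locally with only the required methods so the example is self-contained; in real code you would implement the crate's `StreamingLearner` instead:

```rust
// Reduced local stand-in for the crate's trait (required methods only).
pub trait StreamingLearner: Send + Sync {
    fn train_one(&mut self, features: &[f64], target: f64, weight: f64);
    fn predict(&self, features: &[f64]) -> f64;
    fn n_samples_seen(&self) -> u64;
    fn reset(&mut self);
}

/// Hypothetical learner: ignores features and predicts the weighted
/// running mean of all targets seen so far.
#[derive(Default)]
pub struct RunningMean {
    sum: f64,
    weight_sum: f64,
    n: u64,
}

impl StreamingLearner for RunningMean {
    fn train_one(&mut self, _features: &[f64], target: f64, weight: f64) {
        // The weight scales this observation's contribution.
        self.sum += weight * target;
        self.weight_sum += weight;
        self.n += 1;
    }
    fn predict(&self, _features: &[f64]) -> f64 {
        if self.weight_sum > 0.0 { self.sum / self.weight_sum } else { 0.0 }
    }
    fn n_samples_seen(&self) -> u64 { self.n }
    fn reset(&mut self) { *self = Self::default(); }
}

fn main() {
    // Runtime polymorphism: heterogeneous learners behind trait objects.
    let mut ensemble: Vec<Box<dyn StreamingLearner>> =
        vec![Box::new(RunningMean::default())];
    for learner in &mut ensemble {
        learner.train_one(&[1.0], 2.0, 1.0);
        learner.train_one(&[1.0], 4.0, 1.0);
    }
    println!("{}", ensemble[0].predict(&[1.0])); // prints 3 (mean of 2 and 4)
}
```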
§Required Methods
| Method | Purpose |
|---|---|
| `train_one` | Ingest a single weighted observation |
| `predict` | Produce a prediction for a feature vector |
| `n_samples_seen` | Total observations ingested so far |
| `reset` | Clear all learned state, returning to a fresh model |
§Default Methods
| Method | Purpose |
|---|---|
| `train` | Convenience wrapper calling `train_one` with unit weight |
| `predict_batch` | Map `predict` over a slice of feature vectors |
| `diagnostics_array` | Raw diagnostic signals for adaptive tuning (all zeros by default) |
| `adjust_config` | Apply smooth LR/lambda adjustments (no-op by default) |
| `apply_structural_change` | Apply depth/steps changes at replacement boundaries (no-op by default) |
| `replacement_count` | Total internal model replacements (0 by default) |
| `check_proactive_prune` | Manually trigger a proactive prune check (returns false by default) |
| `set_prune_half_life` | Set the contribution accuracy EWMA half-life (no-op by default) |
| `readout_weights` | RLS readout weights for supervised projection (None by default) |
§Required Methods
```rust
fn train_one(&mut self, features: &[f64], target: f64, weight: f64)
```
Train on a single observation with explicit sample weight.
This is the fundamental training primitive. All streaming models must support weighted incremental updates – even if the weight is simply used to scale gradient contributions.
§Arguments
- `features` – feature vector for this observation
- `target` – target value (regression) or class label (classification)
- `weight` – sample weight (1.0 for uniform weighting)
```rust
fn predict(&self, features: &[f64]) -> f64
```
Predict the target for the given feature vector.
Returns the raw model output (no loss transform applied). For SGBT this is the sum of tree predictions; for linear models this is the dot product plus bias.
```rust
fn n_samples_seen(&self) -> u64
```
Total number of observations trained on since creation or last reset.
§Provided Methods
```rust
fn train(&mut self, features: &[f64], target: f64)
```
Train on a single observation with unit weight.
Convenience wrapper around train_one that passes
weight = 1.0. This is the most common training call in practice.
```rust
fn predict_batch(&self, feature_matrix: &[&[f64]]) -> Vec<f64>
```
Predict for each row in a feature matrix.
Returns a Vec<f64> with one prediction per input row. The default
implementation simply maps predict over the slices;
concrete implementations may override this for SIMD or batch-optimized
prediction paths.
§Arguments
- `feature_matrix` – each element is a feature vector (one row)
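The default mapping behavior can be sketched as a free function, assuming no SIMD or batch-optimized override; `predict_batch` below is a stand-in taking any per-row predict function, not the trait method itself:

```rust
// Sketch of the default: map a per-row predict over the matrix.
fn predict_batch<F: Fn(&[f64]) -> f64>(predict: F, feature_matrix: &[&[f64]]) -> Vec<f64> {
    feature_matrix.iter().map(|&row| predict(row)).collect()
}

fn main() {
    // Toy "model" that predicts the sum of the features.
    let sum_model = |row: &[f64]| row.iter().sum::<f64>();
    let out = predict_batch(sum_model, &[&[1.0, 2.0], &[3.0, 4.0]]);
    println!("{out:?}"); // [3.0, 7.0]
}
```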
```rust
fn diagnostics_array(&self) -> [f64; 5]
```
Raw diagnostic signals for adaptive tuning.
Returns `[residual_alignment, reg_sensitivity, depth_sufficiency, effective_dof, uncertainty]`. These five signals drive the diagnostic adaptor in the auto-builder pipeline.
Default: all zeros (model does not provide diagnostics). Models with internal diagnostic caches (e.g. SGBT, DistributionalSGBT) override this to return real computed values.
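A caller can unpack the fixed-size array by position, using the documented order. The values below are hypothetical; in real code they would come from `learner.diagnostics_array()`:

```rust
fn main() {
    // Hypothetical diagnostics in the documented order.
    let diag: [f64; 5] = [0.8, 0.1, 0.9, 12.0, 0.05];
    // Irrefutable array pattern binds each signal by name.
    let [residual_alignment, reg_sensitivity, depth_sufficiency, effective_dof, uncertainty] = diag;
    println!(
        "align={residual_alignment} reg={reg_sensitivity} depth={depth_sufficiency} \
         dof={effective_dof} unc={uncertainty}"
    );
}
```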
```rust
fn adjust_config(&mut self, _lr_multiplier: f64, _lambda_delta: f64)
```
Apply smooth learning rate and regularization adjustments.
- `lr_multiplier` – scales the current learning rate (1.0 = no change, 0.99 = 1% decrease, 1.01 = 1% increase).
- `lambda_delta` – added to the L2 regularization parameter (0.0 = no change, positive = increase, negative = decrease).
Default: no-op. Override for models with adjustable hyperparameters (e.g. SGBT, DistributionalSGBT).
```rust
fn apply_structural_change(&mut self, _depth_delta: i32, _steps_delta: i32)
```
Apply structural changes at model replacement boundaries.
- `depth_delta` – adjust maximum tree depth (+1, -1, or 0).
- `steps_delta` – adjust number of ensemble steps (+2, -2, or 0).
Structural changes take effect on the next tree replacement, not immediately. Default: no-op for models without structural config.
```rust
fn replacement_count(&self) -> u64
```
Total number of internal model replacements (e.g. tree replacements triggered by drift detection or max-tree-samples).
External callers (e.g. the auto-builder) use this to detect when a structural boundary has occurred and apply queued structural changes. Default: 0 for models without replacement semantics.
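That polling pattern can be sketched as follows. The trait and learner here are reduced, hypothetical stand-ins (not the crate's actual types); the `Driver` watches the replacement counter and applies a queued structural change only after a replacement boundary:

```rust
// Reduced stand-in for the structural subset of the trait.
trait StructuralLearner {
    fn replacement_count(&self) -> u64;
    fn apply_structural_change(&mut self, depth_delta: i32, steps_delta: i32);
}

// Toy learner whose replacement counter is bumped externally for the demo.
struct Toy { replacements: u64, depth: i32 }

impl StructuralLearner for Toy {
    fn replacement_count(&self) -> u64 { self.replacements }
    fn apply_structural_change(&mut self, depth_delta: i32, _steps_delta: i32) {
        self.depth += depth_delta;
    }
}

// Hypothetical external driver (e.g. an auto-builder) holding one queued change.
struct Driver { last_seen: u64, queued: Option<(i32, i32)> }

impl Driver {
    /// Returns true if a queued structural change was applied.
    fn poll(&mut self, learner: &mut dyn StructuralLearner) -> bool {
        let now = learner.replacement_count();
        if now > self.last_seen {
            self.last_seen = now;
            if let Some((d, s)) = self.queued.take() {
                learner.apply_structural_change(d, s); // at the boundary
                return true;
            }
        }
        false
    }
}

fn main() {
    let mut learner = Toy { replacements: 0, depth: 6 };
    let mut driver = Driver { last_seen: 0, queued: Some((1, 0)) };
    assert!(!driver.poll(&mut learner)); // no replacement yet: change stays queued
    learner.replacements = 1;            // e.g. a drift-triggered tree replacement
    assert!(driver.poll(&mut learner));  // boundary detected: change applied
    println!("depth = {}", learner.depth); // depth = 7
}
```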
```rust
fn check_proactive_prune(&mut self) -> bool
```
Manually trigger a proactive prune check.
Returns true if an internal component was pruned/replaced.
Default: no-op (returns false).
```rust
fn set_prune_half_life(&mut self, _hl: usize)
```
Dynamically set the contribution accuracy EWMA half-life.
Recomputes prune_alpha so each correction batch contributes equally
regardless of size. Default: no-op.
```rust
fn readout_weights(&self) -> Option<&[f64]>
```
Return the readout weight vector for supervised projection, if available.
Models with an RLS readout layer return Some(&weights); models without one (KAN, SpikeNet, SGBT, etc.) return None. Used by ProjectedLearner for supervised projection updates.
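A caller consuming this method branches on availability. The `project` helper below is hypothetical (not a crate API); it computes a plain dot product when readout weights exist and propagates None otherwise:

```rust
// Sketch: supervised projection from optional readout weights.
fn project(readout: Option<&[f64]>, features: &[f64]) -> Option<f64> {
    let w = readout?; // learners without an RLS readout yield None
    Some(w.iter().zip(features).map(|(wi, xi)| wi * xi).sum())
}

fn main() {
    assert_eq!(project(Some(&[1.0, 2.0]), &[3.0, 4.0]), Some(11.0)); // 1*3 + 2*4
    assert_eq!(project(None, &[3.0, 4.0]), None);
    println!("ok");
}
```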