
Trait StreamingLearner 

pub trait StreamingLearner: Send + Sync {
    // Required methods
    fn train_one(&mut self, features: &[f64], target: f64, weight: f64);
    fn predict(&self, features: &[f64]) -> f64;
    fn n_samples_seen(&self) -> u64;
    fn reset(&mut self);

    // Provided methods
    fn train(&mut self, features: &[f64], target: f64) { ... }
    fn predict_batch(&self, feature_matrix: &[&[f64]]) -> Vec<f64> { ... }
    fn diagnostics_array(&self) -> [f64; 5] { ... }
    fn adjust_config(&mut self, _lr_multiplier: f64, _lambda_delta: f64) { ... }
    fn apply_structural_change(&mut self, _depth_delta: i32, _steps_delta: i32) { ... }
    fn replacement_count(&self) -> u64 { ... }
    fn check_proactive_prune(&mut self) -> bool { ... }
    fn set_prune_half_life(&mut self, _hl: usize) { ... }
    fn readout_weights(&self) -> Option<&[f64]> { ... }
    fn tree_structure(&self) -> Vec<(usize, usize, f64, f64, u64)> { ... }
}
Available on crate feature alloc only.

Object-safe trait for any streaming (online) machine learning model.

All methods use &self or &mut self with concrete return types, ensuring the trait can be used behind Box<dyn StreamingLearner> for runtime-polymorphic stacking ensembles.

The Send + Sync supertraits allow learners to be shared across threads (e.g., for parallel prediction in async pipelines).
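As an illustrative sketch of this object-safe use, the snippet below re-declares a minimal subset of the trait (only the four required methods; all other names are local to the example) and drives toy learners through Box<dyn StreamingLearner>:

```rust
// Minimal local subset of the trait, re-declared so the sketch is
// self-contained; the real crate provides the full 14-method trait.
pub trait StreamingLearner: Send + Sync {
    fn train_one(&mut self, features: &[f64], target: f64, weight: f64);
    fn predict(&self, features: &[f64]) -> f64;
    fn n_samples_seen(&self) -> u64;
    fn reset(&mut self);
}

// Toy learner (hypothetical): predicts the running mean of targets seen.
#[derive(Default)]
pub struct MeanLearner {
    sum: f64,
    n: u64,
}

impl StreamingLearner for MeanLearner {
    fn train_one(&mut self, _features: &[f64], target: f64, weight: f64) {
        self.sum += weight * target;
        self.n += 1;
    }
    fn predict(&self, _features: &[f64]) -> f64 {
        if self.n == 0 { 0.0 } else { self.sum / self.n as f64 }
    }
    fn n_samples_seen(&self) -> u64 { self.n }
    fn reset(&mut self) { *self = MeanLearner::default(); }
}

// Because the trait is object-safe, heterogeneous learners can be
// stacked behind trait objects and trained uniformly.
pub fn train_all(ensemble: &mut [Box<dyn StreamingLearner>], features: &[f64], target: f64) {
    for learner in ensemble.iter_mut() {
        learner.train_one(features, target, 1.0);
    }
}
```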

§Required Methods

Method            Purpose
train_one         Ingest a single weighted observation
predict           Produce a prediction for a feature vector
n_samples_seen    Total observations ingested so far
reset             Clear all learned state, returning to a fresh model

§Default Methods

Method                    Purpose
train                     Convenience wrapper calling train_one with unit weight
predict_batch             Map predict over a slice of feature vectors
diagnostics_array         Raw diagnostic signals for adaptive tuning (all zeros by default)
adjust_config             Apply smooth LR/lambda adjustments (no-op by default)
apply_structural_change   Apply depth/steps changes at replacement boundaries (no-op by default)
replacement_count         Total internal model replacements (0 by default)
check_proactive_prune     Manually trigger a proactive prune check (false by default)
set_prune_half_life       Set the contribution-accuracy EWMA half-life (no-op by default)
readout_weights           RLS readout weights for supervised projection (None by default)
tree_structure            Per-tree structure diagnostics (empty by default)

Required Methods§


fn train_one(&mut self, features: &[f64], target: f64, weight: f64)

Train on a single observation with explicit sample weight.

This is the fundamental training primitive. All streaming models must support weighted incremental updates – even if the weight is simply used to scale gradient contributions.

§Arguments
  • features – feature vector for this observation
  • target – target value (regression) or class label (classification)
  • weight – sample weight (1.0 for uniform weighting)
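A free-standing sketch of these semantics (not the crate's implementation — the struct and field names are hypothetical) shows how a weight can scale an observation's contribution in an incremental update:

```rust
// Hypothetical weighted running mean: each observation's contribution
// is scaled by its sample weight, as the train_one contract describes.
pub struct WeightedMean {
    weighted_sum: f64,
    weight_total: f64,
    seen: u64,
}

impl WeightedMean {
    pub fn new() -> Self {
        WeightedMean { weighted_sum: 0.0, weight_total: 0.0, seen: 0 }
    }

    // Mirrors fn train_one(&mut self, features, target, weight).
    pub fn train_one(&mut self, _features: &[f64], target: f64, weight: f64) {
        self.weighted_sum += weight * target; // weight scales the contribution
        self.weight_total += weight;
        self.seen += 1;                       // each call is one observation
    }

    pub fn predict(&self, _features: &[f64]) -> f64 {
        if self.weight_total == 0.0 { 0.0 } else { self.weighted_sum / self.weight_total }
    }

    pub fn n_samples_seen(&self) -> u64 {
        self.seen
    }
}
```

With weight = 1.0 throughout, this reduces to the plain running mean, which is exactly why the provided train wrapper passes unit weight.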

fn predict(&self, features: &[f64]) -> f64

Predict the target for the given feature vector.

Returns the raw model output (no loss transform applied). For SGBT this is the sum of tree predictions; for linear models this is the dot product plus bias.


fn n_samples_seen(&self) -> u64

Total number of observations trained on since creation or last reset.


fn reset(&mut self)

Reset the model to its initial (untrained) state.

After calling reset(), the model should behave identically to a freshly constructed instance with the same configuration. In particular, n_samples_seen() must return 0.
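One simple way for an implementor to honor this contract is to overwrite itself with a freshly constructed instance. A toy sketch (the type is hypothetical):

```rust
// Hypothetical toy model illustrating the reset contract: after reset(),
// the model must be indistinguishable from a freshly constructed one.
pub struct TinyModel {
    sum: f64,
    n: u64,
}

impl TinyModel {
    pub fn new() -> Self {
        TinyModel { sum: 0.0, n: 0 }
    }

    pub fn train_one(&mut self, _features: &[f64], target: f64, weight: f64) {
        self.sum += weight * target;
        self.n += 1;
    }

    pub fn n_samples_seen(&self) -> u64 {
        self.n
    }

    // Overwriting with Self::new() guarantees n_samples_seen() returns 0
    // and all learned state is discarded.
    pub fn reset(&mut self) {
        *self = TinyModel::new();
    }
}
```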

Provided Methods§


fn train(&mut self, features: &[f64], target: f64)

Train on a single observation with unit weight.

Convenience wrapper around train_one that passes weight = 1.0. This is the most common training call in practice.
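The assumed shape of this default is a one-line forward to train_one; the sketch below uses local names (TrainSketch, WeightRecorder are illustrative, not from the crate) and a recorder to observe the unit weight passing through:

```rust
// Sketch (assumed shape) of how the provided train forwards to
// train_one with unit weight.
pub trait TrainSketch {
    fn train_one(&mut self, features: &[f64], target: f64, weight: f64);

    // Provided method: the unit-weight convenience wrapper.
    fn train(&mut self, features: &[f64], target: f64) {
        self.train_one(features, target, 1.0);
    }
}

// Recorder that captures the weight actually passed through.
pub struct WeightRecorder {
    pub last_weight: f64,
}

impl TrainSketch for WeightRecorder {
    fn train_one(&mut self, _features: &[f64], _target: f64, weight: f64) {
        self.last_weight = weight;
    }
}
```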


fn predict_batch(&self, feature_matrix: &[&[f64]]) -> Vec<f64>

Predict for each row in a feature matrix.

Returns a Vec<f64> with one prediction per input row. The default implementation simply maps predict over the slices; concrete implementations may override this for SIMD or batch-optimized prediction paths.

§Arguments
  • feature_matrix – each element is a feature vector (one row)
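The assumed shape of the default mapping can be sketched with a local trait (BatchSketch and SumModel are illustrative names, not from the crate):

```rust
// Sketch (assumed shape) of the default batch path: map predict
// over the rows of the feature matrix.
pub trait BatchSketch {
    fn predict(&self, features: &[f64]) -> f64;

    // Default: one predict call per row, collected into a Vec.
    fn predict_batch(&self, feature_matrix: &[&[f64]]) -> Vec<f64> {
        feature_matrix.iter().map(|row| self.predict(row)).collect()
    }
}

// Toy model whose prediction is the sum of its features.
pub struct SumModel;

impl BatchSketch for SumModel {
    fn predict(&self, features: &[f64]) -> f64 {
        features.iter().sum()
    }
}
```

An implementation overriding predict_batch for SIMD would keep this exact signature and contract: one output per input row, in order.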

fn diagnostics_array(&self) -> [f64; 5]

Raw diagnostic signals for adaptive tuning.

Returns [residual_alignment, reg_sensitivity, depth_sufficiency, effective_dof, uncertainty]. These five signals drive the diagnostic adaptor in the auto-builder pipeline.

Default: all zeros (model does not provide diagnostics). Models with internal diagnostic caches (e.g. SGBT, DistributionalSGBT) override this to return real computed values.


fn adjust_config(&mut self, _lr_multiplier: f64, _lambda_delta: f64)

Apply smooth learning rate and regularization adjustments.

  • lr_multiplier – scales the current learning rate (1.0 = no change, 0.99 = 1% decrease, 1.01 = 1% increase).
  • lambda_delta – added to the L2 regularization parameter (0.0 = no change, positive = increase, negative = decrease).

Default: no-op. Override for models with adjustable hyperparameters (e.g. SGBT, DistributionalSGBT).
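The multiplicative-versus-additive split can be sketched with a hypothetical config holder (TuningConfig is an illustrative name; real models keep these values internally):

```rust
// Hypothetical config holder showing adjust_config's semantics:
// the learning rate is scaled, the L2 parameter is shifted.
pub struct TuningConfig {
    pub learning_rate: f64,
    pub lambda: f64,
}

impl TuningConfig {
    pub fn adjust(&mut self, lr_multiplier: f64, lambda_delta: f64) {
        self.learning_rate *= lr_multiplier; // 1.0 = no change
        self.lambda += lambda_delta;         // 0.0 = no change
    }
}
```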


fn apply_structural_change(&mut self, _depth_delta: i32, _steps_delta: i32)

Apply structural changes at model replacement boundaries.

  • depth_delta – adjust maximum tree depth (+1, -1, or 0).
  • steps_delta – adjust number of ensemble steps (+2, -2, or 0).

Structural changes take effect on the next tree replacement, not immediately. Default: no-op for models without structural config.


fn replacement_count(&self) -> u64

Total number of internal model replacements (e.g. tree replacements triggered by drift detection or max-tree-samples).

External callers (e.g. the auto-builder) use this to detect when a structural boundary has occurred and apply queued structural changes. Default: 0 for models without replacement semantics.
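The caller-side pattern can be sketched as a small watcher (BoundaryWatcher is a hypothetical name, not a crate type) that polls the counter and reports when a boundary has been crossed:

```rust
// Hypothetical caller-side pattern: poll replacement_count() and apply
// queued structural deltas only when a replacement has occurred.
pub struct BoundaryWatcher {
    last_count: u64,
}

impl BoundaryWatcher {
    pub fn new() -> Self {
        BoundaryWatcher { last_count: 0 }
    }

    // Returns true when at least one replacement happened since the
    // previous poll; the caller would then invoke apply_structural_change.
    pub fn crossed(&mut self, current_count: u64) -> bool {
        let crossed = current_count > self.last_count;
        self.last_count = current_count;
        crossed
    }
}
```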


fn check_proactive_prune(&mut self) -> bool

Manually trigger a proactive prune check.

Returns true if an internal component was pruned/replaced. Default: no-op (returns false).


fn set_prune_half_life(&mut self, _hl: usize)

Dynamically set the contribution accuracy EWMA half-life.

Recomputes prune_alpha so each correction batch contributes equally regardless of size. Default: no-op.


fn readout_weights(&self) -> Option<&[f64]>

Return the readout weight vector for supervised projection, if available.

Models with an RLS readout layer return Some(&weights). Models without (KAN, SpikeNet, SGBT, etc.) return None. Used by ProjectedLearner for supervised projection updates.


fn tree_structure(&self) -> Vec<(usize, usize, f64, f64, u64)>

Optional tree-level structure diagnostics.

Returns per-tree: (depth, n_leaves, leaf_weight_mean, leaf_weight_std, samples_seen). Default: empty vec (model has no trees).
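A consumer might aggregate these tuples, for example averaging the depth field; a minimal sketch (mean_depth is an illustrative helper, not a crate function):

```rust
// Illustrative aggregation over tree_structure() output: mean tree depth.
// Each tuple is (depth, n_leaves, leaf_weight_mean, leaf_weight_std, samples_seen).
pub fn mean_depth(trees: &[(usize, usize, f64, f64, u64)]) -> f64 {
    if trees.is_empty() {
        return 0.0; // matches the default: no trees, nothing to report
    }
    trees.iter().map(|t| t.0 as f64).sum::<f64>() / trees.len() as f64
}
```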

Implementors§