pub struct LinearLeafModel { /* private fields */ }
Available on alloc only.
Online ridge regression leaf model with AdaGrad optimization.
Learns a linear function w · x + b using Newton-scaled gradient descent
with per-weight AdaGrad accumulators for adaptive learning rates. Features
at different scales converge at their natural rates without manual tuning.
Weights are lazily initialized on the first update call so the model
adapts to whatever dimensionality arrives.
Optional exponential weight decay (decay) gives the model a finite memory
horizon for non-stationary streams.
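The struct's fields are private, but the update it describes can be sketched in a few lines. The following is an illustrative, self-contained reimplementation of the stated behavior (lazy initialization, per-weight AdaGrad accumulators); the names `Sketch`, `g2`, and the 1e-8 stabilizer are assumptions, not the crate's actual internals.

```rust
/// Illustrative sketch of an AdaGrad-scaled linear leaf update (not the crate's code).
struct Sketch {
    lr: f64,
    w: Vec<f64>,
    b: f64,
    g2: Vec<f64>, // per-weight squared-gradient accumulators
    g2_b: f64,    // accumulator for the bias term
}

impl Sketch {
    fn new(lr: f64) -> Self {
        Sketch { lr, w: Vec::new(), b: 0.0, g2: Vec::new(), g2_b: 0.0 }
    }

    fn predict(&self, x: &[f64]) -> f64 {
        self.w.iter().zip(x).map(|(w, x)| w * x).sum::<f64>() + self.b
    }

    fn update(&mut self, x: &[f64], y: f64) {
        // Lazy initialization: grow to whatever dimensionality arrives.
        if self.w.len() < x.len() {
            self.w.resize(x.len(), 0.0);
            self.g2.resize(x.len(), 0.0);
        }
        let err = self.predict(x) - y; // gradient of 0.5 * err^2 w.r.t. the prediction
        for i in 0..x.len() {
            let g = err * x[i];
            self.g2[i] += g * g;
            // AdaGrad: each weight's step shrinks with its own gradient history,
            // so large-scale features take proportionally smaller steps.
            self.w[i] -= self.lr * g / (self.g2[i].sqrt() + 1e-8);
        }
        self.g2_b += err * err;
        self.b -= self.lr * err / (self.g2_b.sqrt() + 1e-8);
    }
}

fn main() {
    let mut m = Sketch::new(0.5);
    // Fit y = 2x from a tiny stream; the prediction approaches 4 at x = 2.
    for _ in 0..200 {
        for &x in &[1.0, 2.0, 3.0] {
            m.update(&[x], 2.0 * x);
        }
    }
    println!("prediction at x = 2: {:.2}", m.predict(&[2.0]));
}
```

The per-weight accumulator is what distinguishes this from plain SGD: a feature that produces large gradients accumulates a large `g2[i]` and automatically receives a smaller effective learning rate.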
Implementations

impl LinearLeafModel

pub fn new(learning_rate: f64, decay: Option<f64>, use_adagrad: bool) -> Self
Create a new linear leaf model with the given base learning rate, optional exponential decay factor, and AdaGrad toggle.
When decay is Some(d) with d in (0, 1), weights are multiplied
by d before each update, giving the model a memory half-life of
ln(2) / ln(1/d) samples.
When use_adagrad is true, per-weight squared gradient accumulators
give each feature its own adaptive learning rate. When false, all
weights share a single Newton-scaled learning rate (plain SGD).
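The half-life formula follows from the decay rule: after n updates a weight retains a factor d^n, and d^n = 1/2 when n = ln(2) / ln(1/d). A standalone helper (the name `half_life` is illustrative, not part of this API) makes the relationship concrete:

```rust
/// Samples until a weight's influence halves under per-update decay by d.
/// d^n = 1/2  →  n = ln(2) / ln(1/d). Illustrative helper, not part of the crate.
fn half_life(d: f64) -> f64 {
    (2.0_f64).ln() / (1.0 / d).ln()
}

fn main() {
    // With d = 0.999, a past sample loses half its influence after ~693 updates.
    println!("{:.1}", half_life(0.999)); // ≈ 692.8
}
```

Choosing d close to 1 gives a long memory horizon; smaller d tracks non-stationary streams more aggressively at the cost of higher variance.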