pub struct AdaptiveSGBT<L: Loss = SquaredLoss> { /* private fields */ }
Available on crate feature alloc only.
SGBT ensemble with an attached learning rate scheduler.
Before each train_one call, AdaptiveSGBT:
- Computes the current prediction to estimate loss.
- Queries the scheduler for the new learning rate.
- Sets the learning rate on the inner SGBT.
- Delegates the actual training step.
This allows any LRScheduler – exponential decay, cosine annealing,
plateau reduction – to drive the ensemble’s learning rate without touching
the core boosting logic.
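The adapt-then-train flow above can be sketched in isolation. This is an illustrative toy, not the crate's code: the LRScheduler trait here is a stand-in with an assumed next_lr method, and ToyModel replaces the inner SGBT with a one-parameter gradient learner so the four steps are runnable end to end.

```rust
// Stand-in for the crate's scheduler trait: maps a loss signal to the next rate.
trait LRScheduler {
    fn next_lr(&mut self, loss: f64) -> f64;
}

// Hypothetical scheduler: multiplies the rate by `gamma` every step.
struct ExponentialDecay { lr: f64, gamma: f64 }

impl LRScheduler for ExponentialDecay {
    fn next_lr(&mut self, _loss: f64) -> f64 {
        self.lr *= self.gamma;
        self.lr
    }
}

// Stand-in for the inner SGBT: a single weight trained by gradient steps.
struct ToyModel { lr: f64, weight: f64 }

impl ToyModel {
    fn predict(&self, x: f64) -> f64 { self.weight * x }
    fn train_one(&mut self, x: f64, y: f64) {
        // Plain gradient step on squared error, standing in for a boosting step.
        let grad = (self.predict(x) - y) * x;
        self.weight -= self.lr * grad;
    }
}

fn adaptive_train_one(
    model: &mut ToyModel,
    sched: &mut dyn LRScheduler,
    x: f64,
    y: f64,
) -> f64 {
    // 1. Estimate loss from the *current* prediction (one-step-lagged signal).
    let err = y - model.predict(x);
    let loss = err * err;
    // 2. Query the scheduler for the new learning rate.
    let lr = sched.next_lr(loss);
    // 3. Set it on the model, then 4. delegate the actual training step.
    model.lr = lr;
    model.train_one(x, y);
    loss
}

fn main() {
    let mut model = ToyModel { lr: 0.0, weight: 0.0 };
    let mut sched = ExponentialDecay { lr: 0.5, gamma: 0.9 };
    let mut last_loss = f64::INFINITY;
    for _ in 0..50 {
        last_loss = adaptive_train_one(&mut model, &mut sched, 2.0, 4.0);
    }
    // The decaying rate lets the toy model converge on weight = 2.
    assert!(last_loss < 1e-3);
}
```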
§Loss Estimation
The scheduler receives squared error (target - prediction)² as its loss
signal. This is computed from the current ensemble prediction before the
training step, making it a one-step-lagged estimate. This works well for
schedulers like PlateauLR that
smooth over many steps.
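As a rough illustration of why a one-step-lagged signal suffices, here is a hypothetical plateau-style scheduler — not PlateauLR's actual implementation; its fields, constants, and behaviour are assumptions for this sketch. It smooths the raw squared-error samples with an exponential moving average, so a single lagged or noisy value cannot trigger a rate cut; only a sustained plateau can.

```rust
// Hedged sketch of a plateau-style scheduler; the real PlateauLR may differ.
struct PlateauSketch {
    lr: f64,
    ema: f64,      // exponential moving average of the loss signal
    best: f64,     // lowest EMA seen so far
    patience: u32, // steps without improvement before reducing
    stale: u32,    // steps since the last improvement
    factor: f64,   // multiplicative lr reduction
}

impl PlateauSketch {
    fn step(&mut self, loss: f64) -> f64 {
        // Smooth the raw per-sample signal; one lagged sample barely moves this.
        self.ema = 0.99 * self.ema + 0.01 * loss;
        if self.ema < self.best {
            self.best = self.ema;
            self.stale = 0;
        } else {
            self.stale += 1;
            if self.stale >= self.patience {
                self.lr *= self.factor;
                self.stale = 0;
            }
        }
        self.lr
    }
}

fn main() {
    let mut s = PlateauSketch {
        lr: 0.1, ema: 1.0, best: f64::INFINITY, patience: 100, stale: 0, factor: 0.5,
    };
    // A flat loss stream never improves the EMA, so the rate halves
    // once per `patience` window: two halvings fit in 300 steps.
    for _ in 0..300 {
        s.step(1.0);
    }
    assert!((s.lr - 0.025).abs() < 1e-12);
}
```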
Implementations§
impl AdaptiveSGBT<SquaredLoss>
pub fn new(config: SGBTConfig, scheduler: impl LRScheduler + 'static) -> Self
Create an adaptive SGBT with squared loss (regression).
The initial learning rate is taken from the config and also stored as
base_lr for reference.
use irithyll::ensemble::adaptive::AdaptiveSGBT;
use irithyll::ensemble::lr_schedule::ConstantLR;
use irithyll::SGBTConfig;
let config = SGBTConfig::builder()
    .n_steps(10)
    .learning_rate(0.05)
    .build()
    .unwrap();
let model = AdaptiveSGBT::new(config, ConstantLR::new(0.05));

impl<L: Loss> AdaptiveSGBT<L>
pub fn with_loss(
    config: SGBTConfig,
    loss: L,
    scheduler: impl LRScheduler + 'static,
) -> Self
Create an adaptive SGBT with a specific loss function.
use irithyll::ensemble::adaptive::AdaptiveSGBT;
use irithyll::ensemble::lr_schedule::LinearDecayLR;
use irithyll::loss::logistic::LogisticLoss;
use irithyll::SGBTConfig;
let config = SGBTConfig::builder()
    .n_steps(10)
    .learning_rate(0.1)
    .build()
    .unwrap();
let model = AdaptiveSGBT::with_loss(
    config, LogisticLoss, LinearDecayLR::new(0.1, 0.001, 10_000),
);

pub fn train_one_obs(&mut self, sample: &impl Observation)
Train on a single observation, adapting the learning rate first.
This is the generic version accepting any Observation implementor.
For the StreamingLearner trait interface, use train_one(features, target, weight).
pub fn current_lr(&self) -> f64
Current learning rate (as last set by the scheduler).
pub fn step_count(&self) -> u64
Total scheduler steps taken (equal to the number of samples trained on).
pub fn scheduler(&self) -> &dyn LRScheduler
Immutable access to the scheduler.
pub fn scheduler_mut(&mut self) -> &mut dyn LRScheduler
Mutable access to the scheduler.
pub fn into_inner(self) -> SGBT<L>
Consume the wrapper and return the inner SGBT model.