pub struct SGBT<L: Loss = SquaredLoss> { /* private fields */ }
Available on crate feature alloc only.
Streaming Gradient Boosted Trees ensemble.
The primary entry point for training and prediction. Generic over L: Loss
so the loss function’s gradient/hessian calls are monomorphized (inlined)
into the boosting hot loop – no virtual dispatch overhead.
The default type parameter L = SquaredLoss means SGBT::new(config)
creates a regression model without specifying the loss type explicitly.
§Examples
use irithyll::{SGBTConfig, SGBT};
// Regression with squared loss (default):
let config = SGBTConfig::builder().n_steps(10).build().unwrap();
let model = SGBT::new(config);

use irithyll::{SGBTConfig, SGBT};
use irithyll::loss::logistic::LogisticLoss;
// Classification with logistic loss -- no Box::new()!
let config = SGBTConfig::builder().n_steps(10).build().unwrap();
let model = SGBT::with_loss(config, LogisticLoss);
Implementations§
impl SGBT<SquaredLoss>

pub fn new(config: SGBTConfig) -> Self
Create a new SGBT ensemble with squared loss (regression).
This is the most common constructor. For classification or custom
losses, use with_loss.
impl<L: Loss> SGBT<L>

pub fn with_loss(config: SGBTConfig, loss: L) -> Self
Create a new SGBT ensemble with a specific loss function.
The loss is stored by value (monomorphized), giving zero-cost gradient/hessian dispatch.
use irithyll::{SGBTConfig, SGBT};
use irithyll::loss::logistic::LogisticLoss;
let config = SGBTConfig::builder().n_steps(10).build().unwrap();
let model = SGBT::with_loss(config, LogisticLoss);

pub fn train_one(&mut self, sample: &impl Observation)
Train on a single observation.
Accepts any type implementing Observation, including Sample,
SampleRef, or tuples like (&[f64], f64) for zero-copy training.
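As a sketch of the streaming pattern (using only the constructors and signatures shown on this page; the feature values are illustrative):

```rust
use irithyll::{SGBTConfig, SGBT};

let config = SGBTConfig::builder().n_steps(10).build().unwrap();
let mut model = SGBT::new(config);

// Zero-copy training: a (&[f64], f64) tuple implements Observation,
// so no owned Sample needs to be constructed.
let features = [1.0, 2.0];
model.train_one(&(&features[..], 3.0));

let prediction = model.predict(&features);
```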
pub fn train_batch<O: Observation>(&mut self, samples: &[O])
Train on a batch of observations.
pub fn train_batch_with_callback<O: Observation, F: FnMut(usize)>(
    &mut self,
    samples: &[O],
    interval: usize,
    callback: F,
)
Train on a batch, invoking callback every interval samples so the caller can yield cooperatively (e.g. to an event loop) during long training runs.
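For example (a sketch built only from the signature above; the callback argument is assumed here to be the count of samples processed so far):

```rust
use irithyll::{SGBTConfig, SGBT};

let config = SGBTConfig::builder().n_steps(10).build().unwrap();
let mut model = SGBT::new(config);
let samples: Vec<(&[f64], f64)> = Vec::new();

// Invoke the callback every 1_000 samples, e.g. to log progress
// or hand control back to a scheduler.
model.train_batch_with_callback(&samples, 1_000, |n| {
    eprintln!("processed {n} samples");
});
```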
pub fn train_batch_subsampled<O: Observation>(
    &mut self,
    samples: &[O],
    max_samples: usize,
)
Train on a random subsample of a batch using reservoir sampling (Algorithm R).
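Algorithm R itself is simple enough to sketch standalone; this version (with a toy linear-congruential generator standing in for whatever RNG the crate actually uses) shows the single-pass, O(k)-memory sampling logic:

```rust
/// Reservoir sampling (Algorithm R): pick k items uniformly at random
/// from a stream of unknown length, in one pass with O(k) memory.
fn reservoir_sample<T: Copy>(stream: impl Iterator<Item = T>, k: usize, seed: u64) -> Vec<T> {
    let mut rng = seed;
    // Minimal LCG standing in for a real RNG (illustration only).
    let mut next = move || {
        rng = rng
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        rng
    };
    let mut reservoir = Vec::with_capacity(k);
    for (i, item) in stream.enumerate() {
        if i < k {
            // Fill the reservoir with the first k items.
            reservoir.push(item);
        } else {
            // Replace a random slot with probability k / (i + 1),
            // which keeps every item equally likely to survive.
            let j = (next() % (i as u64 + 1)) as usize;
            if j < k {
                reservoir[j] = item;
            }
        }
    }
    reservoir
}
```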
pub fn train_batch_subsampled_with_callback<O: Observation, F: FnMut(usize)>(
    &mut self,
    samples: &[O],
    max_samples: usize,
    interval: usize,
    callback: F,
)
Train on a batch with both subsampling and periodic callbacks.
pub fn predict(&self, features: &[f64]) -> f64
Predict the raw output for a feature vector.
Uses auto-calibrated per-feature bandwidths for smooth (soft) routing. Falls back to hard routing before any training has occurred.
pub fn predict_smooth(&self, features: &[f64], bandwidth: f64) -> f64
Predict using sigmoid-blended soft routing with an explicit bandwidth.
pub fn auto_bandwidths(&self) -> &[f64]
Per-feature auto-calibrated bandwidths used by predict().
pub fn predict_interpolated(&self, features: &[f64]) -> f64
Predict with parent-leaf linear interpolation.
pub fn predict_sibling_interpolated(&self, features: &[f64]) -> f64
Predict with sibling-based interpolation for feature-continuous predictions.
pub fn predict_graduated(&self, features: &[f64]) -> f64
Predict with graduated active-shadow blending.
pub fn predict_graduated_sibling_interpolated(&self, features: &[f64]) -> f64
Predict with graduated blending + sibling interpolation.
pub fn predict_transformed(&self, features: &[f64]) -> f64
Predict with loss transform applied (e.g., sigmoid for logistic loss).
pub fn predict_proba(&self, features: &[f64]) -> f64
Predict probability (alias for predict_transformed).
pub fn predict_with_confidence(&self, features: &[f64]) -> (f64, f64)
Predict with confidence estimation.
Returns (prediction, confidence) where confidence = 1 / sqrt(sum_variance).
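The stated formula is small enough to sketch directly. Note that what feeds into sum_variance is not specified on this page; the helper below only illustrates the documented arithmetic, assuming a slice of per-component variances:

```rust
/// Confidence as documented: 1 / sqrt(sum_variance).
/// A sketch of the stated formula, not the library's internals;
/// the source of the individual variances is an assumption here.
fn confidence(variances: &[f64]) -> f64 {
    let sum_variance: f64 = variances.iter().sum();
    1.0 / sum_variance.sqrt()
}
```

Under this formula, larger accumulated variance yields lower confidence, and confidence is +inf when the summed variance is zero.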
pub fn total_leaves(&self) -> usize
Total leaves across all active trees.
pub fn n_samples_seen(&self) -> u64
Total number of samples trained on so far.
pub fn base_prediction(&self) -> f64
The current base prediction.
pub fn is_initialized(&self) -> bool
Whether the base prediction has been initialized.
pub fn config(&self) -> &SGBTConfig
Access the configuration.
pub fn set_learning_rate(&mut self, lr: f64)
Set the learning rate for future boosting rounds.
pub fn steps(&self) -> &[BoostingStep]
Immutable access to the boosting steps.
pub fn feature_importances(&self) -> Vec<f64>
Feature importances based on accumulated split gains across all trees.
Returns normalized importances (sum to 1.0) indexed by feature.
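The normalization contract can be sketched independently of the crate (the gain accumulation itself is internal; this only illustrates the documented sum-to-1.0 behavior):

```rust
/// Normalize accumulated split gains into importances that sum to 1.0,
/// matching the documented contract of feature_importances.
/// A sketch; the real method reads gains from the trained trees.
fn normalize_importances(gains: &[f64]) -> Vec<f64> {
    let total: f64 = gains.iter().sum();
    if total <= 0.0 {
        // No splits yet: every importance is zero.
        return vec![0.0; gains.len()];
    }
    gains.iter().map(|g| g / total).collect()
}
```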
pub fn feature_names(&self) -> Option<&[String]>
Feature names, if configured.
pub fn named_feature_importances(&self) -> Option<Vec<(String, f64)>>
Feature importances paired with their names.
Returns None if feature names are not configured. Otherwise returns
(name, importance) pairs sorted by importance descending.
pub fn train_one_named(&mut self, features: &HashMap<String, f64>, target: f64)
Available on crate feature std only.
Train on a single sample with named features.
Trait Implementations§
Auto Trait Implementations§
impl<L> Freeze for SGBT<L> where L: Freeze
impl<L = SquaredLoss> !RefUnwindSafe for SGBT<L>
impl<L> Send for SGBT<L>
impl<L> Sync for SGBT<L>
impl<L> Unpin for SGBT<L> where L: Unpin
impl<L> UnsafeUnpin for SGBT<L> where L: UnsafeUnpin
impl<L = SquaredLoss> !UnwindSafe for SGBT<L>
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T where T: Clone

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true.
Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self> otherwise.