Struct forust_ml::gradientbooster::GradientBooster
pub struct GradientBooster {
pub objective_type: ObjectiveType,
pub iterations: usize,
pub learning_rate: f32,
pub max_depth: usize,
pub max_leaves: usize,
pub l2: f32,
pub gamma: f32,
pub min_leaf_weight: f32,
pub base_score: f64,
pub nbins: u16,
pub parallel: bool,
pub trees: Vec<Tree>,
}
Gradient Booster object
- `objective_type` - The name of the objective function used to optimize. Valid options are “LogLoss”, to use logistic loss, or “SquaredLoss”, to use squared error.
- `iterations` - Total number of trees to train in the ensemble.
- `learning_rate` - Step size to use at each iteration. Each leaf weight is multiplied by this number. The smaller the value, the more conservative the weights will be.
- `max_depth` - Maximum depth of an individual tree. Valid values are 0 to infinity.
- `max_leaves` - Maximum number of leaves allowed on a tree; this is the total number of final nodes. Valid values are 0 to infinity.
- `l2` - L2 regularization term applied to the weights of the tree. Valid values are 0 to infinity.
- `gamma` - The minimum amount of loss reduction required to further split a node. Valid values are 0 to infinity.
- `min_leaf_weight` - Minimum sum of the hessian values of the loss function required to be in a node.
- `base_score` - The initial prediction value of the model.
- `nbins` - Number of bins used to partition the data. A smaller number results in faster training, while potentially sacrificing accuracy. If there are more bins than unique values in a column, all unique values will be used.
- `parallel` - Whether to run training in parallel.
Fields
objective_type: ObjectiveType
iterations: usize
learning_rate: f32
max_depth: usize
max_leaves: usize
l2: f32
gamma: f32
min_leaf_weight: f32
base_score: f64
nbins: u16
parallel: bool
trees: Vec<Tree>

Implementations
impl GradientBooster
pub fn new(
    objective_type: ObjectiveType,
    iterations: usize,
    learning_rate: f32,
    max_depth: usize,
    max_leaves: usize,
    l2: f32,
    gamma: f32,
    min_leaf_weight: f32,
    base_score: f64,
    nbins: u16,
    parallel: bool
) -> Self
Create a new gradient booster object.
- `objective_type` - The name of the objective function used to optimize. Valid options are “LogLoss”, to use logistic loss, or “SquaredLoss”, to use squared error.
- `iterations` - Total number of trees to train in the ensemble.
- `learning_rate` - Step size to use at each iteration. Each leaf weight is multiplied by this number. The smaller the value, the more conservative the weights will be.
- `max_depth` - Maximum depth of an individual tree. Valid values are 0 to infinity.
- `max_leaves` - Maximum number of leaves allowed on a tree; this is the total number of final nodes. Valid values are 0 to infinity.
- `l2` - L2 regularization term applied to the weights of the tree. Valid values are 0 to infinity.
- `gamma` - The minimum amount of loss reduction required to further split a node. Valid values are 0 to infinity.
- `min_leaf_weight` - Minimum sum of the hessian values of the loss function required to be in a node.
- `base_score` - The initial prediction value of the model.
- `nbins` - Number of bins used to partition the data. A smaller number results in faster training, while potentially sacrificing accuracy. If there are more bins than unique values in a column, all unique values will be used.
- `parallel` - Whether to run training in parallel.
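As a sketch, constructing a booster with explicit hyperparameters might look like this. This assumes `forust_ml` is declared as a Cargo dependency and that `GradientBooster` and `ObjectiveType` are re-exported at the crate root; the argument values are illustrative, not recommended defaults:

```rust
use forust_ml::{GradientBooster, ObjectiveType}; // import paths assumed

fn main() {
    // Arguments follow the order of the `new` signature above.
    let _booster = GradientBooster::new(
        ObjectiveType::LogLoss, // objective_type
        100,  // iterations: total trees in the ensemble
        0.3,  // learning_rate
        5,    // max_depth
        32,   // max_leaves
        1.0,  // l2 regularization
        0.0,  // gamma: minimum loss reduction to split
        1.0,  // min_leaf_weight: minimum hessian sum per node
        0.5,  // base_score: initial prediction
        256,  // nbins
        true, // parallel
    );
}
```

Since the struct also implements `Default`, `GradientBooster::default()` is an alternative when the default hyperparameters are acceptable.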
pub fn fit(
    &mut self,
    data: &Matrix<'_, f64>,
    y: &[f64],
    sample_weight: &[f64]
) -> Result<(), ForustError>
Fit the gradient booster on a provided dataset.
- `data` - Matrix of f64 feature values to train on.
- `y` - Slice of f64 target values, one per row of `data`.
- `sample_weight` - Instance weights to use when training the model. Pass a slice of 1.0 values to weight every record equally.
pub fn predict(&self, data: &Matrix<'_, f64>, parallel: bool) -> Vec<f64>
Generate predictions on data using the gradient booster.
- `data` - Matrix of f64 feature values to generate predictions for.
- `parallel` - Whether to generate the predictions in parallel.
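Putting `fit` and `predict` together, a minimal end-to-end sketch might look like the following. The `Matrix::new(data, rows, cols)` argument order and its column-major layout are assumptions, as is `ForustError` implementing `std::error::Error`; the data is a tiny synthetic example:

```rust
use forust_ml::{GradientBooster, Matrix}; // import paths assumed

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 4 rows, 2 features, stored flat (layout and argument order assumed).
    let flat = vec![1.0, 2.0, 3.0, 4.0, 10.0, 20.0, 30.0, 40.0];
    let data = Matrix::new(&flat, 4, 2);

    let y = vec![0.0, 0.0, 1.0, 1.0];
    let weights = vec![1.0; y.len()]; // weight every record equally

    let mut booster = GradientBooster::default();
    booster.fit(&data, &y, &weights)?;

    // Predict back on the training matrix, in parallel.
    let preds = booster.predict(&data, true);
    println!("{:?}", preds);
    Ok(())
}
```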
pub fn save_booster(&self, path: &str) -> Result<(), ForustError>
Save a booster as a JSON object to a file.
- `path` - Path at which to save the booster.
pub fn from_json(json_str: &str) -> Result<Self, ForustError>
Load a booster from a JSON string.
- `json_str` - JSON string from which to deserialize the booster.
pub fn load_booster(path: &str) -> Result<Self, ForustError>
Load a booster from a path to a json booster object.
- `path` - Path to load the booster from.
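A save/load round trip can be sketched as follows (the file name `booster.json` is hypothetical, and `ForustError` implementing `std::error::Error` is assumed):

```rust
use forust_ml::GradientBooster; // import path assumed

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let booster = GradientBooster::default();

    // Persist the model to disk as JSON, then read it back.
    booster.save_booster("booster.json")?;
    let restored = GradientBooster::load_booster("booster.json")?;

    // The restored model carries the same public hyperparameters.
    assert_eq!(booster.iterations, restored.iterations);
    assert_eq!(booster.max_depth, restored.max_depth);
    Ok(())
}
```

`from_json` covers the case where the serialized booster is already held in memory as a string rather than on disk.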
Trait Implementations
impl Default for GradientBooster
impl<'de> Deserialize<'de> for GradientBooster
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where
    __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
impl Serialize for GradientBooster
Auto Trait Implementations
impl RefUnwindSafe for GradientBooster
impl Send for GradientBooster
impl Sync for GradientBooster
impl Unpin for GradientBooster
impl UnwindSafe for GradientBooster
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.