Trait liblinear::LibLinearCrossValidator

pub trait LibLinearCrossValidator: HasLibLinearProblem + HasLibLinearParameter {
    fn cross_validation(&self, folds: i32) -> Result<Vec<f64>, ModelError>;

    fn find_optimal_constraints_violation_cost_and_loss_sensitivity(
        &self,
        folds: i32,
        start_cost: f64,
        start_loss_sensitivity: f64
    ) -> Result<(f64, f64, f64), ModelError>;
}

Represents a linear model that can be used for cross-validation.

Required methods

fn cross_validation(&self, folds: i32) -> Result<Vec<f64>, ModelError>

Performs k-fold cross-validation and returns the predicted label for each training instance.

The number of folds must be >= 2.
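Since the returned `Vec<f64>` holds one predicted label per training instance, a common next step in classification is to compare it against the true labels. A minimal sketch (the `accuracy` helper is illustrative and not part of this crate; the label vectors would normally come from `model.cross_validation(folds)` and the problem's training data):

```rust
/// Fraction of predictions matching the true labels (within
/// floating-point tolerance). Hypothetical helper, shown only to
/// illustrate consuming the Vec<f64> returned by cross_validation.
pub fn accuracy(predicted: &[f64], actual: &[f64]) -> f64 {
    assert_eq!(predicted.len(), actual.len());
    let hits = predicted
        .iter()
        .zip(actual.iter())
        .filter(|(p, a)| (**p - **a).abs() < f64::EPSILON)
        .count();
    hits as f64 / predicted.len() as f64
}

fn main() {
    // In real use: let predicted = model.cross_validation(5)?;
    let predicted = vec![1.0, 0.0, 1.0, 1.0];
    let actual = vec![1.0, 0.0, 0.0, 1.0];
    println!("accuracy = {}", accuracy(&predicted, &actual)); // 0.75
}
```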

fn find_optimal_constraints_violation_cost_and_loss_sensitivity(
    &self,
    folds: i32,
    start_cost: f64,
    start_loss_sensitivity: f64
) -> Result<(f64, f64, f64), ModelError>

Performs k-fold cross-validation to find the best cost value (parameter C) and regression loss sensitivity (parameter p), and returns a tuple with the following values:

  • The best cost value.
  • The accuracy (classification) or mean squared error (regression) achieved with the best cost value.
  • The best regression loss sensitivity value (only for regression).

Supported Solvers:

  • L2R_LR, L2R_L2LOSS_SVC - Cross-validation is conducted repeatedly with values of _C_ = n * start_cost; n = 1, 2, 4, 8... to find the value with the highest cross-validation accuracy. The procedure stops when the models of all folds become stable or when the cost reaches the upper bound max_cost = 1024. If start_cost is <= 0, an appropriately small value is automatically calculated and used instead.

  • L2R_L2LOSS_SVR - Cross-validation is conducted in two nested loops. The outer loop iterates over the values of _p_ = (n / 20) * max_loss_sensitivity; n = 19, 18, 17...0. For each value of p, the inner loop performs cross-validation with values of _C_ = n * start_cost; n = 1, 2, 4, 8... to find the value with the lowest mean squared error. The procedure stops when the models of all folds become stable or when the cost reaches the upper bound max_cost = 1048576. If start_cost is <= 0, an appropriately small value is automatically calculated and used instead.

    max_loss_sensitivity is automatically calculated from the problem's training data. If start_loss_sensitivity is <= 0, it is set to max_loss_sensitivity. Otherwise, the outer loop starts at the first _p_ = (n / 20) * max_loss_sensitivity that is <= start_loss_sensitivity.
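The two search grids can be illustrated with a short standalone sketch. This assumes the grids used by upstream LIBLINEAR's parameter search (cost doubling from start_cost up to max_cost, and p stepping down as (n / 20) * max_loss_sensitivity); the function names are illustrative and not part of this crate, and the real procedure may also stop early once all folds become stable:

```rust
/// Cost values tried by the search: C = n * start_cost with
/// n = 1, 2, 4, 8, ... capped at max_cost (1024 for classification,
/// 1048576 for regression). Illustrative sketch; early stopping on
/// fold stability is omitted.
pub fn c_search_values(start_cost: f64, max_cost: f64) -> Vec<f64> {
    let mut values = Vec::new();
    let mut c = start_cost;
    while c <= max_cost {
        values.push(c);
        c *= 2.0;
    }
    values
}

/// Loss-sensitivity values for the outer regression loop:
/// p = (n / 20) * max_loss_sensitivity for n = 19, 18, ..., 0.
pub fn p_search_values(max_loss_sensitivity: f64) -> Vec<f64> {
    (0..20)
        .rev()
        .map(|n| (n as f64 / 20.0) * max_loss_sensitivity)
        .collect()
}

fn main() {
    let cs = c_search_values(1.0, 1024.0);
    println!("{} cost values, last = {}", cs.len(), cs.last().unwrap());
    let ps = p_search_values(1.0);
    println!("outer loop starts at p = {}", ps[0]);
}
```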


Implementors
