pub struct ElasticNetOptions {
pub lambda: f64,
pub alpha: f64,
pub intercept: bool,
pub standardize: bool,
pub max_iter: usize,
pub tol: f64,
pub penalty_factor: Option<Vec<f64>>,
pub warm_start: Option<Vec<f64>>,
pub weights: Option<Vec<f64>>,
pub coefficient_bounds: Option<Vec<(f64, f64)>>,
}
Options for elastic net fitting.
Configuration options for elastic net regression, which combines L1 and L2 penalties.
§Fields
- lambda - Regularization strength (≥ 0, higher = more regularization)
- alpha - Mixing parameter (0 = Ridge, 1 = Lasso, 0.5 = equal mix)
- intercept - Whether to include an intercept term
- standardize - Whether to standardize predictors to unit variance
- max_iter - Maximum coordinate descent iterations
- tol - Convergence tolerance on coefficient changes
- penalty_factor - Optional per-feature penalty multipliers
- warm_start - Optional initial coefficient values for warm starts
- weights - Optional observation weights
- coefficient_bounds - Optional (lower, upper) bounds for each coefficient
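The lambda and alpha fields together select the standard elastic net penalty. A minimal sketch of that penalty (illustrative only, not this crate's internal code), including the optional per-feature penalty_factor multipliers:

```rust
/// Elastic net penalty as conventionally defined:
/// lambda * sum_j pf_j * (alpha * |b_j| + (1 - alpha) / 2 * b_j^2).
/// With alpha = 1 this reduces to the Lasso penalty, with alpha = 0 to Ridge.
fn elastic_net_penalty(
    beta: &[f64],
    lambda: f64,
    alpha: f64,
    penalty_factor: Option<&[f64]>,
) -> f64 {
    beta.iter()
        .enumerate()
        .map(|(j, &b)| {
            // Missing penalty_factor means a multiplier of 1.0 for every feature.
            let pf = penalty_factor.map_or(1.0, |p| p[j]);
            pf * (alpha * b.abs() + (1.0 - alpha) / 2.0 * b * b)
        })
        .sum::<f64>()
        * lambda
}
```

Setting a penalty factor of 0.0 for a feature leaves that coefficient unpenalized, which is the usual way to force a predictor into the model.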
§Example
let options = ElasticNetOptions {
    lambda: 0.1,
    alpha: 0.5, // Equal mix of L1 and L2
    intercept: true,
    standardize: true,
    ..Default::default()
};

Fields§
lambda: f64
Regularization strength (lambda >= 0)
alpha: f64
Elastic net mixing parameter (0 <= alpha <= 1). alpha = 1 is Lasso, alpha = 0 is Ridge.
intercept: bool
Whether to include an intercept term
standardize: bool
Whether to standardize predictors
max_iter: usize
Maximum coordinate descent iterations
tol: f64
Convergence tolerance on coefficient changes
penalty_factor: Option<Vec<f64>>
Per-feature penalty factors (optional). If None, all features have penalty factor 1.0.
warm_start: Option<Vec<f64>>
Initial coefficients for warm start (optional). If provided, optimization starts from these values instead of zero. Used for efficient pathwise coordinate descent.
weights: Option<Vec<f64>>
Observation weights (optional). If provided, must have length equal to the number of observations. Weights are normalized to sum to 1 internally.
coefficient_bounds: Option<Vec<(f64, f64)>>
Coefficient bounds: (lower, upper) for each predictor. If None, uses (-inf, +inf) for all coefficients (no bounds).
The bounds vector length must equal the number of predictors (excluding intercept). For each predictor, the coefficient will be clamped to [lower, upper] after each coordinate descent update.
§Examples

- Non-negative least squares: Some(vec![(0.0, f64::INFINITY); p])
- Upper bound only: Some(vec![(-f64::INFINITY, 10.0); p])
- Both bounds: Some(vec![(-5.0, 5.0); p])
§Notes

- Bounds are applied to coefficients on the ORIGINAL scale, not the standardized scale
- The intercept is never bounded
- Each pair must satisfy lower <= upper
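The documentation above describes clamping after each coordinate descent update. A sketch of what one such update could look like (an assumption about the algorithm's shape based on standard elastic net coordinate descent, not the crate's actual code):

```rust
/// Soft-thresholding operator: shrinks z toward zero by gamma, the
/// standard building block of Lasso-style coordinate descent.
fn soft_threshold(z: f64, gamma: f64) -> f64 {
    z.signum() * (z.abs() - gamma).max(0.0)
}

/// One coordinate update. `z` is the partial residual correlation for the
/// coordinate, `norm` the (weighted) squared norm of its column.
fn update_coef(z: f64, norm: f64, lambda: f64, alpha: f64, bounds: Option<(f64, f64)>) -> f64 {
    // The L1 part enters via soft-thresholding, the L2 part via the denominator.
    let b = soft_threshold(z, lambda * alpha) / (norm + lambda * (1.0 - alpha));
    // Clamp to [lower, upper] after the update, as the field docs state.
    match bounds {
        Some((lo, hi)) => b.clamp(lo, hi),
        None => b,
    }
}
```

With bounds of (0.0, f64::INFINITY) this update can never produce a negative coefficient, which is how the non-negative least squares example above works.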
Trait Implementations§
impl Clone for ElasticNetOptions

fn clone(&self) -> ElasticNetOptions

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
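The warm_start field docs mention pathwise coordinate descent: solving along a decreasing lambda grid, seeding each fit with the previous solution. A sketch of such a grid; the fitting call itself is hypothetical, since this page does not show the crate's entry point:

```rust
/// Log-spaced lambda grid from lambda_max down to lambda_max * ratio.
/// Assumes n >= 2.
fn lambda_path(lambda_max: f64, n: usize, ratio: f64) -> Vec<f64> {
    (0..n)
        .map(|i| lambda_max * ratio.powf(i as f64 / (n - 1) as f64))
        .collect()
}

// Hypothetical usage (`fit` is illustrative, not a real API of this crate):
// let mut coefs = None;
// for &lambda in &lambda_path(1.0, 100, 1e-3) {
//     let opts = ElasticNetOptions { lambda, warm_start: coefs.clone(), ..Default::default() };
//     coefs = Some(fit(&x, &y, &opts));
// }
```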