Function `validation_curve`
pub fn validation_curve<E, F, C>(
    estimator: E,
    x: &Array2<Float>,
    y: &Array1<Float>,
    _param_name: &str,
    param_range: Vec<ParameterValue>,
    param_config: ParamConfigFn<E>,
    cv: &C,
    scoring: Option<Scoring>,
    confidence_level: Option<f64>,
) -> Result<ValidationCurveResult>
where
    E: Clone + Fit<Array2<Float>, Array1<Float>, Fitted = F>,
    F: Clone + Predict<Array2<Float>, Array1<Float>>
        + Score<Array2<Float>, Array1<Float>, Float = f64>,
    C: CrossValidator,

Compute validation curves for an estimator

Determines training and test scores for a varying parameter value. This is useful for understanding the effect of a specific parameter on model performance and for detecting overfitting or underfitting: a large gap between high training scores and low test scores indicates overfitting, while low scores on both indicate underfitting.

§Arguments

  • estimator - The estimator to evaluate
  • x - Training data features
  • y - Training data targets
  • _param_name - Name of the parameter being varied (for documentation only; the leading underscore marks it as unused, since the parameter is actually applied via param_config)
  • param_range - Parameter values to test
  • param_config - Function to configure estimator with parameter values
  • cv - Cross-validation splitter
  • scoring - Scoring method to use
  • confidence_level - Confidence level for error bars (default: 0.95 for 95% confidence interval)
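To make the computation concrete, here is a minimal, self-contained sketch of what a validation curve does, using only the standard library. The 1-D ridge-style estimator (`fit_ridge`), the R²-style scorer, and the contiguous k-fold split are stand-ins chosen for brevity; they are not this crate's implementation, which instead works through the `Fit`/`Predict`/`Score` traits and the `CrossValidator` bound shown above.

```rust
/// Fit w minimizing Σ(y − w·x)² + α·w², closed form: w = Σxy / (Σx² + α).
fn fit_ridge(x: &[f64], y: &[f64], alpha: f64) -> f64 {
    let sxy: f64 = x.iter().zip(y).map(|(a, b)| a * b).sum();
    let sxx: f64 = x.iter().map(|a| a * a).sum();
    sxy / (sxx + alpha)
}

/// R²-style score: 1 − SS_res / SS_tot.
fn score(w: f64, x: &[f64], y: &[f64]) -> f64 {
    let mean = y.iter().sum::<f64>() / y.len() as f64;
    let ss_res: f64 = x.iter().zip(y).map(|(a, b)| (b - w * a).powi(2)).sum();
    let ss_tot: f64 = y.iter().map(|b| (b - mean).powi(2)).sum();
    1.0 - ss_res / ss_tot
}

/// For each parameter value, run k-fold CV and return
/// (mean train score, mean test score) — the two arms of the curve.
fn validation_curve(x: &[f64], y: &[f64], param_range: &[f64], k: usize) -> Vec<(f64, f64)> {
    let n = x.len();
    let fold = n / k;
    param_range
        .iter()
        .map(|&alpha| {
            let (mut tr, mut te) = (0.0, 0.0);
            for f in 0..k {
                let (lo, hi) = (f * fold, (f + 1) * fold);
                let (mut xt, mut yt, mut xv, mut yv) = (vec![], vec![], vec![], vec![]);
                for i in 0..n {
                    if i >= lo && i < hi {
                        xv.push(x[i]);
                        yv.push(y[i]);
                    } else {
                        xt.push(x[i]);
                        yt.push(y[i]);
                    }
                }
                let w = fit_ridge(&xt, &yt, alpha);
                tr += score(w, &xt, &yt);
                te += score(w, &xv, &yv);
            }
            (tr / k as f64, te / k as f64)
        })
        .collect()
}

fn main() {
    // Toy noise-free data: y = 2x, so alpha = 0 recovers w = 2 exactly
    // and heavy regularization shrinks w, lowering both scores.
    let x: Vec<f64> = (1..=12).map(|i| i as f64).collect();
    let y: Vec<f64> = x.iter().map(|v| 2.0 * v).collect();
    let params = [0.0, 1.0, 100.0];
    let curve = validation_curve(&x, &y, &params, 3);
    for (alpha, (tr, te)) in params.iter().zip(&curve) {
        println!("alpha={alpha}: train={tr:.3}, test={te:.3}");
    }
}
```

The real function additionally aggregates the per-fold score spread into error bars at the requested `confidence_level`; a typical construction is mean ± z·s/√k from the fold scores, though the exact form here is defined by `ValidationCurveResult`.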