Model selection utilities for sklears
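Most of the items below revolve around splitting data into train/test folds and scoring candidate models on them. As a quick orientation, here is a plain-Rust sketch of the k-fold index bookkeeping that the KFold iterator and the cross_val_score helper automate; it uses only the standard library and deliberately avoids guessing at the sklears API, whose exact signatures may differ.

```rust
// Hand-rolled k-fold index split, illustrating what `KFold` automates.
// This is a plain-Rust sketch; it does not call the sklears API.
fn k_fold_indices(n_samples: usize, k: usize) -> Vec<(Vec<usize>, Vec<usize>)> {
    let fold_size = n_samples / k;
    let remainder = n_samples % k;
    let mut folds = Vec::with_capacity(k);
    let mut start = 0;
    for fold in 0..k {
        // Earlier folds absorb the remainder, so fold sizes differ by at most one.
        let size = fold_size + usize::from(fold < remainder);
        let test: Vec<usize> = (start..start + size).collect();
        let train: Vec<usize> = (0..n_samples)
            .filter(|i| *i < start || *i >= start + size)
            .collect();
        folds.push((train, test));
        start += size;
    }
    folds
}

fn main() {
    for (i, (train, test)) in k_fold_indices(10, 3).iter().enumerate() {
        println!("fold {i}: train={train:?} test={test:?}");
    }
}
```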
Structs§
- AblationAnalyzer - Ablation study analyzer
- AblationConfig - Configuration for ablation studies
- AblationResult - Result of ablation study
- AdaptationConfig
- AdaptiveAllocationConfig - Configuration for adaptive resource allocation
- AdaptiveEarlyStopping - Adaptive early stopping that adjusts parameters based on optimization progress
- AdaptiveFidelitySelector - Adaptive fidelity selector
- AdaptiveResourceAllocator - Adaptive resource allocator
- AdversarialStatistics - Detailed statistics from adversarial validation
- AdversarialValidationConfig - Configuration for adversarial validation
- AdversarialValidationResult - Results from adversarial validation
- AdversarialValidator - Adversarial validator for detecting data leakage and distribution shifts
- AleatoricUncertaintyConfig
- AleatoricUncertaintyQuantifier
- AleatoricUncertaintyResult
- AlgorithmSelectionResult - Result of algorithm selection process
- AlgorithmSpec - Specific algorithm within a family
- AllocationPlan - Plan for resource allocation
- AllocationStatistics - Statistics about resource allocation
- ArchitectureEvaluation - Architecture evaluation result
- ArchitectureSearchSpace - Architecture search space definition
- AutoFeatureEngineer - Automated feature engineering engine
- AutoFeatureEngineering - Configuration for automated feature engineering
- AutoMLAlgorithmSelector - Automated algorithm selector
- AutoMLConfig - Configuration for automated algorithm selection
- AutoMLDatasetCharacteristics - Dataset characteristics for algorithm selection
- AutoMLPipeline - Complete AutoML pipeline
- AutoMLPipelineConfig - Complete AutoML configuration
- AutoMLPipelineResult - AutoML pipeline execution result
- BMAConfig
- BMAResult
- BMA_Averager
- BMA_ModelInfo
- BayesSearchCV - Bayesian Search Cross-Validator
- BayesSearchConfig - Configuration for Bayesian Search
- BayesianModelSelectionResult - Result of Bayesian model selection
- BayesianModelSelector - Bayesian model selector
- BayesianModelSelector_BA - Model averaging using Bayesian weights
- BiasVarianceAnalyzer - Bias-variance decomposition analyzer
- BiasVarianceConfig - Configuration for bias-variance decomposition
- BiasVarianceResult - Results of bias-variance decomposition analysis
- BlockCrossValidator - Block cross-validation for time series or sequential data
- BlockedTemporalCV - Blocked temporal cross-validator for handling irregular time series
- BlockedTimeSeriesCV - Blocked Time Series Cross-Validation
- BootstrapCV - Bootstrap cross-validator with confidence interval estimation
- BudgetAllocator - Budget allocator
- CVConfig - Cross-validation configuration
- CVModelScore - Cross-validation scores for a model
- CVModelSelectionConfig - Configuration for cross-validation model selection
- CVModelSelectionResult - Result of cross-validation model selection
- CVModelSelector - Cross-validation model selector
- CategoricalParameter - Categorical parameter definition with enhanced features
- ClassStatistics
- ClosureScorer - Custom scoring function wrapper for closures
- ClusterInfo
- CoarseToFineConfig - Configuration for coarse-to-fine optimization
- CoarseToFineOptimizer - Coarse-to-fine optimizer
- CoarseToFineResult - Result of coarse-to-fine optimization
- ComplexityAnalysisConfig - Configuration for complexity analysis
- ComplexityAnalysisResult - Result of model complexity analysis
- ComplexityMeasures - Complexity measures of the dataset
- ComprehensiveImportanceResult - Comprehensive importance analysis result
- ComputationalConstraints - Computational constraints for algorithm selection
- ConditionalParameter - Conditional parameter definition
- ConfigAllocation - Allocation for a specific configuration
- ConfigManager - Configuration manager for loading, saving, and validating configurations
- ConfigurationWithPerformance - Configuration with performance information
- ConformalPredictionConfig - Configuration for conformal prediction
- ConformalPredictionResult - Results from conformal prediction
- ConformalPredictor - Conformal predictor for regression and classification
- ConsoleProgressCallback - Default progress callback that prints to console
- ConvergenceMetrics - Convergence metrics for monitoring optimization progress
- CoverageStatistics - Coverage statistics for conformal prediction
- CrossValidateResult - Result of cross_validate
- CrossValidatedIC - Model selection using information criteria with cross-validation
- CustomCrossValidator
- DataChunk - Streaming data chunk
- DatasetCharacteristics - Dataset characteristics for meta-learning
- DistributionShiftMetrics - Metrics for measuring distribution shift
- DiversityAnalysis - Diversity analysis results
- DiversityMeasures - Diversity measures for the ensemble
- DriftDetectionConfig - Configuration for drift detection
- DriftDetectionResult - Results from drift detection
- DriftDetector - Drift detector for monitoring data distribution changes
- DriftEvent - Detected drift event
- DriftStatistics - Detailed drift statistics
- ESConfig - Early stopping configuration
- EarlyStoppingConfig - Early stopping criterion configuration
- EarlyStoppingMonitor - Early stopping monitor
- EfficiencyMetrics - Efficiency metrics for conformal prediction
- EnhancedScorer - Enhanced scorer that supports multiple metrics and confidence intervals
- EnsembleEvaluationConfig - Ensemble evaluation configuration
- EnsembleEvaluationResult - Ensemble evaluation result
- EnsembleEvaluator - Ensemble evaluator
- EnsembleModelInfo - Information about a model in the ensemble
- EnsemblePerformance - Performance metrics for the ensemble
- EnsemblePerformanceMetrics - Comprehensive ensemble performance metrics
- EnsembleSelectionConfig - Configuration for ensemble selection
- EnsembleSelectionResult - Result of ensemble model selection
- EnsembleSelector - Ensemble model selector
- EpistemicUncertaintyConfig
- EpistemicUncertaintyQuantifier
- EpistemicUncertaintyResult
- Evaluation - Single evaluation record
- EvaluationRecord - Historical evaluation record
- EvaluationResult - Individual evaluation result
- ExperienceReplayBuffer - Experience replay buffer
- ExperienceReplayConfig - Experience replay configuration
- FANOVAAnalyzer - Functional ANOVA analyzer
- FANOVAConfig - Configuration for fANOVA analysis
- FANOVAResult - Result of fANOVA analysis
- FeatureEngineeringResult - Result of feature engineering process
- FeatureStatistics - Statistical properties of a feature
- FewShotConfig - Few-shot optimization configuration
- FewShotOptimizer - Few-shot hyperparameter optimizer
- FewShotResult - Few-shot optimization result
- FidelityEvaluation - Evaluation result at a specific fidelity
- FixedThresholdClassifier - Fixed threshold classifier wrapper
- GeneratedFeature - Generated feature information
- GridSearchCV - Grid search cross-validation
- GridSearchResults - Results from grid search cross-validation
- GroupKFold - Group K-Fold cross-validator with custom group definitions
- GroupShuffleSplit - Group Shuffle Split cross-validator
- HalvingGridSearch - HalvingGridSearch implementation (the successive-halving budget schedule is sketched after this list)
- HalvingGridSearchConfig - Configuration for HalvingGridSearch
- HalvingGridSearchResults - Results from HalvingGridSearch
- HalvingRandomSearchCV - Randomized search with successive halving for efficient hyperparameter optimization
- HierarchicalCrossValidator
- HierarchicalSplit
- HierarchicalValidationConfig
- HierarchicalValidationResult
- HookManager - Hook manager for managing multiple hooks
- HyperparameterImportanceAnalyzer - Comprehensive hyperparameter importance analyzer
- ICModelComparisonResult - Comparison result for multiple models
- ImbalancedCrossValidator
- ImbalancedSplit
- ImbalancedValidationConfig
- ImbalancedValidationResult
- IncrementalEvaluationConfig - Incremental evaluation configuration
- IncrementalEvaluationResult - Incremental evaluation result
- IncrementalEvaluator - Incremental evaluator
- InformationCriterionCalculator - Information criterion calculator
- InformationCriterionResult - Result of information criterion calculation
- JackknifeConformalPredictor - Jackknife+ conformal prediction for better efficiency
- KFold - K-Fold cross-validation iterator
- LabelStatistics
- Learn2OptimizeConfig - Learning-to-optimize configuration
- Learn2OptimizeResult - Result of learning-to-optimize
- LearnedOptimizer - Learned optimizer
- LearningCurveResult - Learning curve results
- LeaveOneGroupOut - Leave One Group Out cross-validator
- LeaveOneOut - Leave-One-Out cross-validator
- LeaveOneRegionOut - Leave-one-region-out cross-validator for spatial data
- LeavePGroupsOut - Leave P Groups Out cross-validator
- LeavePOut - Leave-P-Out cross-validator
- LoggingHook - Simple logging hook
- MemberContribution - Individual member contribution analysis
- MemoryEfficiencyStats - Memory efficiency statistics
- MemoryEfficientConfig - Configuration for memory-efficient operations
- MemoryEfficientCrossValidator - Memory-efficient cross-validation evaluator
- MemoryPool - Memory pool for frequently allocated objects
- MemorySnapshot - Memory usage snapshot
- MemoryTracker - Memory usage tracking and management
- MetaLearningConfig - Meta-learning configuration
- MetaLearningEngine - Meta-learning engine
- MetaLearningRecommendation - Meta-learning recommendations
- MetaOptimizationExperience - Optimization experience from historical data
- MetaOptimizationTask - Optimization task for few-shot learning
- MetaParameterRange
- MetaTaskCharacteristics - Task characteristics for transfer learning
- MetricRegistry - Metric registry
- MiddlewarePipeline - Middleware pipeline
- ModelComparisonPair - Pairwise model comparison result
- ModelComparisonResult - Result of multiple model comparison
- ModelComplexityAnalyzer - Model complexity analyzer
- ModelEvidenceData - Data required for evidence estimation
- ModelPerformance - Performance metrics for individual models
- ModelRanking - Ranking information for a model
- ModelSelectionConfig - Main configuration structure for model selection operations
- MonteCarloCV - Monte Carlo Cross-Validation with random subsampling
- MultiFidelityConfig - Multi-fidelity optimization configuration
- MultiFidelityOptimizer - Multi-fidelity Bayesian optimizer
- MultiFidelityResult - Multi-fidelity optimization result
- MultiLabelCrossValidator
- MultiLabelSplit
- MultiLabelValidationConfig
- MultiLabelValidationResult
- MultiObjectiveAnalysis - Multi-objective analysis results
- NASConfig - NAS configuration
- NASOptimizer - Neural Architecture Search optimizer
- NASResult - NAS optimization result
- NestedCVResult - Result of nested cross-validation
- NeuralArchitecture - Neural network architecture representation
- NoiseConfig
- NoiseInjector
- NoiseStatistics
- NormalizationMiddleware - Parameter normalization middleware
- OODConfidenceIntervals - Confidence intervals for OOD validation metrics
- OODValidationConfig - Configuration for out-of-distribution validation
- OODValidationResult - Results from out-of-distribution validation
- OODValidator - Out-of-Distribution Validator
- OptiConfig - Hyperparameter optimization configuration
- OptimizationExperience_Advanced - Optimization experience
- OptimizationHistory - Optimization history store
- OptimizationRecord - Historical optimization record
- OptimizationStatistics - Statistics about optimization history
- OutOfBagScores - Out-of-bag evaluation scores
- OverfittingDetector - Overfitting detector for time series data
- ParallelOptimizationConfig - Parallel optimization configuration
- ParallelOptimizationResult - Parallel optimization result
- ParallelOptimizer - Parallel hyperparameter optimizer
- ParameterImportanceAnalyzer - Parameter importance analyzer
- ParameterSensitivity - Sensitivity of a single parameter
- ParameterSpace - Enhanced parameter space with categorical parameter support
- PerformanceSnapshot - Performance snapshot at a specific time
- PermutationTestResult - Result of permutation test
- PluginConfig - Configuration for plugins
- PluginFactory - Plugin factory for creating plugin instances
- PluginOptimizationHistory - Optimization history
- PluginParameterConstraints - Parameter constraints for optimization
- PluginRegistry - Global plugin registry
- PredefinedSplit - Predefined Split cross-validator
- ProgressReportingConfig - Progress reporting configuration
- ProgressiveAllocationConfig - Configuration for progressive resource allocation
- ProgressiveAllocator - Progressive resource allocator
- ProgressivePerformance - Progressive performance analysis
- PurgedGroupTimeSeriesSplit - Purged Group Time Series Split for financial data
- RandomizedSearchCV - Randomized search cross-validation
- RankedAlgorithm - Algorithm with performance ranking
- ReliabilityDiagram
- ReliabilityMetrics
- RepeatedKFold - Repeated K-Fold cross-validator
- RepeatedStratifiedKFold - Repeated Stratified K-Fold cross-validator
- ReplayResult - Result of replay update
- ResourceConfig - Resource and performance configuration
- ResourceConfiguration - Configuration being evaluated with allocated resources
- ResourceUtilization - Resource utilization metrics
- RobustnessMetrics - Robustness metrics for validation
- RobustnessTestResult
- SHAPAnalyzer - SHAP value analyzer for hyperparameters
- SHAPConfig - Configuration for SHAP value computation
- SHAPResult - Result of SHAP analysis
- SampleBiasVariance - Bias-variance results for individual test samples
- ScenarioResult - Individual scenario result
- ScoreConfig - Scoring configuration
- ScorerRegistry - Scorer registry for built-in and custom scorers
- ScoringConfig - Enhanced scoring configuration
- ScoringResult - Scoring result with confidence intervals and multiple metrics
- SeasonalCrossValidator - Seasonal cross-validator for time series with seasonal patterns
- SensitivityAnalyzer - Sensitivity analyzer using various methods
- SensitivityConfig - Configuration for sensitivity analysis
- SensitivityResult - Result of sensitivity analysis
- ShuffleSplit - Shuffle Split cross-validator
- SignificanceTestResult - Statistical significance test result
- SpatialCoordinate - Spatial coordinates for geographic data
- SpatialCrossValidator - Spatial cross-validator that accounts for spatial autocorrelation
- SpatialValidationConfig - Configuration for spatial cross-validation
- StabilityAnalysis - Stability analysis results
- StatisticalMeasures - Statistical measures of the dataset
- StatisticalTestResult - Statistical test result
- StratifiedGroupKFold - Stratified Group K-Fold cross-validator
- StratifiedKFold - Stratified K-Fold cross-validation iterator
- StratifiedRegressionKFold - Stratified K-Fold cross-validation for regression tasks
- StratifiedShuffleSplit - Stratified Shuffle Split cross-validator
- StreamingDataReader - Streaming data reader for memory-efficient processing
- StreamingEvaluationResult - Result from streaming evaluation
- StreamingStatistics - Streaming statistics
- TPEConfig - Configuration for TPE optimizer
- TPEOptimizer - Tree-structured Parzen Estimator (TPE) for hyperparameter optimization
- TargetStatistics - Target statistics for regression tasks
- TemporalCrossValidator - Time series cross-validator with temporal dependency awareness
- TemporalValidationConfig - Configuration for temporal validation
- ThresholdOptimizationResult - Threshold optimization results
- TimeSeriesSplit - Time Series Split cross-validator with gap and overlapping support
- TransferLearning - Transfer learning for warm-start across similar problems
- TransferLearningConfig - Configuration for transfer learning optimizer
- TransferLearningOptimizer - Transfer learning optimizer
- TransferResult - Result of transfer learning
- TransformationInfo - Information needed to transform new data
- TunedThresholdClassifierCV - Tuned threshold classifier with cross-validation
- TunedThresholdClassifierCVTrained - Trained tuned threshold classifier
- UncertaintyComponents
- UncertaintyDecomposition
- UncertaintyQuantificationConfig
- UncertaintyQuantificationResult
- UncertaintyQuantifier
- ValidationCurveResult - Validation curve results
- WarmStartConfig - Configuration for warm-start mechanisms
- WarmStartInitializer - Warm-start initializer
- WorkerStatistics - Worker-specific statistics
- WorstCaseScenarioGenerator - Worst-case scenario generator
- WorstCaseValidationConfig - Worst-case validation configuration
- WorstCaseValidationResult - Worst-case validation result
- WorstCaseValidator - Worst-case validator
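HalvingGridSearch and HalvingRandomSearchCV work by repeatedly discarding the weakest candidates and re-evaluating the survivors on a larger budget. The schedule below is a minimal sketch of that idea (keep roughly the best 1/factor of the candidates each round and multiply the budget by the same factor); the factor of 3 and the starting budget are illustrative assumptions, not the crate's defaults.

```rust
// Successive-halving schedule: each round keeps about 1/factor of the
// candidates and gives the survivors `factor` times more budget.
fn halving_schedule(mut n_candidates: usize, mut budget: usize, factor: usize) -> Vec<(usize, usize)> {
    let mut rounds = Vec::new();
    while n_candidates >= 1 {
        rounds.push((n_candidates, budget));
        if n_candidates == 1 {
            break;
        }
        n_candidates = (n_candidates + factor - 1) / factor; // ceiling division
        budget *= factor;
    }
    rounds
}

fn main() {
    // 27 configurations, an initial budget of 10 units each, halving factor 3.
    for (round, (n, b)) in halving_schedule(27, 10, 3).iter().enumerate() {
        println!("round {round}: {n} candidates x budget {b}");
    }
}
```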
Enums§
- AcquisitionFunction - Acquisition functions for multi-fidelity optimization
- AdaptationCriterion - Criteria for adaptive window sizing
- AdaptiveFidelityStrategy - Adaptive fidelity selection strategy
- AdversarialAttackMethod - Adversarial attack methods
- AdversarialMethod
- AleatoricUncertaintyMethod
- AlgorithmFamily - Algorithm family categories for classification and regression
- AllocationStrategy - Resource allocation strategy
- AutoMLStage - AutoML pipeline stages
- BatchAcquisitionStrategy - Batch acquisition strategies for parallel Bayesian optimization
- BayesAcquisitionFunction - Acquisition functions for Bayesian optimization (expected improvement is sketched after this list)
- BayesParamDistribution - Parameter distribution types for Bayesian search
- BudgetAllocationStrategy - Budget allocation strategy
- CalibrationMethod
- CoarseToFineStrategy - Coarse-to-fine optimization strategy
- CommunicationProtocol - Communication protocols for distributed optimization
- ComplexityMeasure - Complexity measures for different types of models
- ComplexityRecommendation - Recommendations based on complexity analysis
- ConceptDriftHandling - Concept drift handling strategies
- ConfigError - Configuration management error types
- CorrelationModel - Models for correlation between fidelities
- CorruptionType - Corruption types for features
- CostModel - Cost models for different fidelity levels
- DistanceMethod - Distance calculation methods
- DistributionShiftType - Distribution shift types
- DiversityMeasure - Diversity measures for ensemble evaluation
- DriftDetectionMethod - Drift detection methods
- DriftDetectorType - Types of drift detectors
- DriftPattern - Drift patterns for temporal data
- DriftType - Types of detected drift
- EarlyStoppingStrategy - Different early stopping strategies
- EnsembleEvaluationStrategy - Ensemble evaluation strategies
- EnsembleStrategy - Ensemble composition strategies
- EpistemicUncertaintyMethod
- ErrorHandlingStrategy - Error handling strategies
- EvidenceEstimationMethod - Methods for estimating the evidence (marginal likelihood)
- EvidenceMethod
- FeatureEngineeringStrategy - Feature engineering strategies
- FeatureSelectionMethod - Feature selection methods
- FeatureTransformationType - Types of feature transformations
- FeatureType - Feature types
- FewShotAlgorithm - Few-shot learning algorithms
- FidelityLevel - Fidelity levels for multi-fidelity optimization
- FidelityProgression - Fidelity progression strategies
- FidelitySelectionMethod - Methods for selecting fidelity levels
- FoldUpdateStrategy - Strategies for updating folds in streaming CV
- GridParameterValue - A parameter value that can be of different types
- GroupStrategy - Strategy for defining groups in GroupKFold
- HierarchicalStrategy
- HookError - Hook error type
- ImbalancedStrategy
- ImportanceWeightingMethod - Importance weighting methods for instance transfer
- IncrementalEvaluationStrategy - Incremental evaluation strategies
- InformationCriterion - Types of information criteria
- LoadBalancingStrategy - Load balancing strategies for parallel execution
- MemoryError - Memory-efficient evaluation errors
- MetaLearningStrategy - Meta-learning strategies for hyperparameter initialization
- MetricError - Metric error type
- MiddlewareError - Middleware error type
- MissingPattern - Missing data patterns
- ModelSelectionCriteria - Model selection criteria
- MultiFidelityStrategy - Multi-fidelity optimization strategies
- MultiLabelStrategy
- MultipleTestingCorrection - Multiple testing correction methods
- NASStrategy - Neural Architecture Search strategies
- NoisePattern - Label noise patterns
- NoiseType
- NonconformityMethod - Methods for computing nonconformity scores
- OODDetectionMethod - Out-of-Distribution detection methods
- OptimizationLevel - AutoML optimization level
- OptimizationMetric - Metric to optimize when tuning threshold
- OptimizerArchitecture - Learned optimizer architectures
- ParallelStrategy - Parallel optimization strategies
- ParameterConstraint - Parameter constraint type
- ParameterDefinition - Parameter definition for optimization
- ParameterDistribution - Parameter distribution for randomized search
- ParameterScale
- ParameterValue - Parameter value types
- PluginError - Plugin error type
- PriorType
- PrioritizationStrategy - Prioritization strategies for experience replay
- ProgressiveAllocationStrategy - Progressive resource allocation strategy
- ReplaySamplingStrategy - Sampling strategies
- SamplingStrategy
- Scoring - Scoring method for cross-validation
- SimilarityMetric - Similarity metrics for dataset comparison
- SpatialClusteringMethod - Spatial clustering methods for grouping
- StabilityMetric - Stability metrics for ensemble evaluation
- StopReason - Reason for stopping optimization
- SurrogateModel - Surrogate models for meta-learning
- SynchronizationStrategy - Synchronization strategies for parallel optimization
- TaskType - Task type for scoring
- TransferMethod - Transfer learning methods
- TransferStrategy - Transfer learning strategies for hyperparameter optimization
- UncertaintyDecompositionMethod
- WarmStartStrategy - Warm-start strategy for optimization initialization
- WorstCaseScenario - Worst-case scenario types
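The acquisition-function enums above (AcquisitionFunction, BayesAcquisitionFunction) name scoring rules such as expected improvement, which ranks a candidate by how much it is expected to beat the incumbent under a Gaussian posterior. A self-contained sketch of that formula follows; the Abramowitz-Stegun erf approximation and the exploration offset xi are standard textbook choices and are not taken from the crate.

```rust
use std::f64::consts::PI;

// Abramowitz-Stegun approximation of the error function (max error ~1.5e-7).
fn erf(x: f64) -> f64 {
    let sign = if x < 0.0 { -1.0 } else { 1.0 };
    let x = x.abs();
    let t = 1.0 / (1.0 + 0.3275911 * x);
    let poly = (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t - 0.284496736) * t
        + 0.254829592)
        * t;
    sign * (1.0 - poly * (-x * x).exp())
}

fn normal_cdf(z: f64) -> f64 {
    0.5 * (1.0 + erf(z / std::f64::consts::SQRT_2))
}

fn normal_pdf(z: f64) -> f64 {
    (-0.5 * z * z).exp() / (2.0 * PI).sqrt()
}

// Expected improvement of a candidate with posterior mean `mu` and standard
// deviation `sigma` over the best observed value `best`, for maximization.
fn expected_improvement(mu: f64, sigma: f64, best: f64, xi: f64) -> f64 {
    if sigma <= 0.0 {
        return (mu - best - xi).max(0.0);
    }
    let z = (mu - best - xi) / sigma;
    (mu - best - xi) * normal_cdf(z) + sigma * normal_pdf(z)
}

fn main() {
    println!("EI = {:.4}", expected_improvement(0.85, 0.05, 0.82, 0.01));
}
```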
Traits§
- AutoMLProgressCallback - Progress callback for AutoML pipeline
- CrossValidator - Trait for cross-validation iterators
- CustomMetric - Custom metric trait
- CustomScorer - Custom scoring function trait
- EarlyStoppingCallback - Early stopping callback trait for use with optimizers
- OptimizationHook - Optimization hook trait for callbacks
- OptimizationLearner - Trait for optimizers that can learn from experience
- OptimizationMiddleware - Middleware for optimization pipelines
- OptimizerPlugin - Core trait for optimization plugins
- RegressionCrossValidator - Extended trait for regression cross-validation that works with continuous targets
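CustomScorer, CustomMetric, and the ClosureScorer wrapper listed above suggest that scoring logic can be supplied as ordinary functions. The snippet below shows the kind of function one would wrap, a closure computing RMSE over parallel slices; the actual trait methods and argument types in sklears may differ, so treat this purely as an illustration.

```rust
// A metric written as an ordinary closure: root-mean-squared error over
// parallel slices of targets and predictions. Something in this shape is
// what a closure-based scorer would wrap; the real trait signature may
// expect different argument types.
fn main() {
    let rmse = |y_true: &[f64], y_pred: &[f64]| -> f64 {
        assert_eq!(y_true.len(), y_pred.len());
        let mse: f64 = y_true
            .iter()
            .zip(y_pred)
            .map(|(t, p)| (t - p).powi(2))
            .sum::<f64>()
            / y_true.len() as f64;
        mse.sqrt()
    };

    let y_true = [3.0, -0.5, 2.0, 7.0];
    let y_pred = [2.5, 0.0, 2.0, 8.0];
    println!("rmse = {:.4}", rmse(&y_true, &y_pred));
}
```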
Functions§
- analyze_model_complexity - Convenience function for analyzing model complexity
- analyze_parameter_sensitivity - Perform sensitivity analysis
- automl - Convenience function for quick AutoML
- automl_with_budget - Quick AutoML with custom time budget
- bayesian_model_average
- bias_variance_decompose - Convenience function for performing bias-variance decomposition
- compute_shap_importance - Compute SHAP values for hyperparameters
- cross_val_predict - Generate cross-validated estimates for each input data point
- cross_val_score - Evaluate a score by cross-validation
- cross_validate - Evaluate metric(s) by cross-validation and also record fit/score times
- cv_select_model - Convenience function for cross-validation model selection
- detect_overfitting_learning_curve - Convenience function for detecting overfitting from learning curves
- engineer_features - Convenience function for quick feature engineering
- evaluate_ensemble - Convenience function for ensemble evaluation
- evaluate_incremental_stream - Convenience function for incremental evaluation
- friedman_test - Friedman test for comparing multiple models across multiple datasets
- hierarchical_cross_validate
- imbalanced_cross_validate
- learning_curve - Compute learning curves for an estimator
- mcnemar_test - McNemar’s test for comparing two binary classifiers
- memory_efficient_cross_validate - Convenience function for memory-efficient cross-validation
- meta_learning_recommend - Convenience function for meta-learning based hyperparameter initialization
- multi_fidelity_optimize - Convenience function for multi-fidelity optimization
- multilabel_cross_validate
- multiple_model_comparison - Multiple model comparison with correction for multiple testing
- nemenyi_post_hoc_test - Nemenyi post-hoc test for pairwise comparisons after Friedman test
- nested_cross_validate - Nested cross-validation for unbiased model evaluation with hyperparameter optimization
- optimize_threshold - Optimize threshold for a given metric
- paired_t_test - Paired t-test for comparing two sets of continuous performance scores (the t statistic is sketched after this list)
- paired_ttest - Perform paired t-test for comparing two sets of CV scores
- parallel_optimize - Convenience function for parallel optimization
- permutation_test_score - Evaluate the significance of a cross-validated score with permutations
- quantify_aleatoric_uncertainty
- quantify_epistemic_uncertainty
- quantify_uncertainty
- robustness_test
- select_best_algorithm - Convenience function for quick algorithm selection
- select_ensemble - Convenience function for ensemble selection
- train_test_split - Split arrays or matrices into random train and test subsets
- validate_ood - Convenience function for out-of-distribution validation
- validation_curve - Compute validation curves for an estimator
- wilcoxon_signed_rank_test - Wilcoxon signed-rank test (non-parametric alternative to paired t-test)
- worst_case_validate - Convenience function for worst-case validation
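paired_t_test and paired_ttest compare two per-fold score vectors obtained from the same CV splits. The statistic itself is standard: with fold-wise differences d_i, t = mean(d) / (s_d / sqrt(n)), where s_d is the sample standard deviation. The sketch below computes just that statistic in plain Rust (the p-value step is omitted because it needs the t distribution); it is not the crate's implementation.

```rust
// Paired t statistic for two sets of per-fold CV scores.
// t = mean(d) / (s_d / sqrt(n)), with d_i = a_i - b_i and the sample
// standard deviation computed with n - 1 degrees of freedom.
fn paired_t_statistic(a: &[f64], b: &[f64]) -> f64 {
    assert_eq!(a.len(), b.len());
    let n = a.len() as f64;
    let d: Vec<f64> = a.iter().zip(b).map(|(x, y)| x - y).collect();
    let mean = d.iter().sum::<f64>() / n;
    let var = d.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / (n - 1.0);
    mean / (var.sqrt() / n.sqrt())
}

fn main() {
    let model_a = [0.81, 0.79, 0.84, 0.80, 0.83];
    let model_b = [0.78, 0.77, 0.80, 0.79, 0.81];
    println!("t = {:.3}", paired_t_statistic(&model_a, &model_b));
}
```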
Type Aliases§
- ParamConfigFn - Parameter configuration function type
- ParameterDistributions - Parameter distribution grid for randomized search
- ParameterGrid - Parameter grid for grid search
- ParameterSet - A parameter combination for one grid search iteration
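A parameter grid is conceptually the Cartesian product of per-parameter candidate lists, and a parameter set is one element of that product. The sketch below expands such a grid into concrete combinations; the map-of-name-to-values representation is an assumption made for illustration and is not the ParameterGrid alias itself.

```rust
use std::collections::HashMap;

// Expand a map of parameter name -> candidate values into every combination,
// the way a grid search enumerates its parameter sets.
fn expand_grid(grid: &HashMap<&str, Vec<f64>>) -> Vec<HashMap<String, f64>> {
    let mut combos: Vec<HashMap<String, f64>> = vec![HashMap::new()];
    for (name, values) in grid {
        let mut next = Vec::with_capacity(combos.len() * values.len());
        for combo in &combos {
            for v in values {
                let mut c = combo.clone();
                c.insert((*name).to_string(), *v);
                next.push(c);
            }
        }
        combos = next;
    }
    combos
}

fn main() {
    let mut grid = HashMap::new();
    grid.insert("alpha", vec![0.01, 0.1, 1.0]);
    grid.insert("l1_ratio", vec![0.2, 0.8]);
    for combo in expand_grid(&grid) {
        println!("{combo:?}");
    }
}
```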