#[non_exhaustive]
pub struct AutoMlJobObjectiveBuilder { /* private fields */ }
A builder for AutoMlJobObjective.
Implementations

impl AutoMlJobObjectiveBuilder
pub fn metric_name(self, input: AutoMlMetricEnum) -> Self
The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.
The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.
- For tabular problem types:
  - List of available metrics:
    - Regression: InferenceLatency, MAE, MSE, R2, RMSE
    - Binary classification: Accuracy, AUC, BalancedAccuracy, F1, InferenceLatency, LogLoss, Precision, Recall
    - Multiclass classification: Accuracy, BalancedAccuracy, F1macro, InferenceLatency, LogLoss, PrecisionMacro, RecallMacro

    For a description of each metric, see Autopilot metrics for classification and regression.
  - Default objective metrics:
    - Regression: MSE
    - Binary classification: F1
    - Multiclass classification: Accuracy
- For image or text classification problem types:
  - List of available metrics: Accuracy

    For a description of each metric, see Autopilot metrics for text and image classification.
  - Default objective metrics: Accuracy
- For time-series forecasting problem types:
  - List of available metrics: RMSE, wQL, Average wQL, MASE, MAPE, WAPE

    For a description of each metric, see Autopilot metrics for time-series forecasting.
  - Default objective metrics: AverageWeightedQuantileLoss
- For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the AutoMLJobObjective field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see Metrics for fine-tuning LLMs in Autopilot.
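As a minimal usage sketch (the aws_sdk_sagemaker::types import path and the AutoMlMetricEnum::F1 variant spelling are assumptions; check them against your SDK version):

```rust
use aws_sdk_sagemaker::types::{AutoMlJobObjective, AutoMlMetricEnum};

fn main() {
    // Target F1 as the objective for a binary classification job.
    // The variant name F1 is assumed; see the AutoMlMetricEnum docs for exact spellings.
    let objective: AutoMlJobObjective = AutoMlJobObjective::builder()
        .metric_name(AutoMlMetricEnum::F1)
        .build();

    // The builder is consumed; `objective` now carries the chosen metric.
    println!("{objective:?}");
}
```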
pub fn set_metric_name(self, input: Option<AutoMlMetricEnum>) -> Self
The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.
The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type; see metric_name above for the per-problem-type lists of available metrics and defaults.
pub fn get_metric_name(&self) -> &Option<AutoMlMetricEnum>
The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.
The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type; see metric_name above for the per-problem-type lists of available metrics and defaults.
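A short sketch of the Option-based setter together with the getter (same path assumptions as above; the Accuracy variant spelling is also an assumption):

```rust
use aws_sdk_sagemaker::types::AutoMlMetricEnum;
use aws_sdk_sagemaker::types::builders::AutoMlJobObjectiveBuilder;

fn main() {
    // Start from Default and set the metric through the Option-based setter.
    let builder = AutoMlJobObjectiveBuilder::default()
        .set_metric_name(Some(AutoMlMetricEnum::Accuracy));

    // get_metric_name borrows the builder and returns &Option<AutoMlMetricEnum>.
    assert!(builder.get_metric_name().is_some());

    // Passing None clears the field again.
    let cleared = builder.set_metric_name(None);
    assert!(cleared.get_metric_name().is_none());
}
```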
pub fn build(self) -> AutoMlJobObjective
Consumes the builder and constructs an AutoMlJobObjective.
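A sketch of reading the value back from the built struct (the metric_name accessor on AutoMlJobObjective and the Mse variant spelling are assumptions):

```rust
use aws_sdk_sagemaker::types::{AutoMlJobObjective, AutoMlMetricEnum};

fn main() {
    let objective = AutoMlJobObjective::builder()
        .metric_name(AutoMlMetricEnum::Mse)
        .build();

    // Assumption: the generated accessor returns Option<&AutoMlMetricEnum>.
    assert_eq!(objective.metric_name(), Some(&AutoMlMetricEnum::Mse));
}
```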
Trait Implementations
impl Clone for AutoMlJobObjectiveBuilder

fn clone(&self) -> AutoMlJobObjectiveBuilder
fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for AutoMlJobObjectiveBuilder
impl Default for AutoMlJobObjectiveBuilder

fn default() -> AutoMlJobObjectiveBuilder
impl PartialEq for AutoMlJobObjectiveBuilder

fn eq(&self, other: &AutoMlJobObjectiveBuilder) -> bool
Tests for self and other values to be equal, and is used by ==.
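A brief sketch exercising the Clone, Default, and PartialEq implementations listed above (variant spelling assumed as before):

```rust
use aws_sdk_sagemaker::types::AutoMlMetricEnum;
use aws_sdk_sagemaker::types::builders::AutoMlJobObjectiveBuilder;

fn main() {
    // Default gives an empty builder; Clone lets you branch a partially configured one.
    let base = AutoMlJobObjectiveBuilder::default()
        .metric_name(AutoMlMetricEnum::Accuracy);
    let copy = base.clone();

    // PartialEq compares the builders field by field, so the clone is equal to the original.
    assert_eq!(base, copy);

    // build() consumes each builder independently.
    let _a = base.build();
    let _b = copy.build();
}
```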