#[non_exhaustive]
pub struct AutoMlJobObjectiveBuilder { /* private fields */ }
A builder for AutoMlJobObjective.
Implementations
impl AutoMlJobObjectiveBuilder
pub fn metric_name(self, input: AutoMlMetricEnum) -> Self
The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.
The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.
- For tabular problem types:
  - List of available metrics:
    - Regression: MAE, MSE, R2, RMSE
    - Binary classification: Accuracy, AUC, BalancedAccuracy, F1, Precision, Recall
    - Multiclass classification: Accuracy, BalancedAccuracy, F1macro, PrecisionMacro, RecallMacro

    For a description of each metric, see Autopilot metrics for classification and regression.
  - Default objective metrics:
    - Regression: MSE.
    - Binary classification: F1.
    - Multiclass classification: Accuracy.
- For image or text classification problem types:
  - List of available metrics: Accuracy

    For a description of each metric, see Autopilot metrics for text and image classification.
  - Default objective metrics: Accuracy
- For time-series forecasting problem types:
  - List of available metrics: RMSE, wQL, Average wQL, MASE, MAPE, WAPE

    For a description of each metric, see Autopilot metrics for time-series forecasting.
  - Default objective metrics: AverageWeightedQuantileLoss
- For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the AutoMLJobObjective field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see Metrics for fine-tuning LLMs in Autopilot.
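As a minimal, hedged sketch (assuming the usual aws-sdk-sagemaker crate layout, where the builder is obtained via AutoMlJobObjective::builder() and the metric variants live on AutoMlMetricEnum, e.g. an assumed AutoMlMetricEnum::Auc variant for the AUC metric), selecting an explicit objective for a binary classification job looks like this:

use aws_sdk_sagemaker::types::{AutoMlJobObjective, AutoMlMetricEnum};

// Sketch: request AUC instead of the binary-classification default (F1).
// `AutoMlMetricEnum::Auc` is an assumed variant name for this crate version.
let objective = AutoMlJobObjective::builder()
    .metric_name(AutoMlMetricEnum::Auc)
    .build();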
pub fn set_metric_name(self, input: Option<AutoMlMetricEnum>) -> Self
The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.
The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.
- For tabular problem types:
  - List of available metrics:
    - Regression: MAE, MSE, R2, RMSE
    - Binary classification: Accuracy, AUC, BalancedAccuracy, F1, Precision, Recall
    - Multiclass classification: Accuracy, BalancedAccuracy, F1macro, PrecisionMacro, RecallMacro

    For a description of each metric, see Autopilot metrics for classification and regression.
  - Default objective metrics:
    - Regression: MSE.
    - Binary classification: F1.
    - Multiclass classification: Accuracy.
- For image or text classification problem types:
  - List of available metrics: Accuracy

    For a description of each metric, see Autopilot metrics for text and image classification.
  - Default objective metrics: Accuracy
- For time-series forecasting problem types:
  - List of available metrics: RMSE, wQL, Average wQL, MASE, MAPE, WAPE

    For a description of each metric, see Autopilot metrics for time-series forecasting.
  - Default objective metrics: AverageWeightedQuantileLoss
- For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the AutoMLJobObjective field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see Metrics for fine-tuning LLMs in Autopilot.
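The set_ variant takes an Option<AutoMlMetricEnum>, which makes it convenient when the metric comes from optional configuration; a hedged sketch (AutoMlMetricEnum::Mse is an assumed variant name):

use aws_sdk_sagemaker::types::{AutoMlJobObjective, AutoMlMetricEnum};

// Sketch: pass `Some(..)` to pin a metric, or `None` to fall back to the
// problem-type default. `AutoMlMetricEnum::Mse` is an assumed variant name.
let maybe_metric: Option<AutoMlMetricEnum> = Some(AutoMlMetricEnum::Mse);
let objective = AutoMlJobObjective::builder()
    .set_metric_name(maybe_metric)
    .build();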
pub fn get_metric_name(&self) -> &Option<AutoMlMetricEnum>
The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.
The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.
- For tabular problem types:
  - List of available metrics:
    - Regression: MAE, MSE, R2, RMSE
    - Binary classification: Accuracy, AUC, BalancedAccuracy, F1, Precision, Recall
    - Multiclass classification: Accuracy, BalancedAccuracy, F1macro, PrecisionMacro, RecallMacro

    For a description of each metric, see Autopilot metrics for classification and regression.
  - Default objective metrics:
    - Regression: MSE.
    - Binary classification: F1.
    - Multiclass classification: Accuracy.
- For image or text classification problem types:
  - List of available metrics: Accuracy

    For a description of each metric, see Autopilot metrics for text and image classification.
  - Default objective metrics: Accuracy
- For time-series forecasting problem types:
  - List of available metrics: RMSE, wQL, Average wQL, MASE, MAPE, WAPE

    For a description of each metric, see Autopilot metrics for time-series forecasting.
  - Default objective metrics: AverageWeightedQuantileLoss
- For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the AutoMLJobObjective field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see Metrics for fine-tuning LLMs in Autopilot.
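A sketch of reading the value back before building; the getter borrows the builder's internal Option rather than consuming the builder. The types::builders module path is assumed from the usual aws-sdk-* crate layout, and the assertion assumes AutoMlMetricEnum derives PartialEq and Debug:

use aws_sdk_sagemaker::types::builders::AutoMlJobObjectiveBuilder;
use aws_sdk_sagemaker::types::AutoMlMetricEnum;

// Sketch: the builder implements Default (see the trait list below),
// so it can be created directly instead of via `builder()`.
let builder = AutoMlJobObjectiveBuilder::default()
    .metric_name(AutoMlMetricEnum::Accuracy);
assert_eq!(builder.get_metric_name(), &Some(AutoMlMetricEnum::Accuracy));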
pub fn build(self) -> AutoMlJobObjective
Consumes the builder and constructs an AutoMlJobObjective.
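Per the signature above, build returns the value directly rather than a Result, so constructing the objective is a single expression; a sketch (AutoMlMetricEnum::Rmse is an assumed variant name):

use aws_sdk_sagemaker::types::{AutoMlJobObjective, AutoMlMetricEnum};

// Sketch: `build` consumes the builder and yields `AutoMlJobObjective`
// directly; per the signature on this page it is infallible.
let objective: AutoMlJobObjective = AutoMlJobObjective::builder()
    .metric_name(AutoMlMetricEnum::Rmse)
    .build();

The resulting value is typically passed on to a request builder, for example when configuring a CreateAutoMLJobV2 call.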
Trait Implementations
impl Clone for AutoMlJobObjectiveBuilder
fn clone(&self) -> AutoMlJobObjectiveBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for AutoMlJobObjectiveBuilder
impl Default for AutoMlJobObjectiveBuilder
fn default() -> AutoMlJobObjectiveBuilder
impl StructuralPartialEq for AutoMlJobObjectiveBuilder
Auto Trait Implementations
impl Freeze for AutoMlJobObjectiveBuilder
impl RefUnwindSafe for AutoMlJobObjectiveBuilder
impl Send for AutoMlJobObjectiveBuilder
impl Sync for AutoMlJobObjectiveBuilder
impl Unpin for AutoMlJobObjectiveBuilder
impl UnwindSafe for AutoMlJobObjectiveBuilder
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
impl<T> Paint for T
where
    T: ?Sized,
fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the foreground set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.
Example
Set foreground color to white using fg():
use yansi::{Paint, Color};
painted.fg(Color::White);
Set foreground color to white using white().
use yansi::Paint;
painted.white();
fn bright_black(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>
fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the background set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.
Example
Set background color to red using bg():
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set background color to red using on_red().
use yansi::Paint;
painted.on_red();
fn on_primary(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>
fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute value.
This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.
Example
Make text bold using attr():
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold().
use yansi::Paint;
painted.bold();
fn rapid_blink(&self) -> Painted<&T>
fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi Quirk value.
This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.
Example
Enable wrapping using quirk():
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap().
use yansi::Paint;
painted.wrap();
fn clear(&self) -> Painted<&T>
👎 Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.
fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition.
See the crate level docs for more details.
Example
Enable styling painted only when both stdout and stderr are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);