aws_sdk_sagemaker/types/_auto_ml_job_objective.rs

// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.

/// <p>Specifies a metric to minimize or maximize as the objective of an AutoML job.</p>
#[non_exhaustive]
#[derive(::std::clone::Clone, ::std::cmp::PartialEq, ::std::fmt::Debug)]
pub struct AutoMlJobObjective {
    /// <p>The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.</p>
    /// <p>The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.</p>
    /// <ul>
    /// <li>
    /// <p>For tabular problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MAE</code>, <code>MSE</code>, <code>R2</code>, <code>RMSE</code></p></li>
    /// <li>
    /// <p>Binary classification: <code>Accuracy</code>, <code>AUC</code>, <code>BalancedAccuracy</code>, <code>F1</code>, <code>Precision</code>, <code>Recall</code></p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>, <code>BalancedAccuracy</code>, <code>F1macro</code>, <code>PrecisionMacro</code>, <code>RecallMacro</code></p></li>
    /// </ul>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-metrics-validation.html#autopilot-metrics">Autopilot metrics for classification and regression</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MSE</code>.</p></li>
    /// <li>
    /// <p>Binary classification: <code>F1</code>.</p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>.</p></li>
    /// </ul></li>
    /// </ul></li>
    /// <li>
    /// <p>For image or text classification problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>Accuracy</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/text-classification-data-format-and-metric.html">Autopilot metrics for text and image classification</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>Accuracy</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For time-series forecasting problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>RMSE</code>, <code>wQL</code>, <code>Average wQL</code>, <code>MASE</code>, <code>MAPE</code>, <code>WAPE</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/timeseries-objective-metric.html">Autopilot metrics for time-series forecasting</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>AverageWeightedQuantileLoss</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the <code>AutoMLJobObjective</code> field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-llms-finetuning-metrics.html">Metrics for fine-tuning LLMs in Autopilot</a>.</p></li>
    /// </ul>
    pub metric_name: ::std::option::Option<crate::types::AutoMlMetricEnum>,
}
impl AutoMlJobObjective {
    /// <p>The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.</p>
    /// <p>The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.</p>
    /// <ul>
    /// <li>
    /// <p>For tabular problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MAE</code>, <code>MSE</code>, <code>R2</code>, <code>RMSE</code></p></li>
    /// <li>
    /// <p>Binary classification: <code>Accuracy</code>, <code>AUC</code>, <code>BalancedAccuracy</code>, <code>F1</code>, <code>Precision</code>, <code>Recall</code></p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>, <code>BalancedAccuracy</code>, <code>F1macro</code>, <code>PrecisionMacro</code>, <code>RecallMacro</code></p></li>
    /// </ul>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-metrics-validation.html#autopilot-metrics">Autopilot metrics for classification and regression</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MSE</code>.</p></li>
    /// <li>
    /// <p>Binary classification: <code>F1</code>.</p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>.</p></li>
    /// </ul></li>
    /// </ul></li>
    /// <li>
    /// <p>For image or text classification problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>Accuracy</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/text-classification-data-format-and-metric.html">Autopilot metrics for text and image classification</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>Accuracy</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For time-series forecasting problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>RMSE</code>, <code>wQL</code>, <code>Average wQL</code>, <code>MASE</code>, <code>MAPE</code>, <code>WAPE</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/timeseries-objective-metric.html">Autopilot metrics for time-series forecasting</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>AverageWeightedQuantileLoss</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the <code>AutoMLJobObjective</code> field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-llms-finetuning-metrics.html">Metrics for fine-tuning LLMs in Autopilot</a>.</p></li>
    /// </ul>
    pub fn metric_name(&self) -> ::std::option::Option<&crate::types::AutoMlMetricEnum> {
        self.metric_name.as_ref()
    }
}
impl AutoMlJobObjective {
    /// Creates a new builder-style object to manufacture [`AutoMlJobObjective`](crate::types::AutoMlJobObjective).
    pub fn builder() -> crate::types::builders::AutoMlJobObjectiveBuilder {
        crate::types::builders::AutoMlJobObjectiveBuilder::default()
    }
}

/// A builder for [`AutoMlJobObjective`](crate::types::AutoMlJobObjective).
#[derive(::std::clone::Clone, ::std::cmp::PartialEq, ::std::default::Default, ::std::fmt::Debug)]
#[non_exhaustive]
pub struct AutoMlJobObjectiveBuilder {
    pub(crate) metric_name: ::std::option::Option<crate::types::AutoMlMetricEnum>,
}
impl AutoMlJobObjectiveBuilder {
    /// <p>The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.</p>
    /// <p>The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.</p>
    /// <ul>
    /// <li>
    /// <p>For tabular problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MAE</code>, <code>MSE</code>, <code>R2</code>, <code>RMSE</code></p></li>
    /// <li>
    /// <p>Binary classification: <code>Accuracy</code>, <code>AUC</code>, <code>BalancedAccuracy</code>, <code>F1</code>, <code>Precision</code>, <code>Recall</code></p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>, <code>BalancedAccuracy</code>, <code>F1macro</code>, <code>PrecisionMacro</code>, <code>RecallMacro</code></p></li>
    /// </ul>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-metrics-validation.html#autopilot-metrics">Autopilot metrics for classification and regression</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MSE</code>.</p></li>
    /// <li>
    /// <p>Binary classification: <code>F1</code>.</p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>.</p></li>
    /// </ul></li>
    /// </ul></li>
    /// <li>
    /// <p>For image or text classification problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>Accuracy</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/text-classification-data-format-and-metric.html">Autopilot metrics for text and image classification</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>Accuracy</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For time-series forecasting problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>RMSE</code>, <code>wQL</code>, <code>Average wQL</code>, <code>MASE</code>, <code>MAPE</code>, <code>WAPE</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/timeseries-objective-metric.html">Autopilot metrics for time-series forecasting</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>AverageWeightedQuantileLoss</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the <code>AutoMLJobObjective</code> field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-llms-finetuning-metrics.html">Metrics for fine-tuning LLMs in Autopilot</a>.</p></li>
    /// </ul>
    /// This field is required.
    pub fn metric_name(mut self, input: crate::types::AutoMlMetricEnum) -> Self {
        self.metric_name = ::std::option::Option::Some(input);
        self
    }
    /// <p>The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.</p>
    /// <p>The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.</p>
    /// <ul>
    /// <li>
    /// <p>For tabular problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MAE</code>, <code>MSE</code>, <code>R2</code>, <code>RMSE</code></p></li>
    /// <li>
    /// <p>Binary classification: <code>Accuracy</code>, <code>AUC</code>, <code>BalancedAccuracy</code>, <code>F1</code>, <code>Precision</code>, <code>Recall</code></p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>, <code>BalancedAccuracy</code>, <code>F1macro</code>, <code>PrecisionMacro</code>, <code>RecallMacro</code></p></li>
    /// </ul>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-metrics-validation.html#autopilot-metrics">Autopilot metrics for classification and regression</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MSE</code>.</p></li>
    /// <li>
    /// <p>Binary classification: <code>F1</code>.</p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>.</p></li>
    /// </ul></li>
    /// </ul></li>
    /// <li>
    /// <p>For image or text classification problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>Accuracy</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/text-classification-data-format-and-metric.html">Autopilot metrics for text and image classification</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>Accuracy</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For time-series forecasting problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>RMSE</code>, <code>wQL</code>, <code>Average wQL</code>, <code>MASE</code>, <code>MAPE</code>, <code>WAPE</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/timeseries-objective-metric.html">Autopilot metrics for time-series forecasting</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>AverageWeightedQuantileLoss</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the <code>AutoMLJobObjective</code> field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-llms-finetuning-metrics.html">Metrics for fine-tuning LLMs in Autopilot</a>.</p></li>
    /// </ul>
    pub fn set_metric_name(mut self, input: ::std::option::Option<crate::types::AutoMlMetricEnum>) -> Self {
        self.metric_name = input;
        self
    }
    /// <p>The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.</p>
    /// <p>The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.</p>
    /// <ul>
    /// <li>
    /// <p>For tabular problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MAE</code>, <code>MSE</code>, <code>R2</code>, <code>RMSE</code></p></li>
    /// <li>
    /// <p>Binary classification: <code>Accuracy</code>, <code>AUC</code>, <code>BalancedAccuracy</code>, <code>F1</code>, <code>Precision</code>, <code>Recall</code></p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>, <code>BalancedAccuracy</code>, <code>F1macro</code>, <code>PrecisionMacro</code>, <code>RecallMacro</code></p></li>
    /// </ul>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-metrics-validation.html#autopilot-metrics">Autopilot metrics for classification and regression</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics:</p>
    /// <ul>
    /// <li>
    /// <p>Regression: <code>MSE</code>.</p></li>
    /// <li>
    /// <p>Binary classification: <code>F1</code>.</p></li>
    /// <li>
    /// <p>Multiclass classification: <code>Accuracy</code>.</p></li>
    /// </ul></li>
    /// </ul></li>
    /// <li>
    /// <p>For image or text classification problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>Accuracy</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/text-classification-data-format-and-metric.html">Autopilot metrics for text and image classification</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>Accuracy</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For time-series forecasting problem types:</p>
    /// <ul>
    /// <li>
    /// <p>List of available metrics: <code>RMSE</code>, <code>wQL</code>, <code>Average wQL</code>, <code>MASE</code>, <code>MAPE</code>, <code>WAPE</code></p>
    /// <p>For a description of each metric, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/timeseries-objective-metric.html">Autopilot metrics for time-series forecasting</a>.</p></li>
    /// <li>
    /// <p>Default objective metrics: <code>AverageWeightedQuantileLoss</code></p></li>
    /// </ul></li>
    /// <li>
    /// <p>For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the <code>AutoMLJobObjective</code> field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-llms-finetuning-metrics.html">Metrics for fine-tuning LLMs in Autopilot</a>.</p></li>
    /// </ul>
    pub fn get_metric_name(&self) -> &::std::option::Option<crate::types::AutoMlMetricEnum> {
        &self.metric_name
    }
    /// Consumes the builder and constructs a [`AutoMlJobObjective`](crate::types::AutoMlJobObjective).
    pub fn build(self) -> crate::types::AutoMlJobObjective {
        crate::types::AutoMlJobObjective {
            metric_name: self.metric_name,
        }
    }
}