Struct aws_sdk_sagemaker::operation::create_hyper_parameter_tuning_job::CreateHyperParameterTuningJobInput
#[non_exhaustive]
pub struct CreateHyperParameterTuningJobInput {
pub hyper_parameter_tuning_job_name: Option<String>,
pub hyper_parameter_tuning_job_config: Option<HyperParameterTuningJobConfig>,
pub training_job_definition: Option<HyperParameterTrainingJobDefinition>,
pub training_job_definitions: Option<Vec<HyperParameterTrainingJobDefinition>>,
pub warm_start_config: Option<HyperParameterTuningJobWarmStartConfig>,
pub tags: Option<Vec<Tag>>,
pub autotune: Option<Autotune>,
}
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future versions. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.
hyper_parameter_tuning_job_name: Option<String>
The name of the tuning job. This name is the prefix for the names of all training jobs that this tuning job launches. The name must be unique within the same Amazon Web Services account and Amazon Web Services Region. The name must have 1 to 32 characters. Valid characters are a-z, A-Z, 0-9, and : + = @ _ % - (hyphen). The name is not case sensitive.
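These naming constraints can be checked client-side before the request is built. A minimal sketch using only the standard library; is_valid_tuning_job_name is an illustrative helper, not part of the SDK:

/// Hypothetical client-side check mirroring the documented naming rules:
/// 1 to 32 characters, drawn from a-z, A-Z, 0-9, and : + = @ _ % - (hyphen).
fn is_valid_tuning_job_name(name: &str) -> bool {
    let len_ok = (1..=32).contains(&name.chars().count());
    let chars_ok = name
        .chars()
        .all(|c| c.is_ascii_alphanumeric() || ":+=@_%-".contains(c));
    len_ok && chars_ok
}

fn main() {
    assert!(is_valid_tuning_job_name("example-tuning-job"));
    assert!(!is_valid_tuning_job_name("name with spaces"));
}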
hyper_parameter_tuning_job_config: Option<HyperParameterTuningJobConfig>
The HyperParameterTuningJobConfig object that describes the tuning job, including the search strategy, the objective metric used to evaluate training jobs, ranges of parameters to search, and resource limits for the tuning job. For more information, see How Hyperparameter Tuning Works.
training_job_definition: Option<HyperParameterTrainingJobDefinition>
The HyperParameterTrainingJobDefinition object that describes the training jobs that this tuning job launches, including static hyperparameters, input data configuration, output data configuration, resource configuration, and stopping condition.
training_job_definitions: Option<Vec<HyperParameterTrainingJobDefinition>>
A list of the HyperParameterTrainingJobDefinition objects launched for this tuning job.
warm_start_config: Option<HyperParameterTuningJobWarmStartConfig>
Specifies the configuration for starting the hyperparameter tuning job using one or more previous tuning jobs as a starting point. The results of previous tuning jobs are used to inform which combinations of hyperparameters to search over in the new tuning job.
All training jobs launched by the new hyperparameter tuning job are evaluated by using the objective metric. If you specify IDENTICAL_DATA_AND_ALGORITHM as the WarmStartType value for the warm start configuration, the training job that performs the best in the new tuning job is compared to the best training jobs from the parent tuning jobs. From these, the training job that performs the best as measured by the objective metric is returned as the overall best training job.
All training jobs launched by parent hyperparameter tuning jobs and the new hyperparameter tuning jobs count against the limit of training jobs for the tuning job.
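A short sketch of inspecting the warm start setting on an already-built input, using only the accessor documented on this page; describe_warm_start is a hypothetical helper, and Debug formatting is assumed to be available for the config type as it is for other SDK types:

use aws_sdk_sagemaker::operation::create_hyper_parameter_tuning_job::CreateHyperParameterTuningJobInput;

// Hypothetical helper: report whether a warm start configuration is attached to the request.
fn describe_warm_start(input: &CreateHyperParameterTuningJobInput) {
    match input.warm_start_config() {
        Some(cfg) => println!("warm start configured: {cfg:?}"),
        None => println!("no warm start; the tuning job searches from scratch"),
    }
}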
tags: Option<Vec<Tag>>
An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.
Tags that you specify for the tuning job are also added to all training jobs that the tuning job launches.
autotune: Option<Autotune>
Configures SageMaker Automatic model tuning (AMT) to automatically find optimal parameters for the following fields:
- ParameterRanges: The names and ranges of parameters that a hyperparameter tuning job can optimize.
- ResourceLimits: The maximum resources that can be used for a training job. These resources include the maximum number of training jobs, the maximum runtime of a tuning job, and the maximum number of training jobs to run at the same time.
- TrainingJobEarlyStoppingType: A flag that specifies whether or not to use early stopping for training jobs launched by a hyperparameter tuning job.
- RetryStrategy: The number of times to retry a training job.
- Strategy: Specifies how hyperparameter tuning chooses the combinations of hyperparameter values to use for the training jobs that it launches.
- ConvergenceDetected: A flag to indicate that Automatic model tuning (AMT) has detected model convergence.
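As a rough illustration, a caller might use the Option fields documented on this page to decide whether an explicit tuning configuration still needs to be supplied when Autotune is not set; needs_explicit_tuning_config is a hypothetical helper and only a heuristic, not an SDK API:

use aws_sdk_sagemaker::operation::create_hyper_parameter_tuning_job::CreateHyperParameterTuningJobInput;

// Hypothetical pre-flight check: when Autotune is not enabled, the strategy, resource
// limits, and parameter ranges must be supplied explicitly via the tuning job config.
fn needs_explicit_tuning_config(input: &CreateHyperParameterTuningJobInput) -> bool {
    input.autotune.is_none() && input.hyper_parameter_tuning_job_config.is_none()
}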
Implementations
impl CreateHyperParameterTuningJobInput
pub fn hyper_parameter_tuning_job_name(&self) -> Option<&str>
The name of the tuning job. This name is the prefix for the names of all training jobs that this tuning job launches. The name must be unique within the same Amazon Web Services account and Amazon Web Services Region. The name must have 1 to 32 characters. Valid characters are a-z, A-Z, 0-9, and : + = @ _ % - (hyphen). The name is not case sensitive.
pub fn hyper_parameter_tuning_job_config(&self) -> Option<&HyperParameterTuningJobConfig>
The HyperParameterTuningJobConfig object that describes the tuning job, including the search strategy, the objective metric used to evaluate training jobs, ranges of parameters to search, and resource limits for the tuning job. For more information, see How Hyperparameter Tuning Works.
pub fn training_job_definition(&self) -> Option<&HyperParameterTrainingJobDefinition>
The HyperParameterTrainingJobDefinition object that describes the training jobs that this tuning job launches, including static hyperparameters, input data configuration, output data configuration, resource configuration, and stopping condition.
pub fn training_job_definitions(&self) -> &[HyperParameterTrainingJobDefinition]
A list of the HyperParameterTrainingJobDefinition objects launched for this tuning job.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .training_job_definitions.is_none().
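A short sketch of the distinction: the accessor falls back to an empty slice, while the raw Option field records whether the value was sent at all; summarize_definitions is a hypothetical helper:

use aws_sdk_sagemaker::operation::create_hyper_parameter_tuning_job::CreateHyperParameterTuningJobInput;

// Hypothetical helper: distinguish "field never sent" from "field sent as an empty list".
fn summarize_definitions(input: &CreateHyperParameterTuningJobInput) {
    // The accessor substitutes an empty slice when the underlying Option is None ...
    let count = input.training_job_definitions().len();
    // ... so the raw Option is the way to tell whether the field was sent at all.
    let sent = input.training_job_definitions.is_some();
    println!("{count} training job definition(s); field sent: {sent}");
}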
pub fn warm_start_config(&self) -> Option<&HyperParameterTuningJobWarmStartConfig>
Specifies the configuration for starting the hyperparameter tuning job using one or more previous tuning jobs as a starting point. The results of previous tuning jobs are used to inform which combinations of hyperparameters to search over in the new tuning job.
All training jobs launched by the new hyperparameter tuning job are evaluated by using the objective metric. If you specify IDENTICAL_DATA_AND_ALGORITHM as the WarmStartType value for the warm start configuration, the training job that performs the best in the new tuning job is compared to the best training jobs from the parent tuning jobs. From these, the training job that performs the best as measured by the objective metric is returned as the overall best training job.
All training jobs launched by parent hyperparameter tuning jobs and the new hyperparameter tuning jobs count against the limit of training jobs for the tuning job.
pub fn tags(&self) -> &[Tag]
An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.
Tags that you specify for the tuning job are also added to all training jobs that the tuning job launches.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .tags.is_none().
pub fn autotune(&self) -> Option<&Autotune>
Configures SageMaker Automatic model tuning (AMT) to automatically find optimal parameters for the following fields:
- ParameterRanges: The names and ranges of parameters that a hyperparameter tuning job can optimize.
- ResourceLimits: The maximum resources that can be used for a training job. These resources include the maximum number of training jobs, the maximum runtime of a tuning job, and the maximum number of training jobs to run at the same time.
- TrainingJobEarlyStoppingType: A flag that specifies whether or not to use early stopping for training jobs launched by a hyperparameter tuning job.
- RetryStrategy: The number of times to retry a training job.
- Strategy: Specifies how hyperparameter tuning chooses the combinations of hyperparameter values to use for the training jobs that it launches.
- ConvergenceDetected: A flag to indicate that Automatic model tuning (AMT) has detected model convergence.
impl CreateHyperParameterTuningJobInput
pub fn builder() -> CreateHyperParameterTuningJobInputBuilder
Creates a new builder-style object to manufacture CreateHyperParameterTuningJobInput.
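A minimal sketch of the builder in use, assuming build() returns Result<CreateHyperParameterTuningJobInput, BuildError> as in current SDK versions; the job name is a placeholder, and a real request would also set the tuning job configuration and a training job definition (or definitions) before being sent with the client:

use aws_sdk_sagemaker::operation::create_hyper_parameter_tuning_job::CreateHyperParameterTuningJobInput;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Minimal sketch: only the job name is set. A real request also needs
    // hyper_parameter_tuning_job_config and training_job_definition (or
    // training_job_definitions) supplied through the corresponding builder methods.
    let input = CreateHyperParameterTuningJobInput::builder()
        .hyper_parameter_tuning_job_name("example-tuning-job") // placeholder name
        .build()?; // assumed fallible (Result<_, BuildError>), as in current SDK versions
    println!("job name: {:?}", input.hyper_parameter_tuning_job_name());
    Ok(())
}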
Trait Implementations
impl Clone for CreateHyperParameterTuningJobInput
fn clone(&self) -> CreateHyperParameterTuningJobInput
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl PartialEq for CreateHyperParameterTuningJobInput
fn eq(&self, other: &CreateHyperParameterTuningJobInput) -> bool
Tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for CreateHyperParameterTuningJobInput
Auto Trait Implementations
impl Freeze for CreateHyperParameterTuningJobInput
impl RefUnwindSafe for CreateHyperParameterTuningJobInput
impl Send for CreateHyperParameterTuningJobInput
impl Sync for CreateHyperParameterTuningJobInput
impl Unpin for CreateHyperParameterTuningJobInput
impl UnwindSafe for CreateHyperParameterTuningJobInput
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
default unsafe fn clone_to_uninit(&self, dst: *mut T)
This is a nightly-only experimental API. (clone_to_uninit)
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.