Struct aws_sdk_sagemaker::model::HyperParameterTrainingJobDefinition
#[non_exhaustive]
pub struct HyperParameterTrainingJobDefinition {
pub definition_name: Option<String>,
pub tuning_objective: Option<HyperParameterTuningJobObjective>,
pub hyper_parameter_ranges: Option<ParameterRanges>,
pub static_hyper_parameters: Option<HashMap<String, String>>,
pub algorithm_specification: Option<HyperParameterAlgorithmSpecification>,
pub role_arn: Option<String>,
pub input_data_config: Option<Vec<Channel>>,
pub vpc_config: Option<VpcConfig>,
pub output_data_config: Option<OutputDataConfig>,
pub resource_config: Option<ResourceConfig>,
pub stopping_condition: Option<StoppingCondition>,
pub enable_network_isolation: bool,
pub enable_inter_container_traffic_encryption: bool,
pub enable_managed_spot_training: bool,
pub checkpoint_config: Option<CheckpointConfig>,
pub retry_strategy: Option<RetryStrategy>,
}
Defines the training jobs launched by a hyperparameter tuning job.
Fields (Non-exhaustive)
This struct is marked as non-exhaustive: it cannot be constructed with the traditional Struct { .. } literal syntax, it cannot be matched against without a wildcard .., and struct update syntax will not work.
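A minimal sketch of what the non-exhaustive marker means in practice; the inspect function and the choice of fields are purely illustrative:

use aws_sdk_sagemaker::model::HyperParameterTrainingJobDefinition;

// Destructuring a non-exhaustive struct defined in another crate requires
// the `..` wildcard to cover any fields that may be added in the future.
fn inspect(def: &HyperParameterTrainingJobDefinition) {
    let HyperParameterTrainingJobDefinition { definition_name, role_arn, .. } = def;
    println!("name: {:?}, role: {:?}", definition_name, role_arn);
}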
definition_name: Option<String>
The job definition name.
tuning_objective: Option<HyperParameterTuningJobObjective>
Defines the objective metric for a hyperparameter tuning job. Hyperparameter tuning uses the value of this metric to evaluate the training jobs it launches, and returns the training job that results in either the highest or lowest value for this metric, depending on the value you specify for the Type parameter.
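For example, a sketch of building an objective with the generated builder; the metric name is hypothetical, and the Type setter is assumed to be the raw identifier r#type because type is a Rust keyword:

use aws_sdk_sagemaker::model::{
    HyperParameterTuningJobObjective, HyperParameterTuningJobObjectiveType,
};

// Maximize a (hypothetical) validation accuracy metric emitted by the algorithm.
let objective = HyperParameterTuningJobObjective::builder()
    .r#type(HyperParameterTuningJobObjectiveType::Maximize)
    .metric_name("validation:accuracy")
    .build();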
hyper_parameter_ranges: Option<ParameterRanges>
Specifies ranges of integer, continuous, and categorical hyperparameters that a hyperparameter tuning job searches. The hyperparameter tuning job launches training jobs with hyperparameter values within these ranges to find the combination of values that result in the training job with the best performance as measured by the objective metric of the hyperparameter tuning job.
You can specify a maximum of 20 hyperparameters that a hyperparameter tuning job can search over. Every possible value of a categorical parameter range counts against this limit.
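A sketch of declaring one range of each kind, assuming the generated builders; note that the SageMaker API passes range bounds as strings, and the parameter names are illustrative:

use aws_sdk_sagemaker::model::{
    CategoricalParameterRange, ContinuousParameterRange, IntegerParameterRange, ParameterRanges,
};

let ranges = ParameterRanges::builder()
    // Integer range: whole-number values between the bounds.
    .integer_parameter_ranges(
        IntegerParameterRange::builder()
            .name("epochs")
            .min_value("1")
            .max_value("10")
            .build(),
    )
    // Continuous range: real-valued search space.
    .continuous_parameter_ranges(
        ContinuousParameterRange::builder()
            .name("learning_rate")
            .min_value("0.0001")
            .max_value("0.1")
            .build(),
    )
    // Categorical range: each listed value counts against the 20-parameter limit.
    .categorical_parameter_ranges(
        CategoricalParameterRange::builder()
            .name("optimizer")
            .values("sgd")
            .values("adam")
            .build(),
    )
    .build();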
static_hyper_parameters: Option<HashMap<String, String>>
Specifies the values of hyperparameters that do not change for the tuning job.
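For illustration, a sketch assuming the smithy-rs builder convention, where a map-valued field exposes an adder that inserts one key/value pair per call; the hyperparameter names and values are hypothetical:

use aws_sdk_sagemaker::model::HyperParameterTrainingJobDefinition;

// Fix batch size and dropout for every training job the tuning job launches.
let with_static_params = HyperParameterTrainingJobDefinition::builder()
    .static_hyper_parameters("batch_size", "64")
    .static_hyper_parameters("dropout", "0.5")
    .build();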
algorithm_specification: Option<HyperParameterAlgorithmSpecification>
The HyperParameterAlgorithmSpecification object that specifies the algorithm to use for the training jobs that the tuning job launches.
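A sketch of specifying a custom training image, assuming the generated builder; the image URI is a placeholder:

use aws_sdk_sagemaker::model::{HyperParameterAlgorithmSpecification, TrainingInputMode};

let algo_spec = HyperParameterAlgorithmSpecification::builder()
    .training_image("<account>.dkr.ecr.<region>.amazonaws.com/my-image:latest") // placeholder
    .training_input_mode(TrainingInputMode::File)
    .build();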
role_arn: Option<String>
The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job launches.
input_data_config: Option<Vec<Channel>>
An array of Channel objects that specify the input for the training jobs that the tuning job launches.
vpc_config: Option<VpcConfig>
The VpcConfig object that specifies the VPC that you want the training jobs that this hyperparameter tuning job launches to connect to. Control access to and from your training container by configuring the VPC. For more information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.
output_data_config: Option<OutputDataConfig>
Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that the tuning job launches.
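A sketch of one input channel plus an output location, assuming the generated builders; bucket names and prefixes are placeholders:

use aws_sdk_sagemaker::model::{Channel, DataSource, OutputDataConfig, S3DataSource, S3DataType};

// A "train" channel reading from an S3 prefix.
let train_channel = Channel::builder()
    .channel_name("train")
    .data_source(
        DataSource::builder()
            .s3_data_source(
                S3DataSource::builder()
                    .s3_data_type(S3DataType::S3Prefix)
                    .s3_uri("s3://my-bucket/train/")
                    .build(),
            )
            .build(),
    )
    .build();

// Where each training job writes its model artifacts.
let output = OutputDataConfig::builder()
    .s3_output_path("s3://my-bucket/output/")
    .build();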
resource_config: Option<ResourceConfig>
The resources, including the compute instances and storage volumes, to use for the training jobs that the tuning job launches.
Storage volumes store model artifacts and incremental states. Training algorithms might also use storage volumes for scratch space. If you want Amazon SageMaker to use the storage volume to store the training data, choose File as the TrainingInputMode in the algorithm specification. For distributed training algorithms, specify an instance count greater than 1.
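A sketch along these lines, assuming the generated builder and enum variant naming; the instance type shown is illustrative:

use aws_sdk_sagemaker::model::{ResourceConfig, TrainingInstanceType};

let resources = ResourceConfig::builder()
    .instance_type(TrainingInstanceType::MlM5Xlarge) // assumed variant name
    .instance_count(1)     // use a count greater than 1 for distributed training
    .volume_size_in_gb(50) // artifacts, incremental state, and scratch space
    .build();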
stopping_condition: Option<StoppingCondition>
Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long a managed spot training job has to complete. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs.
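For example, capping each training job at one hour; the setter name is assumed from the field name:

use aws_sdk_sagemaker::model::StoppingCondition;

let stopping = StoppingCondition::builder()
    .max_runtime_in_seconds(3600)
    .build();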
enable_network_isolation: bool
Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If network isolation is used for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.
enable_inter_container_traffic_encryption: bool
To encrypt all communications between ML compute instances in distributed training, choose True. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training.
enable_managed_spot_training: bool
A Boolean indicating whether managed spot training is enabled (True) or not (False).
checkpoint_config: Option<CheckpointConfig>
Contains information about the output location for managed spot training checkpoint data.
retry_strategy: Option<RetryStrategy>
The number of times to retry the job when the job fails due to an InternalServerError.
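A combined sketch of the two fields above, checkpoint_config and retry_strategy, assuming the generated builders; the S3 URI is a placeholder:

use aws_sdk_sagemaker::model::{CheckpointConfig, RetryStrategy};

// Checkpoint location so managed spot training can resume after interruption.
let checkpointing = CheckpointConfig::builder()
    .s3_uri("s3://my-bucket/checkpoints/")
    .build();

// Retry up to three times on InternalServerError.
let retries = RetryStrategy::builder()
    .maximum_retry_attempts(3)
    .build();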
Implementations
Creates a new builder-style object to manufacture HyperParameterTrainingJobDefinition
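Putting it together: a sketch of assembling a full definition from the values built in the earlier sketches; setter names are assumed to mirror the field names, per the smithy-rs builder convention, and the role ARN is a placeholder:

use aws_sdk_sagemaker::model::HyperParameterTrainingJobDefinition;

let definition = HyperParameterTrainingJobDefinition::builder()
    .role_arn("arn:aws:iam::123456789012:role/SageMakerRole") // placeholder
    .tuning_objective(objective)
    .hyper_parameter_ranges(ranges)
    .algorithm_specification(algo_spec)
    .input_data_config(train_channel)
    .output_data_config(output)
    .resource_config(resources)
    .stopping_condition(stopping)
    .build();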
Trait Implementations
This method tests for self and other values to be equal, and is used by ==.
This method tests for !=.
Auto Trait Implementations
impl Send for HyperParameterTrainingJobDefinition
impl Sync for HyperParameterTrainingJobDefinition
impl Unpin for HyperParameterTrainingJobDefinition
Blanket Implementations
Mutably borrows from an owned value.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.