Struct aws_sdk_sagemaker::model::resource_config::Builder
pub struct Builder { /* private fields */ }
A builder for ResourceConfig.
Implementations
impl Builder
pub fn instance_type(self, input: TrainingInstanceType) -> Self
The ML compute instance type.
SageMaker Training on Amazon Elastic Compute Cloud (EC2) P4de instances is in preview release starting December 9th, 2022.
Amazon EC2 P4de instances (currently in preview) are powered by 8 NVIDIA A100 GPUs with 80GB high-performance HBM2e GPU memory, which accelerate the training of ML models that require large datasets of high-resolution data. In this preview release, Amazon SageMaker supports ML training jobs on P4de instances (ml.p4de.24xlarge) to reduce model training time. The ml.p4de.24xlarge instances are available in the following Amazon Web Services Regions.
- US East (N. Virginia) (us-east-1)
- US West (Oregon) (us-west-2)
To request a quota limit increase and start using P4de instances, contact the SageMaker Training service team through your account team.
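For example, a minimal sketch of selecting the P4de instance type on this builder. It relies on the From<&str> conversion that smithy-generated enums provide rather than a named enum variant, which is an assumption to verify against your SDK version; the instance count is illustrative.

use aws_sdk_sagemaker::model::{ResourceConfig, TrainingInstanceType};

// Select ml.p4de.24xlarge via the enum's string conversion (assumed available
// in this SDK generation); a named variant may also exist in your SDK version.
let builder = ResourceConfig::builder()
    .instance_type(TrainingInstanceType::from("ml.p4de.24xlarge"))
    .instance_count(1);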
pub fn set_instance_type(self, input: Option<TrainingInstanceType>) -> Self
The ML compute instance type.
SageMaker Training on Amazon Elastic Compute Cloud (EC2) P4de instances is in preview release starting December 9th, 2022.
Amazon EC2 P4de instances (currently in preview) are powered by 8 NVIDIA A100 GPUs with 80GB high-performance HBM2e GPU memory, which accelerate the training of ML models that require large datasets of high-resolution data. In this preview release, Amazon SageMaker supports ML training jobs on P4de instances (ml.p4de.24xlarge) to reduce model training time. The ml.p4de.24xlarge instances are available in the following Amazon Web Services Regions.
- US East (N. Virginia) (us-east-1)
- US West (Oregon) (us-west-2)
To request a quota limit increase and start using P4de instances, contact the SageMaker Training service team through your account team.
pub fn instance_count(self, input: i32) -> Self
The number of ML compute instances to use. For distributed training, provide a value greater than 1.
pub fn set_instance_count(self, input: Option<i32>) -> Self
The number of ML compute instances to use. For distributed training, provide a value greater than 1.
pub fn volume_size_in_gb(self, input: i32) -> Self
The size of the ML storage volume that you want to provision.
ML storage volumes store model artifacts and incremental states. Training algorithms might also use the ML storage volume for scratch space. If you want to store the training data in the ML storage volume, choose File as the TrainingInputMode in the algorithm specification.
When using an ML instance with NVMe SSD volumes, SageMaker doesn't provision Amazon EBS General Purpose SSD (gp2) storage. Available storage is fixed to the NVMe-type instance's storage capacity. SageMaker configures storage paths for training datasets, checkpoints, model artifacts, and outputs to use the entire capacity of the instance storage. For example, ML instance families with NVMe-type instance storage include ml.p4d, ml.g4dn, and ml.g5.
When using an ML instance with the EBS-only storage option and without instance storage, you must define the size of the EBS volume through VolumeSizeInGB in the ResourceConfig API. For example, ML instance families that use EBS volumes include ml.c5 and ml.p2.
To look up instance types and their instance storage types and volumes, see Amazon EC2 Instance Types.
To find the default local paths defined by the SageMaker training platform, see Amazon SageMaker Training Storage Folders for Training Datasets, Checkpoints, Model Artifacts, and Outputs.
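As an illustrative sketch, an EBS-only instance family such as ml.c5 needs an explicit volume size, whereas NVMe-backed families such as ml.p4d use the instance's fixed local storage instead. The instance type, count, and volume size below are examples, and the enum's From<&str> conversion is assumed.

use aws_sdk_sagemaker::model::{ResourceConfig, TrainingInstanceType};

// ml.c5.xlarge is EBS-only, so VolumeSizeInGB sizes the attached EBS volume;
// for NVMe-backed types SageMaker does not provision gp2 storage at all.
let config = ResourceConfig::builder()
    .instance_type(TrainingInstanceType::from("ml.c5.xlarge"))
    .instance_count(1)
    .volume_size_in_gb(100)
    .build();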
pub fn set_volume_size_in_gb(self, input: Option<i32>) -> Self
The size of the ML storage volume that you want to provision.
ML storage volumes store model artifacts and incremental states. Training algorithms might also use the ML storage volume for scratch space. If you want to store the training data in the ML storage volume, choose File as the TrainingInputMode in the algorithm specification.
When using an ML instance with NVMe SSD volumes, SageMaker doesn't provision Amazon EBS General Purpose SSD (gp2) storage. Available storage is fixed to the NVMe-type instance's storage capacity. SageMaker configures storage paths for training datasets, checkpoints, model artifacts, and outputs to use the entire capacity of the instance storage. For example, ML instance families with NVMe-type instance storage include ml.p4d, ml.g4dn, and ml.g5.
When using an ML instance with the EBS-only storage option and without instance storage, you must define the size of the EBS volume through VolumeSizeInGB in the ResourceConfig API. For example, ML instance families that use EBS volumes include ml.c5 and ml.p2.
To look up instance types and their instance storage types and volumes, see Amazon EC2 Instance Types.
To find the default local paths defined by the SageMaker training platform, see Amazon SageMaker Training Storage Folders for Training Datasets, Checkpoints, Model Artifacts, and Outputs.
pub fn volume_kms_key_id(self, input: impl Into<String>) -> Self
The Amazon Web Services KMS key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the training job.
Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a VolumeKmsKeyId when using an instance type with local storage.
For a list of instance types that support local instance storage, see Instance Store Volumes.
For more information about local instance storage encryption, see SSD Instance Store Volumes.
The VolumeKmsKeyId can be in any of the following formats:
- KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
- Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
pub fn set_volume_kms_key_id(self, input: Option<String>) -> Self
The Amazon Web Services KMS key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the training job.
Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a VolumeKmsKeyId when using an instance type with local storage.
For a list of instance types that support local instance storage, see Instance Store Volumes.
For more information about local instance storage encryption, see SSD Instance Store Volumes.
The VolumeKmsKeyId can be in any of the following formats:
- KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
- Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
pub fn instance_groups(self, input: InstanceGroup) -> Self
Appends an item to instance_groups.
To override the contents of this collection use set_instance_groups.
The configuration of a heterogeneous cluster in JSON format.
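A sketch of appending two groups for a heterogeneous cluster. The InstanceGroup builder methods shown (instance_group_name, instance_type, instance_count) are assumed to mirror the InstanceGroup API shape in this same model module, the group names are illustrative, and the enum's From<&str> conversion is assumed.

use aws_sdk_sagemaker::model::{InstanceGroup, ResourceConfig, TrainingInstanceType};

// Each instance_groups call appends one group to the collection.
let builder = ResourceConfig::builder()
    .instance_groups(
        InstanceGroup::builder()
            .instance_group_name("gpu_group") // illustrative name
            .instance_type(TrainingInstanceType::from("ml.p3.2xlarge"))
            .instance_count(2)
            .build(),
    )
    .instance_groups(
        InstanceGroup::builder()
            .instance_group_name("cpu_group") // illustrative name
            .instance_type(TrainingInstanceType::from("ml.c5.xlarge"))
            .instance_count(4)
            .build(),
    );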
pub fn set_instance_groups(self, input: Option<Vec<InstanceGroup>>) -> Self
The configuration of a heterogeneous cluster in JSON format.
pub fn keep_alive_period_in_seconds(self, input: i32) -> Self
The duration of time in seconds to retain configured resources in a warm pool for subsequent training jobs.
pub fn set_keep_alive_period_in_seconds(self, input: Option<i32>) -> Self
The duration of time in seconds to retain configured resources in a warm pool for subsequent training jobs.
pub fn build(self) -> ResourceConfig
Consumes the builder and constructs a ResourceConfig.
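Putting the pieces together, a hedged end-to-end sketch that builds a complete ResourceConfig. The instance type, volume size, KMS key ID, and warm-pool duration are illustrative values, and the enum's From<&str> conversion is assumed; the resulting value would typically be attached to a CreateTrainingJob request.

use aws_sdk_sagemaker::model::{ResourceConfig, TrainingInstanceType};

// Two ml.m5.2xlarge instances with a 50 GB EBS volume encrypted by a
// customer-managed KMS key, kept warm for 30 minutes between jobs.
let resource_config: ResourceConfig = ResourceConfig::builder()
    .instance_type(TrainingInstanceType::from("ml.m5.2xlarge"))
    .instance_count(2)
    .volume_size_in_gb(50)
    .volume_kms_key_id("1234abcd-12ab-34cd-56ef-1234567890ab")
    .keep_alive_period_in_seconds(1800)
    .build();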