Struct aws_sdk_neptunedata::operation::start_ml_model_training_job::builders::StartMLModelTrainingJobFluentBuilder
pub struct StartMLModelTrainingJobFluentBuilder { /* private fields */ }
Fluent builder constructing a request to StartMLModelTrainingJob.
Creates a new Neptune ML model training job. See Model training using the modeltraining command.
When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:StartMLModelTrainingJob IAM action in that cluster.
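For example, a training job might be assembled and dispatched through this builder as follows. This is a sketch, not a definitive recipe: it assumes an already-configured aws_sdk_neptunedata::Client, and the job IDs and S3 URI are placeholders.

```rust
use aws_sdk_neptunedata::Client;

// Sketch: `client` is assumed to be a configured aws_sdk_neptunedata::Client.
// The data-processing job ID and S3 location below are placeholders.
async fn start_training(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let output = client
        .start_ml_model_training_job()
        // ID of a completed data-processing job (placeholder).
        .data_processing_job_id("my-data-processing-job")
        // Where the trained model artifacts should be written (placeholder).
        .train_model_s3_location("s3://my-bucket/neptune-ml/model-artifacts/")
        // At least 10 HPO jobs is recommended for a well-performing model.
        .max_hpo_number_of_training_jobs(10)
        .send()
        .await?;
    // The output derives Debug; print it to inspect the new job's details.
    println!("{output:?}");
    Ok(())
}
```

The setters mirror the request fields documented below; any field left unset falls back to the service default described for that field.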
Implementations
impl StartMLModelTrainingJobFluentBuilder
pub fn as_input(&self) -> &StartMlModelTrainingJobInputBuilder
Access the StartMLModelTrainingJob as a reference.
pub async fn send(self) -> Result<StartMlModelTrainingJobOutput, SdkError<StartMLModelTrainingJobError, HttpResponse>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
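As a sketch of that configuration, the retry limit can be raised when the client is built (assuming the usual aws_config setup; RetryConfig and its builder methods come from the SDK's retry module):

```rust
use aws_config::retry::RetryConfig;

// Sketch: build a client whose operations retry up to 5 attempts
// instead of the default. Assumes credentials/region come from the
// environment, as with any aws_config-based setup.
async fn make_client() -> aws_sdk_neptunedata::Client {
    let config = aws_config::from_env()
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .load()
        .await;
    aws_sdk_neptunedata::Client::new(&config)
}
```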
pub fn customize(self) -> CustomizableOperation<StartMlModelTrainingJobOutput, StartMLModelTrainingJobError, Self>
Consumes this builder, creating a customizable operation that can be modified before being sent.
pub fn id(self, input: impl Into<String>) -> Self
A unique identifier for the new job. The default is an autogenerated UUID.
pub fn set_id(self, input: Option<String>) -> Self
A unique identifier for the new job. The default is an autogenerated UUID.
pub fn get_id(&self) -> &Option<String>
A unique identifier for the new job. The default is an autogenerated UUID.
pub fn previous_model_training_job_id(self, input: impl Into<String>) -> Self
The job ID of a completed model-training job that you want to update incrementally based on updated data.
pub fn set_previous_model_training_job_id(self, input: Option<String>) -> Self
The job ID of a completed model-training job that you want to update incrementally based on updated data.
pub fn get_previous_model_training_job_id(&self) -> &Option<String>
The job ID of a completed model-training job that you want to update incrementally based on updated data.
pub fn data_processing_job_id(self, input: impl Into<String>) -> Self
The job ID of the completed data-processing job that has created the data that the training will work with.
pub fn set_data_processing_job_id(self, input: Option<String>) -> Self
The job ID of the completed data-processing job that has created the data that the training will work with.
pub fn get_data_processing_job_id(&self) -> &Option<String>
The job ID of the completed data-processing job that has created the data that the training will work with.
pub fn train_model_s3_location(self, input: impl Into<String>) -> Self
The location in Amazon S3 where the model artifacts are to be stored.
pub fn set_train_model_s3_location(self, input: Option<String>) -> Self
The location in Amazon S3 where the model artifacts are to be stored.
pub fn get_train_model_s3_location(&self) -> &Option<String>
The location in Amazon S3 where the model artifacts are to be stored.
pub fn sagemaker_iam_role_arn(self, input: impl Into<String>) -> Self
The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.
pub fn set_sagemaker_iam_role_arn(self, input: Option<String>) -> Self
The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.
pub fn get_sagemaker_iam_role_arn(&self) -> &Option<String>
The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.
pub fn neptune_iam_role_arn(self, input: impl Into<String>) -> Self
The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
pub fn set_neptune_iam_role_arn(self, input: Option<String>) -> Self
The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
pub fn get_neptune_iam_role_arn(&self) -> &Option<String>
The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
pub fn base_processing_instance_type(self, input: impl Into<String>) -> Self
The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.
pub fn set_base_processing_instance_type(self, input: Option<String>) -> Self
The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.
pub fn get_base_processing_instance_type(&self) -> &Option<String>
The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.
pub fn training_instance_type(self, input: impl Into<String>) -> Self
The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multiGPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.
pub fn set_training_instance_type(self, input: Option<String>) -> Self
The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multiGPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.
pub fn get_training_instance_type(&self) -> &Option<String>
The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multiGPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.
pub fn training_instance_volume_size_in_gb(self, input: i32) -> Self
The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.
pub fn set_training_instance_volume_size_in_gb(self, input: Option<i32>) -> Self
The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.
pub fn get_training_instance_volume_size_in_gb(&self) -> &Option<i32>
The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.
pub fn training_time_out_in_seconds(self, input: i32) -> Self
Timeout in seconds for the training job. The default is 86,400 (1 day).
pub fn set_training_time_out_in_seconds(self, input: Option<i32>) -> Self
Timeout in seconds for the training job. The default is 86,400 (1 day).
pub fn get_training_time_out_in_seconds(&self) -> &Option<i32>
Timeout in seconds for the training job. The default is 86,400 (1 day).
pub fn max_hpo_number_of_training_jobs(self, input: i32) -> Self
Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.
pub fn set_max_hpo_number_of_training_jobs(self, input: Option<i32>) -> Self
Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.
pub fn get_max_hpo_number_of_training_jobs(&self) -> &Option<i32>
Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.
pub fn max_hpo_parallel_training_jobs(self, input: i32) -> Self
Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.
pub fn set_max_hpo_parallel_training_jobs(self, input: Option<i32>) -> Self
Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.
pub fn get_max_hpo_parallel_training_jobs(&self) -> &Option<i32>
Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.
pub fn subnets(self, input: impl Into<String>) -> Self
Appends an item to subnets. To override the contents of this collection use set_subnets.
The IDs of the subnets in the Neptune VPC. The default is None.
pub fn set_subnets(self, input: Option<Vec<String>>) -> Self
The IDs of the subnets in the Neptune VPC. The default is None.
pub fn get_subnets(&self) -> &Option<Vec<String>>
The IDs of the subnets in the Neptune VPC. The default is None.
pub fn security_group_ids(self, input: impl Into<String>) -> Self
Appends an item to securityGroupIds. To override the contents of this collection use set_security_group_ids.
The VPC security group IDs. The default is None.
pub fn set_security_group_ids(self, input: Option<Vec<String>>) -> Self
The VPC security group IDs. The default is None.
pub fn get_security_group_ids(&self) -> &Option<Vec<String>>
The VPC security group IDs. The default is None.
pub fn volume_encryption_kms_key(self, input: impl Into<String>) -> Self
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
pub fn set_volume_encryption_kms_key(self, input: Option<String>) -> Self
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
pub fn get_volume_encryption_kms_key(&self) -> &Option<String>
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
pub fn s3_output_encryption_kms_key(self, input: impl Into<String>) -> Self
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.
pub fn set_s3_output_encryption_kms_key(self, input: Option<String>) -> Self
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.
pub fn get_s3_output_encryption_kms_key(&self) -> &Option<String>
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.
pub fn enable_managed_spot_training(self, input: bool) -> Self
Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.
pub fn set_enable_managed_spot_training(self, input: Option<bool>) -> Self
Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.
pub fn get_enable_managed_spot_training(&self) -> &Option<bool>
Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.
pub fn custom_model_training_parameters(self, input: CustomModelTrainingParameters) -> Self
The configuration for custom model training. This is a JSON object.
pub fn set_custom_model_training_parameters(self, input: Option<CustomModelTrainingParameters>) -> Self
The configuration for custom model training. This is a JSON object.
pub fn get_custom_model_training_parameters(&self) -> &Option<CustomModelTrainingParameters>
The configuration for custom model training. This is a JSON object.
Trait Implementations
impl Clone for StartMLModelTrainingJobFluentBuilder
fn clone(&self) -> StartMLModelTrainingJobFluentBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
Auto Trait Implementations
impl Freeze for StartMLModelTrainingJobFluentBuilder
impl !RefUnwindSafe for StartMLModelTrainingJobFluentBuilder
impl Send for StartMLModelTrainingJobFluentBuilder
impl Sync for StartMLModelTrainingJobFluentBuilder
impl Unpin for StartMLModelTrainingJobFluentBuilder
impl !UnwindSafe for StartMLModelTrainingJobFluentBuilder
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.