pub struct StartMLModelTrainingJobFluentBuilder { /* private fields */ }

Fluent builder constructing a request to StartMLModelTrainingJob.

Creates a new Neptune ML model training job. See Model training using the modeltraining command.

When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:StartMLModelTrainingJob IAM action in that cluster.
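A minimal usage sketch (assuming the aws-sdk-neptunedata crate, valid AWS credentials, and connectivity to the Neptune cluster; the job ID and S3 URI below are placeholders, not values from this page):

```rust
use aws_sdk_neptunedata::Client;

// Starts a training job using a previously completed data-processing job.
// Both string arguments are placeholder values.
async fn start_training(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let output = client
        .start_ml_model_training_job()
        .data_processing_job_id("my-data-processing-job")
        .train_model_s3_location("s3://my-bucket/neptune-ml/models/")
        .send()
        .await?;
    println!("started training job: {:?}", output);
    Ok(())
}
```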

Implementations

impl StartMLModelTrainingJobFluentBuilder

pub fn as_input(&self) -> &StartMlModelTrainingJobInputBuilder

Access the StartMLModelTrainingJob request input as a reference.

pub async fn send(self) -> Result<StartMlModelTrainingJobOutput, SdkError<StartMLModelTrainingJobError, HttpResponse>>

Sends the request and returns the response.

If an error occurs, an SdkError will be returned with additional details that can be matched against.

By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
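The retry behavior can be changed when the client's shared config is built. A sketch (assuming the aws-config crate; `RetryConfig` here is the aws-smithy retry configuration type that aws-config re-exports):

```rust
use aws_config::retry::RetryConfig;

// Builds a Neptune data client whose retryable failures are attempted
// up to 5 times total, instead of the default.
async fn make_client() -> aws_sdk_neptunedata::Client {
    let config = aws_config::from_env()
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .load()
        .await;
    aws_sdk_neptunedata::Client::new(&config)
}
```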

pub fn customize(self) -> CustomizableOperation<StartMlModelTrainingJobOutput, StartMLModelTrainingJobError, Self>

Consumes this builder, creating a customizable operation that can be modified before being sent.

pub fn id(self, input: impl Into<String>) -> Self

A unique identifier for the new job. The default is an autogenerated UUID.

pub fn set_id(self, input: Option<String>) -> Self

A unique identifier for the new job. The default is an autogenerated UUID.

pub fn get_id(&self) -> &Option<String>

A unique identifier for the new job. The default is an autogenerated UUID.

pub fn previous_model_training_job_id(self, input: impl Into<String>) -> Self

The job ID of a completed model-training job that you want to update incrementally based on updated data.

pub fn set_previous_model_training_job_id(self, input: Option<String>) -> Self

The job ID of a completed model-training job that you want to update incrementally based on updated data.

pub fn get_previous_model_training_job_id(&self) -> &Option<String>

The job ID of a completed model-training job that you want to update incrementally based on updated data.

pub fn data_processing_job_id(self, input: impl Into<String>) -> Self

The job ID of the completed data-processing job that created the data that the training job will work with.

pub fn set_data_processing_job_id(self, input: Option<String>) -> Self

The job ID of the completed data-processing job that created the data that the training job will work with.

pub fn get_data_processing_job_id(&self) -> &Option<String>

The job ID of the completed data-processing job that created the data that the training job will work with.

pub fn train_model_s3_location(self, input: impl Into<String>) -> Self

The location in Amazon S3 where the model artifacts are to be stored.

pub fn set_train_model_s3_location(self, input: Option<String>) -> Self

The location in Amazon S3 where the model artifacts are to be stored.

pub fn get_train_model_s3_location(&self) -> &Option<String>

The location in Amazon S3 where the model artifacts are to be stored.

pub fn sagemaker_iam_role_arn(self, input: impl Into<String>) -> Self

The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

pub fn set_sagemaker_iam_role_arn(self, input: Option<String>) -> Self

The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

pub fn get_sagemaker_iam_role_arn(&self) -> &Option<String>

The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

pub fn neptune_iam_role_arn(self, input: impl Into<String>) -> Self

The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

pub fn set_neptune_iam_role_arn(self, input: Option<String>) -> Self

The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

pub fn get_neptune_iam_role_arn(&self) -> &Option<String>

The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

pub fn base_processing_instance_type(self, input: impl Into<String>) -> Self

The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.

pub fn set_base_processing_instance_type(self, input: Option<String>) -> Self

The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.

pub fn get_base_processing_instance_type(&self) -> &Option<String>

The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.

pub fn training_instance_type(self, input: impl Into<String>) -> Self

The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multi-GPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.

pub fn set_training_instance_type(self, input: Option<String>) -> Self

The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multi-GPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.

pub fn get_training_instance_type(&self) -> &Option<String>

The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multi-GPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.

pub fn training_instance_volume_size_in_gb(self, input: i32) -> Self

The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

pub fn set_training_instance_volume_size_in_gb(self, input: Option<i32>) -> Self

The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

pub fn get_training_instance_volume_size_in_gb(&self) -> &Option<i32>

The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

pub fn training_time_out_in_seconds(self, input: i32) -> Self

Timeout in seconds for the training job. The default is 86,400 (1 day).

pub fn set_training_time_out_in_seconds(self, input: Option<i32>) -> Self

Timeout in seconds for the training job. The default is 86,400 (1 day).

pub fn get_training_time_out_in_seconds(&self) -> &Option<i32>

Timeout in seconds for the training job. The default is 86,400 (1 day).

pub fn max_hpo_number_of_training_jobs(self, input: i32) -> Self

Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.

pub fn set_max_hpo_number_of_training_jobs(self, input: Option<i32>) -> Self

Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.

pub fn get_max_hpo_number_of_training_jobs(&self) -> &Option<i32>

Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.

pub fn max_hpo_parallel_training_jobs(self, input: i32) -> Self

Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.

pub fn set_max_hpo_parallel_training_jobs(self, input: Option<i32>) -> Self

Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.

pub fn get_max_hpo_parallel_training_jobs(&self) -> &Option<i32>

Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.

pub fn subnets(self, input: impl Into<String>) -> Self

Appends an item to subnets.

To override the contents of this collection use set_subnets.

The IDs of the subnets in the Neptune VPC. The default is None.

pub fn set_subnets(self, input: Option<Vec<String>>) -> Self

The IDs of the subnets in the Neptune VPC. The default is None.

pub fn get_subnets(&self) -> &Option<Vec<String>>

The IDs of the subnets in the Neptune VPC. The default is None.

pub fn security_group_ids(self, input: impl Into<String>) -> Self

Appends an item to securityGroupIds.

To override the contents of this collection use set_security_group_ids.

The VPC security group IDs. The default is None.

pub fn set_security_group_ids(self, input: Option<Vec<String>>) -> Self

The VPC security group IDs. The default is None.

pub fn get_security_group_ids(&self) -> &Option<Vec<String>>

The VPC security group IDs. The default is None.

pub fn volume_encryption_kms_key(self, input: impl Into<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

pub fn set_volume_encryption_kms_key(self, input: Option<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

pub fn get_volume_encryption_kms_key(&self) -> &Option<String>

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

pub fn s3_output_encryption_kms_key(self, input: impl Into<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.

pub fn set_s3_output_encryption_kms_key(self, input: Option<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.

pub fn get_s3_output_encryption_kms_key(&self) -> &Option<String>

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.

pub fn enable_managed_spot_training(self, input: bool) -> Self

Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.

pub fn set_enable_managed_spot_training(self, input: Option<bool>) -> Self

Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.

pub fn get_enable_managed_spot_training(&self) -> &Option<bool>

Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.

pub fn custom_model_training_parameters(self, input: CustomModelTrainingParameters) -> Self

The configuration for custom model training. This is a JSON object.

pub fn set_custom_model_training_parameters(self, input: Option<CustomModelTrainingParameters>) -> Self

The configuration for custom model training. This is a JSON object.

pub fn get_custom_model_training_parameters(&self) -> &Option<CustomModelTrainingParameters>

The configuration for custom model training. This is a JSON object.
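A fuller request sketch combining several of the setters above (assumes the aws-sdk-neptunedata crate; the job ID and S3 URI are placeholders, while the instance type, timeout, and HPO counts follow the defaults and recommendations quoted in this page):

```rust
use aws_sdk_neptunedata::Client;

// Starts a training job with explicit tuning settings: at least 10 HPO runs
// is the documented recommendation for a well-performing model.
async fn start_tuned_training(client: &Client) -> Result<String, Box<dyn std::error::Error>> {
    let output = client
        .start_ml_model_training_job()
        .data_processing_job_id("my-data-processing-job")
        .train_model_s3_location("s3://my-bucket/neptune-ml/models/")
        .training_instance_type("ml.p3.2xlarge")
        .training_time_out_in_seconds(86_400)
        .max_hpo_number_of_training_jobs(10)
        .max_hpo_parallel_training_jobs(2)
        .send()
        .await?;
    Ok(output.id().unwrap_or_default().to_string())
}
```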

Trait Implementations

impl Clone for StartMLModelTrainingJobFluentBuilder

fn clone(&self) -> StartMLModelTrainingJobFluentBuilder

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for StartMLModelTrainingJobFluentBuilder

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true, otherwise into a Right variant.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, otherwise into a Right variant.

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.