pub struct StartMLModelTransformJobFluentBuilder { /* private fields */ }

Fluent builder constructing a request to StartMLModelTransformJob.

Creates a new model transform job. See Use a trained model to generate new model artifacts.

When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:StartMLModelTransformJob IAM action in that cluster.
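
For orientation, a minimal usage sketch follows. It assumes this builder is obtained from the aws-sdk-neptunedata Client; the endpoint URL, job IDs, and S3 location are placeholders, not values from this documentation.

```rust
use aws_config::BehaviorVersion;
use aws_sdk_neptunedata::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The Neptune data API is reached through the cluster endpoint
    // (placeholder URL below).
    let config = aws_config::defaults(BehaviorVersion::latest())
        .endpoint_url("https://your-neptune-endpoint:8182")
        .load()
        .await;
    let client = Client::new(&config);

    // Start a transform job from a completed data-processing job and a
    // completed model-training job (placeholder IDs and S3 location).
    let output = client
        .start_ml_model_transform_job()
        .data_processing_job_id("my-data-processing-job")
        .ml_model_training_job_id("my-model-training-job")
        .model_transform_output_s3_location("s3://amzn-s3-demo-bucket/neptune-ml/transform/")
        .send()
        .await?;

    println!("{output:?}");
    Ok(())
}
```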

Implementations§

impl StartMLModelTransformJobFluentBuilder

pub fn as_input(&self) -> &StartMlModelTransformJobInputBuilder

Access the StartMLModelTransformJob input as a reference.

pub async fn send(self) -> Result<StartMlModelTransformJobOutput, SdkError<StartMLModelTransformJobError, HttpResponse>>

Sends the request and returns the response.

If an error occurs, an SdkError will be returned with additional details that can be matched against.

By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
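
A sketch of both points above: raising the retry attempt count when loading configuration, and inspecting the returned SdkError. The helpers shown (aws_config::retry::RetryConfig, SdkError::as_service_error) follow the usual aws-config / smithy-rs layout, and all identifiers are placeholders.

```rust
use aws_config::retry::RetryConfig;
use aws_config::BehaviorVersion;
use aws_sdk_neptunedata::Client;

// Build a client that allows up to five attempts per request.
async fn client_with_more_retries() -> Client {
    let config = aws_config::defaults(BehaviorVersion::latest())
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .load()
        .await;
    Client::new(&config)
}

// Send the request and inspect the SdkError on failure.
async fn start_transform_job(client: &Client) {
    let result = client
        .start_ml_model_transform_job()
        .training_job_name("my-sagemaker-training-job") // placeholder
        .model_transform_output_s3_location("s3://amzn-s3-demo-bucket/transform/") // placeholder
        .send()
        .await;

    match result {
        Ok(output) => println!("started: {output:?}"),
        // Narrow to the modeled service error when the failure came from the service.
        Err(err) => match err.as_service_error() {
            Some(service_err) => eprintln!("StartMLModelTransformJob failed: {service_err:?}"),
            None => eprintln!("transport, timeout, or request-build failure: {err:?}"),
        },
    }
}
```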

pub fn customize(self) -> CustomizableOperation<StartMlModelTransformJobOutput, StartMLModelTransformJobError, Self>

Consumes this builder, creating a customizable operation that can be modified before being sent.
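
A sketch of the customize() flow, assuming the current smithy-rs customization surface; mutate_request and the header name are illustrative rather than required by this operation.

```rust
use aws_sdk_neptunedata::Client;

async fn send_with_custom_header(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let output = client
        .start_ml_model_transform_job()
        .training_job_name("my-sagemaker-training-job") // placeholder
        .customize()
        // Adjust the outgoing HTTP request before it is dispatched.
        .mutate_request(|req| {
            req.headers_mut().insert("x-example-trace", "transform-job");
        })
        .send()
        .await?;
    println!("{output:?}");
    Ok(())
}
```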

pub fn id(self, input: impl Into<String>) -> Self

A unique identifier for the new job. The default is an autogenerated UUID.

pub fn set_id(self, input: Option<String>) -> Self

A unique identifier for the new job. The default is an autogenerated UUID.

pub fn get_id(&self) -> &Option<String>

A unique identifier for the new job. The default is an autogenerated UUID.

pub fn data_processing_job_id(self, input: impl Into<String>) -> Self

The job ID of a completed data-processing job. You must include either both a dataProcessingJobId and an mlModelTrainingJobId, or a trainingJobName.

pub fn set_data_processing_job_id(self, input: Option<String>) -> Self

The job ID of a completed data-processing job. You must include either both a dataProcessingJobId and an mlModelTrainingJobId, or a trainingJobName.

pub fn get_data_processing_job_id(&self) -> &Option<String>

The job ID of a completed data-processing job. You must include either both a dataProcessingJobId and an mlModelTrainingJobId, or a trainingJobName.

pub fn ml_model_training_job_id(self, input: impl Into<String>) -> Self

The job ID of a completed model-training job. You must include either both a dataProcessingJobId and an mlModelTrainingJobId, or a trainingJobName.

pub fn set_ml_model_training_job_id(self, input: Option<String>) -> Self

The job ID of a completed model-training job. You must include either both a dataProcessingJobId and an mlModelTrainingJobId, or a trainingJobName.

pub fn get_ml_model_training_job_id(&self) -> &Option<String>

The job ID of a completed model-training job. You must include either both a dataProcessingJobId and an mlModelTrainingJobId, or a trainingJobName.

pub fn training_job_name(self, input: impl Into<String>) -> Self

The name of a completed SageMaker training job. You must include either both a dataProcessingJobId and an mlModelTrainingJobId, or a trainingJobName.

pub fn set_training_job_name(self, input: Option<String>) -> Self

The name of a completed SageMaker training job. You must include either both a dataProcessingJobId and an mlModelTrainingJobId, or a trainingJobName.

pub fn get_training_job_name(&self) -> &Option<String>

The name of a completed SageMaker training job. You must include either both a dataProcessingJobId and an mlModelTrainingJobId, or a trainingJobName.
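
The rule above allows two ways of identifying the model artifacts to transform; a sketch of both follows (every identifier is a placeholder):

```rust
use aws_sdk_neptunedata::Client;

fn two_ways_to_identify_the_model(client: &Client) {
    // Option 1: reference completed Neptune ML data-processing and
    // model-training jobs by their job IDs.
    let _from_neptune_ml_jobs = client
        .start_ml_model_transform_job()
        .data_processing_job_id("my-data-processing-job")
        .ml_model_training_job_id("my-model-training-job")
        .model_transform_output_s3_location("s3://amzn-s3-demo-bucket/transform/");

    // Option 2: reference a completed SageMaker training job directly by name.
    let _from_sagemaker_training_job = client
        .start_ml_model_transform_job()
        .training_job_name("my-sagemaker-training-job")
        .model_transform_output_s3_location("s3://amzn-s3-demo-bucket/transform/");

    // Either builder is then finished with `.send().await`.
}
```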

pub fn model_transform_output_s3_location(self, input: impl Into<String>) -> Self

The location in Amazon S3 where the model artifacts are to be stored.

pub fn set_model_transform_output_s3_location(self, input: Option<String>) -> Self

The location in Amazon S3 where the model artifacts are to be stored.

pub fn get_model_transform_output_s3_location(&self) -> &Option<String>

The location in Amazon S3 where the model artifacts are to be stored.

pub fn sagemaker_iam_role_arn(self, input: impl Into<String>) -> Self

The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

pub fn set_sagemaker_iam_role_arn(self, input: Option<String>) -> Self

The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

pub fn get_sagemaker_iam_role_arn(&self) -> &Option<String>

The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

pub fn neptune_iam_role_arn(self, input: impl Into<String>) -> Self

The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

pub fn set_neptune_iam_role_arn(self, input: Option<String>) -> Self

The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

pub fn get_neptune_iam_role_arn(&self) -> &Option<String>

The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
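
A sketch of supplying both execution roles; the ARNs are placeholders, and both roles must also be listed in the cluster's DB cluster parameter group as noted above.

```rust
use aws_sdk_neptunedata::Client;

fn with_execution_roles(client: &Client) {
    let _request = client
        .start_ml_model_transform_job()
        .training_job_name("my-sagemaker-training-job") // placeholder
        .model_transform_output_s3_location("s3://amzn-s3-demo-bucket/transform/") // placeholder
        // Role assumed by SageMaker while running the transform job (placeholder ARN).
        .sagemaker_iam_role_arn("arn:aws:iam::111122223333:role/SageMakerExecutionRole")
        // Role that lets Neptune reach SageMaker and Amazon S3 (placeholder ARN).
        .neptune_iam_role_arn("arn:aws:iam::111122223333:role/NeptuneMLRole");
}
```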

pub fn custom_model_transform_parameters(self, input: CustomModelTransformParameters) -> Self

Configuration information for a model transform using a custom model. The fields of the customModelTransformParameters object must have values compatible with the saved model parameters from the training job.

pub fn set_custom_model_transform_parameters(self, input: Option<CustomModelTransformParameters>) -> Self

Configuration information for a model transform using a custom model. The fields of the customModelTransformParameters object must have values compatible with the saved model parameters from the training job.

pub fn get_custom_model_transform_parameters(&self) -> &Option<CustomModelTransformParameters>

Configuration information for a model transform using a custom model. The fields of the customModelTransformParameters object must have values compatible with the saved model parameters from the training job.
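
Purely as an illustration: a sketch of constructing the parameters object, on the assumption that CustomModelTransformParameters exposes builder setters matching the sourceS3DirectoryPath and transformEntireDatasetOutputLocation fields of the Neptune ML API and that its build() is fallible. Verify both against the generated type before relying on this.

```rust
use aws_sdk_neptunedata::types::CustomModelTransformParameters;
use aws_sdk_neptunedata::Client;

fn with_custom_model(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    // Assumed setter names; check the generated CustomModelTransformParameters docs.
    let custom_params = CustomModelTransformParameters::builder()
        .source_s3_directory_path("s3://amzn-s3-demo-bucket/custom-model/") // placeholder
        .transform_entire_dataset_output_location("s3://amzn-s3-demo-bucket/output/") // placeholder
        .build()?; // assumed fallible because the source path is required

    let _request = client
        .start_ml_model_transform_job()
        .training_job_name("my-sagemaker-training-job") // placeholder
        .custom_model_transform_parameters(custom_params);
    Ok(())
}
```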

pub fn base_processing_instance_type(self, input: impl Into<String>) -> Self

The type of ML instance used in preparing and managing training of ML models. This is an ML compute instance chosen based on memory requirements for processing the training data and model.

pub fn set_base_processing_instance_type(self, input: Option<String>) -> Self

The type of ML instance used in preparing and managing training of ML models. This is an ML compute instance chosen based on memory requirements for processing the training data and model.

pub fn get_base_processing_instance_type(&self) -> &Option<String>

The type of ML instance used in preparing and managing training of ML models. This is an ML compute instance chosen based on memory requirements for processing the training data and model.

pub fn base_processing_instance_volume_size_in_gb(self, input: i32) -> Self

The disk volume size of the training instance in gigabytes. The default is 0. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

pub fn set_base_processing_instance_volume_size_in_gb(self, input: Option<i32>) -> Self

The disk volume size of the training instance in gigabytes. The default is 0. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

pub fn get_base_processing_instance_volume_size_in_gb(&self) -> &Option<i32>

The disk volume size of the training instance in gigabytes. The default is 0. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.
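
A sketch of overriding the processing instance and its disk size; the instance type and size are illustrative values, not recommendations.

```rust
use aws_sdk_neptunedata::Client;

fn with_processing_overrides(client: &Client) {
    let _request = client
        .start_ml_model_transform_job()
        .training_job_name("my-sagemaker-training-job") // placeholder
        // Illustrative SageMaker instance type and disk volume size in GB.
        .base_processing_instance_type("ml.m5.2xlarge")
        .base_processing_instance_volume_size_in_gb(128);
}
```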

pub fn subnets(self, input: impl Into<String>) -> Self

Appends an item to subnets.

To override the contents of this collection use set_subnets.

The IDs of the subnets in the Neptune VPC. The default is None.

pub fn set_subnets(self, input: Option<Vec<String>>) -> Self

The IDs of the subnets in the Neptune VPC. The default is None.

pub fn get_subnets(&self) -> &Option<Vec<String>>

The IDs of the subnets in the Neptune VPC. The default is None.

pub fn security_group_ids(self, input: impl Into<String>) -> Self

Appends an item to securityGroupIds.

To override the contents of this collection use set_security_group_ids.

The VPC security group IDs. The default is None.

pub fn set_security_group_ids(self, input: Option<Vec<String>>) -> Self

The VPC security group IDs. The default is None.

pub fn get_security_group_ids(&self) -> &Option<Vec<String>>

The VPC security group IDs. The default is None.
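
A sketch contrasting the appending setters with the set_ variants that replace the whole collection (all IDs are placeholders):

```rust
use aws_sdk_neptunedata::Client;

fn with_vpc_config(client: &Client) {
    let _request = client
        .start_ml_model_transform_job()
        .training_job_name("my-sagemaker-training-job") // placeholder
        // Each call to subnets() appends one subnet ID to the list.
        .subnets("subnet-0123456789abcdef0")
        .subnets("subnet-0abcdef1234567890")
        // set_security_group_ids() replaces the entire list in one call.
        .set_security_group_ids(Some(vec!["sg-0123456789abcdef0".to_string()]));
}
```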

pub fn volume_encryption_kms_key(self, input: impl Into<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

pub fn set_volume_encryption_kms_key(self, input: Option<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

pub fn get_volume_encryption_kms_key(&self) -> &Option<String>

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

pub fn s3_output_encryption_kms_key(self, input: impl Into<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is none.

pub fn set_s3_output_encryption_kms_key(self, input: Option<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is none.

pub fn get_s3_output_encryption_kms_key(&self) -> &Option<String>

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is none.
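
A sketch of enabling both encryption options with a customer managed key; the key ARN is a placeholder.

```rust
use aws_sdk_neptunedata::Client;

fn with_encryption(client: &Client) {
    let key_arn = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"; // placeholder
    let _request = client
        .start_ml_model_transform_job()
        .training_job_name("my-sagemaker-training-job") // placeholder
        // Key used for the ML compute instances' attached storage volumes.
        .volume_encryption_kms_key(key_arn)
        // Key used for the processing job's output in Amazon S3.
        .s3_output_encryption_kms_key(key_arn);
}
```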

Trait Implementations§

impl Clone for StartMLModelTransformJobFluentBuilder

fn clone(&self) -> StartMLModelTransformJobFluentBuilder

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for StartMLModelTransformJobFluentBuilder

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.