Struct aws_sdk_neptunedata::operation::start_ml_model_transform_job::builders::StartMLModelTransformJobFluentBuilder

pub struct StartMLModelTransformJobFluentBuilder { /* private fields */ }

Fluent builder constructing a request to StartMLModelTransformJob.
Creates a new model transform job. See Use a trained model to generate new model artifacts.
When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:StartMLModelTransformJob IAM action in that cluster.
Implementations

impl StartMLModelTransformJobFluentBuilder

pub fn as_input(&self) -> &StartMlModelTransformJobInputBuilder

Access the StartMLModelTransformJob as a reference.
pub async fn send(self) -> Result<StartMlModelTransformJobOutput, SdkError<StartMLModelTransformJobError, HttpResponse>>

Sends the request and returns the response.

If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
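The default of two retries can be pictured with a minimal, hypothetical retry loop. This is an illustration only, not the SDK's implementation; the real RetryConfig also applies backoff and classifies which failures are retryable.

```rust
// Sketch of "retry retryable failures up to `max_retries` times".
// `send_with_retries` is a hypothetical helper, not an SDK API.
fn send_with_retries<T, E>(
    max_retries: u32,
    mut attempt: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut retries = 0;
    loop {
        match attempt() {
            Ok(value) => return Ok(value),
            Err(_) if retries < max_retries => retries += 1, // try again
            Err(err) => return Err(err), // retry budget exhausted
        }
    }
}

fn main() {
    let mut calls = 0;
    // Fails twice, then succeeds: fits within the default budget of 2 retries.
    let outcome: Result<&str, &str> = send_with_retries(2, || {
        calls += 1;
        if calls < 3 { Err("throttled") } else { Ok("response") }
    });
    println!("{outcome:?} after {calls} calls"); // Ok("response") after 3 calls
}
```

With `max_retries = 2`, a request that fails three times in a row is surfaced to the caller as an error, matching the documented default behavior.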
pub fn customize(self) -> CustomizableOperation<StartMlModelTransformJobOutput, StartMLModelTransformJobError, Self>

Consumes this builder, creating a customizable operation that can be modified before being sent.
pub fn id(self, input: impl Into<String>) -> Self

A unique identifier for the new job. The default is an autogenerated UUID.

pub fn set_id(self, input: Option<String>) -> Self

A unique identifier for the new job. The default is an autogenerated UUID.

pub fn get_id(&self) -> &Option<String>

A unique identifier for the new job. The default is an autogenerated UUID.
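Each field follows the same three-method convention: a fluent setter taking `impl Into<String>`, a `set_` variant taking the `Option` directly, and a `get_` accessor returning a reference. A minimal, hypothetical miniature of that pattern (using a local `JobBuilder` type, not the SDK's):

```rust
// Hypothetical miniature of the fluent-builder conventions documented above.
#[derive(Default, Debug)]
struct JobBuilder {
    id: Option<String>,
}

impl JobBuilder {
    // Fluent setter: accepts anything convertible into a String.
    fn id(mut self, input: impl Into<String>) -> Self {
        self.id = Some(input.into());
        self
    }
    // `set_` variant: takes the Option directly, so None can unset the field.
    fn set_id(mut self, input: Option<String>) -> Self {
        self.id = input;
        self
    }
    // `get_` accessor: borrows the stored Option.
    fn get_id(&self) -> &Option<String> {
        &self.id
    }
}

fn main() {
    let b = JobBuilder::default().id("my-transform-job");
    println!("{:?}", b.get_id()); // Some("my-transform-job")
    let b = b.set_id(None); // unset again
    println!("{:?}", b.get_id()); // None
}
```

Because the fluent setter wraps its argument in `Some`, only the `set_` variant can clear a previously assigned value.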
pub fn data_processing_job_id(self, input: impl Into<String>) -> Self

The job ID of a completed data-processing job. You must include either both dataProcessingJobId and mlModelTrainingJobId, or trainingJobName.

pub fn set_data_processing_job_id(self, input: Option<String>) -> Self

The job ID of a completed data-processing job. You must include either both dataProcessingJobId and mlModelTrainingJobId, or trainingJobName.

pub fn get_data_processing_job_id(&self) -> &Option<String>

The job ID of a completed data-processing job. You must include either both dataProcessingJobId and mlModelTrainingJobId, or trainingJobName.
pub fn ml_model_training_job_id(self, input: impl Into<String>) -> Self

The job ID of a completed model-training job. You must include either both dataProcessingJobId and mlModelTrainingJobId, or trainingJobName.

pub fn set_ml_model_training_job_id(self, input: Option<String>) -> Self

The job ID of a completed model-training job. You must include either both dataProcessingJobId and mlModelTrainingJobId, or trainingJobName.

pub fn get_ml_model_training_job_id(&self) -> &Option<String>

The job ID of a completed model-training job. You must include either both dataProcessingJobId and mlModelTrainingJobId, or trainingJobName.
pub fn training_job_name(self, input: impl Into<String>) -> Self

The name of a completed SageMaker training job. You must include either both dataProcessingJobId and mlModelTrainingJobId, or trainingJobName.

pub fn set_training_job_name(self, input: Option<String>) -> Self

The name of a completed SageMaker training job. You must include either both dataProcessingJobId and mlModelTrainingJobId, or trainingJobName.

pub fn get_training_job_name(&self) -> &Option<String>

The name of a completed SageMaker training job. You must include either both dataProcessingJobId and mlModelTrainingJobId, or trainingJobName.
pub fn model_transform_output_s3_location(self, input: impl Into<String>) -> Self

The location in Amazon S3 where the model artifacts are to be stored.

pub fn set_model_transform_output_s3_location(self, input: Option<String>) -> Self

The location in Amazon S3 where the model artifacts are to be stored.

pub fn get_model_transform_output_s3_location(&self) -> &Option<String>

The location in Amazon S3 where the model artifacts are to be stored.
pub fn sagemaker_iam_role_arn(self, input: impl Into<String>) -> Self

The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

pub fn set_sagemaker_iam_role_arn(self, input: Option<String>) -> Self

The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

pub fn get_sagemaker_iam_role_arn(&self) -> &Option<String>

The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.
pub fn neptune_iam_role_arn(self, input: impl Into<String>) -> Self

The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

pub fn set_neptune_iam_role_arn(self, input: Option<String>) -> Self

The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

pub fn get_neptune_iam_role_arn(&self) -> &Option<String>

The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
pub fn custom_model_transform_parameters(self, input: CustomModelTransformParameters) -> Self

Configuration information for a model transform using a custom model. The customModelTransformParameters object contains fields that must have values compatible with the saved model parameters from the training job.

pub fn set_custom_model_transform_parameters(self, input: Option<CustomModelTransformParameters>) -> Self

Configuration information for a model transform using a custom model. The customModelTransformParameters object contains fields that must have values compatible with the saved model parameters from the training job.

pub fn get_custom_model_transform_parameters(&self) -> &Option<CustomModelTransformParameters>

Configuration information for a model transform using a custom model. The customModelTransformParameters object contains fields that must have values compatible with the saved model parameters from the training job.
pub fn base_processing_instance_type(self, input: impl Into<String>) -> Self

The type of ML instance used in preparing and managing training of ML models. This is an ML compute instance chosen based on memory requirements for processing the training data and model.

pub fn set_base_processing_instance_type(self, input: Option<String>) -> Self

The type of ML instance used in preparing and managing training of ML models. This is an ML compute instance chosen based on memory requirements for processing the training data and model.

pub fn get_base_processing_instance_type(&self) -> &Option<String>

The type of ML instance used in preparing and managing training of ML models. This is an ML compute instance chosen based on memory requirements for processing the training data and model.
pub fn base_processing_instance_volume_size_in_gb(self, input: i32) -> Self

The disk volume size of the training instance in gigabytes. The default is 0. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

pub fn set_base_processing_instance_volume_size_in_gb(self, input: Option<i32>) -> Self

The disk volume size of the training instance in gigabytes. The default is 0. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

pub fn get_base_processing_instance_volume_size_in_gb(&self) -> &Option<i32>

The disk volume size of the training instance in gigabytes. The default is 0. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.
pub fn subnets(self, input: impl Into<String>) -> Self

Appends an item to subnets. To override the contents of this collection use set_subnets.

The IDs of the subnets in the Neptune VPC. The default is None.

pub fn set_subnets(self, input: Option<Vec<String>>) -> Self

The IDs of the subnets in the Neptune VPC. The default is None.

pub fn get_subnets(&self) -> &Option<Vec<String>>

The IDs of the subnets in the Neptune VPC. The default is None.
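The append-versus-override distinction for collection fields can be sketched with a small, hypothetical builder (the local `VpcConfigBuilder` type and the subnet IDs below are illustrative, not SDK API):

```rust
// Hypothetical sketch of the collection-field convention: the singular
// `subnets(item)` call appends one entry, while `set_subnets(Option<Vec>)`
// replaces the whole collection.
#[derive(Default, Debug)]
struct VpcConfigBuilder {
    subnets: Option<Vec<String>>,
}

impl VpcConfigBuilder {
    // Appends a single item, creating the Vec on first use.
    fn subnets(mut self, input: impl Into<String>) -> Self {
        self.subnets.get_or_insert_with(Vec::new).push(input.into());
        self
    }
    // Overrides the entire collection (or clears it with None).
    fn set_subnets(mut self, input: Option<Vec<String>>) -> Self {
        self.subnets = input;
        self
    }
}

fn main() {
    let b = VpcConfigBuilder::default()
        .subnets("subnet-aaaa")
        .subnets("subnet-bbbb"); // appended, not replaced
    println!("{:?}", b.subnets); // Some(["subnet-aaaa", "subnet-bbbb"])
    let b = b.set_subnets(Some(vec!["subnet-cccc".into()])); // replaces contents
    println!("{:?}", b.subnets); // Some(["subnet-cccc"])
}
```

The same convention applies to securityGroupIds below: repeated fluent calls accumulate entries, while the `set_` variant discards whatever was accumulated.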
pub fn security_group_ids(self, input: impl Into<String>) -> Self

Appends an item to securityGroupIds. To override the contents of this collection use set_security_group_ids.

The VPC security group IDs. The default is None.

pub fn set_security_group_ids(self, input: Option<Vec<String>>) -> Self

The VPC security group IDs. The default is None.

pub fn get_security_group_ids(&self) -> &Option<Vec<String>>

The VPC security group IDs. The default is None.
pub fn volume_encryption_kms_key(self, input: impl Into<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

pub fn set_volume_encryption_kms_key(self, input: Option<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

pub fn get_volume_encryption_kms_key(&self) -> &Option<String>

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
pub fn s3_output_encryption_kms_key(self, input: impl Into<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.

pub fn set_s3_output_encryption_kms_key(self, input: Option<String>) -> Self

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.

pub fn get_s3_output_encryption_kms_key(&self) -> &Option<String>

The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.
Trait Implementations

impl Clone for StartMLModelTransformJobFluentBuilder

fn clone(&self) -> StartMLModelTransformJobFluentBuilder

fn clone_from(&mut self, source: &Self)

Auto Trait Implementations
impl Freeze for StartMLModelTransformJobFluentBuilder
impl !RefUnwindSafe for StartMLModelTransformJobFluentBuilder
impl Send for StartMLModelTransformJobFluentBuilder
impl Sync for StartMLModelTransformJobFluentBuilder
impl Unpin for StartMLModelTransformJobFluentBuilder
impl !UnwindSafe for StartMLModelTransformJobFluentBuilder
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.