Struct aws_sdk_personalize::operation::create_dataset_import_job::builders::CreateDatasetImportJobFluentBuilder
pub struct CreateDatasetImportJobFluentBuilder { /* private fields */ }
Fluent builder constructing a request to CreateDatasetImportJob.
Creates a job that imports training data from your data source (an Amazon S3 bucket) to an Amazon Personalize dataset. To allow Amazon Personalize to import the training data, you must specify an IAM service role that has permission to read from the data source, as Amazon Personalize makes a copy of your data and processes it internally. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources.
By default, a dataset import job replaces any existing data in the dataset that you imported in bulk. To add new records without replacing existing data, specify INCREMENTAL for the import mode in the CreateDatasetImportJob operation.
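For orientation, here is a minimal sketch of building and sending this request with the fluent builder. The job name, ARNs, and S3 path are hypothetical placeholders, and a configured Client plus a Tokio runtime are assumed:

```rust
use aws_sdk_personalize::types::{DataSource, ImportMode};
use aws_sdk_personalize::Client;

async fn import_dataset(client: &Client) -> Result<(), aws_sdk_personalize::Error> {
    let output = client
        .create_dataset_import_job()
        .job_name("my-import-job") // hypothetical name
        .dataset_arn("arn:aws:personalize:us-east-1:111122223333:dataset/my-group/INTERACTIONS")
        .data_source(
            DataSource::builder()
                .data_location("s3://my-bucket/interactions.csv") // hypothetical bucket
                .build(),
        )
        .role_arn("arn:aws:iam::111122223333:role/PersonalizeS3Role")
        // INCREMENTAL appends to existing bulk data instead of replacing it.
        .import_mode(ImportMode::Incremental)
        .send()
        .await?;
    println!("import job ARN: {:?}", output.dataset_import_job_arn());
    Ok(())
}
```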
Status
A dataset import job can be in one of the following states:

- CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
To get the status of the import job, call DescribeDatasetImportJob, providing the Amazon Resource Name (ARN) of the dataset import job. The dataset import is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a failureReason key, which describes why the job failed.
Importing takes time. You must wait until the status shows as ACTIVE before training a model using the dataset.
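As a hedged sketch of that polling loop (the job ARN is a placeholder, and a Tokio runtime is assumed for the sleep):

```rust
use std::time::Duration;
use aws_sdk_personalize::Client;

async fn wait_for_import(client: &Client, job_arn: &str) -> Result<(), aws_sdk_personalize::Error> {
    loop {
        let describe = client
            .describe_dataset_import_job()
            .dataset_import_job_arn(job_arn)
            .send()
            .await?;
        let job = describe.dataset_import_job();
        match job.and_then(|j| j.status()) {
            Some("ACTIVE") => return Ok(()),
            Some("CREATE FAILED") => {
                // failureReason describes why the job failed.
                eprintln!("import failed: {:?}", job.and_then(|j| j.failure_reason()));
                return Ok(());
            }
            // Still CREATE PENDING or CREATE IN_PROGRESS; importing takes time.
            _ => tokio::time::sleep(Duration::from_secs(30)).await,
        }
    }
}
```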
Related APIs

- ListDatasetImportJobs
- DescribeDatasetImportJob
Implementations
impl CreateDatasetImportJobFluentBuilder
pub fn as_input(&self) -> &CreateDatasetImportJobInputBuilder
Access the CreateDatasetImportJob input builder as a reference.
pub async fn send(self) -> Result<CreateDatasetImportJobOutput, SdkError<CreateDatasetImportJobError, HttpResponse>>
Sends the request and returns the response. If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
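A sketch of overriding that default when constructing the client, assuming a recent aws-config is on the dependency list:

```rust
use aws_config::retry::RetryConfig;
use aws_config::BehaviorVersion;

async fn client_with_more_retries() -> aws_sdk_personalize::Client {
    let config = aws_config::defaults(BehaviorVersion::latest())
        // Allow up to 5 attempts (1 initial call + 4 retries) instead of the default.
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .load()
        .await;
    aws_sdk_personalize::Client::new(&config)
}
```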
pub fn customize(self) -> CustomizableOperation<CreateDatasetImportJobOutput, CreateDatasetImportJobError, Self>
Consumes this builder, creating a customizable operation that can be modified before being sent.
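For example, a sketch of injecting a header before dispatch via mutate_request; the header name is hypothetical, and the other required request fields are elided:

```rust
use aws_sdk_personalize::Client;

async fn send_with_custom_header(client: &Client) -> Result<(), aws_sdk_personalize::Error> {
    client
        .create_dataset_import_job()
        .job_name("my-import-job") // hypothetical name
        // Other required fields (dataset ARN, data source, role ARN) elided from this sketch.
        .customize()
        // Mutate the serialized HTTP request before it is sent.
        .mutate_request(|req| {
            req.headers_mut().insert("x-example-trace-id", "demo");
        })
        .send()
        .await?;
    Ok(())
}
```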
pub fn job_name(self, input: impl Into<String>) -> Self
The name for the dataset import job.
pub fn set_job_name(self, input: Option<String>) -> Self
The name for the dataset import job.
pub fn get_job_name(&self) -> &Option<String>
The name for the dataset import job.
pub fn dataset_arn(self, input: impl Into<String>) -> Self
The ARN of the dataset that receives the imported data.
pub fn set_dataset_arn(self, input: Option<String>) -> Self
The ARN of the dataset that receives the imported data.
pub fn get_dataset_arn(&self) -> &Option<String>
The ARN of the dataset that receives the imported data.
pub fn data_source(self, input: DataSource) -> Self
The Amazon S3 bucket that contains the training data to import.
pub fn set_data_source(self, input: Option<DataSource>) -> Self
The Amazon S3 bucket that contains the training data to import.
pub fn get_data_source(&self) -> &Option<DataSource>
The Amazon S3 bucket that contains the training data to import.
pub fn role_arn(self, input: impl Into<String>) -> Self
The ARN of the IAM role that has permissions to read from the Amazon S3 data source.
pub fn set_role_arn(self, input: Option<String>) -> Self
The ARN of the IAM role that has permissions to read from the Amazon S3 data source.
pub fn get_role_arn(&self) -> &Option<String>
The ARN of the IAM role that has permissions to read from the Amazon S3 data source.
pub fn tags(self, input: Tag) -> Self
Appends an item to the tags collection. A list of tags to apply to the dataset import job.
pub fn set_tags(self, input: Option<Vec<Tag>>) -> Self
A list of tags to apply to the dataset import job.
pub fn get_tags(&self) -> &Option<Vec<Tag>>
A list of tags to apply to the dataset import job.
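A sketch of constructing a Tag and appending it; the key and value are placeholders, and build() is assumed to be fallible because both fields are required:

```rust
use aws_sdk_personalize::error::BuildError;
use aws_sdk_personalize::types::Tag;

fn example_tag() -> Result<Tag, BuildError> {
    // Both tag_key and tag_value are required, so build() returns a Result.
    Tag::builder()
        .tag_key("project")       // placeholder key
        .tag_value("recsys-demo") // placeholder value
        .build()
}
```

The resulting Tag can then be passed to tags(...) on the builder, once per tag.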
pub fn import_mode(self, input: ImportMode) -> Self
Specify how to add the new records to an existing dataset. The default import mode is FULL. If you haven't imported bulk records into the dataset previously, you can only specify FULL.
- Specify FULL to overwrite all existing bulk data in your dataset. Data you imported individually is not replaced.
- Specify INCREMENTAL to append the new records to the existing data in your dataset. Amazon Personalize replaces any record with the same ID with the new one.
pub fn set_import_mode(self, input: Option<ImportMode>) -> Self
Specify how to add the new records to an existing dataset. The default import mode is FULL. If you haven't imported bulk records into the dataset previously, you can only specify FULL.
- Specify FULL to overwrite all existing bulk data in your dataset. Data you imported individually is not replaced.
- Specify INCREMENTAL to append the new records to the existing data in your dataset. Amazon Personalize replaces any record with the same ID with the new one.
pub fn get_import_mode(&self) -> &Option<ImportMode>
Specify how to add the new records to an existing dataset. The default import mode is FULL. If you haven't imported bulk records into the dataset previously, you can only specify FULL.
- Specify FULL to overwrite all existing bulk data in your dataset. Data you imported individually is not replaced.
- Specify INCREMENTAL to append the new records to the existing data in your dataset. Amazon Personalize replaces any record with the same ID with the new one.
pub fn publish_attribution_metrics_to_s3(self, input: bool) -> Self
If you created a metric attribution, specify whether to publish metrics for this import job to Amazon S3.
pub fn set_publish_attribution_metrics_to_s3(self, input: Option<bool>) -> Self
If you created a metric attribution, specify whether to publish metrics for this import job to Amazon S3.
pub fn get_publish_attribution_metrics_to_s3(&self) -> &Option<bool>
If you created a metric attribution, specify whether to publish metrics for this import job to Amazon S3.
Trait Implementations
impl Clone for CreateDatasetImportJobFluentBuilder
fn clone(&self) -> CreateDatasetImportJobFluentBuilder
Returns a copy of the value.
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.