Struct aws_sdk_sagemaker::types::builders::S3DataSourceBuilder
#[non_exhaustive]
pub struct S3DataSourceBuilder { /* private fields */ }
A builder for S3DataSource.
Implementations§
impl S3DataSourceBuilder
pub fn s3_data_type(self, input: S3DataType) -> Self
If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training.
If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training.
If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe.
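As a rough sketch of how this setter is typically used, assuming the builder is obtained via S3DataSource::builder() and that S3DataType exposes an S3Prefix variant as described above; the bucket and prefix below are placeholders:

use aws_sdk_sagemaker::types::{S3DataSource, S3DataType};

// Treat `s3_uri` as a key name prefix: every object under the prefix is used
// for training. The bucket and prefix are placeholder values.
let source = S3DataSource::builder()
    .s3_data_type(S3DataType::S3Prefix)
    .s3_uri("s3://amzn-s3-demo-bucket/training-data/")
    .build();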
pub fn set_s3_data_type(self, input: Option<S3DataType>) -> Self
If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training.
If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training.
If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe.
pub fn get_s3_data_type(&self) -> &Option<S3DataType>
If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training.
If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training.
If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe.
pub fn s3_uri(self, input: impl Into<String>) -> Self
Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:
- A key name prefix might look like this: s3://bucketname/exampleprefix/
- A manifest might look like this: s3://bucketname/example.manifest
A manifest is an S3 object which is a JSON file consisting of an array of elements. The first element is a prefix which is followed by one or more suffixes. SageMaker appends the suffix elements to the prefix to get a full set of S3Uri. Note that the prefix must be a valid non-empty S3Uri that precludes users from specifying a manifest whose individual S3Uri is sourced from different S3 buckets. The following code example shows a valid manifest format:
[ {"prefix": "s3://customer_bucket/some/prefix/"},
"relative/path/to/custdata-1",
"relative/path/custdata-2",
...
"relative/path/custdata-N"
]
This JSON is equivalent to the following S3Uri list:
s3://customer_bucket/some/prefix/relative/path/to/custdata-1
s3://customer_bucket/some/prefix/relative/path/custdata-2
...
s3://customer_bucket/some/prefix/relative/path/custdata-N
The complete set of S3Uri in this manifest is the input data for the channel for this data source. The object that each S3Uri points to must be readable by the IAM role that SageMaker uses to perform tasks on your behalf.
Your input bucket must be located in the same Amazon Web Services region as your training job.
This field is required.
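A minimal sketch of pointing the builder at a manifest, assuming the ManifestFile variant described above; the bucket and object key are placeholders:

use aws_sdk_sagemaker::types::{S3DataSource, S3DataType};

// The manifest object lists the training objects; SageMaker resolves each
// suffix against the prefix entry, as explained above.
let source = S3DataSource::builder()
    .s3_data_type(S3DataType::ManifestFile)
    .s3_uri("s3://customer_bucket/example.manifest")
    .build();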
pub fn set_s3_uri(self, input: Option<String>) -> Self
Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:
- A key name prefix might look like this: s3://bucketname/exampleprefix/
- A manifest might look like this: s3://bucketname/example.manifest
A manifest is an S3 object which is a JSON file consisting of an array of elements. The first element is a prefix which is followed by one or more suffixes. SageMaker appends the suffix elements to the prefix to get a full set of S3Uri. Note that the prefix must be a valid non-empty S3Uri that precludes users from specifying a manifest whose individual S3Uri is sourced from different S3 buckets. The following code example shows a valid manifest format:
[ {"prefix": "s3://customer_bucket/some/prefix/"},
"relative/path/to/custdata-1",
"relative/path/custdata-2",
...
"relative/path/custdata-N"
]
This JSON is equivalent to the following S3Uri list:
s3://customer_bucket/some/prefix/relative/path/to/custdata-1
s3://customer_bucket/some/prefix/relative/path/custdata-2
...
s3://customer_bucket/some/prefix/relative/path/custdata-N
The complete set of S3Uri in this manifest is the input data for the channel for this data source. The object that each S3Uri points to must be readable by the IAM role that SageMaker uses to perform tasks on your behalf.
Your input bucket must be located in the same Amazon Web Services region as your training job.
pub fn get_s3_uri(&self) -> &Option<String>
Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:
- A key name prefix might look like this: s3://bucketname/exampleprefix/
- A manifest might look like this: s3://bucketname/example.manifest
A manifest is an S3 object which is a JSON file consisting of an array of elements. The first element is a prefix which is followed by one or more suffixes. SageMaker appends the suffix elements to the prefix to get a full set of S3Uri. Note that the prefix must be a valid non-empty S3Uri that precludes users from specifying a manifest whose individual S3Uri is sourced from different S3 buckets. The following code example shows a valid manifest format:
[ {"prefix": "s3://customer_bucket/some/prefix/"},
"relative/path/to/custdata-1",
"relative/path/custdata-2",
...
"relative/path/custdata-N"
]
This JSON is equivalent to the following S3Uri list:
s3://customer_bucket/some/prefix/relative/path/to/custdata-1
s3://customer_bucket/some/prefix/relative/path/custdata-2
...
s3://customer_bucket/some/prefix/relative/path/custdata-N
The complete set of S3Uri in this manifest is the input data for the channel for this data source. The object that each S3Uri points to must be readable by the IAM role that SageMaker uses to perform tasks on your behalf.
Your input bucket must be located in the same Amazon Web Services region as your training job.
pub fn s3_data_distribution_type(self, input: S3DataDistribution) -> Self
If you want SageMaker to replicate the entire dataset on each ML compute instance that is launched for model training, specify FullyReplicated.
If you want SageMaker to replicate a subset of data on each ML compute instance that is launched for model training, specify ShardedByS3Key. If there are n ML compute instances launched for a training job, each instance gets approximately 1/n of the number of S3 objects. In this case, model training on each machine uses only the subset of training data.
Don't choose more ML compute instances for training than available S3 objects. If you do, some nodes won't get any data and you will pay for nodes that aren't getting any training data. This applies in both File and Pipe modes. Keep this in mind when developing algorithms.
In distributed training, where you use multiple ML compute EC2 instances, you might choose ShardedByS3Key. If the algorithm requires copying training data to the ML storage volume (when TrainingInputMode is set to File), this copies 1/n of the number of objects.
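A sketch of requesting sharding instead of full replication, assuming an S3DataDistribution::ShardedByS3Key variant matching the value described above; the S3 names are placeholders:

use aws_sdk_sagemaker::types::{S3DataDistribution, S3DataSource, S3DataType};

// Each training instance receives roughly 1/n of the objects under the
// prefix rather than a full copy of the dataset.
let source = S3DataSource::builder()
    .s3_data_type(S3DataType::S3Prefix)
    .s3_uri("s3://customer_bucket/training/")
    .s3_data_distribution_type(S3DataDistribution::ShardedByS3Key)
    .build();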
pub fn set_s3_data_distribution_type(self, input: Option<S3DataDistribution>) -> Self
If you want SageMaker to replicate the entire dataset on each ML compute instance that is launched for model training, specify FullyReplicated.
If you want SageMaker to replicate a subset of data on each ML compute instance that is launched for model training, specify ShardedByS3Key. If there are n ML compute instances launched for a training job, each instance gets approximately 1/n of the number of S3 objects. In this case, model training on each machine uses only the subset of training data.
Don't choose more ML compute instances for training than available S3 objects. If you do, some nodes won't get any data and you will pay for nodes that aren't getting any training data. This applies in both File and Pipe modes. Keep this in mind when developing algorithms.
In distributed training, where you use multiple ML compute EC2 instances, you might choose ShardedByS3Key. If the algorithm requires copying training data to the ML storage volume (when TrainingInputMode is set to File), this copies 1/n of the number of objects.
pub fn get_s3_data_distribution_type(&self) -> &Option<S3DataDistribution>
If you want SageMaker to replicate the entire dataset on each ML compute instance that is launched for model training, specify FullyReplicated.
If you want SageMaker to replicate a subset of data on each ML compute instance that is launched for model training, specify ShardedByS3Key. If there are n ML compute instances launched for a training job, each instance gets approximately 1/n of the number of S3 objects. In this case, model training on each machine uses only the subset of training data.
Don't choose more ML compute instances for training than available S3 objects. If you do, some nodes won't get any data and you will pay for nodes that aren't getting any training data. This applies in both File and Pipe modes. Keep this in mind when developing algorithms.
In distributed training, where you use multiple ML compute EC2 instances, you might choose ShardedByS3Key. If the algorithm requires copying training data to the ML storage volume (when TrainingInputMode is set to File), this copies 1/n of the number of objects.
pub fn attribute_names(self, input: impl Into<String>) -> Self
Appends an item to attribute_names.
To override the contents of this collection use set_attribute_names.
A list of one or more attribute names to use that are found in a specified augmented manifest file.
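Because this method appends one item per call, listing several attributes means calling it repeatedly, as in this sketch; the attribute names and S3 key are hypothetical:

use aws_sdk_sagemaker::types::{S3DataSource, S3DataType};

// Each call adds one attribute name; "source-ref" and "category" are
// hypothetical keys from an augmented manifest file.
let source = S3DataSource::builder()
    .s3_data_type(S3DataType::AugmentedManifestFile)
    .s3_uri("s3://customer_bucket/augmented.manifest")
    .attribute_names("source-ref")
    .attribute_names("category")
    .build();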
pub fn set_attribute_names(self, input: Option<Vec<String>>) -> Self
A list of one or more attribute names to use that are found in a specified augmented manifest file.
pub fn get_attribute_names(&self) -> &Option<Vec<String>>
A list of one or more attribute names to use that are found in a specified augmented manifest file.
pub fn instance_group_names(self, input: impl Into<String>) -> Self
Appends an item to instance_group_names.
To override the contents of this collection use set_instance_group_names.
A list of names of instance groups that get data from the S3 data source.
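A short sketch contrasting the appender with the whole-collection setter; the group names are placeholders:

use aws_sdk_sagemaker::types::S3DataSource;

// `set_instance_group_names` replaces anything appended earlier with
// `instance_group_names`; "group-a" and "group-b" are placeholder names.
let builder = S3DataSource::builder()
    .instance_group_names("discarded-by-the-setter")
    .set_instance_group_names(Some(vec!["group-a".to_string(), "group-b".to_string()]));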
pub fn set_instance_group_names(self, input: Option<Vec<String>>) -> Self
A list of names of instance groups that get data from the S3 data source.
pub fn get_instance_group_names(&self) -> &Option<Vec<String>>
A list of names of instance groups that get data from the S3 data source.
pub fn build(self) -> S3DataSource
Consumes the builder and constructs an S3DataSource.
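A sketch of consuming the builder and, assuming the companion DataSource builder in this crate with its s3_data_source setter, wrapping the result for use in a training channel; S3 names are placeholders:

use aws_sdk_sagemaker::types::{DataSource, S3DataSource, S3DataType};

// Build the S3DataSource, then wrap it in a DataSource.
let s3 = S3DataSource::builder()
    .s3_data_type(S3DataType::S3Prefix)
    .s3_uri("s3://customer_bucket/training/")
    .build();

let data_source = DataSource::builder().s3_data_source(s3).build();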
Trait Implementations§
impl Clone for S3DataSourceBuilder
fn clone(&self) -> S3DataSourceBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for S3DataSourceBuilder
impl Default for S3DataSourceBuilder
fn default() -> S3DataSourceBuilder
impl PartialEq for S3DataSourceBuilder
impl StructuralPartialEq for S3DataSourceBuilder
Auto Trait Implementations§
impl Freeze for S3DataSourceBuilder
impl RefUnwindSafe for S3DataSourceBuilder
impl Send for S3DataSourceBuilder
impl Sync for S3DataSourceBuilder
impl Unpin for S3DataSourceBuilder
impl UnwindSafe for S3DataSourceBuilder
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
default unsafe fn clone_to_uninit(&self, dst: *mut T)
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.