pub struct ProcessingS3Input {
    pub local_path: String,
    pub s3_compression_type: Option<String>,
    pub s3_data_distribution_type: Option<String>,
    pub s3_data_type: String,
    pub s3_input_mode: String,
    pub s3_uri: String,
}
Information about where and how you want to obtain the inputs for a processing job.
Fields

local_path: String
The local path to the Amazon S3 bucket where you want Amazon SageMaker to download the inputs to run a processing job. LocalPath is an absolute path to the input data.

s3_compression_type: Option<String>
Whether to use Gzip compression for Amazon S3 storage.

s3_data_distribution_type: Option<String>
Whether the data stored in Amazon S3 is FullyReplicated or ShardedByS3Key.

s3_data_type: String
Whether you use an S3Prefix or a ManifestFile for the data type. If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for the processing job. If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for the processing job.

s3_input_mode: String
Whether to use File or Pipe input mode. In File mode, Amazon SageMaker copies the data from the input source onto the local Amazon Elastic Block Store (Amazon EBS) volumes before starting your training algorithm. This is the most commonly used input mode. In Pipe mode, Amazon SageMaker streams input data from the source directly to your algorithm without using the EBS volume.

s3_uri: String
The URI for the Amazon S3 storage where you want Amazon SageMaker to download the artifacts needed to run a processing job.
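A minimal construction sketch, assuming ProcessingS3Input is in scope and can be built with plain struct-literal syntax. The bucket name, prefix, and local path below are illustrative placeholders, not values from the documentation; the string constants (S3Prefix, FullyReplicated, File, None) are the enumerated values described in the fields above.

// Hypothetical example: pull every object under an S3 key-name prefix and
// download it into the processing container before the job starts.
let processing_input = ProcessingS3Input {
    // Absolute path inside the container where SageMaker places the downloaded objects.
    local_path: "/opt/ml/processing/input".to_string(),
    // Objects are stored uncompressed; "Gzip" is the other accepted value.
    s3_compression_type: Some("None".to_string()),
    // Copy the full dataset to every processing instance.
    s3_data_distribution_type: Some("FullyReplicated".to_string()),
    // Treat s3_uri as a key-name prefix rather than a manifest file.
    s3_data_type: "S3Prefix".to_string(),
    // Download the data before the job starts instead of streaming it (Pipe).
    s3_input_mode: "File".to_string(),
    // Key-name prefix whose objects feed the processing job (hypothetical bucket).
    s3_uri: "s3://amzn-example-bucket/processing/input/".to_string(),
};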
Trait Implementations

impl Clone for ProcessingS3Input

fn clone(&self) -> ProcessingS3Input
Returns a copy of the value.

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
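A short usage sketch of the derived Clone implementation, assuming a processing_input value built as in the example above (the variable names and the second prefix are hypothetical):

// Clone the input, then point the copy at a different prefix for a second job.
let mut second_input = processing_input.clone();
second_input.s3_uri = "s3://amzn-example-bucket/processing/other/".to_string();

// clone_from overwrites an existing value in place, which may reuse its
// existing String allocations instead of creating fresh ones.
second_input.clone_from(&processing_input);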