Struct aws_sdk_machinelearning::types::builders::S3DataSpecBuilder
#[non_exhaustive]
pub struct S3DataSpecBuilder { /* private fields */ }
A builder for S3DataSpec.
Implementations
impl S3DataSpecBuilder
pub fn data_location_s3(self, input: impl Into<String>) -> Self
The location of the data file(s) used by a DataSource. The URI specifies a data file or an Amazon Simple Storage Service (Amazon S3) directory or bucket containing data files.
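A minimal sketch of setting this field through the builder; the bucket name and prefix below are hypothetical placeholders, not values from this crate's documentation:

use aws_sdk_machinelearning::types::S3DataSpec;

fn main() {
    // The URI may point at a single object or at a bucket/prefix containing data files.
    let spec = S3DataSpec::builder()
        .data_location_s3("s3://example-bucket/training-data/")
        .build()
        .expect("data_location_s3 was provided");
    println!("{spec:?}");
}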
pub fn set_data_location_s3(self, input: Option<String>) -> Self
The location of the data file(s) used by a DataSource. The URI specifies a data file or an Amazon Simple Storage Service (Amazon S3) directory or bucket containing data files.
pub fn get_data_location_s3(&self) -> &Option<String>
The location of the data file(s) used by a DataSource. The URI specifies a data file or an Amazon Simple Storage Service (Amazon S3) directory or bucket containing data files.
pub fn data_rearrangement(self, input: impl Into<String>) -> Self
A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.
There are multiple parameters that control what data is used to create a datasource:
- percentBegin
  Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- percentEnd
  Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- complement
  The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.
  For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.
  Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}
  Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}
- strategy
  To change how Amazon ML splits the data for a datasource, use the strategy parameter.
  The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.
  The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:
  Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}
  Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}
  To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.
  The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:
  Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}
  Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}
pub fn set_data_rearrangement(self, input: Option<String>) -> Self
A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.
There are multiple parameters that control what data is used to create a datasource:
- percentBegin
  Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- percentEnd
  Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- complement
  The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.
  For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.
  Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}
  Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}
- strategy
  To change how Amazon ML splits the data for a datasource, use the strategy parameter.
  The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.
  The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:
  Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}
  Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}
  To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.
  The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:
  Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}
  Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}
pub fn get_data_rearrangement(&self) -> &Option<String>
A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.
There are multiple parameters that control what data is used to create a datasource:
- percentBegin
  Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- percentEnd
  Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- complement
  The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.
  For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.
  Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}
  Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}
- strategy
  To change how Amazon ML splits the data for a datasource, use the strategy parameter.
  The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.
  The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:
  Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}
  Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}
  To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.
  The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:
  Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}
  Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}
pub fn data_schema(self, input: impl Into<String>) -> Self
A JSON string that represents the schema for an Amazon S3 DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.
You must provide either the DataSchema or the DataSchemaLocationS3.
Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.
{ "version": "1.0",
"recordAnnotationFieldName": "F1",
"recordWeightFieldName": "F2",
"targetFieldName": "F3",
"dataFormat": "CSV",
"dataFileContainsHeader": true,
"attributes": [
{ "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],
"excludedVariableNames": [ "F6" ] }
pub fn set_data_schema(self, input: Option<String>) -> Self
A JSON string that represents the schema for an Amazon S3 DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.
You must provide either the DataSchema or the DataSchemaLocationS3.
Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.
{ "version": "1.0",
"recordAnnotationFieldName": "F1",
"recordWeightFieldName": "F2",
"targetFieldName": "F3",
"dataFormat": "CSV",
"dataFileContainsHeader": true,
"attributes": [
{ "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],
"excludedVariableNames": [ "F6" ] }
pub fn get_data_schema(&self) -> &Option<String>
A JSON string that represents the schema for an Amazon S3 DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.
You must provide either the DataSchema or the DataSchemaLocationS3.
Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.
{ "version": "1.0",
"recordAnnotationFieldName": "F1",
"recordWeightFieldName": "F2",
"targetFieldName": "F3",
"dataFormat": "CSV",
"dataFileContainsHeader": true,
"attributes": [
{ "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],
"excludedVariableNames": [ "F6" ] }
pub fn data_schema_location_s3(self, input: impl Into<String>) -> Self
Describes the schema location in Amazon S3. You must provide either the DataSchema or the DataSchemaLocationS3.
pub fn set_data_schema_location_s3(self, input: Option<String>) -> Self
Describes the schema location in Amazon S3. You must provide either the DataSchema or the DataSchemaLocationS3.
pub fn get_data_schema_location_s3(&self) -> &Option<String>
Describes the schema location in Amazon S3. You must provide either the DataSchema or the DataSchemaLocationS3.
pub fn build(self) -> Result<S3DataSpec, BuildError>
Consumes the builder and constructs an S3DataSpec.
This method will fail if any of the following fields are not set:
- data_location_s3
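A sketch of consuming the builder: an incomplete builder returns a BuildError rather than panicking, so the usual pattern is to propagate or match on the Result.

use aws_sdk_machinelearning::types::S3DataSpec;

fn main() {
    // No fields are set here, so build() is expected to return Err(BuildError).
    match S3DataSpec::builder().build() {
        Ok(spec) => println!("built: {spec:?}"),
        Err(err) => eprintln!("could not build S3DataSpec: {err}"),
    }
}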
Trait Implementations
impl Clone for S3DataSpecBuilder
fn clone(&self) -> S3DataSpecBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for S3DataSpecBuilder
impl Default for S3DataSpecBuilder
fn default() -> S3DataSpecBuilder
impl PartialEq for S3DataSpecBuilder
fn eq(&self, other: &S3DataSpecBuilder) -> bool
Tests for self and other values to be equal, and is used by ==.