Struct aws_sdk_machinelearning::model::RdsDataSpec
#[non_exhaustive]
pub struct RdsDataSpec {
pub database_information: Option<RdsDatabase>,
pub select_sql_query: Option<String>,
pub database_credentials: Option<RdsDatabaseCredentials>,
pub s3_staging_location: Option<String>,
pub data_rearrangement: Option<String>,
pub data_schema: Option<String>,
pub data_schema_uri: Option<String>,
pub resource_role: Option<String>,
pub service_role: Option<String>,
pub subnet_id: Option<String>,
pub security_group_ids: Option<Vec<String>>,
}
The data specification of an Amazon Relational Database Service (Amazon RDS) DataSource.
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future versions. This means that this struct cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.
database_information: Option<RdsDatabase>
Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
select_sql_query: Option<String>
The query that is used to retrieve the observation data for the DataSource.
database_credentials: Option<RdsDatabaseCredentials>
The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
s3_staging_location: Option<String>
The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
data_rearrangement: Option<String>
A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.
There are multiple parameters that control what data is used to create a datasource (a sketch of assembling such a string in Rust follows this list):
- percentBegin: Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- percentEnd: Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- complement: The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter. For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.
Datasource for evaluation:
{"splitting":{"percentBegin":0, "percentEnd":25}}
Datasource for training:
{"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}
- strategy: To change how Amazon ML splits the data for a datasource, use the strategy parameter. The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data. The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:
Datasource for evaluation:
{"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}
Datasource for training:
{"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}
To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records. The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:
Datasource for evaluation:
{"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}
Datasource for training:
{"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}
data_schema: Option<String>
A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.
A DataSchema is not required if you specify a DataSchemaUri.
Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.
{ "version": "1.0",
"recordAnnotationFieldName": "F1",
"recordWeightFieldName": "F2",
"targetFieldName": "F3",
"dataFormat": "CSV",
"dataFileContainsHeader": true,
"attributes": [
{ "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],
"excludedVariableNames": [ "F6" ] }
data_schema_uri: Option<String>
The Amazon S3 location of the DataSchema.
resource_role: Option<String>
The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy operation from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
service_role: Option<String>
The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
subnet_id: Option<String>
The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
security_group_ids: Option<Vec<String>>
The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy operation from Amazon RDS to Amazon S3.
Implementations
impl RdsDataSpec
pub fn database_information(&self) -> Option<&RdsDatabase>
Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
pub fn select_sql_query(&self) -> Option<&str>
The query that is used to retrieve the observation data for the DataSource.
pub fn database_credentials(&self) -> Option<&RdsDatabaseCredentials>
The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
pub fn s3_staging_location(&self) -> Option<&str>
The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
pub fn data_rearrangement(&self) -> Option<&str>
A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.
There are multiple parameters that control what data is used to create a datasource:
- percentBegin: Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- percentEnd: Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
- complement: The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter. For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.
Datasource for evaluation:
{"splitting":{"percentBegin":0, "percentEnd":25}}
Datasource for training:
{"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}
- strategy: To change how Amazon ML splits the data for a datasource, use the strategy parameter. The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data. The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:
Datasource for evaluation:
{"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}
Datasource for training:
{"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}
To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records. The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:
Datasource for evaluation:
{"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}
Datasource for training:
{"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}
pub fn data_schema(&self) -> Option<&str>
A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.
A DataSchema is not required if you specify a DataSchemaUri.
Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.
{ "version": "1.0",
"recordAnnotationFieldName": "F1",
"recordWeightFieldName": "F2",
"targetFieldName": "F3",
"dataFormat": "CSV",
"dataFileContainsHeader": true,
"attributes": [
{ "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],
"excludedVariableNames": [ "F6" ] }
pub fn data_schema_uri(&self) -> Option<&str>
The Amazon S3 location of the DataSchema.
pub fn resource_role(&self) -> Option<&str>
The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy operation from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
pub fn service_role(&self) -> Option<&str>
The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
pub fn subnet_id(&self) -> Option<&str>
The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
pub fn security_group_ids(&self) -> Option<&[String]>
The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy operation from Amazon RDS to Amazon S3.
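Every accessor above returns an Option, so reading a spec back is mostly a matter of defaulting. A minimal sketch, assuming a spec value is already in hand (the function and placeholder strings are illustrative, not part of the SDK):

// Sketch: reading optional fields back off an existing RdsDataSpec.
fn describe(spec: &aws_sdk_machinelearning::model::RdsDataSpec) {
    let query = spec.select_sql_query().unwrap_or("<no query>");
    let staging = spec.s3_staging_location().unwrap_or("<no staging location>");
    // Option<&[String]> defaults cleanly to an empty slice.
    let groups = spec.security_group_ids().unwrap_or(&[]);
    println!("query: {query}; staging: {staging}; {} security group(s)", groups.len());
}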
impl RdsDataSpec
pub fn builder() -> Builder
Creates a new builder-style object to manufacture RdsDataSpec.
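A minimal construction sketch. It assumes the builder setters mirror the field names above (including the appending security_group_ids setter) and that RdsDatabase and RdsDatabaseCredentials expose builders for their documented fields; every identifier, role, and bucket below is a placeholder, not a working resource.

use aws_sdk_machinelearning::model::{RdsDataSpec, RdsDatabase, RdsDatabaseCredentials};

// Sketch: assembling an RdsDataSpec for a VPC-based RDS instance.
fn example_spec() -> RdsDataSpec {
    RdsDataSpec::builder()
        .database_information(
            RdsDatabase::builder()
                .instance_identifier("my-rds-instance") // placeholder
                .database_name("observations")          // placeholder
                .build(),
        )
        .database_credentials(
            RdsDatabaseCredentials::builder()
                .username("ml_reader")        // placeholder
                .password("example-password") // placeholder
                .build(),
        )
        .select_sql_query("SELECT * FROM observations")
        .s3_staging_location("s3://my-bucket/staging/")
        .resource_role("DataPipelineDefaultResourceRole")
        .service_role("DataPipelineDefaultRole")
        .subnet_id("subnet-0123456789abcdef0")       // placeholder
        .security_group_ids("sg-0123456789abcdef0") // appends one ID
        .build()
}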
Trait Implementations
impl Clone for RdsDataSpec
fn clone(&self) -> RdsDataSpec
Returns a copy of the value.
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for RdsDataSpec
impl PartialEq<RdsDataSpec> for RdsDataSpec
fn eq(&self, other: &RdsDataSpec) -> bool
This method tests for self and other values to be equal, and is used by ==.
fn ne(&self, other: &RdsDataSpec) -> bool
This method tests for !=.
impl StructuralPartialEq for RdsDataSpec
Auto Trait Implementations
impl RefUnwindSafe for RdsDataSpec
impl Send for RdsDataSpec
impl Sync for RdsDataSpec
impl Unpin for RdsDataSpec
impl UnwindSafe for RdsDataSpec
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> ToOwned for T where
    T: Clone,
type Owned = T
The resulting type after obtaining ownership.
fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
    S: Into<Dispatch>,
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.