#[non_exhaustive]
pub struct TabularJobConfigBuilder { /* private fields */ }

A builder for TabularJobConfig.
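
A TabularJobConfigBuilder is typically obtained through TabularJobConfig::builder(), and every setter consumes the builder and returns it so calls can be chained. A minimal sketch (the aws_sdk_sagemaker::types import path is assumed from the crate's usual layout):

use aws_sdk_sagemaker::types::TabularJobConfig;

fn minimal_config() -> TabularJobConfig {
    // Only the required target column is set here; every other field keeps
    // its default until `build` is called.
    TabularJobConfig::builder()
        .target_attribute_name("y")
        .build()
}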

Implementations

impl TabularJobConfigBuilder

pub fn candidate_generation_config(self, input: CandidateGenerationConfig) -> Self

The configuration information of how model candidates are generated.

pub fn set_candidate_generation_config(self, input: Option<CandidateGenerationConfig>) -> Self

The configuration information of how model candidates are generated.

pub fn get_candidate_generation_config(&self) -> &Option<CandidateGenerationConfig>

The configuration information of how model candidates are generated.

pub fn completion_criteria(self, input: AutoMlJobCompletionCriteria) -> Self

How long a job is allowed to run, or how many candidates a job is allowed to generate.
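
As a hedged illustration, the criteria value can be assembled with its own builder; the AutoMlJobCompletionCriteria setter names below are assumed from the corresponding MaxCandidates and MaxAutoMLJobRuntimeInSeconds API members and are not documented on this page:

use aws_sdk_sagemaker::types::{AutoMlJobCompletionCriteria, TabularJobConfig};

fn capped_config() -> TabularJobConfig {
    // Stop after 50 candidates or after three hours, whichever comes first
    // (assumed setter names mirroring the API members noted above).
    let criteria = AutoMlJobCompletionCriteria::builder()
        .max_candidates(50)
        .max_auto_ml_job_runtime_in_seconds(3 * 3600)
        .build();

    TabularJobConfig::builder()
        .target_attribute_name("y")
        .completion_criteria(criteria)
        .build()
}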

pub fn set_completion_criteria(self, input: Option<AutoMlJobCompletionCriteria>) -> Self

How long a job is allowed to run, or how many candidates a job is allowed to generate.

pub fn get_completion_criteria(&self) -> &Option<AutoMlJobCompletionCriteria>

How long a job is allowed to run, or how many candidates a job is allowed to generate.

pub fn feature_specification_s3_uri(self, input: impl Into<String>) -> Self

A URL to the Amazon S3 data source containing selected features from the input data source to run an Autopilot job V2. You can input FeatureAttributeNames (optional) in JSON format as shown below:

{ "FeatureAttributeNames":["col1", "col2", ...] }.

You can also specify the data type of the feature (optional) in the format shown below:

{ "FeatureDataTypes":{"col1":"numeric", "col2":"categorical" ... } }

These column keys may not include the target column.

In ensembling mode, Autopilot only supports the following data types: numeric, categorical, text, and datetime. In HPO mode, Autopilot can support numeric, categorical, text, datetime, and sequence.

If only FeatureDataTypes is provided, the column keys (col1, col2, ...) should be a subset of the column names in the input data.

If both FeatureDataTypes and FeatureAttributeNames are provided, then the column keys should be a subset of the column names provided in FeatureAttributeNames.

The key name FeatureAttributeNames is fixed. The values listed in ["col1", "col2", ...] are case sensitive and should be a list of strings containing unique values that are a subset of the column names in the input data. The list of columns provided must not include the target column.
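
A hedged sketch of wiring this field up; the bucket, key, and column names are placeholders, and the object at the URI would hold the JSON described above:

use aws_sdk_sagemaker::types::TabularJobConfig;

fn config_with_feature_spec() -> TabularJobConfig {
    // The S3 object referenced here would contain JSON along the lines of:
    //   { "FeatureAttributeNames": ["col1", "col2"],
    //     "FeatureDataTypes": { "col1": "numeric", "col2": "categorical" } }
    TabularJobConfig::builder()
        .target_attribute_name("y")
        .feature_specification_s3_uri("s3://example-bucket/feature-spec.json")
        .build()
}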

pub fn set_feature_specification_s3_uri(self, input: Option<String>) -> Self

A URL to the Amazon S3 data source containing selected features from the input data source to run an Autopilot job V2. You can input FeatureAttributeNames (optional) in JSON format as shown below:

{ "FeatureAttributeNames":["col1", "col2", ...] }.

You can also specify the data type of the feature (optional) in the format shown below:

{ "FeatureDataTypes":{"col1":"numeric", "col2":"categorical" ... } }

These column keys may not include the target column.

In ensembling mode, Autopilot only supports the following data types: numeric, categorical, text, and datetime. In HPO mode, Autopilot can support numeric, categorical, text, datetime, and sequence.

If only FeatureDataTypes is provided, the column keys (col1, col2, ...) should be a subset of the column names in the input data.

If both FeatureDataTypes and FeatureAttributeNames are provided, then the column keys should be a subset of the column names provided in FeatureAttributeNames.

The key name FeatureAttributeNames is fixed. The values listed in ["col1", "col2", ...] are case sensitive and should be a list of strings containing unique values that are a subset of the column names in the input data. The list of columns provided must not include the target column.

pub fn get_feature_specification_s3_uri(&self) -> &Option<String>

A URL to the Amazon S3 data source containing selected features from the input data source to run an Autopilot job V2. You can input FeatureAttributeNames (optional) in JSON format as shown below:

{ "FeatureAttributeNames":["col1", "col2", ...] }.

You can also specify the data type of the feature (optional) in the format shown below:

{ "FeatureDataTypes":{"col1":"numeric", "col2":"categorical" ... } }

These column keys may not include the target column.

In ensembling mode, Autopilot only supports the following data types: numeric, categorical, text, and datetime. In HPO mode, Autopilot can support numeric, categorical, text, datetime, and sequence.

If only FeatureDataTypes is provided, the column keys (col1, col2, ...) should be a subset of the column names in the input data.

If both FeatureDataTypes and FeatureAttributeNames are provided, then the column keys should be a subset of the column names provided in FeatureAttributeNames.

The key name FeatureAttributeNames is fixed. The values listed in ["col1", "col2", ...] are case sensitive and should be a list of strings containing unique values that are a subset of the column names in the input data. The list of columns provided must not include the target column.

pub fn mode(self, input: AutoMlMode) -> Self

The method that Autopilot uses to train the data. You can either specify the mode manually or let Autopilot choose for you based on the dataset size by selecting AUTO. In AUTO mode, Autopilot chooses ENSEMBLING for datasets smaller than 100 MB, and HYPERPARAMETER_TUNING for larger ones.

The ENSEMBLING mode uses a multi-stack ensemble model to predict classification and regression tasks directly from your dataset. This machine learning mode combines several base models to produce an optimal predictive model. It then uses a stacking ensemble method to combine predictions from contributing members. A multi-stack ensemble model can provide better performance over a single model by combining the predictive capabilities of multiple models. See Autopilot algorithm support for a list of algorithms supported by ENSEMBLING mode.

The HYPERPARAMETER_TUNING (HPO) mode uses the best hyperparameters to train the best version of a model. HPO automatically selects an algorithm for the type of problem you want to solve. Then HPO finds the best hyperparameters according to your objective metric. See Autopilot algorithm support for a list of algorithms supported by HYPERPARAMETER_TUNING mode.
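
A short sketch pinning the mode explicitly; the AutoMlMode variant names are assumed to follow the SDK's usual PascalCase mapping of AUTO, ENSEMBLING, and HYPERPARAMETER_TUNING:

use aws_sdk_sagemaker::types::{AutoMlMode, TabularJobConfig};

fn ensembling_config() -> TabularJobConfig {
    // Force ensembling regardless of dataset size; omitting `.mode(...)`
    // (or passing AutoMlMode::Auto) lets Autopilot choose by the 100 MB rule.
    TabularJobConfig::builder()
        .target_attribute_name("y")
        .mode(AutoMlMode::Ensembling)
        .build()
}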

pub fn set_mode(self, input: Option<AutoMlMode>) -> Self

The method that Autopilot uses to train the data. You can either specify the mode manually or let Autopilot choose for you based on the dataset size by selecting AUTO. In AUTO mode, Autopilot chooses ENSEMBLING for datasets smaller than 100 MB, and HYPERPARAMETER_TUNING for larger ones.

The ENSEMBLING mode uses a multi-stack ensemble model to predict classification and regression tasks directly from your dataset. This machine learning mode combines several base models to produce an optimal predictive model. It then uses a stacking ensemble method to combine predictions from contributing members. A multi-stack ensemble model can provide better performance over a single model by combining the predictive capabilities of multiple models. See Autopilot algorithm support for a list of algorithms supported by ENSEMBLING mode.

The HYPERPARAMETER_TUNING (HPO) mode uses the best hyperparameters to train the best version of a model. HPO automatically selects an algorithm for the type of problem you want to solve. Then HPO finds the best hyperparameters according to your objective metric. See Autopilot algorithm support for a list of algorithms supported by HYPERPARAMETER_TUNING mode.

pub fn get_mode(&self) -> &Option<AutoMlMode>

The method that Autopilot uses to train the data. You can either specify the mode manually or let Autopilot choose for you based on the dataset size by selecting AUTO. In AUTO mode, Autopilot chooses ENSEMBLING for datasets smaller than 100 MB, and HYPERPARAMETER_TUNING for larger ones.

The ENSEMBLING mode uses a multi-stack ensemble model to predict classification and regression tasks directly from your dataset. This machine learning mode combines several base models to produce an optimal predictive model. It then uses a stacking ensemble method to combine predictions from contributing members. A multi-stack ensemble model can provide better performance over a single model by combining the predictive capabilities of multiple models. See Autopilot algorithm support for a list of algorithms supported by ENSEMBLING mode.

The HYPERPARAMETER_TUNING (HPO) mode uses the best hyperparameters to train the best version of a model. HPO automatically selects an algorithm for the type of problem you want to solve. Then HPO finds the best hyperparameters according to your objective metric. See Autopilot algorithm support for a list of algorithms supported by HYPERPARAMETER_TUNING mode.

pub fn generate_candidate_definitions_only(self, input: bool) -> Self

Generates possible candidates without training the models. A model candidate is a combination of data preprocessors, algorithms, and algorithm parameter settings.
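
For example, a dry-run style configuration (a sketch; the rest of the AutoML job request is unchanged):

use aws_sdk_sagemaker::types::TabularJobConfig;

fn definitions_only_config() -> TabularJobConfig {
    // Produce candidate definitions (preprocessors, algorithms, parameter
    // settings) without training any models.
    TabularJobConfig::builder()
        .target_attribute_name("y")
        .generate_candidate_definitions_only(true)
        .build()
}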

pub fn set_generate_candidate_definitions_only(self, input: Option<bool>) -> Self

Generates possible candidates without training the models. A model candidate is a combination of data preprocessors, algorithms, and algorithm parameter settings.

pub fn get_generate_candidate_definitions_only(&self) -> &Option<bool>

Generates possible candidates without training the models. A model candidate is a combination of data preprocessors, algorithms, and algorithm parameter settings.

pub fn problem_type(self, input: ProblemType) -> Self

The type of supervised learning problem available for the model candidates of the AutoML job V2. For more information, see Amazon SageMaker Autopilot problem types.

You must either specify the type of supervised learning problem in ProblemType and provide the AutoMLJobObjective metric, or none at all.
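
A hedged sketch of fixing the problem type; ProblemType::Regression is assumed from the API's REGRESSION value, and the matching AutoMLJobObjective metric is supplied on the surrounding AutoML job V2 request rather than on this builder:

use aws_sdk_sagemaker::types::{ProblemType, TabularJobConfig};

fn regression_config() -> TabularJobConfig {
    // Pin the problem type explicitly; pair it with an AutoMLJobObjective
    // on the enclosing request per the rule above.
    TabularJobConfig::builder()
        .target_attribute_name("price")
        .problem_type(ProblemType::Regression)
        .build()
}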

pub fn set_problem_type(self, input: Option<ProblemType>) -> Self

The type of supervised learning problem available for the model candidates of the AutoML job V2. For more information, see Amazon SageMaker Autopilot problem types.

You must either specify the type of supervised learning problem in ProblemType and provide the AutoMLJobObjective metric, or none at all.

pub fn get_problem_type(&self) -> &Option<ProblemType>

The type of supervised learning problem available for the model candidates of the AutoML job V2. For more information, see Amazon SageMaker Autopilot problem types.

You must either specify the type of supervised learning problem in ProblemType and provide the AutoMLJobObjective metric, or none at all.

pub fn target_attribute_name(self, input: impl Into<String>) -> Self

The name of the target variable in supervised learning, usually represented by 'y'.

This field is required.

pub fn set_target_attribute_name(self, input: Option<String>) -> Self

The name of the target variable in supervised learning, usually represented by 'y'.

pub fn get_target_attribute_name(&self) -> &Option<String>

The name of the target variable in supervised learning, usually represented by 'y'.

pub fn sample_weight_attribute_name(self, input: impl Into<String>) -> Self

If specified, this column name indicates which column of the dataset should be treated as sample weights for use by the objective metric during the training, evaluation, and the selection of the best model. This column is not considered as a predictive feature. For more information on Autopilot metrics, see Metrics and validation.

Sample weights should be numeric, non-negative, with larger values indicating which rows are more important than others. Data points that have invalid or no weight value are excluded.

Support for sample weights is available in Ensembling mode only.
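
A sketch combining a weight column with ensembling mode (column names are illustrative; the AutoMlMode::Ensembling variant is assumed as above):

use aws_sdk_sagemaker::types::{AutoMlMode, TabularJobConfig};

fn weighted_config() -> TabularJobConfig {
    // The weight column feeds the objective metric but is not used as a feature.
    TabularJobConfig::builder()
        .target_attribute_name("label")
        .sample_weight_attribute_name("row_weight")
        .mode(AutoMlMode::Ensembling)
        .build()
}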

pub fn set_sample_weight_attribute_name(self, input: Option<String>) -> Self

If specified, this column name indicates which column of the dataset should be treated as sample weights for use by the objective metric during the training, evaluation, and the selection of the best model. This column is not considered as a predictive feature. For more information on Autopilot metrics, see Metrics and validation.

Sample weights should be numeric, non-negative, with larger values indicating which rows are more important than others. Data points that have invalid or no weight value are excluded.

Support for sample weights is available in Ensembling mode only.

pub fn get_sample_weight_attribute_name(&self) -> &Option<String>

If specified, this column name indicates which column of the dataset should be treated as sample weights for use by the objective metric during the training, evaluation, and the selection of the best model. This column is not considered as a predictive feature. For more information on Autopilot metrics, see Metrics and validation.

Sample weights should be numeric, non-negative, with larger values indicating which rows are more important than others. Data points that have invalid or no weight value are excluded.

Support for sample weights is available in Ensembling mode only.

pub fn build(self) -> TabularJobConfig

Consumes the builder and constructs a TabularJobConfig.
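
Because build consumes the builder, a common pattern is to thread optional values through the set_* methods and finish with a single build call, as in this sketch:

use aws_sdk_sagemaker::types::{AutoMlMode, TabularJobConfig};

fn make_config(mode: Option<AutoMlMode>) -> TabularJobConfig {
    // `set_mode(None)` simply leaves the field unset.
    TabularJobConfig::builder()
        .target_attribute_name("y")
        .set_mode(mode)
        .build()
}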

Trait Implementations

impl Clone for TabularJobConfigBuilder

fn clone(&self) -> TabularJobConfigBuilder

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for TabularJobConfigBuilder

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Default for TabularJobConfigBuilder

fn default() -> TabularJobConfigBuilder

Returns the “default value” for a type.

impl PartialEq for TabularJobConfigBuilder

fn eq(&self, other: &TabularJobConfigBuilder) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for TabularJobConfigBuilder

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where T: 'static + ?Sized

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T where T: ?Sized

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T where U: From<T>

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<Unshared, Shared> IntoShared<Shared> for Unshared where Shared: FromUnshared<Unshared>

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self

impl<T> ToOwned for T where T: Clone

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T where U: Into<T>

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T where U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.