Struct rusoto_sagemaker::HyperParameterTrainingJobDefinition

pub struct HyperParameterTrainingJobDefinition {
    pub algorithm_specification: HyperParameterAlgorithmSpecification,
    pub checkpoint_config: Option<CheckpointConfig>,
    pub definition_name: Option<String>,
    pub enable_inter_container_traffic_encryption: Option<bool>,
    pub enable_managed_spot_training: Option<bool>,
    pub enable_network_isolation: Option<bool>,
    pub hyper_parameter_ranges: Option<ParameterRanges>,
    pub input_data_config: Option<Vec<Channel>>,
    pub output_data_config: OutputDataConfig,
    pub resource_config: ResourceConfig,
    pub role_arn: String,
    pub static_hyper_parameters: Option<HashMap<String, String>>,
    pub stopping_condition: StoppingCondition,
    pub tuning_objective: Option<HyperParameterTuningJobObjective>,
    pub vpc_config: Option<VpcConfig>,
}

Defines the training jobs launched by a hyperparameter tuning job.
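
For illustration, a minimal sketch of populating this struct; the image URI, S3 paths, instance type, and role ARN are placeholders, the nested struct field names are assumed from the same crate's generated shapes, and the remaining optional fields fall back to Default:

use rusoto_sagemaker::{
    HyperParameterAlgorithmSpecification, HyperParameterTrainingJobDefinition,
    OutputDataConfig, ResourceConfig, StoppingCondition,
};

// Sketch only: every literal below is a placeholder.
let definition = HyperParameterTrainingJobDefinition {
    algorithm_specification: HyperParameterAlgorithmSpecification {
        training_image: Some("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest".to_string()),
        training_input_mode: "File".to_string(),
        ..Default::default()
    },
    output_data_config: OutputDataConfig {
        s3_output_path: "s3://my-bucket/output".to_string(),
        ..Default::default()
    },
    resource_config: ResourceConfig {
        instance_count: 1,
        instance_type: "ml.m5.xlarge".to_string(),
        volume_size_in_gb: 30,
        ..Default::default()
    },
    role_arn: "arn:aws:iam::123456789012:role/SageMakerRole".to_string(),
    stopping_condition: StoppingCondition {
        max_runtime_in_seconds: Some(3600),
        ..Default::default()
    },
    ..Default::default()
};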

Fields

algorithm_specification: HyperParameterAlgorithmSpecification

The HyperParameterAlgorithmSpecification object that specifies the resource algorithm to use for the training jobs that the tuning job launches.
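
As a hedged sketch, a specification that points at a custom training image and declares one metric the tuner can parse from the job logs (the nested field names are assumed from the crate's generated shapes; the image URI and regex are placeholders):

use rusoto_sagemaker::{HyperParameterAlgorithmSpecification, MetricDefinition};

let algorithm_specification = HyperParameterAlgorithmSpecification {
    // Placeholder ECR image URI for a custom algorithm container.
    training_image: Some("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest".to_string()),
    training_input_mode: "File".to_string(),
    // A metric the tuning job can use as its objective.
    metric_definitions: Some(vec![MetricDefinition {
        name: "validation:accuracy".to_string(),
        regex: "accuracy=([0-9\\.]+)".to_string(),
    }]),
    ..Default::default()
};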

checkpoint_config: Option<CheckpointConfig>

definition_name: Option<String>

The job definition name.

enable_inter_container_traffic_encryption: Option<bool>

To encrypt all communications between ML compute instances in distributed training, choose True. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training.

enable_managed_spot_training: Option<bool>

A Boolean indicating whether managed spot training is enabled (True) or not (False).
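
A sketch of what enabling this flag typically looks like alongside a checkpoint location, so an interrupted spot job can resume (the CheckpointConfig field name is assumed from the crate; the S3 URI is a placeholder). Pair this with max_wait_time_in_seconds on the stopping condition:

use rusoto_sagemaker::CheckpointConfig;

// Enable managed spot training and point checkpoints at S3.
let enable_managed_spot_training = Some(true);
let checkpoint_config = Some(CheckpointConfig {
    s3_uri: "s3://my-bucket/checkpoints".to_string(),
    ..Default::default()
});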

enable_network_isolation: Option<bool>

Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If network isolation is used for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.

hyper_parameter_ranges: Option<ParameterRanges>

input_data_config: Option<Vec<Channel>>

An array of Channel objects that specify the input for the training jobs that the tuning job launches.
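
A sketch of a single "train" channel backed by an S3 prefix (the Channel, DataSource, and S3DataSource field names are assumed from the crate; the bucket path is a placeholder):

use rusoto_sagemaker::{Channel, DataSource, S3DataSource};

let input_data_config = Some(vec![Channel {
    channel_name: "train".to_string(),
    data_source: DataSource {
        s3_data_source: Some(S3DataSource {
            s3_data_type: "S3Prefix".to_string(),
            s3_uri: "s3://my-bucket/train".to_string(),
            ..Default::default()
        }),
        ..Default::default()
    },
    content_type: Some("text/csv".to_string()),
    ..Default::default()
}]);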

output_data_config: OutputDataConfig

Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that the tuning job launches.
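
A minimal sketch, assuming the OutputDataConfig shape exposes s3_output_path and an optional KMS key (the bucket path is a placeholder):

use rusoto_sagemaker::OutputDataConfig;

let output_data_config = OutputDataConfig {
    s3_output_path: "s3://my-bucket/output".to_string(),
    // kms_key_id is left as None via Default.
    ..Default::default()
};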

resource_config: ResourceConfig

The resources, including the compute instances and storage volumes, to use for the training jobs that the tuning job launches.

Storage volumes store model artifacts and incremental states. Training algorithms might also use storage volumes for scratch space. If you want Amazon SageMaker to use the storage volume to store the training data, choose File as the TrainingInputMode in the algorithm specification. For distributed training algorithms, specify an instance count greater than 1.
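
A sketch of a two-instance configuration for distributed training (field names assumed from the crate; the instance type and volume size are placeholders):

use rusoto_sagemaker::ResourceConfig;

let resource_config = ResourceConfig {
    instance_count: 2,
    instance_type: "ml.m5.xlarge".to_string(),
    volume_size_in_gb: 50,
    ..Default::default()
};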

role_arn: String

The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job launches.

static_hyper_parameters: Option<HashMap<String, String>>

Specifies the values of hyperparameters that do not change for the tuning job.
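
For example, hyperparameters that every training job launched by the tuner should share (the keys and values below are placeholders):

use std::collections::HashMap;

let mut fixed = HashMap::new();
fixed.insert("epochs".to_string(), "10".to_string());
fixed.insert("batch_size".to_string(), "256".to_string());
let static_hyper_parameters = Some(fixed);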

stopping_condition: StoppingCondition

Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long you are willing to wait for a managed spot training job to complete. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs.
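
A minimal sketch, assuming StoppingCondition exposes max_runtime_in_seconds and max_wait_time_in_seconds (the latter only matters when managed spot training is enabled):

use rusoto_sagemaker::StoppingCondition;

let stopping_condition = StoppingCondition {
    // Cap each training job at one hour of runtime...
    max_runtime_in_seconds: Some(3600),
    // ...and, for managed spot training, wait at most two hours in total.
    max_wait_time_in_seconds: Some(7200),
    ..Default::default()
};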

tuning_objective: Option<HyperParameterTuningJobObjective>

vpc_config: Option<VpcConfig>

The VpcConfig object that specifies the VPC that you want the training jobs that this hyperparameter tuning job launches to connect to. Control access to and from your training container by configuring the VPC. For more information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.
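
A sketch with placeholder IDs, assuming VpcConfig carries security_group_ids and subnets:

use rusoto_sagemaker::VpcConfig;

let vpc_config = Some(VpcConfig {
    security_group_ids: vec!["sg-0123456789abcdef0".to_string()],
    subnets: vec!["subnet-0123456789abcdef0".to_string()],
});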

Trait Implementations

impl Clone for HyperParameterTrainingJobDefinition[src]

impl Debug for HyperParameterTrainingJobDefinition[src]

impl Default for HyperParameterTrainingJobDefinition[src]

impl<'de> Deserialize<'de> for HyperParameterTrainingJobDefinition[src]

impl PartialEq<HyperParameterTrainingJobDefinition> for HyperParameterTrainingJobDefinition[src]

impl Serialize for HyperParameterTrainingJobDefinition[src]

impl StructuralPartialEq for HyperParameterTrainingJobDefinition[src]

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> DeserializeOwned for T where
    T: for<'de> Deserialize<'de>, 
[src]

impl<T> From<T> for T[src]

impl<T> Instrument for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T> Same<T> for T

type Output = T

Should always be Self

impl<T> ToOwned for T where
    T: Clone
[src]

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.