Struct aws_sdk_sagemaker::model::TrainingJobDefinition
#[non_exhaustive]
pub struct TrainingJobDefinition {
pub training_input_mode: Option<TrainingInputMode>,
pub hyper_parameters: Option<HashMap<String, String>>,
pub input_data_config: Option<Vec<Channel>>,
pub output_data_config: Option<OutputDataConfig>,
pub resource_config: Option<ResourceConfig>,
pub stopping_condition: Option<StoppingCondition>,
}
Defines the input needed to run a training job using the algorithm.
Fields (Non-exhaustive)
This struct is marked as non-exhaustive: it cannot be constructed in external crates using the Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.
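Because every field is public, a value can still be destructured from outside the crate as long as the pattern ends with the .. rest pattern. A minimal sketch (the definition binding is hypothetical):

let TrainingJobDefinition { training_input_mode, hyper_parameters, .. } = definition;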
training_input_mode: Option<TrainingInputMode>
The training input mode that the algorithm supports. For more information about input modes, see Algorithms.
Pipe mode
If an algorithm supports Pipe mode, Amazon SageMaker streams data directly from Amazon S3 to the container.
File mode
If an algorithm supports File mode, SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to the Docker volume for the training container.
You must provision the ML storage volume with sufficient capacity to accommodate the data downloaded from S3. In addition to the training data, the ML storage volume stores the output model. The algorithm container also uses the ML storage volume to store intermediate information, if any.
For distributed algorithms, training data is distributed uniformly. Your training duration is predictable if the input data object sizes are approximately the same. SageMaker does not split the files any further for model training. If the object sizes are skewed, training won't be optimal: the data distribution is also skewed, so one host in the training cluster can be overloaded and become a bottleneck.
FastFile mode
If an algorithm supports FastFile mode, SageMaker streams data directly from S3 to the container with no code changes, and provides file system access to the data. Users can author their training script to interact with these files as if they were stored on disk.
FastFile mode works best when the data is read sequentially. Augmented manifest files aren't supported. The startup time is lower when there are fewer files in the S3 bucket provided.
hyper_parameters: Option<HashMap<String, String>>
The hyperparameters used for the training job.
input_data_config: Option<Vec<Channel>>
An array of Channel objects, each of which specifies an input source.
output_data_config: Option<OutputDataConfig>
The path to the S3 bucket where you want to store model artifacts. Amazon SageMaker creates subfolders for the artifacts.
resource_config: Option<ResourceConfig>
The resources, including the ML compute instances and ML storage volumes, to use for model training.
stopping_condition: Option<StoppingCondition>
Specifies a limit to how long a model training job can run. It also specifies how long a managed Spot training job has to complete. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs.
To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts.
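As a concrete illustration of working with these Option-wrapped public fields, the sketch below inspects a definition returned by the service. It assumes the TrainingInputMode enum exposes a Pipe variant and derives Debug; the describe helper is purely hypothetical.

use aws_sdk_sagemaker::model::{TrainingInputMode, TrainingJobDefinition};

// Hypothetical helper that summarizes a definition received from the API.
fn describe(def: &TrainingJobDefinition) {
    // Every field is Option-wrapped, so check for presence before using it.
    match &def.training_input_mode {
        Some(TrainingInputMode::Pipe) => println!("input is streamed from Amazon S3"),
        Some(other) => println!("input mode: {:?}", other),
        None => println!("input mode not set"),
    }
    if let Some(params) = &def.hyper_parameters {
        for (name, value) in params {
            println!("hyperparameter {name} = {value}");
        }
    }
    if let Some(channels) = &def.input_data_config {
        println!("{} input channel(s) configured", channels.len());
    }
}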
Implementations
Creates a new builder-style object to manufacture TrainingJobDefinition
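A minimal construction sketch, assuming the usual aws-sdk-rust builder conventions for this version of the crate: one setter per field, map-valued fields taking a key/value pair per call, and build() returning the value directly. The nested OutputDataConfig, ResourceConfig, and StoppingCondition builders and their setter names follow the same convention, and all concrete values below are placeholders.

use aws_sdk_sagemaker::model::{
    OutputDataConfig, ResourceConfig, StoppingCondition, TrainingInputMode, TrainingJobDefinition,
};

fn example_definition() -> TrainingJobDefinition {
    TrainingJobDefinition::builder()
        .training_input_mode(TrainingInputMode::File)
        // Map-valued fields typically accept one key/value pair per call.
        .hyper_parameters("epochs", "10")
        .output_data_config(
            OutputDataConfig::builder()
                .s3_output_path("s3://my-bucket/model-artifacts")
                .build(),
        )
        .resource_config(
            ResourceConfig::builder()
                .instance_count(1)
                .volume_size_in_gb(50)
                .build(),
        )
        .stopping_condition(
            StoppingCondition::builder()
                .max_runtime_in_seconds(3600)
                .build(),
        )
        .build()
}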
Trait Implementations
This method tests for self and other values to be equal, and is used by ==.
This method tests for !=.
Auto Trait Implementations
impl RefUnwindSafe for TrainingJobDefinition
impl Send for TrainingJobDefinition
impl Sync for TrainingJobDefinition
impl Unpin for TrainingJobDefinition
impl UnwindSafe for TrainingJobDefinition
Blanket Implementations
Mutably borrows from an owned value.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.