pub struct AlgorithmSpecification {
pub algorithm_name: Option<String>,
pub enable_sage_maker_metrics_time_series: Option<bool>,
pub metric_definitions: Option<Vec<MetricDefinition>>,
pub training_image: Option<String>,
pub training_input_mode: String,
}
Specifies the training algorithm to use in a CreateTrainingJob request.
For more information about algorithms provided by Amazon SageMaker, see Algorithms. For information about using your own algorithms, see Using Your Own Algorithms with Amazon SageMaker.
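As a quick orientation before the field-by-field reference, here is a minimal sketch of filling in this struct directly (all fields shown above are public). The ECR image URI is a placeholder, not a value taken from this page:

// Sketch: a File-mode training job that uses a custom training image.
// The image URI below is a placeholder, not a real registry path.
let spec = AlgorithmSpecification {
    algorithm_name: None,
    enable_sage_maker_metrics_time_series: None,
    metric_definitions: None,
    training_image: Some(
        "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest".to_string(),
    ),
    training_input_mode: "File".to_string(),
};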
Fields§
algorithm_name: Option<String>
The name of the algorithm resource to use for the training job. This must be an algorithm resource that you created or subscribed to on AWS Marketplace. If you specify a value for this parameter, you can't specify a value for TrainingImage.
enable_sage_maker_metrics_time_series: Option<bool>
To generate and save time-series metrics during training, set to true. The default is false, and time-series metrics aren't generated except in the following cases:
- You use one of the Amazon SageMaker built-in algorithms
- You use one of the following prebuilt Amazon SageMaker Docker images:
  - TensorFlow (version >= 1.15)
  - MXNet (version >= 1.6)
  - PyTorch (version >= 1.3)
- You specify at least one MetricDefinition
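A short sketch of opting in to time-series metrics together with one metric definition. This assumes MetricDefinition exposes public name and regex String fields and that previous_spec is an existing AlgorithmSpecification to copy the remaining fields from; check this crate's MetricDefinition docs for the exact field shape:

// Sketch: turn on time-series metrics and supply one metric definition.
// Assumption: MetricDefinition has public `name` and `regex` String fields;
// `previous_spec` is a hypothetical existing AlgorithmSpecification.
let spec = AlgorithmSpecification {
    enable_sage_maker_metrics_time_series: Some(true),
    metric_definitions: Some(vec![MetricDefinition {
        name: "train:loss".to_string(),
        regex: "loss = ([0-9\\.]+)".to_string(),
    }]),
    ..previous_spec
};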
metric_definitions: Option<Vec<MetricDefinition>>
A list of metric definition objects. Each object specifies the metric name and regular expressions used to parse algorithm logs. Amazon SageMaker publishes each metric to Amazon CloudWatch.
training_image: Option<String>
The registry path of the Docker image that contains the training algorithm. For information about Docker registry paths for built-in algorithms, see Algorithms Provided by Amazon SageMaker: Common Parameters. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
training_input_mode: String
The input mode that the algorithm supports. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory as a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.
In File mode, make sure you provision an ML storage volume with enough capacity to hold the data downloaded from S3. In addition to the training data, the ML storage volume also stores the output model. The algorithm container also uses the ML storage volume to store intermediate information, if any.
For distributed algorithms using File mode, training data is distributed uniformly, and your training duration is predictable if the input data objects are approximately the same size. Amazon SageMaker does not split the files any further for model training. If the object sizes are skewed, training won't be optimal: the data distribution is also skewed, so one host in the training cluster is overloaded and becomes a bottleneck in training.
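Because algorithm_name and training_image are mutually exclusive (see algorithm_name above), a Pipe-mode job that runs a subscribed or owned algorithm resource might be sketched like this; the resource name is a placeholder:

// Sketch: Pipe-mode training with an algorithm resource instead of an image.
// "my-subscribed-algorithm" is a placeholder resource name.
let spec = AlgorithmSpecification {
    algorithm_name: Some("my-subscribed-algorithm".to_string()),
    training_image: None,
    enable_sage_maker_metrics_time_series: Some(false),
    metric_definitions: None,
    training_input_mode: "Pipe".to_string(),
};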
Trait Implementations§
impl Clone for AlgorithmSpecification
fn clone(&self) -> AlgorithmSpecification
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.