Struct aws_sdk_sagemaker::model::hyper_parameter_algorithm_specification::Builder
#[non_exhaustive]
pub struct Builder { /* fields omitted */ }
A builder for HyperParameterAlgorithmSpecification
Implementations
The registry path of the Docker image that contains the training algorithm. For information about Docker registry paths for built-in algorithms, see Algorithms Provided by Amazon SageMaker: Common Parameters. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
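A minimal sketch of supplying the training image, assuming the standard generated setter training_image on this builder (with a set_training_image counterpart, following the pattern of the set_metric_definitions method documented below); the ECR registry path is a placeholder.

use aws_sdk_sagemaker::model::HyperParameterAlgorithmSpecification;

// Point the specification at a training image in ECR. The registry path is a
// placeholder; both repository[:tag] and repository[@digest] forms are accepted.
let spec_builder = HyperParameterAlgorithmSpecification::builder()
    .training_image("012345678901.dkr.ecr.us-west-2.amazonaws.com/my-algo:latest");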
The training input mode that the algorithm supports. For more information about input modes, see Algorithms.

Pipe mode

If an algorithm supports Pipe mode, Amazon SageMaker streams data directly from Amazon S3 to the container.

File mode

If an algorithm supports File mode, SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to the Docker volume for the training container.

You must provision the ML storage volume with sufficient capacity to accommodate the data downloaded from S3. In addition to the training data, the ML storage volume also stores the output model. The algorithm container also uses the ML storage volume to store intermediate information, if any.

For distributed algorithms, training data is distributed uniformly; your training duration is predictable if the input data object sizes are approximately the same. SageMaker does not split the files any further for model training. If the object sizes are skewed, the data distribution is skewed as well: one host in the training cluster becomes overloaded and turns into a bottleneck, so training is not optimal.

FastFile mode

If an algorithm supports FastFile mode, SageMaker streams data directly from S3 to the container with no code changes, and provides file system access to the data. Users can author their training script to interact with these files as if they were stored on disk.

FastFile mode works best when the data is read sequentially. Augmented manifest files aren't supported. The startup time is lower when there are fewer files in the S3 bucket provided.
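A sketch of selecting an input mode, assuming the generated setter training_input_mode accepts the crate's TrainingInputMode enum (the File and Pipe variants are shown; a FastFile variant is also available in SDK versions that support that mode).

use aws_sdk_sagemaker::model::{HyperParameterAlgorithmSpecification, TrainingInputMode};

// File mode: SageMaker copies the data set from S3 onto the ML storage
// volume before the container starts, so the volume must be sized for it.
let file_mode_spec = HyperParameterAlgorithmSpecification::builder()
    .training_input_mode(TrainingInputMode::File);

// Pipe mode: data is streamed straight from S3 into the container instead.
let pipe_mode_spec = HyperParameterAlgorithmSpecification::builder()
    .training_input_mode(TrainingInputMode::Pipe);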
The name of the resource algorithm to use for the hyperparameter tuning job. If you specify a value for this parameter, do not specify a value for TrainingImage.
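A sketch of the alternative to a custom image, assuming the generated setter algorithm_name for this field; the algorithm resource name is a placeholder. Because AlgorithmName and TrainingImage are mutually exclusive, the training image is left unset here.

use aws_sdk_sagemaker::model::HyperParameterAlgorithmSpecification;

// Use a registered SageMaker algorithm resource instead of a raw Docker image.
// Since AlgorithmName is set, TrainingImage must not be set on the same builder.
let spec_builder = HyperParameterAlgorithmSpecification::builder()
    .algorithm_name("my-registered-algorithm");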
Appends an item to metric_definitions. To override the contents of this collection, use set_metric_definitions.
An array of MetricDefinition objects that specify the metrics that the algorithm emits.
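A sketch of appending one metric definition, assuming MetricDefinition exposes the usual generated builder with name and regex setters; the metric name and regex are placeholders. Each call to metric_definitions appends a single entry, so call it once per metric, or replace the whole collection with set_metric_definitions.

use aws_sdk_sagemaker::model::{HyperParameterAlgorithmSpecification, MetricDefinition};

// Define a metric that SageMaker should scrape from the algorithm's logs.
let validation_auc = MetricDefinition::builder()
    .name("validation:auc")
    .regex("auc: ([0-9\\.]+)")
    .build();

// Appending adds to the collection; repeated calls register more metrics.
let spec_builder = HyperParameterAlgorithmSpecification::builder()
    .metric_definitions(validation_auc);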
Consumes the builder and constructs a HyperParameterAlgorithmSpecification
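Putting the pieces together, a hedged end-to-end sketch under the same assumptions as the snippets above: the setters chained onto one builder and consumed by build(), which yields the finished HyperParameterAlgorithmSpecification. The image path and metric values are placeholders.

use aws_sdk_sagemaker::model::{
    HyperParameterAlgorithmSpecification, MetricDefinition, TrainingInputMode,
};

// build() consumes the builder and returns the completed specification,
// ready to be attached to a hyperparameter training job definition.
let spec: HyperParameterAlgorithmSpecification = HyperParameterAlgorithmSpecification::builder()
    .training_image("012345678901.dkr.ecr.us-west-2.amazonaws.com/my-algo:latest")
    .training_input_mode(TrainingInputMode::Pipe)
    .metric_definitions(
        MetricDefinition::builder()
            .name("train:loss")
            .regex("loss=([0-9\\.]+)")
            .build(),
    )
    .build();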
Trait Implementations
Auto Trait Implementations
impl RefUnwindSafe for Builder
impl UnwindSafe for Builder
Blanket Implementations
Mutably borrows from an owned value. Read more
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more