Struct aws_sdk_sagemaker::model::algorithm_specification::Builder
pub struct Builder { /* private fields */ }
A builder for AlgorithmSpecification.
Implementations
impl Builder
pub fn training_image(self, input: impl Into<String>) -> Self
The registry path of the Docker image that contains the training algorithm. For information about Docker registry paths for SageMaker built-in algorithms, see Docker Registry Paths and Example Code in the Amazon SageMaker developer guide. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information about using your custom training container, see Using Your Own Algorithms with Amazon SageMaker.
You must specify either the algorithm name in the AlgorithmName parameter or the image URI of the algorithm container in the TrainingImage parameter.
For more information, see the note in the AlgorithmName parameter description.
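A minimal sketch of supplying a training image through this builder, assuming the usual AlgorithmSpecification::builder() entry point; the ECR URI shown is a placeholder, not a real registry path:

```rust
use aws_sdk_sagemaker::model::AlgorithmSpecification;

// Placeholder ECR image URI, purely for illustration.
let image_uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest";

let spec = AlgorithmSpecification::builder()
    .training_image(image_uri)
    .build();

assert!(spec.training_image().is_some());
```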
pub fn set_training_image(self, input: Option<String>) -> Self
The registry path of the Docker image that contains the training algorithm. For information about Docker registry paths for SageMaker built-in algorithms, see Docker Registry Paths and Example Code in the Amazon SageMaker developer guide. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information about using your custom training container, see Using Your Own Algorithms with Amazon SageMaker.
You must specify either the algorithm name in the AlgorithmName parameter or the image URI of the algorithm container in the TrainingImage parameter.
For more information, see the note in the AlgorithmName parameter description.
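The set_ variant takes an Option<String>, which is convenient when the image URI comes from configuration that may be absent; a sketch under that assumption:

```rust
use aws_sdk_sagemaker::model::AlgorithmSpecification;

// `configured_image` stands in for a value read from your own configuration;
// passing None would simply leave the field unset on the built specification.
let configured_image: Option<String> =
    Some("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest".to_string());

let spec = AlgorithmSpecification::builder()
    .set_training_image(configured_image)
    .build();
```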
pub fn algorithm_name(self, input: impl Into<String>) -> Self
The name of the algorithm resource to use for the training job. This must be an algorithm resource that you created or subscribed to on Amazon Web Services Marketplace.
You must specify either the algorithm name in the AlgorithmName parameter or the image URI of the algorithm container in the TrainingImage parameter.
Note that the AlgorithmName parameter is mutually exclusive with the TrainingImage parameter. If you specify a value for the AlgorithmName parameter, you can't specify a value for TrainingImage, and vice versa.
If you specify values for both parameters, the training job might break; if you don't specify a value for either parameter, the training job might raise a null error.
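A sketch of the mutually exclusive path, where an algorithm resource name is supplied instead of a training image; the name below is hypothetical:

```rust
use aws_sdk_sagemaker::model::AlgorithmSpecification;

// Hypothetical algorithm resource name (created by you or subscribed to on
// Amazon Web Services Marketplace). TrainingImage is deliberately left unset.
let spec = AlgorithmSpecification::builder()
    .algorithm_name("my-marketplace-algorithm")
    .build();

assert!(spec.training_image().is_none());
```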
pub fn set_algorithm_name(self, input: Option<String>) -> Self
The name of the algorithm resource to use for the training job. This must be an algorithm resource that you created or subscribed to on Amazon Web Services Marketplace.
You must specify either the algorithm name in the AlgorithmName parameter or the image URI of the algorithm container in the TrainingImage parameter.
Note that the AlgorithmName parameter is mutually exclusive with the TrainingImage parameter. If you specify a value for the AlgorithmName parameter, you can't specify a value for TrainingImage, and vice versa.
If you specify values for both parameters, the training job might break; if you don't specify a value for either parameter, the training job might raise a null error.
pub fn training_input_mode(self, input: TrainingInputMode) -> Self
The training input mode that the algorithm supports. For more information about input modes, see Algorithms.
Pipe mode
If an algorithm supports Pipe mode, Amazon SageMaker streams data directly from Amazon S3 to the container.
File mode
If an algorithm supports File mode, SageMaker downloads the training data from S3 to the provisioned ML storage volume, and mounts the directory to the Docker volume for the training container. You must provision the ML storage volume with sufficient capacity to accommodate the data downloaded from S3. In addition to the training data, the ML storage volume also stores the output model. The algorithm container also uses the ML storage volume to store intermediate information, if any.
For distributed algorithms, training data is distributed uniformly. Your training duration is predictable if the input data object sizes are approximately the same. SageMaker does not split the files any further for model training. If the object sizes are skewed, training won't be optimal because the data distribution is also skewed: one host in the training cluster becomes overloaded and turns into a bottleneck.
FastFile mode
If an algorithm supports FastFile mode, SageMaker streams data directly from S3 to the container with no code changes, and provides file system access to the data. Users can author their training script to interact with these files as if they were stored on disk.
FastFile mode works best when the data is read sequentially. Augmented manifest files aren't supported. The startup time is lower when there are fewer files in the S3 bucket provided.
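A minimal sketch selecting File mode through the TrainingInputMode enum from the same model module:

```rust
use aws_sdk_sagemaker::model::{AlgorithmSpecification, TrainingInputMode};

// File mode: SageMaker downloads the S3 training data onto the ML storage
// volume before the training container starts.
let spec = AlgorithmSpecification::builder()
    .training_input_mode(TrainingInputMode::File)
    .build();

assert!(spec.training_input_mode().is_some());
```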
pub fn set_training_input_mode(self, input: Option<TrainingInputMode>) -> Self
The training input mode that the algorithm supports. For more information about input modes, see Algorithms.
Pipe mode
If an algorithm supports Pipe mode, Amazon SageMaker streams data directly from Amazon S3 to the container.
File mode
If an algorithm supports File mode, SageMaker downloads the training data from S3 to the provisioned ML storage volume, and mounts the directory to the Docker volume for the training container. You must provision the ML storage volume with sufficient capacity to accommodate the data downloaded from S3. In addition to the training data, the ML storage volume also stores the output model. The algorithm container also uses the ML storage volume to store intermediate information, if any.
For distributed algorithms, training data is distributed uniformly. Your training duration is predictable if the input data object sizes are approximately the same. SageMaker does not split the files any further for model training. If the object sizes are skewed, training won't be optimal because the data distribution is also skewed: one host in the training cluster becomes overloaded and turns into a bottleneck.
FastFile mode
If an algorithm supports FastFile mode, SageMaker streams data directly from S3 to the container with no code changes, and provides file system access to the data. Users can author their training script to interact with these files as if they were stored on disk.
FastFile mode works best when the data is read sequentially. Augmented manifest files aren't supported. The startup time is lower when there are fewer files in the S3 bucket provided.
pub fn metric_definitions(self, input: MetricDefinition) -> Self
Appends an item to metric_definitions.
To override the contents of this collection, use set_metric_definitions.
A list of metric definition objects. Each object specifies the metric name and regular expressions used to parse algorithm logs. SageMaker publishes each metric to Amazon CloudWatch.
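A sketch of appending a single metric definition; the metric name and regex are hypothetical examples of pulling a loss value out of the algorithm logs:

```rust
use aws_sdk_sagemaker::model::{AlgorithmSpecification, MetricDefinition};

// Hypothetical metric: the name and regex are illustrative only.
let loss_metric = MetricDefinition::builder()
    .name("train:loss")
    .regex("loss = ([0-9.]+)")
    .build();

// Each call to metric_definitions appends one item to the collection.
let spec = AlgorithmSpecification::builder()
    .metric_definitions(loss_metric)
    .build();

assert!(spec.metric_definitions().is_some());
```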
pub fn set_metric_definitions(self, input: Option<Vec<MetricDefinition>>) -> Self
A list of metric definition objects. Each object specifies the metric name and regular expressions used to parse algorithm logs. SageMaker publishes each metric to Amazon CloudWatch.
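By contrast with the appending setter above, set_metric_definitions replaces the whole collection at once; a sketch with two hypothetical definitions:

```rust
use aws_sdk_sagemaker::model::{AlgorithmSpecification, MetricDefinition};

// Two illustrative definitions supplied as a complete Vec, overriding anything
// previously appended on this builder.
let metrics = vec![
    MetricDefinition::builder()
        .name("train:loss")
        .regex("loss = ([0-9.]+)")
        .build(),
    MetricDefinition::builder()
        .name("validation:accuracy")
        .regex("accuracy = ([0-9.]+)")
        .build(),
];

let spec = AlgorithmSpecification::builder()
    .set_metric_definitions(Some(metrics))
    .build();
```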
pub fn enable_sage_maker_metrics_time_series(self, input: bool) -> Self
To generate and save time-series metrics during training, set to true. The default is false and time-series metrics aren't generated except in the following cases:
- You use one of the SageMaker built-in algorithms
- You use one of the following Prebuilt SageMaker Docker Images:
  - Tensorflow (version >= 1.15)
  - MXNet (version >= 1.6)
  - PyTorch (version >= 1.3)
- You specify at least one MetricDefinition
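A minimal sketch opting in to time-series metrics for a custom container, paired with one metric definition; the metric name and regex are the same hypothetical loss pattern used in the earlier sketches:

```rust
use aws_sdk_sagemaker::model::{AlgorithmSpecification, MetricDefinition};

// Opt in to time-series training metrics and provide a definition that
// tells SageMaker how to extract the value from the algorithm logs.
let spec = AlgorithmSpecification::builder()
    .metric_definitions(
        MetricDefinition::builder()
            .name("train:loss")
            .regex("loss = ([0-9.]+)")
            .build(),
    )
    .enable_sage_maker_metrics_time_series(true)
    .build();
```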
pub fn set_enable_sage_maker_metrics_time_series(
    self,
    input: Option<bool>
) -> Self
To generate and save time-series metrics during training, set to true. The default is false and time-series metrics aren't generated except in the following cases:
- You use one of the SageMaker built-in algorithms
- You use one of the following Prebuilt SageMaker Docker Images:
  - Tensorflow (version >= 1.15)
  - MXNet (version >= 1.6)
  - PyTorch (version >= 1.3)
- You specify at least one MetricDefinition
pub fn build(self) -> AlgorithmSpecification
Consumes the builder and constructs an AlgorithmSpecification.
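Putting the pieces together, a sketch of building a complete specification; the image URI remains a placeholder, and the finished value would typically be handed to the CreateTrainingJob request via its algorithm_specification setter:

```rust
use aws_sdk_sagemaker::model::{AlgorithmSpecification, TrainingInputMode};

// build() consumes the builder and returns the finished AlgorithmSpecification.
let spec: AlgorithmSpecification = AlgorithmSpecification::builder()
    .training_image("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest")
    .training_input_mode(TrainingInputMode::File)
    .build();
```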
Trait Implementations
impl PartialEq<Builder> for Builder
impl StructuralPartialEq for Builder
Auto Trait Implementations
impl RefUnwindSafe for Builder
impl Send for Builder
impl Sync for Builder
impl Unpin for Builder
impl UnwindSafe for Builder
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
    S: Into<Dispatch>,
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.