#[non_exhaustive]
pub struct HyperParameterAlgorithmSpecification {
pub training_image: Option<String>,
pub training_input_mode: Option<TrainingInputMode>,
pub algorithm_name: Option<String>,
pub metric_definitions: Option<Vec<MetricDefinition>>,
}
Specifies which training algorithm to use for training jobs that a hyperparameter tuning job launches and the metrics to monitor.
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. It cannot be constructed in external crates using the Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.
training_image: Option<String>
The registry path of the Docker image that contains the training algorithm. For information about Docker registry paths for built-in algorithms, see Algorithms Provided by Amazon SageMaker: Common Parameters. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
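For illustration only, the two accepted path formats look like this (the account ID, Region, repository, tag, and digest below are placeholders):

// Hypothetical ECR image paths; substitute your own registry, repository, tag, and digest.
const BY_TAG: &str = "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-algo:1.2";                // registry/repository[:tag]
const BY_DIGEST: &str = "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-algo@sha256:<digest>"; // registry/repository[@digest]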
training_input_mode: Option<TrainingInputMode>
The training input mode that the algorithm supports. For more information about input modes, see Algorithms.

Pipe mode
If an algorithm supports Pipe mode, Amazon SageMaker streams data directly from Amazon S3 to the container.

File mode
If an algorithm supports File mode, SageMaker downloads the training data from S3 to the provisioned ML storage volume, and mounts the directory to the Docker volume for the training container.
You must provision the ML storage volume with sufficient capacity to accommodate the data downloaded from S3. In addition to the training data, the ML storage volume also stores the output model. The algorithm container also uses the ML storage volume to store intermediate information, if any.
For distributed algorithms, training data is distributed uniformly. Your training duration is predictable if the input data object sizes are approximately the same. SageMaker does not split the files any further for model training. If the object sizes are skewed, training won't be optimal: the data distribution is also skewed, so one host in the training cluster can be overloaded and become a bottleneck.

FastFile mode
If an algorithm supports FastFile mode, SageMaker streams data directly from S3 to the container with no code changes, and provides file system access to the data. Users can author their training script to interact with these files as if they were stored on disk.
FastFile mode works best when the data is read sequentially. Augmented manifest files aren't supported. The startup time is lower when there are fewer files in the S3 bucket provided.
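As a rough sketch of branching on these modes in code, assuming the TrainingInputMode enum exported from aws_sdk_sagemaker::types (exact variant names depend on your SDK version; the enum is non-exhaustive, so a wildcard arm is required):

use aws_sdk_sagemaker::types::TrainingInputMode;

// Summarize how SageMaker will deliver training data for a given mode.
fn data_delivery(mode: &TrainingInputMode) -> &'static str {
    match mode {
        TrainingInputMode::File => "downloaded from S3 onto the ML storage volume",
        TrainingInputMode::Pipe => "streamed directly from S3 into the container",
        // FastFile and any variants added in newer SDK versions fall through here.
        _ => "another input mode (e.g. FastFile)",
    }
}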
algorithm_name: Option<String>
The name of the resource algorithm to use for the hyperparameter tuning job. If you specify a value for this parameter, do not specify a value for TrainingImage.
metric_definitions: Option<Vec<MetricDefinition>>
An array of MetricDefinition objects that specify the metrics that the algorithm emits.
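As a hedged sketch of what one entry might look like, assuming MetricDefinition and its builder from aws_sdk_sagemaker::types (the metric name and regex are made up, and build() is treated as fallible on the assumption that Name and Regex are required members):

use aws_sdk_sagemaker::types::MetricDefinition;

// Hypothetical metric: capture "validation:loss=<float>" lines emitted by the training container.
fn loss_metric() -> Result<MetricDefinition, Box<dyn std::error::Error>> {
    let metric = MetricDefinition::builder()
        .name("validation:loss")
        .regex(r"validation:loss=([0-9\.]+)")
        .build()?; // drop the ? if your SDK version's build() is infallible
    Ok(metric)
}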
Implementations
impl HyperParameterAlgorithmSpecification
pub fn training_image(&self) -> Option<&str>
The registry path of the Docker image that contains the training algorithm. For information about Docker registry paths for built-in algorithms, see Algorithms Provided by Amazon SageMaker: Common Parameters. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
pub fn training_input_mode(&self) -> Option<&TrainingInputMode>
The training input mode that the algorithm supports. For more information about input modes, see Algorithms.

Pipe mode
If an algorithm supports Pipe mode, Amazon SageMaker streams data directly from Amazon S3 to the container.

File mode
If an algorithm supports File mode, SageMaker downloads the training data from S3 to the provisioned ML storage volume, and mounts the directory to the Docker volume for the training container.
You must provision the ML storage volume with sufficient capacity to accommodate the data downloaded from S3. In addition to the training data, the ML storage volume also stores the output model. The algorithm container also uses the ML storage volume to store intermediate information, if any.
For distributed algorithms, training data is distributed uniformly. Your training duration is predictable if the input data object sizes are approximately the same. SageMaker does not split the files any further for model training. If the object sizes are skewed, training won't be optimal: the data distribution is also skewed, so one host in the training cluster can be overloaded and become a bottleneck.

FastFile mode
If an algorithm supports FastFile mode, SageMaker streams data directly from S3 to the container with no code changes, and provides file system access to the data. Users can author their training script to interact with these files as if they were stored on disk.
FastFile mode works best when the data is read sequentially. Augmented manifest files aren't supported. The startup time is lower when there are fewer files in the S3 bucket provided.
pub fn algorithm_name(&self) -> Option<&str>
The name of the resource algorithm to use for the hyperparameter tuning job. If you specify a value for this parameter, do not specify a value for TrainingImage.
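A small sketch that uses the training_image and algorithm_name accessors above to check this either/or rule locally (the helper name is ours, and treating "neither is set" as an error is an assumption about the service API):

use aws_sdk_sagemaker::types::HyperParameterAlgorithmSpecification;

// Hypothetical pre-flight check: the two fields are mutually exclusive.
fn check_image_or_algorithm(spec: &HyperParameterAlgorithmSpecification) -> Result<(), String> {
    match (spec.training_image(), spec.algorithm_name()) {
        (Some(_), Some(_)) => Err("set either TrainingImage or AlgorithmName, not both".to_string()),
        (None, None) => Err("set TrainingImage or AlgorithmName".to_string()),
        _ => Ok(()),
    }
}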
pub fn metric_definitions(&self) -> &[MetricDefinition]
An array of MetricDefinition objects that specify the metrics that the algorithm emits.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .metric_definitions.is_none().
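For example, a small sketch that distinguishes the accessor's empty-slice default from a field that was never sent:

use aws_sdk_sagemaker::types::HyperParameterAlgorithmSpecification;

fn report_metrics(spec: &HyperParameterAlgorithmSpecification) {
    // The accessor never returns Option: an unset field comes back as an empty slice.
    if spec.metric_definitions().is_empty() {
        // The public field itself tells "never sent" apart from "sent as an empty list".
        if spec.metric_definitions.is_none() {
            println!("metric_definitions was not set");
        } else {
            println!("metric_definitions was set to an empty list");
        }
    } else {
        println!("{} metric definition(s)", spec.metric_definitions().len());
    }
}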
impl HyperParameterAlgorithmSpecification
pub fn builder() -> HyperParameterAlgorithmSpecificationBuilder
Creates a new builder-style object to manufacture HyperParameterAlgorithmSpecification.
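A minimal end-to-end sketch of the builder flow, assuming these types are exported from aws_sdk_sagemaker::types and that build() is fallible because TrainingInputMode is a required member (drop the ? if your SDK version's build() is infallible); the image URI and metric are placeholders:

use aws_sdk_sagemaker::types::{
    HyperParameterAlgorithmSpecification, MetricDefinition, TrainingInputMode,
};

fn example_spec() -> Result<HyperParameterAlgorithmSpecification, Box<dyn std::error::Error>> {
    let spec = HyperParameterAlgorithmSpecification::builder()
        // Placeholder ECR image path in registry/repository[:tag] form.
        .training_image("123456789012.dkr.ecr.us-west-2.amazonaws.com/my-algo:latest")
        .training_input_mode(TrainingInputMode::File)
        // Appends one MetricDefinition; call repeatedly to add more metrics.
        .metric_definitions(
            MetricDefinition::builder()
                .name("validation:loss")
                .regex(r"validation:loss=([0-9\.]+)")
                .build()?,
        )
        .build()?;
    Ok(spec)
}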
Trait Implementations
impl Clone for HyperParameterAlgorithmSpecification
fn clone(&self) -> HyperParameterAlgorithmSpecification
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl PartialEq for HyperParameterAlgorithmSpecification
fn eq(&self, other: &HyperParameterAlgorithmSpecification) -> bool
This method tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for HyperParameterAlgorithmSpecification
Auto Trait Implementations
impl Freeze for HyperParameterAlgorithmSpecification
impl RefUnwindSafe for HyperParameterAlgorithmSpecification
impl Send for HyperParameterAlgorithmSpecification
impl Sync for HyperParameterAlgorithmSpecification
impl Unpin for HyperParameterAlgorithmSpecification
impl UnwindSafe for HyperParameterAlgorithmSpecification
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
default unsafe fn clone_to_uninit(&self, dst: *mut T)
This is a nightly-only experimental API. (clone_to_uninit)
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.