#[non_exhaustive]
pub struct CreateOptimizationJobInput {
pub optimization_job_name: Option<String>,
pub role_arn: Option<String>,
pub model_source: Option<OptimizationJobModelSource>,
pub deployment_instance_type: Option<OptimizationJobDeploymentInstanceType>,
pub optimization_environment: Option<HashMap<String, String>>,
pub optimization_configs: Option<Vec<OptimizationConfig>>,
pub output_config: Option<OptimizationJobOutputConfig>,
pub stopping_condition: Option<StoppingCondition>,
pub tags: Option<Vec<Tag>>,
pub vpc_config: Option<OptimizationVpcConfig>,
}
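Because the struct is marked `#[non_exhaustive]`, code outside the defining crate cannot build it with a struct literal and any pattern over it must end with a `..` wildcard. A minimal sketch of what that means in practice, using a hypothetical stand-in type (not the real SDK struct, whose constructor is the generated builder):

```rust
// Toy stand-in for the generated input type; the real struct lives in the
// aws-sdk-sagemaker crate and may gain fields in future releases.
#[non_exhaustive]
#[derive(Debug, Default)]
struct JobInput {
    optimization_job_name: Option<String>,
    role_arn: Option<String>,
}

fn main() {
    // Outside the defining crate a non-exhaustive struct cannot be built
    // with a literal; you go through a constructor (the SDK uses a builder).
    let mut input = JobInput::default();
    input.optimization_job_name = Some("my-job".to_string());

    // Any pattern must include `..` so that fields added in a later
    // release do not break the match.
    match input {
        JobInput { optimization_job_name: Some(ref name), .. } => {
            println!("job name: {name}");
        }
        JobInput { .. } => println!("no job name set"),
    }
}
```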
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future versions. This means that this struct cannot be constructed in external crates using the Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.
optimization_job_name: Option<String>
A custom name for the new optimization job.
role_arn: Option<String>
The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.
During model optimization, Amazon SageMaker needs your permission to:
- Read input data from an S3 bucket
- Write model artifacts to an S3 bucket
- Write logs to Amazon CloudWatch Logs
- Publish metrics to Amazon CloudWatch
You grant permissions for all of these tasks to an IAM role. To pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission. For more information, see Amazon SageMaker Roles.
model_source: Option<OptimizationJobModelSource>
The location of the source model to optimize with an optimization job.
deployment_instance_type: Option<OptimizationJobDeploymentInstanceType>
The type of instance that hosts the optimized model that you create with the optimization job.
optimization_environment: Option<HashMap<String, String>>
The environment variables to set in the model container.
optimization_configs: Option<Vec<OptimizationConfig>>
Settings for each of the optimization techniques that the job applies.
output_config: Option<OptimizationJobOutputConfig>
Details for where to store the optimized model that you create with the optimization job.
stopping_condition: Option<StoppingCondition>
Specifies a limit to how long a job can run. When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
To stop a training job, SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort, because the model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with CreateModel.
The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
tags: Option<Vec<Tag>>
A list of key-value pairs associated with the optimization job. For more information, see Tagging Amazon Web Services resources in the Amazon Web Services General Reference Guide.
vpc_config: Option<OptimizationVpcConfig>
A VPC in Amazon VPC that your optimized model has access to.
Implementations
impl CreateOptimizationJobInput
pub fn optimization_job_name(&self) -> Option<&str>
A custom name for the new optimization job.
pub fn role_arn(&self) -> Option<&str>
The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.
During model optimization, Amazon SageMaker needs your permission to:
- Read input data from an S3 bucket
- Write model artifacts to an S3 bucket
- Write logs to Amazon CloudWatch Logs
- Publish metrics to Amazon CloudWatch
You grant permissions for all of these tasks to an IAM role. To pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission. For more information, see Amazon SageMaker Roles.
pub fn model_source(&self) -> Option<&OptimizationJobModelSource>
The location of the source model to optimize with an optimization job.
pub fn deployment_instance_type(&self) -> Option<&OptimizationJobDeploymentInstanceType>
The type of instance that hosts the optimized model that you create with the optimization job.
pub fn optimization_environment(&self) -> Option<&HashMap<String, String>>
The environment variables to set in the model container.
pub fn optimization_configs(&self) -> &[OptimizationConfig]
Settings for each of the optimization techniques that the job applies.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .optimization_configs.is_none().
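The accessor convention described above can be sketched with a hypothetical toy type: the field stores `Option<Vec<T>>`, the getter hands back a slice that defaults to empty, and checking `is_none()` on the raw field is the only way to distinguish "never sent" from "sent but empty".

```rust
// Toy illustration of the getter convention; the real field holds
// Option<Vec<OptimizationConfig>> and its getter returns &[OptimizationConfig].
struct Input {
    optimization_configs: Option<Vec<String>>,
}

impl Input {
    // Returns the configured values, or an empty default slice if unset.
    fn optimization_configs(&self) -> &[String] {
        self.optimization_configs.as_deref().unwrap_or_default()
    }
}

fn main() {
    let unset = Input { optimization_configs: None };
    let empty = Input { optimization_configs: Some(vec![]) };

    // Both getters yield an empty slice...
    assert!(unset.optimization_configs().is_empty());
    assert!(empty.optimization_configs().is_empty());

    // ...so inspect the raw field to tell "never sent" from "sent but empty".
    assert!(unset.optimization_configs.is_none());
    assert!(empty.optimization_configs.is_some());
}
```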
pub fn output_config(&self) -> Option<&OptimizationJobOutputConfig>
Details for where to store the optimized model that you create with the optimization job.
pub fn stopping_condition(&self) -> Option<&StoppingCondition>
Specifies a limit to how long a job can run. When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
To stop a training job, SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort, because the model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with CreateModel.
The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
pub fn tags(&self) -> &[Tag]
A list of key-value pairs associated with the optimization job. For more information, see Tagging Amazon Web Services resources in the Amazon Web Services General Reference Guide.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .tags.is_none().
pub fn vpc_config(&self) -> Option<&OptimizationVpcConfig>
A VPC in Amazon VPC that your optimized model has access to.
impl CreateOptimizationJobInput
pub fn builder() -> CreateOptimizationJobInputBuilder
Creates a new builder-style object to manufacture CreateOptimizationJobInput.
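The builder pattern used here can be sketched with hypothetical stand-in types (the real CreateOptimizationJobInputBuilder is generated by the SDK, has one chained setter per field above, and is the only way to construct the non-exhaustive input outside the crate):

```rust
// Minimal stand-in sketch of the SDK's generated builder pattern; field
// names mirror two of the fields above, everything else is illustrative.
#[derive(Debug, Default)]
struct JobInput {
    optimization_job_name: Option<String>,
    role_arn: Option<String>,
}

#[derive(Default)]
struct JobInputBuilder {
    optimization_job_name: Option<String>,
    role_arn: Option<String>,
}

impl JobInputBuilder {
    // Each setter consumes and returns the builder, enabling chaining.
    fn optimization_job_name(mut self, v: impl Into<String>) -> Self {
        self.optimization_job_name = Some(v.into());
        self
    }
    fn role_arn(mut self, v: impl Into<String>) -> Self {
        self.role_arn = Some(v.into());
        self
    }
    fn build(self) -> JobInput {
        JobInput {
            optimization_job_name: self.optimization_job_name,
            role_arn: self.role_arn,
        }
    }
}

impl JobInput {
    fn builder() -> JobInputBuilder {
        JobInputBuilder::default()
    }
}

fn main() {
    let input = JobInput::builder()
        .optimization_job_name("my-optimization-job")
        .role_arn("arn:aws:iam::111122223333:role/SageMakerRole")
        .build();
    assert_eq!(input.optimization_job_name.as_deref(), Some("my-optimization-job"));
}
```

Note that the real builder's `build()` returns a `Result`, so errors from missing required configuration surface at build time rather than on the wire.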
Trait Implementations
impl Clone for CreateOptimizationJobInput
fn clone(&self) -> CreateOptimizationJobInput
fn clone_from(&mut self, source: &Self)
impl Debug for CreateOptimizationJobInput
impl StructuralPartialEq for CreateOptimizationJobInput
Auto Trait Implementations
impl Freeze for CreateOptimizationJobInput
impl RefUnwindSafe for CreateOptimizationJobInput
impl Send for CreateOptimizationJobInput
impl Sync for CreateOptimizationJobInput
impl Unpin for CreateOptimizationJobInput
impl UnwindSafe for CreateOptimizationJobInput
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
default unsafe fn clone_to_uninit(&self, dst: *mut T)
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true, or into a Right variant otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, or into a Right variant otherwise.