Struct aws_sdk_sagemaker::model::ContainerDefinition
#[non_exhaustive]
pub struct ContainerDefinition {
pub container_hostname: Option<String>,
pub image: Option<String>,
pub image_config: Option<ImageConfig>,
pub mode: Option<ContainerMode>,
pub model_data_url: Option<String>,
pub environment: Option<HashMap<String, String>>,
pub model_package_name: Option<String>,
pub multi_model_config: Option<MultiModelConfig>,
}
Describes the container, as part of model definition.
Fields (Non-exhaustive)
This struct is marked as non-exhaustive: it cannot be constructed with Struct { .. } syntax outside of this crate, cannot be matched against without a wildcard .., and struct update syntax will not work.
container_hostname: Option<String>
This parameter is ignored for models that contain only a PrimaryContainer.
When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
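A minimal sketch of the naming rule above, using the generated builder (all image URIs and hostnames are hypothetical): because one container in this pipeline sets ContainerHostName, every container sets it.

use aws_sdk_sagemaker::model::ContainerDefinition;

fn main() {
    // Hypothetical two-container inference pipeline. Because "preprocess"
    // sets a ContainerHostName, "predict" must set one as well.
    let preprocess = ContainerDefinition::builder()
        .container_hostname("preprocess")
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest")
        .build();

    let predict = ContainerDefinition::builder()
        .container_hostname("predict")
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/predict:latest")
        .model_data_url("s3://example-bucket/model.tar.gz")
        .build();

    // The pipeline's containers, in order.
    let containers = vec![preprocess, predict];
    println!("{} containers", containers.len());
}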
image: Option<String>
The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
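For illustration, both accepted image path formats passed to the builder (the account ID, Region, repository, and digest below are hypothetical):

use aws_sdk_sagemaker::model::ContainerDefinition;

fn main() {
    // registry/repository[:tag] form.
    let by_tag = ContainerDefinition::builder()
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:1.0")
        .build();

    // registry/repository[@digest] form (digest value is illustrative).
    let by_digest = ContainerDefinition::builder()
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef")
        .build();

    println!("{:?}", (by_tag.image, by_digest.image));
}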
image_config: Option<ImageConfig>
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.
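A minimal sketch of pointing a container at a VPC-accessible private registry. It assumes the ImageConfig and RepositoryAccessMode types from this crate's model module, and a hypothetical internal registry host:

use aws_sdk_sagemaker::model::{ContainerDefinition, ImageConfig, RepositoryAccessMode};

fn main() {
    // Registry reachable only from the endpoint's VPC (hypothetical host).
    let image_config = ImageConfig::builder()
        .repository_access_mode(RepositoryAccessMode::Vpc)
        .build();

    let container = ContainerDefinition::builder()
        .image("registry.internal.example.com/my-inference:latest")
        .image_config(image_config)
        .build();

    println!("{:?}", container.image_config);
}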
mode: Option<ContainerMode>
Whether the container hosts a single model or multiple models.
model_data_url: Option<String>
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for Amazon SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.
If you provide a value for this parameter, Amazon SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your IAM user account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide.
If you use a built-in algorithm to create a model, Amazon SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
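For illustration (the image URI, bucket, and key are hypothetical), the artifact path is passed as a single .tar.gz object in S3:

use aws_sdk_sagemaker::model::ContainerDefinition;

fn main() {
    // The artifact path must point to a single gzip-compressed tar archive
    // in the same Region as the model being created.
    let container = ContainerDefinition::builder()
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest")
        .model_data_url("s3://example-bucket/training-output/model.tar.gz")
        .build();

    println!("{:?}", container.model_data_url);
}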
environment: Option<HashMap<String, String>>
The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024. We support up to 16 entries in the map.
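A short sketch of setting environment variables, assuming the builder's per-entry setter for map fields (the variable names and values are illustrative):

use aws_sdk_sagemaker::model::ContainerDefinition;

fn main() {
    // At most 16 entries; each key and value is limited to 1024 characters.
    let container = ContainerDefinition::builder()
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest")
        .environment("SAGEMAKER_PROGRAM", "inference.py")
        .environment("LOG_LEVEL", "info")
        .build();

    println!("{:?}", container.environment);
}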
model_package_name: Option<String>
The name or Amazon Resource Name (ARN) of the model package to use to create the model.
multi_model_config: Option<MultiModelConfig>
Specifies additional configuration for multi-model endpoints.
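Putting the multi-model fields together, a sketch that assumes the ContainerMode, MultiModelConfig, and ModelCacheSetting types from this crate's model module (the image URI and S3 prefix are hypothetical):

use aws_sdk_sagemaker::model::{
    ContainerDefinition, ContainerMode, ModelCacheSetting, MultiModelConfig,
};

fn main() {
    // Multi-model container: Mode is MultiModel, ModelDataUrl points at an
    // S3 prefix holding many model archives, and the optional multi-model
    // configuration enables the model cache.
    let container = ContainerDefinition::builder()
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/multi-model:latest")
        .mode(ContainerMode::MultiModel)
        .model_data_url("s3://example-bucket/multi-model-prefix/")
        .multi_model_config(
            MultiModelConfig::builder()
                .model_cache_setting(ModelCacheSetting::Enabled)
                .build(),
        )
        .build();

    println!("{:?}", container.mode);
}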
Implementations
Creates a new builder-style object to manufacture ContainerDefinition
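A minimal usage sketch of the builder: set only the fields you need and call build(). The resulting value is typically supplied as the PrimaryContainer (or one of the Containers) of a CreateModel request. The image URI and S3 path below are hypothetical.

use aws_sdk_sagemaker::model::{ContainerDefinition, ContainerMode};

fn main() {
    // Set only the fields you need; unset fields stay None.
    let container = ContainerDefinition::builder()
        .container_hostname("primary")
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest")
        .mode(ContainerMode::SingleModel)
        .model_data_url("s3://example-bucket/model.tar.gz")
        .build();

    assert_eq!(
        container.model_data_url.as_deref(),
        Some("s3://example-bucket/model.tar.gz")
    );
}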
Trait Implementations
This method tests for self and other values to be equal, and is used by ==. Read more
This method tests for !=.
Auto Trait Implementations
impl RefUnwindSafe for ContainerDefinition
impl Send for ContainerDefinition
impl Sync for ContainerDefinition
impl Unpin for ContainerDefinition
impl UnwindSafe for ContainerDefinition
Blanket Implementations
Mutably borrows from an owned value. Read more
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more