#[non_exhaustive]
pub struct ContainerDefinition {
pub container_hostname: Option<String>,
pub image: Option<String>,
pub image_config: Option<ImageConfig>,
pub mode: Option<ContainerMode>,
pub model_data_url: Option<String>,
pub model_data_source: Option<ModelDataSource>,
pub additional_model_data_sources: Option<Vec<AdditionalModelDataSource>>,
pub environment: Option<HashMap<String, String>>,
pub model_package_name: Option<String>,
pub inference_specification_name: Option<String>,
pub multi_model_config: Option<MultiModelConfig>,
}
Describes the container, as part of a model definition.
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. It cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.
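Because the struct is non-exhaustive, external code can only destructure it with a trailing .. wildcard. A minimal sketch, assuming the type is available at aws_sdk_sagemaker::types and picking two arbitrary fields:
use aws_sdk_sagemaker::types::ContainerDefinition;

fn summarize(container: &ContainerDefinition) {
    // The trailing `..` is mandatory when matching a #[non_exhaustive] struct
    // defined in another crate; struct literal and update syntax are unavailable.
    let ContainerDefinition { image, model_data_url, .. } = container;
    println!("image: {:?}, model data: {:?}", image, model_data_url);
}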
container_hostname: Option<String>
This parameter is ignored for models that contain only a PrimaryContainer.
When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
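As a hedged sketch of the all-or-nothing hostname rule above (the hostnames and ECR image URIs are invented placeholders, and the builder is assumed to return the value directly because no field is required):
use aws_sdk_sagemaker::types::ContainerDefinition;

// Two containers intended for the same inference pipeline: because one of them
// names a host, both must.
fn pipeline_containers() -> (ContainerDefinition, ContainerDefinition) {
    let preprocess = ContainerDefinition::builder()
        .container_hostname("preprocess-1")
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest")
        .build();
    let predict = ContainerDefinition::builder()
        .container_hostname("predict-1")
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/predict:latest")
        .build();
    (preprocess, predict)
}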
image: Option<String>
The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
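For illustration only, the two accepted path formats look like this (account ID, region, repository, tag, and digest are all invented):
// Tag form: registry/repository[:tag]
const IMAGE_BY_TAG: &str =
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:2.1";
// Digest form: registry/repository[@digest]; the digest shown is truncated, not real.
const IMAGE_BY_DIGEST: &str =
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference@sha256:3b5c...";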
image_config: Option<ImageConfig>
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.
The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
mode: Option<ContainerMode>
Whether the container hosts a single model or multiple models.
model_data_url: Option<String>
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.
If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide.
If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
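A minimal sketch of pointing a container at trained artifacts; the bucket and key are placeholders, and the builder setter is assumed to accept any Into<String> value:
use aws_sdk_sagemaker::types::ContainerDefinition;

fn artifact_container() -> ContainerDefinition {
    ContainerDefinition::builder()
        // Must be a single gzip-compressed tar archive in the same region as the model.
        .model_data_url("s3://my-model-bucket/training-output/model.tar.gz")
        .build()
}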
model_data_source: Option<ModelDataSource>
Specifies the location of ML model data to deploy.
Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
additional_model_data_sources: Option<Vec<AdditionalModelDataSource>>
Data sources that are available to your model in addition to the one that you specify for ModelDataSource when you use the CreateModel action.
environment: Option<HashMap<String, String>>
The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables.
The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.
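A sketch of populating the map through the generated builder, assuming it exposes an environment(key, value) adder that inserts one entry per call; the variable names and values are illustrative and contain no secrets:
use aws_sdk_sagemaker::types::ContainerDefinition;

fn container_with_env() -> ContainerDefinition {
    ContainerDefinition::builder()
        // Each call adds one entry to the Environment map (subject to the 1024-byte
        // per-key/value and 32 KB combined limits described above).
        .environment("SAGEMAKER_PROGRAM", "inference.py")
        .environment("LOG_LEVEL", "info")
        .build()
}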
model_package_name: Option<String>
The name or Amazon Resource Name (ARN) of the model package to use to create the model.
inference_specification_name: Option<String>
The inference specification name in the model package version.
multi_model_config: Option<MultiModelConfig>
Specifies additional configuration for multi-model endpoints.
Implementations
impl ContainerDefinition
pub fn container_hostname(&self) -> Option<&str>
This parameter is ignored for models that contain only a PrimaryContainer.
When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
pub fn image(&self) -> Option<&str>
The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
pub fn image_config(&self) -> Option<&ImageConfig>
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.
The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
pub fn mode(&self) -> Option<&ContainerMode>
Whether the container hosts a single model or multiple models.
pub fn model_data_url(&self) -> Option<&str>
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.
If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide.
If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
pub fn model_data_source(&self) -> Option<&ModelDataSource>
Specifies the location of ML model data to deploy.
Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
pub fn additional_model_data_sources(&self) -> &[AdditionalModelDataSource]
Data sources that are available to your model in addition to the one that you specify for ModelDataSource when you use the CreateModel action.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .additional_model_data_sources.is_none().
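A short sketch of the distinction: the accessor returns an empty slice whether the field was unset or set to an empty list, so the raw Option field tells the two apart.
use aws_sdk_sagemaker::types::{AdditionalModelDataSource, ContainerDefinition};

fn describe_extra_sources(container: &ContainerDefinition) {
    let sources: &[AdditionalModelDataSource] = container.additional_model_data_sources();
    if container.additional_model_data_sources.is_none() {
        println!("no additional model data sources were sent");
    } else {
        println!("{} additional model data source(s)", sources.len());
    }
}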
pub fn environment(&self) -> Option<&HashMap<String, String>>
The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables.
The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.
pub fn model_package_name(&self) -> Option<&str>
The name or Amazon Resource Name (ARN) of the model package to use to create the model.
pub fn inference_specification_name(&self) -> Option<&str>
The inference specification name in the model package version.
pub fn multi_model_config(&self) -> Option<&MultiModelConfig>
Specifies additional configuration for multi-model endpoints.
impl ContainerDefinition
pub fn builder() -> ContainerDefinitionBuilder
Creates a new builder-style object to manufacture ContainerDefinition.
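A minimal end-to-end sketch of the builder; every value below (hostname, image URI, S3 path, environment entry) is a placeholder, and build() is assumed to return the struct directly since no field is required:
use aws_sdk_sagemaker::types::{ContainerDefinition, ContainerMode};

fn example_container() -> ContainerDefinition {
    ContainerDefinition::builder()
        .container_hostname("primary")
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:2.1")
        .mode(ContainerMode::SingleModel)
        .model_data_url("s3://my-model-bucket/training-output/model.tar.gz")
        .environment("LOG_LEVEL", "info")
        .build()
}
In typical use, the resulting value is then passed to a CreateModel request, for example as its primary container.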
Trait Implementations
impl Clone for ContainerDefinition
fn clone(&self) -> ContainerDefinition
fn clone_from(&mut self, source: &Self)
impl Debug for ContainerDefinition
impl PartialEq for ContainerDefinition
impl StructuralPartialEq for ContainerDefinition
Auto Trait Implementations
impl Freeze for ContainerDefinition
impl RefUnwindSafe for ContainerDefinition
impl Send for ContainerDefinition
impl Sync for ContainerDefinition
impl Unpin for ContainerDefinition
impl UnwindSafe for ContainerDefinition
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
impl<T> Paint for T where T: ?Sized
fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the foreground set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.
Example
Set foreground color to white using fg():
use yansi::{Paint, Color};
painted.fg(Color::White);
Set foreground color to white using white():
use yansi::Paint;
painted.white();
fn bright_black(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>
fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the background set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.
Example
Set background color to red using bg():
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set background color to red using on_red():
use yansi::Paint;
painted.on_red();
fn on_primary(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>
fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute value.
This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.
Example
Make text bold using attr():
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold():
use yansi::Paint;
painted.bold();
fn rapid_blink(&self) -> Painted<&T>
fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi Quirk value.
This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.
Example
Enable wrapping using .quirk():
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap():
use yansi::Paint;
painted.wrap();
fn clear(&self) -> Painted<&T>
👎 Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.
fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition. See the crate level docs for more details.
Example
Enable styling painted only when both stdout and stderr are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);