Struct aws_sdk_sagemaker::operation::describe_inference_experiment::DescribeInferenceExperimentOutput
#[non_exhaustive]
pub struct DescribeInferenceExperimentOutput {
pub arn: Option<String>,
pub name: Option<String>,
pub r#type: Option<InferenceExperimentType>,
pub schedule: Option<InferenceExperimentSchedule>,
pub status: Option<InferenceExperimentStatus>,
pub status_reason: Option<String>,
pub description: Option<String>,
pub creation_time: Option<DateTime>,
pub completion_time: Option<DateTime>,
pub last_modified_time: Option<DateTime>,
pub role_arn: Option<String>,
pub endpoint_metadata: Option<EndpointMetadata>,
pub model_variants: Option<Vec<ModelVariantConfigSummary>>,
pub data_storage_config: Option<InferenceExperimentDataStorageConfig>,
pub shadow_mode_config: Option<ShadowModeConfig>,
pub kms_key: Option<String>,
/* private fields */
}
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future versions. Therefore, this struct cannot be constructed in external crates using the traditional Struct { .. } syntax; it cannot be matched against without a wildcard ..; and struct update syntax will not work.
arn: Option<String>
The ARN of the inference experiment being described.
name: Option<String>
The name of the inference experiment.
r#type: Option<InferenceExperimentType>
The type of the inference experiment.
schedule: Option<InferenceExperimentSchedule>
The duration for which the inference experiment ran or will run.
status: Option<InferenceExperimentStatus>
The status of the inference experiment. The following are the possible statuses for an inference experiment:
- Creating - Amazon SageMaker is creating your experiment.
- Created - Amazon SageMaker has finished the creation of your experiment and will begin the experiment at the scheduled time.
- Updating - When you make changes to your experiment, your experiment shows as updating.
- Starting - Amazon SageMaker is beginning your experiment.
- Running - Your experiment is in progress.
- Stopping - Amazon SageMaker is stopping your experiment.
- Completed - Your experiment has completed.
- Cancelled - When you conclude your experiment early using the StopInferenceExperiment API, or if any operation fails with an unexpected error, it shows as cancelled.
status_reason: Option<String>
The error message or client-specified Reason from the StopInferenceExperiment API that explains the status of the inference experiment.
description: Option<String>
The description of the inference experiment.
creation_time: Option<DateTime>
The timestamp at which you created the inference experiment.
completion_time: Option<DateTime>
The timestamp at which the inference experiment was completed.
last_modified_time: Option<DateTime>
The timestamp at which you last modified the inference experiment.
role_arn: Option<String>
The ARN of the IAM role that Amazon SageMaker can assume to access model artifacts and container images, and manage Amazon SageMaker Inference endpoints for model deployment.
endpoint_metadata: Option<EndpointMetadata>
The metadata of the endpoint on which the inference experiment ran.
model_variants: Option<Vec<ModelVariantConfigSummary>>
An array of ModelVariantConfigSummary objects, one for each variant in the inference experiment. Each ModelVariantConfigSummary object in the array describes the infrastructure configuration for deploying the corresponding variant.
data_storage_config: Option<InferenceExperimentDataStorageConfig>
The Amazon S3 location and configuration for storing inference request and response data.
shadow_mode_config: Option<ShadowModeConfig>
The configuration of the ShadowMode inference experiment type, which shows the production variant that takes all the inference requests, and the shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant, it also shows the percentage of requests that Amazon SageMaker replicates.
kms_key: Option<String>
The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint. For more information, see CreateInferenceExperiment.
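Because the struct is #[non_exhaustive], it cannot be constructed with struct literal syntax outside the crate, and any pattern match against it must include a .. wildcard, as noted above. A minimal sketch of destructuring the public fields of an output value (the helper function and fallback string are illustrative, not part of the SDK):

```rust
use aws_sdk_sagemaker::operation::describe_inference_experiment::DescribeInferenceExperimentOutput;

fn print_summary(output: &DescribeInferenceExperimentOutput) {
    // The trailing `..` is mandatory because the struct is #[non_exhaustive].
    let DescribeInferenceExperimentOutput { name, status, .. } = output;
    println!(
        "experiment {} has status {:?}",
        name.as_deref().unwrap_or("<unnamed>"),
        status
    );
}
```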
Implementations
impl DescribeInferenceExperimentOutput
pub fn r#type(&self) -> Option<&InferenceExperimentType>
The type of the inference experiment.
pub fn schedule(&self) -> Option<&InferenceExperimentSchedule>
The duration for which the inference experiment ran or will run.
pub fn status(&self) -> Option<&InferenceExperimentStatus>
The status of the inference experiment. The following are the possible statuses for an inference experiment:
- Creating - Amazon SageMaker is creating your experiment.
- Created - Amazon SageMaker has finished the creation of your experiment and will begin the experiment at the scheduled time.
- Updating - When you make changes to your experiment, your experiment shows as updating.
- Starting - Amazon SageMaker is beginning your experiment.
- Running - Your experiment is in progress.
- Stopping - Amazon SageMaker is stopping your experiment.
- Completed - Your experiment has completed.
- Cancelled - When you conclude your experiment early using the StopInferenceExperiment API, or if any operation fails with an unexpected error, it shows as cancelled.
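As a sketch, the returned status can be branched on with a match; the wildcard arm is kept because the SDK's generated enums are non-exhaustive and may gain variants (the types module path is an assumption; older SDK releases expose these enums under model):

```rust
use aws_sdk_sagemaker::types::InferenceExperimentStatus;

/// True once the experiment has reached a terminal state.
fn is_terminal(status: &InferenceExperimentStatus) -> bool {
    match status {
        InferenceExperimentStatus::Completed | InferenceExperimentStatus::Cancelled => true,
        // Creating, Created, Updating, Starting, Running, Stopping, and any
        // variants added in future SDK versions fall through here.
        _ => false,
    }
}
```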
pub fn status_reason(&self) -> Option<&str>
The error message or client-specified Reason from the StopInferenceExperiment API that explains the status of the inference experiment.
pub fn description(&self) -> Option<&str>
The description of the inference experiment.
pub fn creation_time(&self) -> Option<&DateTime>
The timestamp at which you created the inference experiment.
pub fn completion_time(&self) -> Option<&DateTime>
The timestamp at which the inference experiment was completed.
pub fn last_modified_time(&self) -> Option<&DateTime>
The timestamp at which you last modified the inference experiment.
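The timestamp fields are aws_smithy_types::DateTime values. A minimal sketch that derives the experiment's wall-clock duration from them, assuming only the secs() accessor (seconds since the Unix epoch):

```rust
use aws_sdk_sagemaker::operation::describe_inference_experiment::DescribeInferenceExperimentOutput;

/// Duration of the experiment in whole seconds, if both timestamps are present.
fn duration_secs(output: &DescribeInferenceExperimentOutput) -> Option<i64> {
    let start = output.creation_time()?;
    let end = output.completion_time()?;
    Some(end.secs() - start.secs())
}
```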
pub fn role_arn(&self) -> Option<&str>
The ARN of the IAM role that Amazon SageMaker can assume to access model artifacts and container images, and manage Amazon SageMaker Inference endpoints for model deployment.
pub fn endpoint_metadata(&self) -> Option<&EndpointMetadata>
The metadata of the endpoint on which the inference experiment ran.
pub fn model_variants(&self) -> Option<&[ModelVariantConfigSummary]>
An array of ModelVariantConfigSummary objects, one for each variant in the inference experiment. Each ModelVariantConfigSummary object in the array describes the infrastructure configuration for deploying the corresponding variant.
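A small sketch collecting the variant names from the returned summaries; the variant_name() accessor on ModelVariantConfigSummary is assumed from the API shape rather than shown on this page:

```rust
use aws_sdk_sagemaker::operation::describe_inference_experiment::DescribeInferenceExperimentOutput;

fn variant_names(output: &DescribeInferenceExperimentOutput) -> Vec<&str> {
    output
        .model_variants()                 // Option<&[ModelVariantConfigSummary]>
        .unwrap_or_default()              // treat a missing list as empty
        .iter()
        .filter_map(|v| v.variant_name()) // assumed accessor returning Option<&str>
        .collect()
}
```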
pub fn data_storage_config(&self) -> Option<&InferenceExperimentDataStorageConfig>
The Amazon S3 location and configuration for storing inference request and response data.
pub fn shadow_mode_config(&self) -> Option<&ShadowModeConfig>
The configuration of the ShadowMode inference experiment type, which shows the production variant that takes all the inference requests, and the shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant, it also shows the percentage of requests that Amazon SageMaker replicates.
pub fn kms_key(&self) -> Option<&str>
The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint. For more information, see CreateInferenceExperiment.
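In practice this output is produced by the client's fluent DescribeInferenceExperiment call rather than built by hand. A minimal sketch, assuming an already-configured Client and a hypothetical experiment name; converting the send() error into aws_sdk_sagemaker::Error via ? is the crate's usual pattern:

```rust
use aws_sdk_sagemaker::Client;

async fn show_status(client: &Client) -> Result<(), aws_sdk_sagemaker::Error> {
    let output = client
        .describe_inference_experiment()
        .name("my-shadow-test") // hypothetical experiment name
        .send()
        .await?;

    println!("status = {:?}", output.status());
    println!("schedule = {:?}", output.schedule());
    Ok(())
}
```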
impl DescribeInferenceExperimentOutput
pub fn builder() -> DescribeInferenceExperimentOutputBuilder
Creates a new builder-style object to manufacture DescribeInferenceExperimentOutput.
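Since every field is optional, the builder is handy for stubbing an output in unit tests. A minimal sketch, assuming build() returns the struct directly (as it does when no field is enforced as required) and that types is the module path for the status enum:

```rust
use aws_sdk_sagemaker::operation::describe_inference_experiment::DescribeInferenceExperimentOutput;
use aws_sdk_sagemaker::types::InferenceExperimentStatus;

fn sample_output() -> DescribeInferenceExperimentOutput {
    DescribeInferenceExperimentOutput::builder()
        .name("my-shadow-test") // hypothetical fixture name
        .status(InferenceExperimentStatus::Completed)
        .build()
}
```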
Trait Implementations
impl Clone for DescribeInferenceExperimentOutput
fn clone(&self) -> DescribeInferenceExperimentOutput
Returns a copy of the value.
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl PartialEq<DescribeInferenceExperimentOutput> for DescribeInferenceExperimentOutput
fn eq(&self, other: &DescribeInferenceExperimentOutput) -> bool
This method tests for self and other values to be equal, and is used by ==.
impl RequestId for DescribeInferenceExperimentOutput
fn request_id(&self) -> Option<&str>
Returns the request ID, or None if the service could not be reached.
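The request ID is what AWS Support asks for when investigating a failed or surprising call. A minimal sketch, assuming the RequestId trait is re-exported at the crate's operation module (it originates in aws_types::request_id):

```rust
use aws_sdk_sagemaker::operation::RequestId;
use aws_sdk_sagemaker::operation::describe_inference_experiment::DescribeInferenceExperimentOutput;

fn log_request_id(output: &DescribeInferenceExperimentOutput) {
    match output.request_id() {
        Some(id) => println!("DescribeInferenceExperiment request ID: {id}"),
        None => println!("no request ID available (service not reached)"),
    }
}
```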