#[non_exhaustive]
pub struct GetTrainedModelInferenceJobOutput {
pub create_time: DateTime,
pub update_time: DateTime,
pub trained_model_inference_job_arn: String,
pub configured_model_algorithm_association_arn: Option<String>,
pub name: String,
pub status: TrainedModelInferenceJobStatus,
pub trained_model_arn: String,
pub trained_model_version_identifier: Option<String>,
pub resource_config: Option<InferenceResourceConfig>,
pub output_configuration: Option<InferenceOutputConfiguration>,
pub membership_identifier: String,
pub data_source: Option<ModelInferenceDataSource>,
pub container_execution_parameters: Option<InferenceContainerExecutionParameters>,
pub status_details: Option<StatusDetails>,
pub description: Option<String>,
pub inference_container_image_digest: Option<String>,
pub environment: Option<HashMap<String, String>>,
pub kms_key_arn: Option<String>,
pub metrics_status: Option<MetricsStatus>,
pub metrics_status_details: Option<String>,
pub logs_status: Option<LogsStatus>,
pub logs_status_details: Option<String>,
pub tags: Option<HashMap<String, String>>,
/* private fields */
}
Fields (Non-exhaustive)§
This struct is marked as non-exhaustive: it cannot be constructed using the Struct { .. } syntax outside of the defining crate, it cannot be matched against without a wildcard .., and struct update syntax will not work.
create_time: DateTime
The time at which the trained model inference job was created.
update_time: DateTime
The most recent time at which the trained model inference job was updated.
trained_model_inference_job_arn: String
The Amazon Resource Name (ARN) of the trained model inference job.
configured_model_algorithm_association_arn: Option<String>
The Amazon Resource Name (ARN) of the configured model algorithm association that was used for the trained model inference job.
name: String
The name of the trained model inference job.
status: TrainedModelInferenceJobStatus
The status of the trained model inference job.
trained_model_arn: String
The Amazon Resource Name (ARN) for the trained model that was used for the trained model inference job.
trained_model_version_identifier: Option<String>
The version identifier of the trained model used for this inference job. This identifies the specific version of the trained model that was used to generate the inference results.
resource_config: Option<InferenceResourceConfig>
The resource configuration information for the trained model inference job.
output_configuration: Option<InferenceOutputConfiguration>
The output configuration information for the trained model inference job.
membership_identifier: String
The membership ID of the membership that contains the trained model inference job.
data_source: Option<ModelInferenceDataSource>
The data source that was used for the trained model inference job.
container_execution_parameters: Option<InferenceContainerExecutionParameters>
The execution parameters for the model inference job container.
status_details: Option<StatusDetails>
Details about the status of a resource.
description: Option<String>
The description of the trained model inference job.
inference_container_image_digest: Option<String>
Information about the inference container image.
environment: Option<HashMap<String, String>>
The environment variables to set in the Docker container.
kms_key_arn: Option<String>
The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the ML inference job and associated data.
metrics_status: Option<MetricsStatus>
The metrics status for the trained model inference job.
metrics_status_details: Option<String>
Details about the metrics status for the trained model inference job.
logs_status: Option<LogsStatus>
The logs status for the trained model inference job.
logs_status_details: Option<String>
Details about the logs status for the trained model inference job.
tags: Option<HashMap<String, String>>
The optional metadata that you applied to the resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource: 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for keys; this prefix is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and it will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
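Most of these values are read through the accessor methods listed under Implementations below; required fields come back as plain references and optional fields as Option<&T>. A minimal usage sketch, assuming the usual fluent-client shape of the AWS SDK for Rust (a get_trained_model_inference_job() builder taking membership_identifier and trained_model_inference_job_arn) and placeholder identifiers:
use aws_sdk_cleanroomsml::Client;

// Sketch only: the operation call shape and its two setters are assumed from
// standard AWS SDK for Rust conventions; the ARN and membership ID are placeholders.
async fn print_job_summary(client: &Client) -> Result<(), aws_sdk_cleanroomsml::Error> {
    let output = client
        .get_trained_model_inference_job()
        .membership_identifier("membership-id")
        .trained_model_inference_job_arn("arn:aws:...")
        .send()
        .await?;

    // Required fields are plain references.
    println!("job ARN: {}", output.trained_model_inference_job_arn());
    println!("status:  {:?}", output.status());

    // Optional fields are Option<&T>; handle absence explicitly.
    if let Some(description) = output.description() {
        println!("description: {description}");
    }
    if let Some(kms_key_arn) = output.kms_key_arn() {
        println!("KMS key: {kms_key_arn}");
    }
    Ok(())
}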
Implementations§
impl GetTrainedModelInferenceJobOutput
pub fn create_time(&self) -> &DateTime
The time at which the trained model inference job was created.
pub fn update_time(&self) -> &DateTime
The most recent time at which the trained model inference job was updated.
pub fn trained_model_inference_job_arn(&self) -> &str
The Amazon Resource Name (ARN) of the trained model inference job.
pub fn configured_model_algorithm_association_arn(&self) -> Option<&str>
The Amazon Resource Name (ARN) of the configured model algorithm association that was used for the trained model inference job.
pub fn status(&self) -> &TrainedModelInferenceJobStatus
The status of the trained model inference job.
pub fn trained_model_arn(&self) -> &str
The Amazon Resource Name (ARN) for the trained model that was used for the trained model inference job.
pub fn trained_model_version_identifier(&self) -> Option<&str>
The version identifier of the trained model used for this inference job. This identifies the specific version of the trained model that was used to generate the inference results.
pub fn resource_config(&self) -> Option<&InferenceResourceConfig>
The resource configuration information for the trained model inference job.
pub fn output_configuration(&self) -> Option<&InferenceOutputConfiguration>
The output configuration information for the trained model inference job.
pub fn membership_identifier(&self) -> &str
The membership ID of the membership that contains the trained model inference job.
pub fn data_source(&self) -> Option<&ModelInferenceDataSource>
The data source that was used for the trained model inference job.
pub fn container_execution_parameters(&self) -> Option<&InferenceContainerExecutionParameters>
The execution parameters for the model inference job container.
pub fn status_details(&self) -> Option<&StatusDetails>
Details about the status of a resource.
pub fn description(&self) -> Option<&str>
The description of the trained model inference job.
pub fn inference_container_image_digest(&self) -> Option<&str>
Information about the inference container image.
pub fn environment(&self) -> Option<&HashMap<String, String>>
The environment variables to set in the Docker container.
pub fn kms_key_arn(&self) -> Option<&str>
The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the ML inference job and associated data.
pub fn metrics_status(&self) -> Option<&MetricsStatus>
The metrics status for the trained model inference job.
pub fn metrics_status_details(&self) -> Option<&str>
Details about the metrics status for the trained model inference job.
pub fn logs_status(&self) -> Option<&LogsStatus>
The logs status for the trained model inference job.
pub fn logs_status_details(&self) -> Option<&str>
Details about the logs status for the trained model inference job.
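The metrics and logs statuses above are reported separately from the overall job status, each with an optional detail string. A short sketch of surfacing them, using only accessors documented on this page (the output type's module path is assumed from the SDK's usual operation-module layout):
use aws_sdk_cleanroomsml::operation::get_trained_model_inference_job::GetTrainedModelInferenceJobOutput;

// Sketch: report whether metrics and logs were published, plus any detail strings.
// MetricsStatus/LogsStatus variants are not listed on this page, so Debug formatting is used.
fn report_publication_status(output: &GetTrainedModelInferenceJobOutput) {
    if let Some(metrics) = output.metrics_status() {
        let details = output.metrics_status_details().unwrap_or("no details");
        println!("metrics status: {metrics:?} ({details})");
    }
    if let Some(logs) = output.logs_status() {
        let details = output.logs_status_details().unwrap_or("no details");
        println!("logs status: {logs:?} ({details})");
    }
}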
pub fn tags(&self) -> Option<&HashMap<String, String>>
The optional metadata that you applied to the resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. A sketch of reading this map follows the restrictions below.
The following basic restrictions apply to tags:
- Maximum number of tags per resource: 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for keys; this prefix is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and it will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
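As noted above, a sketch of reading the tag map; the environment map returned by environment() has the same Option<&HashMap<String, String>> shape and can be handled the same way (the output type's module path is assumed from the SDK's usual layout):
use aws_sdk_cleanroomsml::operation::get_trained_model_inference_job::GetTrainedModelInferenceJobOutput;

// Sketch: both tags() and environment() return Option<&HashMap<String, String>>,
// so an absent map and an empty map should be treated the same by callers.
fn print_tags_and_environment(output: &GetTrainedModelInferenceJobOutput) {
    if let Some(tags) = output.tags() {
        for (key, value) in tags {
            println!("tag {key} = {value}");
        }
    }
    let env_count = output.environment().map(|vars| vars.len()).unwrap_or(0);
    println!("{env_count} environment variable(s) set for the inference container");
}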
impl GetTrainedModelInferenceJobOutput
pub fn builder() -> GetTrainedModelInferenceJobOutputBuilder
Creates a new builder-style object to manufacture GetTrainedModelInferenceJobOutput.
Trait Implementations§
impl Clone for GetTrainedModelInferenceJobOutput
fn clone(&self) -> GetTrainedModelInferenceJobOutput
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl PartialEq for GetTrainedModelInferenceJobOutput
fn eq(&self, other: &GetTrainedModelInferenceJobOutput) -> bool
This method tests for self and other values to be equal, and is used by ==.
impl RequestId for GetTrainedModelInferenceJobOutput
fn request_id(&self) -> Option<&str>
Returns the request ID, or None if the service could not be reached.
impl StructuralPartialEq for GetTrainedModelInferenceJobOutput
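Because the output implements RequestId, the service request ID can be attached to logs when troubleshooting. A small sketch, assuming the trait is brought into scope via the crate's usual operation::RequestId re-export (the import paths are assumptions, not confirmed by this page):
use aws_sdk_cleanroomsml::operation::RequestId;
use aws_sdk_cleanroomsml::operation::get_trained_model_inference_job::GetTrainedModelInferenceJobOutput;

// Sketch: surface the request ID alongside job results for support cases.
fn log_request_id(output: &GetTrainedModelInferenceJobOutput) {
    match output.request_id() {
        Some(id) => println!("request id: {id}"),
        None => println!("request id unavailable (service not reached)"),
    }
}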
Auto Trait Implementations§
impl Freeze for GetTrainedModelInferenceJobOutput
impl RefUnwindSafe for GetTrainedModelInferenceJobOutput
impl Send for GetTrainedModelInferenceJobOutput
impl Sync for GetTrainedModelInferenceJobOutput
impl Unpin for GetTrainedModelInferenceJobOutput
impl UnwindSafe for GetTrainedModelInferenceJobOutput
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; converts self into a Right variant of Either<Self, Self> otherwise.
impl<T> Paint for T where T: ?Sized
fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the foreground set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.
Example
Set foreground color to white using fg():
use yansi::{Paint, Color};
painted.fg(Color::White);
Set foreground color to white using white():
use yansi::Paint;
painted.white();
fn bright_black(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>
fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the background set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.
Example
Set background color to red using bg():
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set background color to red using on_red():
use yansi::Paint;
painted.on_red();
fn on_primary(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>
fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute value.
This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.
Example
Make text bold using attr():
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold():
use yansi::Paint;
painted.bold();
fn rapid_blink(&self) -> Painted<&T>
fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi Quirk value.
This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.
Example
Enable wrapping using quirk():
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap():
use yansi::Paint;
painted.wrap();
fn clear(&self) -> Painted<&T>
👎 Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.
fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition.
See the crate level docs for more details.
Example
Enable styling painted only when both stdout and stderr are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);