#[non_exhaustive]
pub struct AutoScalingGroupRecommendation {
    pub account_id: Option<String>,
    pub auto_scaling_group_arn: Option<String>,
    pub auto_scaling_group_name: Option<String>,
    pub finding: Option<Finding>,
    pub utilization_metrics: Option<Vec<UtilizationMetric>>,
    pub look_back_period_in_days: f64,
    pub current_configuration: Option<AutoScalingGroupConfiguration>,
    pub recommendation_options: Option<Vec<AutoScalingGroupRecommendationOption>>,
    pub last_refresh_timestamp: Option<DateTime>,
    pub current_performance_risk: Option<CurrentPerformanceRisk>,
    pub effective_recommendation_preferences: Option<EffectiveRecommendationPreferences>,
    pub inferred_workload_types: Option<Vec<InferredWorkloadType>>,
    pub current_instance_gpu_info: Option<GpuInfo>,
}

Describes an Auto Scaling group recommendation.
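
As context for the fields below, here is a minimal sketch (not part of the generated documentation) of how values of this type are usually obtained: calling the Compute Optimizer client's GetAutoScalingGroupRecommendations operation and reading a few fields. The fluent method and output accessor names follow the SDK's usual conventions and are assumptions here; error handling is elided.

use aws_sdk_computeoptimizer::Client;

async fn print_recommendations(client: &Client) -> Result<(), aws_sdk_computeoptimizer::Error> {
    // Assumed fluent method name, mirroring the GetAutoScalingGroupRecommendations API.
    let output = client
        .get_auto_scaling_group_recommendations()
        .send()
        .await?;

    for rec in output.auto_scaling_group_recommendations() {
        // Optional scalar fields are exposed through Option-returning accessors.
        println!(
            "{}: finding = {:?}, analyzed over {} days",
            rec.auto_scaling_group_name().unwrap_or("<unnamed>"),
            rec.finding(),
            rec.look_back_period_in_days(),
        );
    }
    Ok(())
}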

Fields (Non-exhaustive)

This struct is marked as non-exhaustive. Non-exhaustive structs may have additional fields added in the future. As a result, they cannot be constructed in external crates using the traditional Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.
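
A small illustrative sketch of those rules from outside the defining crate (the types module path is an assumption about where this struct is re-exported): values are created through the builder documented below, and destructuring requires a wildcard .. in the pattern.

use aws_sdk_computeoptimizer::types::AutoScalingGroupRecommendation;

fn account_of(rec: &AutoScalingGroupRecommendation) -> Option<&str> {
    // A trailing `..` is required because more fields may be added later.
    let AutoScalingGroupRecommendation { account_id, .. } = rec;
    account_id.as_deref()
}
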
account_id: Option<String>

The Amazon Web Services account ID of the Auto Scaling group.

auto_scaling_group_arn: Option<String>

The Amazon Resource Name (ARN) of the Auto Scaling group.

auto_scaling_group_name: Option<String>

The name of the Auto Scaling group.

finding: Option<Finding>

The finding classification of the Auto Scaling group.

Findings for Auto Scaling groups include:

  • NotOptimized - An Auto Scaling group is considered not optimized when Compute Optimizer identifies a recommendation that can provide better performance for your workload.

  • Optimized - An Auto Scaling group is considered optimized when Compute Optimizer determines that the group is correctly provisioned to run your workload based on the chosen instance type. For optimized resources, Compute Optimizer might recommend a new generation instance type.

utilization_metrics: Option<Vec<UtilizationMetric>>

An array of objects that describe the utilization metrics of the Auto Scaling group.

look_back_period_in_days: f64

The number of days for which utilization metrics were analyzed for the Auto Scaling group.

current_configuration: Option<AutoScalingGroupConfiguration>

An object that describes the current configuration of the Auto Scaling group.

recommendation_options: Option<Vec<AutoScalingGroupRecommendationOption>>

An array of objects that describe the recommendation options for the Auto Scaling group.

last_refresh_timestamp: Option<DateTime>

The timestamp of when the Auto Scaling group recommendation was last generated.

current_performance_risk: Option<CurrentPerformanceRisk>

The risk of the current Auto Scaling group not meeting the performance needs of its workloads. The higher the risk, the more likely the current Auto Scaling group configuration has insufficient capacity and cannot meet workload requirements.

effective_recommendation_preferences: Option<EffectiveRecommendationPreferences>

An object that describes the effective recommendation preferences for the Auto Scaling group.

inferred_workload_types: Option<Vec<InferredWorkloadType>>

The applications that might be running on the instances in the Auto Scaling group as inferred by Compute Optimizer.

Compute Optimizer can infer if one of the following applications might be running on the instances:

  • AmazonEmr - Infers that Amazon EMR might be running on the instances.

  • ApacheCassandra - Infers that Apache Cassandra might be running on the instances.

  • ApacheHadoop - Infers that Apache Hadoop might be running on the instances.

  • Memcached - Infers that Memcached might be running on the instances.

  • NGINX - Infers that NGINX might be running on the instances.

  • PostgreSql - Infers that PostgreSQL might be running on the instances.

  • Redis - Infers that Redis might be running on the instances.

  • Kafka - Infers that Kafka might be running on the instances.

  • SQLServer - Infers that SQL Server might be running on the instances.

current_instance_gpu_info: Option<GpuInfo>

Describes the GPU accelerator settings for the current instance type of the Auto Scaling group.

Implementations

impl AutoScalingGroupRecommendation

pub fn account_id(&self) -> Option<&str>

The Amazon Web Services account ID of the Auto Scaling group.

pub fn auto_scaling_group_arn(&self) -> Option<&str>

The Amazon Resource Name (ARN) of the Auto Scaling group.

pub fn auto_scaling_group_name(&self) -> Option<&str>

The name of the Auto Scaling group.

pub fn finding(&self) -> Option<&Finding>

The finding classification of the Auto Scaling group.

Findings for Auto Scaling groups include:

  • NotOptimized - An Auto Scaling group is considered not optimized when Compute Optimizer identifies a recommendation that can provide better performance for your workload.

  • Optimized - An Auto Scaling group is considered optimized when Compute Optimizer determines that the group is correctly provisioned to run your workload based on the chosen instance type. For optimized resources, Compute Optimizer might recommend a new generation instance type.
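
As a sketch of consuming this classification (the Finding variant names here are assumed to mirror the string values listed above):

use aws_sdk_computeoptimizer::types::{AutoScalingGroupRecommendation, Finding};

fn describe_finding(rec: &AutoScalingGroupRecommendation) -> &'static str {
    match rec.finding() {
        Some(Finding::Optimized) => "correctly provisioned",
        Some(Finding::NotOptimized) => "a better configuration is available",
        // The enum is non-exhaustive, so a catch-all arm is required.
        Some(_) => "other finding",
        None => "no finding returned",
    }
}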

pub fn utilization_metrics(&self) -> &[UtilizationMetric]

An array of objects that describe the utilization metrics of the Auto Scaling group.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .utilization_metrics.is_none().
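
A short sketch of that distinction (illustrative only): the accessor falls back to an empty slice, so inspecting the raw field is how you tell "never sent" apart from "sent but empty".

use aws_sdk_computeoptimizer::types::AutoScalingGroupRecommendation;

fn describe_metrics(rec: &AutoScalingGroupRecommendation) {
    if rec.utilization_metrics.is_none() {
        // The service returned no value at all for this field.
        println!("no utilization metrics were returned");
    }
    // The accessor always yields a slice, defaulting to empty.
    for metric in rec.utilization_metrics() {
        println!("{metric:?}");
    }
}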

pub fn look_back_period_in_days(&self) -> f64

The number of days for which utilization metrics were analyzed for the Auto Scaling group.

pub fn current_configuration(&self) -> Option<&AutoScalingGroupConfiguration>

An object that describes the current configuration of the Auto Scaling group.

pub fn recommendation_options(&self) -> &[AutoScalingGroupRecommendationOption]

An array of objects that describe the recommendation options for the Auto Scaling group.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .recommendation_options.is_none().

pub fn last_refresh_timestamp(&self) -> Option<&DateTime>

The timestamp of when the Auto Scaling group recommendation was last generated.

pub fn current_performance_risk(&self) -> Option<&CurrentPerformanceRisk>

The risk of the current Auto Scaling group not meeting the performance needs of its workloads. The higher the risk, the more likely the current Auto Scaling group configuration has insufficient capacity and cannot meet workload requirements.

pub fn effective_recommendation_preferences(&self) -> Option<&EffectiveRecommendationPreferences>

An object that describes the effective recommendation preferences for the Auto Scaling group.

pub fn inferred_workload_types(&self) -> &[InferredWorkloadType]

The applications that might be running on the instances in the Auto Scaling group as inferred by Compute Optimizer.

Compute Optimizer can infer if one of the following applications might be running on the instances:

  • AmazonEmr - Infers that Amazon EMR might be running on the instances.

  • ApacheCassandra - Infers that Apache Cassandra might be running on the instances.

  • ApacheHadoop - Infers that Apache Hadoop might be running on the instances.

  • Memcached - Infers that Memcached might be running on the instances.

  • NGINX - Infers that NGINX might be running on the instances.

  • PostgreSql - Infers that PostgreSQL might be running on the instances.

  • Redis - Infers that Redis might be running on the instances.

  • Kafka - Infers that Kafka might be running on the instances.

  • SQLServer - Infers that SQL Server might be running on the instances.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .inferred_workload_types.is_none().
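
A brief sketch of scanning this list for a particular workload (the InferredWorkloadType::Kafka variant name is an assumption based on the values above):

use aws_sdk_computeoptimizer::types::{AutoScalingGroupRecommendation, InferredWorkloadType};

fn runs_kafka(rec: &AutoScalingGroupRecommendation) -> bool {
    // The accessor defaults to an empty slice when the field was not sent.
    rec.inferred_workload_types()
        .iter()
        .any(|w| *w == InferredWorkloadType::Kafka)
}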

pub fn current_instance_gpu_info(&self) -> Option<&GpuInfo>

Describes the GPU accelerator settings for the current instance type of the Auto Scaling group.

impl AutoScalingGroupRecommendation

pub fn builder() -> AutoScalingGroupRecommendationBuilder

Creates a new builder-style object to manufacture AutoScalingGroupRecommendation.
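
A minimal sketch of using the builder (the field values are placeholders, and build() is assumed to return the struct directly since none of the members are required):

use aws_sdk_computeoptimizer::types::{AutoScalingGroupRecommendation, Finding};

fn build_example() -> AutoScalingGroupRecommendation {
    let rec = AutoScalingGroupRecommendation::builder()
        .account_id("111122223333")        // placeholder account ID
        .auto_scaling_group_name("my-asg") // placeholder group name
        .finding(Finding::Optimized)
        .look_back_period_in_days(14.0)
        .build();
    assert_eq!(rec.account_id(), Some("111122223333"));
    rec
}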

Trait Implementations

impl Clone for AutoScalingGroupRecommendation

fn clone(&self) -> AutoScalingGroupRecommendation

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for AutoScalingGroupRecommendation

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl PartialEq for AutoScalingGroupRecommendation

fn eq(&self, other: &AutoScalingGroupRecommendation) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for AutoScalingGroupRecommendation
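
A tiny sketch exercising these derived traits (the builder call and field value are illustrative placeholders; see the builder note above for the build() assumption):

use aws_sdk_computeoptimizer::types::AutoScalingGroupRecommendation;

fn derived_traits_demo() {
    let a = AutoScalingGroupRecommendation::builder()
        .account_id("111122223333")
        .build();
    let b = a.clone();  // Clone
    assert_eq!(a, b);   // PartialEq
    println!("{a:?}");  // Debug
}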

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.