Module types

Data structures used by operation inputs/outputs.

Modules§

builders
Builders
error
Error types that Auto Scaling can respond with.

Structs§

AcceleratorCountRequest

Specifies the minimum and maximum for the AcceleratorCount object when you specify InstanceRequirements for an Auto Scaling group.

AcceleratorTotalMemoryMiBRequest

Specifies the minimum and maximum for the AcceleratorTotalMemoryMiB object when you specify InstanceRequirements for an Auto Scaling group.

Activity

Describes scaling activity, which is a long-running process that represents a change to your Auto Scaling group, such as changing its size or replacing an instance.

AdjustmentType

Describes a policy adjustment type.

Alarm

Describes an alarm.

AlarmSpecification

Specifies the CloudWatch alarm specification to use in an instance refresh.

AutoScalingGroup

Describes an Auto Scaling group.

AutoScalingInstanceDetails

Describes an EC2 instance associated with an Auto Scaling group.

AvailabilityZoneDistribution

Describes an Availability Zone distribution.

AvailabilityZoneImpairmentPolicy

Describes an Availability Zone impairment policy.

BaselineEbsBandwidthMbpsRequest

Specifies the minimum and maximum for the BaselineEbsBandwidthMbps object when you specify InstanceRequirements for an Auto Scaling group.

BaselinePerformanceFactorsRequest

The baseline performance to consider, using an instance family as a baseline reference. The instance family establishes the lowest acceptable level of performance. Auto Scaling uses this baseline to guide instance type selection, but there is no guarantee that the selected instance types will always exceed the baseline for every application.

Currently, this parameter only supports CPU performance as a baseline performance factor. For example, specifying c6i uses the CPU performance of the c6i family as the baseline reference.
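
A sketch of the c6i example, composed from the related CpuPerformanceFactorRequest and PerformanceFactorReferenceRequest entries below; the builder and field names are assumed to follow the SDK's usual snake_case mapping of the documented fields and are not verified here.

```rust
use aws_sdk_autoscaling::types::{
    BaselinePerformanceFactorsRequest, CpuPerformanceFactorRequest,
    PerformanceFactorReferenceRequest,
};

fn main() {
    // Use the c6i family as the CPU performance baseline (field names assumed).
    let _baseline = BaselinePerformanceFactorsRequest::builder()
        .cpu(
            CpuPerformanceFactorRequest::builder()
                .references(
                    PerformanceFactorReferenceRequest::builder()
                        .instance_family("c6i")
                        .build(),
                )
                .build(),
        )
        .build();
    // The result would then be set on InstanceRequirements via its
    // baseline_performance_factors() builder method (name assumed).
}
```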

BlockDeviceMapping

Describes a block device mapping.

CapacityForecast

A GetPredictiveScalingForecast call returns the capacity forecast for a predictive scaling policy. This structure includes the data points for that capacity forecast, along with the timestamps of those data points.

CapacityReservationSpecification

Describes the Capacity Reservation preference and targeting options. If you specify open or none for CapacityReservationPreference, do not specify a CapacityReservationTarget.

CapacityReservationTarget

The target for the Capacity Reservation. Specify Capacity Reservations IDs or Capacity Reservation resource group ARNs.

CpuPerformanceFactorRequest

The CPU performance to consider, using an instance family as the baseline reference.

CustomizedMetricSpecification

Represents a CloudWatch metric of your choosing for a target tracking scaling policy to use with Amazon EC2 Auto Scaling.

To create your customized metric specification:

  • Add values for each required property from CloudWatch. You can use an existing metric, or a new metric that you create. To use your own metric, you must first publish the metric to CloudWatch. For more information, see Publish custom metrics in the Amazon CloudWatch User Guide.

  • Choose a metric that changes proportionally with capacity. The value of the metric should increase or decrease in inverse proportion to the number of capacity units. That is, the value of the metric should decrease when capacity increases.

For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts.

Each individual service provides information about the metrics, namespace, and dimensions they use. For more information, see Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide.
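
A minimal sketch of the first bullet, using a hypothetical custom metric published under a `MyApp` namespace; the builder method names are assumed to mirror the fields above in snake_case.

```rust
use aws_sdk_autoscaling::types::{CustomizedMetricSpecification, MetricStatistic};

fn main() {
    // Hypothetical metric that decreases as capacity increases, already
    // published to CloudWatch under the `MyApp` namespace.
    let _spec = CustomizedMetricSpecification::builder()
        .namespace("MyApp")
        .metric_name("BacklogPerInstance")
        .statistic(MetricStatistic::Average)
        .build();
    // The result would then be passed to a TargetTrackingConfiguration via its
    // customized_metric_specification() builder method (name assumed).
}
```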

DesiredConfiguration

Describes the desired configuration for an instance refresh.

If you specify a desired configuration, you must specify either a LaunchTemplate or a MixedInstancesPolicy.

Ebs

Describes information used to set up an Amazon EBS volume specified in a block device mapping.

EnabledMetric

Describes an enabled Auto Scaling group metric.

FailedScheduledUpdateGroupActionRequest

Describes a scheduled action that could not be created, updated, or deleted.

Filter

Describes a filter that is used to return a more specific list of results from a describe operation.

If you specify multiple filters, the filters are automatically logically joined with an AND, and the request returns only the results that match all of the specified filters.

For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide.
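
A sketch of filtering a describe call by tag, assuming an already-configured Client and a recent SDK release in which list accessors return slices; the tag key and value are hypothetical, and the `tag:<key>` filter name comes from the tagging documentation referenced above.

```rust
use aws_sdk_autoscaling::{types::Filter, Client};

async fn groups_for_team(client: &Client) -> Result<(), aws_sdk_autoscaling::Error> {
    let resp = client
        .describe_auto_scaling_groups()
        // Multiple filters() calls would be ANDed together, as described above.
        .filters(Filter::builder().name("tag:team").values("data-platform").build())
        .send()
        .await?;
    for group in resp.auto_scaling_groups() {
        println!("{:?}", group.auto_scaling_group_name());
    }
    Ok(())
}
```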

Instance

Describes an EC2 instance.

InstanceCollection

Contains details about a collection of instances launched in the Auto Scaling group.

InstanceLifecyclePolicy

Defines the lifecycle policy for instances in an Auto Scaling group. This policy controls instance behavior when lifecycles transition and operations fail. Use lifecycle policies to ensure graceful shutdown for stateful workloads or applications requiring extended draining periods.

InstanceMaintenancePolicy

Describes an instance maintenance policy.

For more information, see Set instance maintenance policy in the Amazon EC2 Auto Scaling User Guide.

InstanceMetadataOptions

The metadata options for the instances. For more information, see Configure the instance metadata options in the Amazon EC2 Auto Scaling User Guide.

InstanceMonitoring

Describes whether detailed monitoring is enabled for the Auto Scaling instances.

InstanceRefresh

Describes an instance refresh for an Auto Scaling group.

InstanceRefreshLivePoolProgress

Reports progress on replacing instances that are in the Auto Scaling group.

InstanceRefreshProgressDetails

Reports progress on replacing instances in an Auto Scaling group that has a warm pool. This includes separate details for instances in the warm pool and instances in the Auto Scaling group (the live pool).

InstanceRefreshWarmPoolProgress

Reports progress on replacing instances that are in the warm pool.

InstanceRequirements

The attributes for the instance types for a mixed instances policy. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.

When you specify multiple attributes, you get instance types that satisfy all of the specified attributes. If you specify multiple values for an attribute, you get instance types that satisfy any of the specified values.

To limit the list of instance types from which Amazon EC2 Auto Scaling can identify matching instance types, you can use one of the following parameters, but not both in the same request:

  • AllowedInstanceTypes - The instance types to include in the list. All other instance types are ignored, even if they match your specified attributes.

  • ExcludedInstanceTypes - The instance types to exclude from the list, even if they match your specified attributes.

You must specify VCpuCount and MemoryMiB. All other attributes are optional. Any unspecified optional attribute is set to its default.

For more information, see Create a mixed instances group using attribute-based instance type selection in the Amazon EC2 Auto Scaling User Guide. For help determining which instance types match your attributes before you apply them to your Auto Scaling group, see Preview instance types with specified attributes in the Amazon EC2 User Guide.
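
A sketch only: the builder method names below are the assumed snake_case forms of the fields documented here, and `build()` is treated as fallible because VCpuCount and MemoryMiB are required.

```rust
use aws_sdk_autoscaling::error::BuildError;
use aws_sdk_autoscaling::types::{InstanceRequirements, MemoryMiBRequest, VCpuCountRequest};

fn example_requirements() -> Result<InstanceRequirements, BuildError> {
    InstanceRequirements::builder()
        // VCpuCount and MemoryMiB are the only required attributes.
        .v_cpu_count(VCpuCountRequest::builder().min(2).max(8).build()?)
        .memory_mi_b(MemoryMiBRequest::builder().min(4096).build()?)
        // Optional: restrict matching to specific families. AllowedInstanceTypes
        // cannot be combined with ExcludedInstanceTypes in the same request.
        .allowed_instance_types("m6i.*")
        .allowed_instance_types("c6i.*")
        .build()
}
```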

InstanceReusePolicy

Describes an instance reuse policy for a warm pool.

For more information, see Warm pools for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.

InstancesDistribution

Use this structure to specify the distribution of On-Demand Instances and Spot Instances and the allocation strategies used to fulfill On-Demand and Spot capacities for a mixed instances policy.

LaunchConfiguration

Describes a launch configuration.

LaunchInstancesError

Contains details about errors encountered during instance launch attempts.

LaunchTemplate

Use this structure to specify the launch templates and instance types (overrides) for a mixed instances policy.

LaunchTemplateOverrides

Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:

  • Override the instance type that is specified in the launch template.

  • Use multiple instance types.

Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group.

After you define your instance requirements, you don't have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.

LaunchTemplateSpecification

Describes the launch template and the version of the launch template that Amazon EC2 Auto Scaling uses to launch Amazon EC2 instances. For more information about launch templates, see Launch templates in the Amazon EC2 Auto Scaling User Guide.

LifecycleHook

Describes a lifecycle hook. A lifecycle hook lets you create solutions that are aware of events in the Auto Scaling instance lifecycle, and then perform a custom action on instances when the corresponding lifecycle event occurs.

LifecycleHookSpecification

Describes information used to specify a lifecycle hook for an Auto Scaling group.

For more information, see Amazon EC2 Auto Scaling lifecycle hooks in the Amazon EC2 Auto Scaling User Guide.

LoadBalancerState

Describes the state of a Classic Load Balancer.

LoadBalancerTargetGroupState

Describes the state of a target group.

LoadForecast

A GetPredictiveScalingForecast call returns the load forecast for a predictive scaling policy. This structure includes the data points for that load forecast, along with the timestamps of those data points and the metric specification.

MemoryGiBPerVCpuRequest

Specifies the minimum and maximum for the MemoryGiBPerVCpu object when you specify InstanceRequirements for an Auto Scaling group.

MemoryMiBRequest

Specifies the minimum and maximum for the MemoryMiB object when you specify InstanceRequirements for an Auto Scaling group.

Metric

Represents a specific metric.

MetricCollectionType

Describes a metric.

MetricDataQuery

The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.

For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide.

MetricDimension

Describes the dimension of a metric.

MetricGranularityType

Describes a granularity of a metric.

MetricStat

This structure defines the CloudWatch metric to return, along with the statistic and unit.

For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts in the Amazon CloudWatch User Guide.

MixedInstancesPolicy

Use this structure to launch multiple instance types and On-Demand Instances and Spot Instances within a single Auto Scaling group.

A mixed instances policy contains information that Amazon EC2 Auto Scaling can use to launch instances and help optimize your costs. For more information, see Auto Scaling groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide.
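
A sketch of composing a mixed instances policy from the structures listed here (LaunchTemplateSpecification, LaunchTemplateOverrides, InstancesDistribution); the launch template name and the distribution values are illustrative, and the builder method names are assumed to follow the SDK's snake_case convention.

```rust
use aws_sdk_autoscaling::types::{
    InstancesDistribution, LaunchTemplate, LaunchTemplateOverrides,
    LaunchTemplateSpecification, MixedInstancesPolicy,
};

fn main() {
    let _policy = MixedInstancesPolicy::builder()
        .launch_template(
            LaunchTemplate::builder()
                .launch_template_specification(
                    LaunchTemplateSpecification::builder()
                        .launch_template_name("my-launch-template") // hypothetical
                        .version("$Latest")
                        .build(),
                )
                // Each overrides() call appends one instance type override.
                .overrides(LaunchTemplateOverrides::builder().instance_type("m6i.large").build())
                .overrides(LaunchTemplateOverrides::builder().instance_type("c6i.large").build())
                .build(),
        )
        .instances_distribution(
            InstancesDistribution::builder()
                .on_demand_base_capacity(1)
                .on_demand_percentage_above_base_capacity(50)
                .spot_allocation_strategy("price-capacity-optimized")
                .build(),
        )
        .build();
}
```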

NetworkBandwidthGbpsRequest

Specifies the minimum and maximum for the NetworkBandwidthGbps object when you specify InstanceRequirements for an Auto Scaling group.

Setting the minimum bandwidth does not guarantee that your instance will achieve the minimum bandwidth. Amazon EC2 will identify instance types that support the specified minimum bandwidth, but the actual bandwidth of your instance might go below the specified minimum at times. For more information, see Available instance bandwidth in the Amazon EC2 User Guide.

NetworkInterfaceCountRequest

Specifies the minimum and maximum for the NetworkInterfaceCount object when you specify InstanceRequirements for an Auto Scaling group.

NotificationConfiguration

Describes a notification.

PerformanceFactorReferenceRequest

Specify an instance family to use as the baseline reference for CPU performance. All instance types that match your specified attributes will be compared against the CPU performance of the referenced instance family, regardless of CPU manufacturer or architecture differences.

Currently only one instance family can be specified in the list.

PredefinedMetricSpecification

Represents a predefined metric for a target tracking scaling policy to use with Amazon EC2 Auto Scaling.

PredictiveScalingConfiguration

Represents a predictive scaling policy configuration to use with Amazon EC2 Auto Scaling.

PredictiveScalingCustomizedCapacityMetric

Describes a customized capacity metric for a predictive scaling policy.

PredictiveScalingCustomizedLoadMetric

Describes a custom load metric for a predictive scaling policy.

PredictiveScalingCustomizedScalingMetric

Describes a custom scaling metric for a predictive scaling policy.

PredictiveScalingMetricSpecification

This structure specifies the metrics and target utilization settings for a predictive scaling policy.

You must specify either a metric pair, or a load metric and a scaling metric individually. Specifying a metric pair instead of individual metrics provides a simpler way to configure metrics for a scaling policy. You choose the metric pair, and the policy automatically knows the correct sum and average statistics to use for the load metric and the scaling metric.

Example

  • You create a predictive scaling policy and specify ALBRequestCount as the value for the metric pair and 1000.0 as the target value. For this type of metric, you must provide the metric dimension for the corresponding target group, so you also provide a resource label for the Application Load Balancer target group that is attached to your Auto Scaling group.

  • The number of requests the target group receives per minute provides the load metric, and the request count averaged between the members of the target group provides the scaling metric. In CloudWatch, this refers to the RequestCount and RequestCountPerTarget metrics, respectively.

  • For optimal use of predictive scaling, you adhere to the best practice of using a dynamic scaling policy to automatically scale between the minimum capacity and maximum capacity in response to real-time changes in resource utilization.

  • Amazon EC2 Auto Scaling consumes data points for the load metric over the last 14 days and creates an hourly load forecast for predictive scaling. (A minimum of 24 hours of data is required.)

  • After creating the load forecast, Amazon EC2 Auto Scaling determines when to reduce or increase the capacity of your Auto Scaling group in each hour of the forecast period so that the average number of requests received by each instance is as close to 1000 requests per minute as possible at all times.

For information about using custom metrics with predictive scaling, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide.
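
A sketch of the ALBRequestCount example above. The resource label is a placeholder for your own target group's label, the builder method names are assumed from the documented fields, and `build()` is treated as fallible because these structures have required members.

```rust
use aws_sdk_autoscaling::error::BuildError;
use aws_sdk_autoscaling::types::{
    PredefinedMetricPairType, PredictiveScalingMetricSpecification,
    PredictiveScalingPredefinedMetricPair,
};

fn alb_request_count_spec() -> Result<PredictiveScalingMetricSpecification, BuildError> {
    let metric_pair = PredictiveScalingPredefinedMetricPair::builder()
        // The generated enums also convert from their raw string values.
        .predefined_metric_type(PredefinedMetricPairType::from("ALBRequestCount"))
        // Placeholder resource label for the attached target group.
        .resource_label("app/EXAMPLE-alb/0123456789abcdef/targetgroup/EXAMPLE-tg/0123456789abcdef")
        .build()?;
    PredictiveScalingMetricSpecification::builder()
        .target_value(1000.0)
        .predefined_metric_pair_specification(metric_pair)
        .build()
}
```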

PredictiveScalingPredefinedLoadMetric

Describes a load metric for a predictive scaling policy.

When returned in the output of DescribePolicies, it indicates that a predictive scaling policy uses individually specified load and scaling metrics instead of a metric pair.

PredictiveScalingPredefinedMetricPair

Represents a metric pair for a predictive scaling policy.

PredictiveScalingPredefinedScalingMetric

Describes a scaling metric for a predictive scaling policy.

When returned in the output of DescribePolicies, it indicates that a predictive scaling policy uses individually specified load and scaling metrics instead of a metric pair.

ProcessType

Describes a process type.

For more information, see Types of processes in the Amazon EC2 Auto Scaling User Guide.

RefreshPreferences

Describes the preferences for an instance refresh.

RetentionTriggers

Defines the specific triggers that cause instances to be retained in a Retained state rather than terminated. Each trigger corresponds to a different failure scenario during the instance lifecycle. This allows fine-grained control over when to preserve instances for manual intervention.

RollbackDetails

Details about an instance refresh rollback.

ScalingPolicy

Describes a scaling policy.

ScheduledUpdateGroupAction

Describes a scheduled scaling action.

ScheduledUpdateGroupActionRequest

Describes information used for one or more scheduled scaling action updates in a BatchPutScheduledUpdateGroupAction operation.

StepAdjustment

Describes information used to create a step adjustment for a step scaling policy.

For the following examples, suppose that you have an alarm with a breach threshold of 50:

  • To trigger the adjustment when the metric is greater than or equal to 50 and less than 60, specify a lower bound of 0 and an upper bound of 10.

  • To trigger the adjustment when the metric is greater than 40 and less than or equal to 50, specify a lower bound of -10 and an upper bound of 0.

There are a few rules for the step adjustments for your step policy:

  • The ranges of your step adjustments can't overlap or have a gap.

  • At most, one step adjustment can have a null lower bound. If one step adjustment has a negative lower bound, then there must be a step adjustment with a null lower bound.

  • At most, one step adjustment can have a null upper bound. If one step adjustment has a positive upper bound, then there must be a step adjustment with a null upper bound.

  • The upper and lower bound can't be null in the same step adjustment.

For more information, see Step adjustments in the Amazon EC2 Auto Scaling User Guide.
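
A sketch of the scale-out side of the example above (alarm threshold of 50), expressed with the StepAdjustment builder; the method names are the assumed snake_case forms of the bound and adjustment fields, and `build()` is treated as fallible because ScalingAdjustment is required.

```rust
use aws_sdk_autoscaling::error::BuildError;
use aws_sdk_autoscaling::types::StepAdjustment;

fn scale_out_steps() -> Result<Vec<StepAdjustment>, BuildError> {
    Ok(vec![
        // Metric in [50, 60): bounds are offsets from the breach threshold.
        StepAdjustment::builder()
            .metric_interval_lower_bound(0.0)
            .metric_interval_upper_bound(10.0)
            .scaling_adjustment(1)
            .build()?,
        // Metric >= 60: the final step leaves the upper bound null.
        StepAdjustment::builder()
            .metric_interval_lower_bound(10.0)
            .scaling_adjustment(2)
            .build()?,
    ])
}
```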

SuspendedProcess

Describes an auto scaling process that has been suspended.

For more information, see Types of processes in the Amazon EC2 Auto Scaling User Guide.

Tag

Describes a tag for an Auto Scaling group.

TagDescription

Describes a tag for an Auto Scaling group.

TargetTrackingConfiguration

Represents a target tracking scaling policy configuration to use with Amazon EC2 Auto Scaling.

TargetTrackingMetricDataQuery

The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.

TargetTrackingMetricStat

This structure defines the CloudWatch metric to return, along with the statistic and unit.

For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts in the Amazon CloudWatch User Guide.

TotalLocalStorageGbRequest

Specifies the minimum and maximum for the TotalLocalStorageGB object when you specify InstanceRequirements for an Auto Scaling group.

TrafficSourceIdentifier

Identifying information for a traffic source.

TrafficSourceState

Describes the state of a traffic source.

VCpuCountRequest

Specifies the minimum and maximum for the VCpuCount object when you specify InstanceRequirements for an Auto Scaling group.

WarmPoolConfiguration

Describes a warm pool configuration.

Enums§

AcceleratorManufacturer
When writing a match expression against AcceleratorManufacturer, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
AcceleratorName
When writing a match expression against AcceleratorName, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
AcceleratorType
When writing a match expression against AcceleratorType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
BareMetal
When writing a match expression against BareMetal, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
BurstablePerformance
When writing a match expression against BurstablePerformance, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
CapacityDistributionStrategy
When writing a match expression against CapacityDistributionStrategy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
CapacityReservationPreference
When writing a match expression against CapacityReservationPreference, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
CpuManufacturer
When writing a match expression against CpuManufacturer, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ImpairedZoneHealthCheckBehavior
When writing a match expression against ImpairedZoneHealthCheckBehavior, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
InstanceGeneration
When writing a match expression against InstanceGeneration, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
InstanceMetadataEndpointState
When writing a match expression against InstanceMetadataEndpointState, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
InstanceMetadataHttpTokensState
When writing a match expression against InstanceMetadataHttpTokensState, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
InstanceRefreshStatus
When writing a match expression against InstanceRefreshStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
LifecycleState
When writing a match expression against LifecycleState, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
LocalStorage
When writing a match expression against LocalStorage, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
LocalStorageType
When writing a match expression against LocalStorageType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
MetricStatistic
When writing a match expression against MetricStatistic, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
MetricType
When writing a match expression against MetricType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
PredefinedLoadMetricType
When writing a match expression against PredefinedLoadMetricType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
PredefinedMetricPairType
When writing a match expression against PredefinedMetricPairType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
PredefinedScalingMetricType
When writing a match expression against PredefinedScalingMetricType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
PredictiveScalingMaxCapacityBreachBehavior
When writing a match expression against PredictiveScalingMaxCapacityBreachBehavior, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
PredictiveScalingMode
When writing a match expression against PredictiveScalingMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
RefreshStrategy
When writing a match expression against RefreshStrategy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
RetentionAction
When writing a match expression against RetentionAction, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
RetryStrategy
When writing a match expression against RetryStrategy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ScaleInProtectedInstances
When writing a match expression against ScaleInProtectedInstances, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ScalingActivityStatusCode
When writing a match expression against ScalingActivityStatusCode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
StandbyInstances
When writing a match expression against StandbyInstances, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
WarmPoolState
When writing a match expression against WarmPoolState, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
WarmPoolStatus
When writing a match expression against WarmPoolStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
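
The forward-compatibility note repeated for every enum above comes down to one habit: always keep a wildcard arm in your match. A minimal sketch using MetricStatistic as the example; only a few variants are matched explicitly, and the rest fall through to the wildcard.

```rust
use aws_sdk_autoscaling::types::MetricStatistic;

// A forward-compatible match: the wildcard arm keeps this compiling and
// working even if a future SDK release adds new statistic variants.
fn describe_statistic(statistic: &MetricStatistic) -> &'static str {
    match statistic {
        MetricStatistic::Average => "average",
        MetricStatistic::Maximum => "maximum",
        MetricStatistic::Minimum => "minimum",
        // `as_str()` exposes the raw value the service returned, including
        // values this SDK version does not yet model as variants.
        other => {
            eprintln!("unhandled statistic: {}", other.as_str());
            "other"
        }
    }
}

fn main() {
    println!("{}", describe_statistic(&MetricStatistic::Average));
}
```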