Data structures used by operation inputs/outputs.
Structs
- `AcceleratorCountRequest`: Specifies the minimum and maximum for the `AcceleratorCount` object when you specify `InstanceRequirements` for an Auto Scaling group.
- `AcceleratorTotalMemoryMiBRequest`: Specifies the minimum and maximum for the `AcceleratorTotalMemoryMiB` object when you specify `InstanceRequirements` for an Auto Scaling group.
- `Activity`: Describes scaling activity, which is a long-running process that represents a change to your Auto Scaling group, such as changing its size or replacing an instance.
- `AdjustmentType`: Describes a policy adjustment type.
- `Alarm`: Describes an alarm.
- `AlarmSpecification`: Specifies the CloudWatch alarm specification to use in an instance refresh.
- `AutoScalingGroup`: Describes an Auto Scaling group.
- `AutoScalingInstanceDetails`: Describes an EC2 instance associated with an Auto Scaling group.
- `AvailabilityZoneDistribution`: Describes an Availability Zone distribution.
- `AvailabilityZoneImpairmentPolicy`: Describes an Availability Zone impairment policy.
- `BaselineEbsBandwidthMbpsRequest`: Specifies the minimum and maximum for the `BaselineEbsBandwidthMbps` object when you specify `InstanceRequirements` for an Auto Scaling group.
- `BaselinePerformanceFactorsRequest`: The baseline performance to consider, using an instance family as a baseline reference. The instance family establishes the lowest acceptable level of performance. Auto Scaling uses this baseline to guide instance type selection, but there is no guarantee that the selected instance types will always exceed the baseline for every application. Currently, this parameter only supports CPU performance as a baseline performance factor. For example, specifying `c6i` uses the CPU performance of the `c6i` family as the baseline reference.
- `BlockDeviceMapping`: Describes a block device mapping.
- `CapacityForecast`: A `GetPredictiveScalingForecast` call returns the capacity forecast for a predictive scaling policy. This structure includes the data points for that capacity forecast, along with the timestamps of those data points.
- `CapacityReservationSpecification`: Describes the Capacity Reservation preference and targeting options. If you specify `open` or `none` for `CapacityReservationPreference`, do not specify a `CapacityReservationTarget`.
- `CapacityReservationTarget`: The target for the Capacity Reservation. Specify Capacity Reservation IDs or Capacity Reservation resource group ARNs.
- `CpuPerformanceFactorRequest`: The CPU performance to consider, using an instance family as the baseline reference.
- `CustomizedMetricSpecification`: Represents a CloudWatch metric of your choosing for a target tracking scaling policy to use with Amazon EC2 Auto Scaling. To create your customized metric specification:
  - Add values for each required property from CloudWatch. You can use an existing metric, or a new metric that you create. To use your own metric, you must first publish the metric to CloudWatch. For more information, see Publish custom metrics in the Amazon CloudWatch User Guide.
  - Choose a metric that changes proportionally with capacity. The value of the metric should increase or decrease in inverse proportion to the number of capacity units. That is, the value of the metric should decrease when capacity increases.

  For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts. Each individual service provides information about the metrics, namespace, and dimensions they use. For more information, see Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide.
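A minimal sketch of building a `CustomizedMetricSpecification` with the SDK's fluent builders, assuming the usual aws-sdk-rust builder pattern. The namespace and metric name are hypothetical placeholders for a metric you have already published to CloudWatch, and exact builder signatures can differ between SDK versions.

```rust
use aws_sdk_autoscaling::types::{CustomizedMetricSpecification, MetricStatistic};

// Sketch: a customized metric specification for a target tracking policy.
// "MyApp" and "BacklogPerInstance" are placeholder values; the metric must
// already exist in CloudWatch.
fn backlog_metric_spec() -> CustomizedMetricSpecification {
    CustomizedMetricSpecification::builder()
        .namespace("MyApp")
        .metric_name("BacklogPerInstance")
        // Pick a statistic for a metric that moves inversely with capacity,
        // as described in the entry above.
        .statistic(MetricStatistic::Average)
        .build()
}
```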
- `DesiredConfiguration`: Describes the desired configuration for an instance refresh. If you specify a desired configuration, you must specify either a `LaunchTemplate` or a `MixedInstancesPolicy`.
- `Ebs`: Describes information used to set up an Amazon EBS volume specified in a block device mapping.
- `EnabledMetric`: Describes an enabled Auto Scaling group metric.
- `FailedScheduledUpdateGroupActionRequest`: Describes a scheduled action that could not be created, updated, or deleted.
- `Filter`: Describes a filter that is used to return a more specific list of results from a describe operation. If you specify multiple filters, the filters are automatically logically joined with an `AND`, and the request returns only the results that match all of the specified filters. For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide.
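As an illustration of how filters combine with a logical `AND`, here is a hedged sketch that builds two tag-based filters. The filter names (`tag-key`, `tag-value`) and values are illustrative, and the sketch assumes the standard fluent builders, where a list-valued setter such as `values` appends one entry per call.

```rust
use aws_sdk_autoscaling::types::Filter;

// Two filters that a describe call joins with a logical AND:
// groups tagged with the key "environment" AND the value "production".
// Filter names and values here are illustrative.
fn environment_filters() -> Vec<Filter> {
    vec![
        Filter::builder()
            .name("tag-key")
            .values("environment") // appends a single value to the list
            .build(),
        Filter::builder()
            .name("tag-value")
            .values("production")
            .build(),
    ]
}
```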
- `Instance`: Describes an EC2 instance.
- `InstanceCollection`: Contains details about a collection of instances launched in the Auto Scaling group.
- `InstanceLifecyclePolicy`: Defines the lifecycle policy for instances in an Auto Scaling group. This policy controls instance behavior when lifecycles transition and operations fail. Use lifecycle policies to ensure graceful shutdown for stateful workloads or applications requiring extended draining periods.
- `InstanceMaintenancePolicy`: Describes an instance maintenance policy. For more information, see Set instance maintenance policy in the Amazon EC2 Auto Scaling User Guide.
- `InstanceMetadataOptions`: The metadata options for the instances. For more information, see Configure the instance metadata options in the Amazon EC2 Auto Scaling User Guide.
- `InstanceMonitoring`: Describes whether detailed monitoring is enabled for the Auto Scaling instances.
- `InstanceRefresh`: Describes an instance refresh for an Auto Scaling group.
- `InstanceRefreshLivePoolProgress`: Reports progress on replacing instances that are in the Auto Scaling group.
- `InstanceRefreshProgressDetails`: Reports progress on replacing instances in an Auto Scaling group that has a warm pool. This includes separate details for instances in the warm pool and instances in the Auto Scaling group (the live pool).
- `InstanceRefreshWarmPoolProgress`: Reports progress on replacing instances that are in the warm pool.
- `InstanceRequirements`: The attributes for the instance types for a mixed instances policy. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types. When you specify multiple attributes, you get instance types that satisfy all of the specified attributes. If you specify multiple values for an attribute, you get instance types that satisfy any of the specified values. To limit the list of instance types from which Amazon EC2 Auto Scaling can identify matching instance types, you can use one of the following parameters, but not both in the same request:
  - `AllowedInstanceTypes`: The instance types to include in the list. All other instance types are ignored, even if they match your specified attributes.
  - `ExcludedInstanceTypes`: The instance types to exclude from the list, even if they match your specified attributes.

  You must specify `VCpuCount` and `MemoryMiB`. All other attributes are optional. Any unspecified optional attribute is set to its default. For more information, see Create a mixed instances group using attribute-based instance type selection in the Amazon EC2 Auto Scaling User Guide. For help determining which instance types match your attributes before you apply them to your Auto Scaling group, see Preview instance types with specified attributes in the Amazon EC2 User Guide.
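A hedged sketch of assembling `InstanceRequirements` with the required `VCpuCount` and `MemoryMiB` members. The concrete values and the allowed instance type patterns are illustrative, and the sketch assumes the current builder behavior in which `build()` returns a `Result` for types with required members; other SDK versions may return the value directly.

```rust
use aws_sdk_autoscaling::types::{InstanceRequirements, MemoryMiBRequest, VCpuCountRequest};

// Sketch: instance requirements with the two mandatory attributes plus an
// optional allow-list. Values and type patterns are illustrative.
fn sample_requirements() -> Result<InstanceRequirements, Box<dyn std::error::Error>> {
    let requirements = InstanceRequirements::builder()
        .v_cpu_count(VCpuCountRequest::builder().min(2).max(8).build()?)
        .memory_mib(MemoryMiBRequest::builder().min(4096).build()?)
        // Narrow the candidate list; must not be combined with
        // ExcludedInstanceTypes in the same request.
        .allowed_instance_types("m5.*")
        .allowed_instance_types("c5.*")
        .build()?;
    Ok(requirements)
}
```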
- `InstanceReusePolicy`: Describes an instance reuse policy for a warm pool. For more information, see Warm pools for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.
- `InstancesDistribution`: Use this structure to specify the distribution of On-Demand Instances and Spot Instances and the allocation strategies used to fulfill On-Demand and Spot capacities for a mixed instances policy.
- `LaunchConfiguration`: Describes a launch configuration.
- `LaunchInstancesError`: Contains details about errors encountered during instance launch attempts.
- `LaunchTemplate`: Use this structure to specify the launch templates and instance types (overrides) for a mixed instances policy.
- `LaunchTemplateOverrides`: Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:
  - Override the instance type that is specified in the launch template.
  - Use multiple instance types.

  Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group. After you define your instance requirements, you don't have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.
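As a sketch of the override pattern just described: two `LaunchTemplateOverrides` entries where the larger instance type counts as two units of capacity. The instance types and weights are illustrative, and the builders are assumed to follow the standard fluent pattern.

```rust
use aws_sdk_autoscaling::types::LaunchTemplateOverrides;

// Sketch: two overrides for a mixed instances policy. Both launch from the
// group's launch template; the xlarge counts as two units of capacity.
// Instance types and weights are illustrative.
fn sample_overrides() -> Vec<LaunchTemplateOverrides> {
    vec![
        LaunchTemplateOverrides::builder()
            .instance_type("m5.large")
            .weighted_capacity("1")
            .build(),
        LaunchTemplateOverrides::builder()
            .instance_type("m5.xlarge")
            .weighted_capacity("2")
            .build(),
    ]
}
```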
- `LaunchTemplateSpecification`: Describes the launch template and the version of the launch template that Amazon EC2 Auto Scaling uses to launch Amazon EC2 instances. For more information about launch templates, see Launch templates in the Amazon EC2 Auto Scaling User Guide.
- `LifecycleHook`: Describes a lifecycle hook. A lifecycle hook lets you create solutions that are aware of events in the Auto Scaling instance lifecycle, and then perform a custom action on instances when the corresponding lifecycle event occurs.
- `LifecycleHookSpecification`: Describes information used to specify a lifecycle hook for an Auto Scaling group. For more information, see Amazon EC2 Auto Scaling lifecycle hooks in the Amazon EC2 Auto Scaling User Guide.
- `LoadBalancerState`: Describes the state of a Classic Load Balancer.
- `LoadBalancerTargetGroupState`: Describes the state of a target group.
- `LoadForecast`: A `GetPredictiveScalingForecast` call returns the load forecast for a predictive scaling policy. This structure includes the data points for that load forecast, along with the timestamps of those data points and the metric specification.
- `MemoryGiBPerVCpuRequest`: Specifies the minimum and maximum for the `MemoryGiBPerVCpu` object when you specify `InstanceRequirements` for an Auto Scaling group.
- `MemoryMiBRequest`: Specifies the minimum and maximum for the `MemoryMiB` object when you specify `InstanceRequirements` for an Auto Scaling group.
- `Metric`: Represents a specific metric.
- `MetricCollectionType`: Describes a metric.
- `MetricDataQuery`: The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp. For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide.
- `MetricDimension`: Describes the dimension of a metric.
- `MetricGranularityType`: Describes a granularity of a metric.
- `MetricStat`: This structure defines the CloudWatch metric to return, along with the statistic and unit. For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts in the Amazon CloudWatch User Guide.
- `MixedInstancesPolicy`: Use this structure to launch multiple instance types and On-Demand Instances and Spot Instances within a single Auto Scaling group. A mixed instances policy contains information that Amazon EC2 Auto Scaling can use to launch instances and help optimize your costs. For more information, see Auto Scaling groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide.
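A minimal sketch of a `MixedInstancesPolicy` combining a launch template reference with an On-Demand/Spot split. The launch template name, version, capacity numbers, and allocation strategy are illustrative, and the standard fluent builders are assumed.

```rust
use aws_sdk_autoscaling::types::{
    InstancesDistribution, LaunchTemplate, LaunchTemplateSpecification, MixedInstancesPolicy,
};

// Sketch: a mixed instances policy that keeps two On-Demand instances as a
// base and splits the rest 50/50 between On-Demand and Spot. All names and
// numbers are illustrative.
fn sample_mixed_instances_policy() -> MixedInstancesPolicy {
    MixedInstancesPolicy::builder()
        .launch_template(
            LaunchTemplate::builder()
                .launch_template_specification(
                    LaunchTemplateSpecification::builder()
                        .launch_template_name("my-launch-template")
                        .version("$Latest")
                        .build(),
                )
                .build(),
        )
        .instances_distribution(
            InstancesDistribution::builder()
                .on_demand_base_capacity(2)
                .on_demand_percentage_above_base_capacity(50)
                .spot_allocation_strategy("price-capacity-optimized")
                .build(),
        )
        .build()
}
```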
- `NetworkBandwidthGbpsRequest`: Specifies the minimum and maximum for the `NetworkBandwidthGbps` object when you specify `InstanceRequirements` for an Auto Scaling group. Setting the minimum bandwidth does not guarantee that your instance will achieve the minimum bandwidth. Amazon EC2 will identify instance types that support the specified minimum bandwidth, but the actual bandwidth of your instance might go below the specified minimum at times. For more information, see Available instance bandwidth in the Amazon EC2 User Guide.
- `NetworkInterfaceCountRequest`: Specifies the minimum and maximum for the `NetworkInterfaceCount` object when you specify `InstanceRequirements` for an Auto Scaling group.
- `NotificationConfiguration`: Describes a notification.
- `PerformanceFactorReferenceRequest`: Specify an instance family to use as the baseline reference for CPU performance. All instance types that match your specified attributes will be compared against the CPU performance of the referenced instance family, regardless of CPU manufacturer or architecture differences. Currently, only one instance family can be specified in the list.
- `PredefinedMetricSpecification`: Represents a predefined metric for a target tracking scaling policy to use with Amazon EC2 Auto Scaling.
- `PredictiveScalingConfiguration`: Represents a predictive scaling policy configuration to use with Amazon EC2 Auto Scaling.
- `PredictiveScalingCustomizedCapacityMetric`: Describes a customized capacity metric for a predictive scaling policy.
- `PredictiveScalingCustomizedLoadMetric`: Describes a custom load metric for a predictive scaling policy.
- `PredictiveScalingCustomizedScalingMetric`: Describes a custom scaling metric for a predictive scaling policy.
- `PredictiveScalingMetricSpecification`: This structure specifies the metrics and target utilization settings for a predictive scaling policy. You must specify either a metric pair, or a load metric and a scaling metric individually. Specifying a metric pair instead of individual metrics provides a simpler way to configure metrics for a scaling policy. You choose the metric pair, and the policy automatically knows the correct sum and average statistics to use for the load metric and the scaling metric. Example:
  - You create a predictive scaling policy and specify `ALBRequestCount` as the value for the metric pair and `1000.0` as the target value. For this type of metric, you must provide the metric dimension for the corresponding target group, so you also provide a resource label for the Application Load Balancer target group that is attached to your Auto Scaling group.
  - The number of requests the target group receives per minute provides the load metric, and the request count averaged between the members of the target group provides the scaling metric. In CloudWatch, this refers to the `RequestCount` and `RequestCountPerTarget` metrics, respectively.
  - For optimal use of predictive scaling, you adhere to the best practice of using a dynamic scaling policy to automatically scale between the minimum capacity and maximum capacity in response to real-time changes in resource utilization.
  - Amazon EC2 Auto Scaling consumes data points for the load metric over the last 14 days and creates an hourly load forecast for predictive scaling. (A minimum of 24 hours of data is required.)
  - After creating the load forecast, Amazon EC2 Auto Scaling determines when to reduce or increase the capacity of your Auto Scaling group in each hour of the forecast period so that the average number of requests received by each instance is as close to 1000 requests per minute as possible at all times.

  For information about using custom metrics with predictive scaling, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide.
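A hedged sketch mirroring the `ALBRequestCount` example above. The resource label is a placeholder for your target group's label, the enum value is constructed from its wire string so the sketch does not depend on the exact variant name, and `build()` is assumed to return a `Result` for types with required members.

```rust
use aws_sdk_autoscaling::types::{
    PredefinedMetricPairType, PredictiveScalingMetricSpecification,
    PredictiveScalingPredefinedMetricPair,
};

// Sketch: a metric-pair specification targeting 1000 requests per minute per
// instance. The resource label below is a placeholder.
fn alb_request_count_spec()
    -> Result<PredictiveScalingMetricSpecification, Box<dyn std::error::Error>> {
    let spec = PredictiveScalingMetricSpecification::builder()
        .target_value(1000.0)
        .predefined_metric_pair_specification(
            PredictiveScalingPredefinedMetricPair::builder()
                // Built from the wire value rather than a named variant.
                .predefined_metric_type(PredefinedMetricPairType::from("ALBRequestCount"))
                .resource_label("app/my-alb/0123456789abcdef/targetgroup/my-tg/0123456789abcdef")
                .build()?,
        )
        .build()?;
    Ok(spec)
}
```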
- `PredictiveScalingPredefinedLoadMetric`: Describes a load metric for a predictive scaling policy. When returned in the output of `DescribePolicies`, it indicates that a predictive scaling policy uses individually specified load and scaling metrics instead of a metric pair.
- `PredictiveScalingPredefinedMetricPair`: Represents a metric pair for a predictive scaling policy.
- `PredictiveScalingPredefinedScalingMetric`: Describes a scaling metric for a predictive scaling policy. When returned in the output of `DescribePolicies`, it indicates that a predictive scaling policy uses individually specified load and scaling metrics instead of a metric pair.
- `ProcessType`: Describes a process type. For more information, see Types of processes in the Amazon EC2 Auto Scaling User Guide.
- `RefreshPreferences`: Describes the preferences for an instance refresh.
- `RetentionTriggers`: Defines the specific triggers that cause instances to be retained in a Retained state rather than terminated. Each trigger corresponds to a different failure scenario during the instance lifecycle. This allows fine-grained control over when to preserve instances for manual intervention.
- `RollbackDetails`: Details about an instance refresh rollback.
- `ScalingPolicy`: Describes a scaling policy.
- `ScheduledUpdateGroupAction`: Describes a scheduled scaling action.
- `ScheduledUpdateGroupActionRequest`: Describes information used for one or more scheduled scaling action updates in a `BatchPutScheduledUpdateGroupAction` operation.
- `StepAdjustment`: Describes information used to create a step adjustment for a step scaling policy. For the following examples, suppose that you have an alarm with a breach threshold of 50:
  - To trigger the adjustment when the metric is greater than or equal to 50 and less than 60, specify a lower bound of 0 and an upper bound of 10.
  - To trigger the adjustment when the metric is greater than 40 and less than or equal to 50, specify a lower bound of -10 and an upper bound of 0.

  There are a few rules for the step adjustments for your step policy:
  - The ranges of your step adjustments can't overlap or have a gap.
  - At most, one step adjustment can have a null lower bound. If one step adjustment has a negative lower bound, then there must be a step adjustment with a null lower bound.
  - At most, one step adjustment can have a null upper bound. If one step adjustment has a positive upper bound, then there must be a step adjustment with a null upper bound.
  - The upper and lower bound can't be null in the same step adjustment.

  For more information, see Step adjustments in the Amazon EC2 Auto Scaling User Guide.
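A sketch of a pair of step adjustments consistent with the rules above, for an alarm threshold of 50: metric values in [50, 60) add one instance, values of 60 and above add two. It assumes the standard fluent builders and that `build()` returns a `Result` because `ScalingAdjustment` is required; the capacity values are illustrative.

```rust
use aws_sdk_autoscaling::types::StepAdjustment;

// Sketch: two contiguous, non-overlapping steps relative to an alarm
// threshold of 50. The highest step leaves its upper bound null, as the
// rules above require.
fn step_adjustments() -> Result<Vec<StepAdjustment>, Box<dyn std::error::Error>> {
    Ok(vec![
        StepAdjustment::builder()
            .metric_interval_lower_bound(0.0)  // metric >= 50
            .metric_interval_upper_bound(10.0) // metric < 60
            .scaling_adjustment(1)
            .build()?,
        StepAdjustment::builder()
            .metric_interval_lower_bound(10.0) // metric >= 60, no upper bound
            .scaling_adjustment(2)
            .build()?,
    ])
}
```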
- `SuspendedProcess`: Describes an auto scaling process that has been suspended. For more information, see Types of processes in the Amazon EC2 Auto Scaling User Guide.
- `Tag`: Describes a tag for an Auto Scaling group.
- `TagDescription`: Describes a tag for an Auto Scaling group.
- `TargetTrackingConfiguration`: Represents a target tracking scaling policy configuration to use with Amazon EC2 Auto Scaling.
- `TargetTrackingMetricDataQuery`: The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
- `TargetTrackingMetricStat`: This structure defines the CloudWatch metric to return, along with the statistic and unit. For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts in the Amazon CloudWatch User Guide.
- `TotalLocalStorageGbRequest`: Specifies the minimum and maximum for the `TotalLocalStorageGB` object when you specify `InstanceRequirements` for an Auto Scaling group.
- `TrafficSourceIdentifier`: Identifying information for a traffic source.
- `TrafficSourceState`: Describes the state of a traffic source.
- `VCpuCountRequest`: Specifies the minimum and maximum for the `VCpuCount` object when you specify `InstanceRequirements` for an Auto Scaling group.
- `WarmPoolConfiguration`: Describes a warm pool configuration.
Enums
- `AcceleratorManufacturer`: When writing a match expression against `AcceleratorManufacturer`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature. (A forward-compatible match sketch appears after this list.)
- `AcceleratorName`: When writing a match expression against `AcceleratorName`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `AcceleratorType`: When writing a match expression against `AcceleratorType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `BareMetal`: When writing a match expression against `BareMetal`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `BurstablePerformance`: When writing a match expression against `BurstablePerformance`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `CapacityDistributionStrategy`: When writing a match expression against `CapacityDistributionStrategy`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `CapacityReservationPreference`: When writing a match expression against `CapacityReservationPreference`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `CpuManufacturer`: When writing a match expression against `CpuManufacturer`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `ImpairedZoneHealthCheckBehavior`: When writing a match expression against `ImpairedZoneHealthCheckBehavior`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `InstanceGeneration`: When writing a match expression against `InstanceGeneration`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `InstanceMetadataEndpointState`: When writing a match expression against `InstanceMetadataEndpointState`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `InstanceMetadataHttpTokensState`: When writing a match expression against `InstanceMetadataHttpTokensState`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `InstanceRefreshStatus`: When writing a match expression against `InstanceRefreshStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `LifecycleState`: When writing a match expression against `LifecycleState`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `LocalStorage`: When writing a match expression against `LocalStorage`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `LocalStorageType`: When writing a match expression against `LocalStorageType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `MetricStatistic`: When writing a match expression against `MetricStatistic`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `MetricType`: When writing a match expression against `MetricType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `PredefinedLoadMetricType`: When writing a match expression against `PredefinedLoadMetricType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `PredefinedMetricPairType`: When writing a match expression against `PredefinedMetricPairType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `PredefinedScalingMetricType`: When writing a match expression against `PredefinedScalingMetricType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `PredictiveScalingMaxCapacityBreachBehavior`: When writing a match expression against `PredictiveScalingMaxCapacityBreachBehavior`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `PredictiveScalingMode`: When writing a match expression against `PredictiveScalingMode`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `RefreshStrategy`: When writing a match expression against `RefreshStrategy`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `RetentionAction`: When writing a match expression against `RetentionAction`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `RetryStrategy`: When writing a match expression against `RetryStrategy`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `ScaleInProtectedInstances`: When writing a match expression against `ScaleInProtectedInstances`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `ScalingActivityStatusCode`: When writing a match expression against `ScalingActivityStatusCode`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `StandbyInstances`: When writing a match expression against `StandbyInstances`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `WarmPoolState`: When writing a match expression against `WarmPoolState`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
- `WarmPoolStatus`: When writing a match expression against `WarmPoolStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
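Every enum above carries the same forward-compatibility note. A minimal sketch of what that looks like in practice, using `LifecycleState` as the example (the specific variants matched are illustrative; the point is the wildcard arm):

```rust
use aws_sdk_autoscaling::types::LifecycleState;

// Sketch: a forward-compatible match. The wildcard arm keeps this compiling
// and behaving sensibly when a future SDK version adds variants for values
// the service already supports.
fn describe_state(state: &LifecycleState) -> String {
    match state {
        LifecycleState::Pending => "instance is launching".to_string(),
        LifecycleState::InService => "instance is in service".to_string(),
        LifecycleState::Terminating => "instance is shutting down".to_string(),
        // `as_str()` exposes the raw string value for logging or pass-through.
        other => format!("other lifecycle state: {}", other.as_str()),
    }
}
```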