Module aws_sdk_autoscaling::types

Data structures used by operation inputs/outputs.

Modules

  • builders: Builders
  • error: Error types that Auto Scaling can respond with.

Structs

  • Specifies the minimum and maximum for the AcceleratorCount object when you specify InstanceRequirements for an Auto Scaling group.

  • Specifies the minimum and maximum for the AcceleratorTotalMemoryMiB object when you specify InstanceRequirements for an Auto Scaling group.

  • Describes scaling activity, which is a long-running process that represents a change to your Auto Scaling group, such as changing its size or replacing an instance.

  • Describes a policy adjustment type.

  • Describes an alarm.

  • Specifies the CloudWatch alarm specification to use in an instance refresh.

  • Describes an Auto Scaling group.

  • Describes an EC2 instance associated with an Auto Scaling group.

  • Specifies the minimum and maximum for the BaselineEbsBandwidthMbps object when you specify InstanceRequirements for an Auto Scaling group.

  • Describes a block device mapping.

  • A GetPredictiveScalingForecast call returns the capacity forecast for a predictive scaling policy. This structure includes the data points for that capacity forecast, along with the timestamps of those data points.

  • Represents a CloudWatch metric of your choosing for a target tracking scaling policy to use with Amazon EC2 Auto Scaling.

    To create your customized metric specification:

    • Add values for each required property from CloudWatch. You can use an existing metric, or a new metric that you create. To use your own metric, you must first publish the metric to CloudWatch. For more information, see Publish custom metrics in the Amazon CloudWatch User Guide.

    • Choose a metric that changes inversely with capacity. That is, the value of the metric should decrease when capacity increases, and increase when capacity decreases.

    For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts.

    Each individual service provides information about the metrics, namespace, and dimensions they use. For more information, see Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide.
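
    A minimal sketch of assembling this specification with the SDK's builder types. The metric name, namespace, and dimension value are placeholders, and the example assumes a recent SDK version in which build() returns a Result for types that have required members (such as MetricDimension):

    ```rust
    use aws_sdk_autoscaling::types::{CustomizedMetricSpecification, MetricDimension, MetricStatistic};

    fn example_custom_metric() -> Result<CustomizedMetricSpecification, Box<dyn std::error::Error>> {
        // A dimension that narrows the metric to one resource (placeholder name/value).
        let dimension = MetricDimension::builder()
            .name("AutoScalingGroupName")
            .value("my-asg")
            .build()?; // Name and Value are modeled as required, so build() is assumed fallible here.

        // The custom metric itself; it must already be published to CloudWatch.
        let spec = CustomizedMetricSpecification::builder()
            .metric_name("MyUtilizationMetric") // placeholder metric name
            .namespace("MyApp")                 // placeholder namespace
            .dimensions(dimension)              // appends one dimension
            .statistic(MetricStatistic::Average)
            .build();                           // no member is modeled as required here
        Ok(spec)
    }
    ```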

  • Describes the desired configuration for an instance refresh.

    If you specify a desired configuration, you must specify either a LaunchTemplate or a MixedInstancesPolicy.

  • Describes information used to set up an Amazon EBS volume specified in a block device mapping.

  • Describes an enabled Auto Scaling group metric.

  • Describes a scheduled action that could not be created, updated, or deleted.

  • Describes a filter that is used to return a more specific list of results from a describe operation.

    If you specify multiple filters, the filters are automatically logically joined with an AND, and the request returns only the results that match all of the specified filters.

    For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide.
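
    A minimal sketch of one such filter; the tag key is a placeholder, and Filter's build() is assumed to return the struct directly because none of its members are required:

    ```rust
    use aws_sdk_autoscaling::types::Filter;

    fn environment_tag_filter() -> Filter {
        // Match Auto Scaling groups that carry a tag with the key "environment".
        // Multiple values on one filter are ORed; separate filters are ANDed
        // together by the service, as described above.
        Filter::builder()
            .name("tag-key")       // filter by tag key (one of the supported filter names)
            .values("environment") // placeholder tag key; values() appends one value
            .build()
    }
    ```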

  • Describes an EC2 instance.

  • Describes an instance maintenance policy.

    For more information, see Set instance maintenance policy in the Amazon EC2 Auto Scaling User Guide.

  • The metadata options for the instances. For more information, see Configure the instance metadata options in the Amazon EC2 Auto Scaling User Guide.

  • Describes whether detailed monitoring is enabled for the Auto Scaling instances.

  • Describes an instance refresh for an Auto Scaling group.

  • Reports progress on replacing instances that are in the Auto Scaling group.

  • Reports progress on replacing instances in an Auto Scaling group that has a warm pool. This includes separate details for instances in the warm pool and instances in the Auto Scaling group (the live pool).

  • Reports progress on replacing instances that are in the warm pool.

  • The attributes for the instance types for a mixed instances policy. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.

    When you specify multiple attributes, you get instance types that satisfy all of the specified attributes. If you specify multiple values for an attribute, you get instance types that satisfy any of the specified values.

    To limit the list of instance types from which Amazon EC2 Auto Scaling can identify matching instance types, you can use one of the following parameters, but not both in the same request:

    • AllowedInstanceTypes - The instance types to include in the list. All other instance types are ignored, even if they match your specified attributes.

    • ExcludedInstanceTypes - The instance types to exclude from the list, even if they match your specified attributes.

    You must specify VCpuCount and MemoryMiB. All other attributes are optional. Any unspecified optional attribute is set to its default.

    For more information, see Create a mixed instances group using attribute-based instance type selection in the Amazon EC2 Auto Scaling User Guide. For help determining which instance types match your attributes before you apply them to your Auto Scaling group, see Preview instance types with specified attributes in the Amazon EC2 User Guide for Linux Instances.
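
    A minimal sketch of the two required attributes. The vCPU and memory ranges are placeholders; builder method names such as v_cpu_count and memory_mi_b follow the SDK's snake_case conventions, and build() is assumed to return a Result for types with required members:

    ```rust
    use aws_sdk_autoscaling::types::{InstanceRequirements, MemoryMiBRequest, VCpuCountRequest};

    fn example_instance_requirements() -> Result<InstanceRequirements, Box<dyn std::error::Error>> {
        // VCpuCount and MemoryMiB are the only mandatory attributes; everything
        // else falls back to its default when left unspecified.
        let requirements = InstanceRequirements::builder()
            .v_cpu_count(VCpuCountRequest::builder().min(2).max(8).build()?)
            .memory_mi_b(MemoryMiBRequest::builder().min(4096).build()?) // in MiB; max is optional
            .build()?;
        Ok(requirements)
    }
    ```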

  • Describes an instance reuse policy for a warm pool.

    For more information, see Warm pools for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.

  • Use this structure to specify the distribution of On-Demand Instances and Spot Instances and the allocation strategies used to fulfill On-Demand and Spot capacities for a mixed instances policy.

  • Describes a launch configuration.

  • Use this structure to specify the launch templates and instance types (overrides) for a mixed instances policy.

  • Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:

    • Override the instance type that is specified in the launch template.

    • Use multiple instance types.

    Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group.

    After you define your instance requirements, you don't have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.

  • Describes the launch template and the version of the launch template that Amazon EC2 Auto Scaling uses to launch Amazon EC2 instances. For more information about launch templates, see Launch templates in the Amazon EC2 Auto Scaling User Guide.

  • Describes a lifecycle hook. A lifecycle hook lets you create solutions that are aware of events in the Auto Scaling instance lifecycle, and then perform a custom action on instances when the corresponding lifecycle event occurs.

  • Describes information used to specify a lifecycle hook for an Auto Scaling group.

    For more information, see Amazon EC2 Auto Scaling lifecycle hooks in the Amazon EC2 Auto Scaling User Guide.

  • Describes the state of a Classic Load Balancer.

  • Describes the state of a target group.

  • A GetPredictiveScalingForecast call returns the load forecast for a predictive scaling policy. This structure includes the data points for that load forecast, along with the timestamps of those data points and the metric specification.

  • Specifies the minimum and maximum for the MemoryGiBPerVCpu object when you specify InstanceRequirements for an Auto Scaling group.

  • Specifies the minimum and maximum for the MemoryMiB object when you specify InstanceRequirements for an Auto Scaling group.

  • Represents a specific metric.

  • Describes a metric.

  • The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.

    For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide.

  • Describes the dimension of a metric.

  • Describes a granularity of a metric.

  • This structure defines the CloudWatch metric to return, along with the statistic and unit.

    For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts in the Amazon CloudWatch User Guide.

  • Use this structure to launch multiple instance types and On-Demand Instances and Spot Instances within a single Auto Scaling group.

    A mixed instances policy contains information that Amazon EC2 Auto Scaling can use to launch instances and help optimize your costs. For more information, see Auto Scaling groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide.

  • Specifies the minimum and maximum for the NetworkBandwidthGbps object when you specify InstanceRequirements for an Auto Scaling group.

    Setting the minimum bandwidth does not guarantee that your instance will achieve the minimum bandwidth. Amazon EC2 will identify instance types that support the specified minimum bandwidth, but the actual bandwidth of your instance might go below the specified minimum at times. For more information, see Available instance bandwidth in the Amazon EC2 User Guide for Linux Instances.

  • Specifies the minimum and maximum for the NetworkInterfaceCount object when you specify InstanceRequirements for an Auto Scaling group.

  • Describes a notification.

  • Represents a predefined metric for a target tracking scaling policy to use with Amazon EC2 Auto Scaling.

  • Represents a predictive scaling policy configuration to use with Amazon EC2 Auto Scaling.

  • Describes a customized capacity metric for a predictive scaling policy.

  • Describes a custom load metric for a predictive scaling policy.

  • Describes a custom scaling metric for a predictive scaling policy.

  • This structure specifies the metrics and target utilization settings for a predictive scaling policy.

    You must specify either a metric pair, or a load metric and a scaling metric individually. Specifying a metric pair instead of individual metrics provides a simpler way to configure metrics for a scaling policy. You choose the metric pair, and the policy automatically knows the correct sum and average statistics to use for the load metric and the scaling metric.

    Example

    • You create a predictive scaling policy and specify ALBRequestCount as the value for the metric pair and 1000.0 as the target value. For this type of metric, you must provide the metric dimension for the corresponding target group, so you also provide a resource label for the Application Load Balancer target group that is attached to your Auto Scaling group.

    • The number of requests the target group receives per minute provides the load metric, and the request count averaged between the members of the target group provides the scaling metric. In CloudWatch, this refers to the RequestCount and RequestCountPerTarget metrics, respectively.

    • For optimal use of predictive scaling, you adhere to the best practice of using a dynamic scaling policy to automatically scale between the minimum capacity and maximum capacity in response to real-time changes in resource utilization.

    • Amazon EC2 Auto Scaling consumes data points for the load metric over the last 14 days and creates an hourly load forecast for predictive scaling. (A minimum of 24 hours of data is required.)

    • After creating the load forecast, Amazon EC2 Auto Scaling determines when to reduce or increase the capacity of your Auto Scaling group in each hour of the forecast period so that the average number of requests received by each instance is as close to 1000 requests per minute as possible at all times.

    For information about using custom metrics with predictive scaling, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide.
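
    A sketch of the ALBRequestCount example above. The resource label is a placeholder, and the AlbRequestCount variant name and the fallible build() calls are assumptions based on the SDK's usual code generation:

    ```rust
    use aws_sdk_autoscaling::types::{
        PredefinedMetricPairType, PredictiveScalingMetricSpecification,
        PredictiveScalingPredefinedMetricPair,
    };

    fn example_metric_pair_spec() -> Result<PredictiveScalingMetricSpecification, Box<dyn std::error::Error>> {
        // The metric pair from which Amazon EC2 Auto Scaling derives the load
        // metric (RequestCount) and the scaling metric (RequestCountPerTarget).
        let pair = PredictiveScalingPredefinedMetricPair::builder()
            .predefined_metric_type(PredefinedMetricPairType::AlbRequestCount)
            .resource_label("app/my-alb/0123456789abcdef/targetgroup/my-tg/0123456789abcdef") // placeholder
            .build()?;

        let spec = PredictiveScalingMetricSpecification::builder()
            .predefined_metric_pair_specification(pair)
            .target_value(1000.0) // target of 1000 requests per minute per instance
            .build()?;
        Ok(spec)
    }
    ```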

  • Describes a load metric for a predictive scaling policy.

    When returned in the output of DescribePolicies, it indicates that a predictive scaling policy uses individually specified load and scaling metrics instead of a metric pair.

  • Represents a metric pair for a predictive scaling policy.

  • Describes a scaling metric for a predictive scaling policy.

    When returned in the output of DescribePolicies, it indicates that a predictive scaling policy uses individually specified load and scaling metrics instead of a metric pair.

  • Describes a process type.

    For more information, see Types of processes in the Amazon EC2 Auto Scaling User Guide.

  • Describes the preferences for an instance refresh.

  • Details about an instance refresh rollback.

  • Describes a scaling policy.

  • Describes a scheduled scaling action.

  • Describes information used for one or more scheduled scaling action updates in a BatchPutScheduledUpdateGroupAction operation.

  • Describes information used to create a step adjustment for a step scaling policy.

    For the following examples, suppose that you have an alarm with a breach threshold of 50:

    • To trigger the adjustment when the metric is greater than or equal to 50 and less than 60, specify a lower bound of 0 and an upper bound of 10.

    • To trigger the adjustment when the metric is greater than 40 and less than or equal to 50, specify a lower bound of -10 and an upper bound of 0.

    There are a few rules for the step adjustments for your step policy:

    • The ranges of your step adjustments can't overlap or have a gap.

    • At most, one step adjustment can have a null lower bound. If one step adjustment has a negative lower bound, then there must be a step adjustment with a null lower bound.

    • At most, one step adjustment can have a null upper bound. If one step adjustment has a positive upper bound, then there must be a step adjustment with a null upper bound.

    • The upper and lower bound can't be null in the same step adjustment.

    For more information, see Step adjustments in the Amazon EC2 Auto Scaling User Guide.
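
    A sketch of the two step adjustments from the example above, relative to the breach threshold of 50. The scaling adjustments are placeholders, and build() is assumed to return a Result because ScalingAdjustment is required:

    ```rust
    use aws_sdk_autoscaling::types::StepAdjustment;

    fn example_step_adjustments() -> Result<Vec<StepAdjustment>, Box<dyn std::error::Error>> {
        // Bounds are relative to the breach threshold of 50, so this step covers
        // metric values in [50, 60).
        let scale_out = StepAdjustment::builder()
            .metric_interval_lower_bound(0.0)
            .metric_interval_upper_bound(10.0)
            .scaling_adjustment(1) // placeholder adjustment: add one capacity unit
            .build()?;

        // This step covers metric values in (40, 50].
        let scale_in = StepAdjustment::builder()
            .metric_interval_lower_bound(-10.0)
            .metric_interval_upper_bound(0.0)
            .scaling_adjustment(-1) // placeholder adjustment: remove one capacity unit
            .build()?;

        // A complete policy would also need steps with an unset (null) lower or
        // upper bound to cover the open-ended ranges, per the rules above.
        Ok(vec![scale_out, scale_in])
    }
    ```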

  • Describes an auto scaling process that has been suspended.

    For more information, see Types of processes in the Amazon EC2 Auto Scaling User Guide.

  • Describes a tag for an Auto Scaling group.

  • Describes a tag for an Auto Scaling group.

  • Represents a target tracking scaling policy configuration to use with Amazon EC2 Auto Scaling.

  • The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.

  • This structure defines the CloudWatch metric to return, along with the statistic and unit.

    For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts in the Amazon CloudWatch User Guide.

  • Specifies the minimum and maximum for the TotalLocalStorageGB object when you specify InstanceRequirements for an Auto Scaling group.

  • Identifying information for a traffic source.

  • Describes the state of a traffic source.

  • Specifies the minimum and maximum for the VCpuCount object when you specify InstanceRequirements for an Auto Scaling group.

  • Describes a warm pool configuration.

Enums

  • When writing a match expression against AcceleratorManufacturer, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against AcceleratorName, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against AcceleratorType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against BareMetal, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against BurstablePerformance, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against CpuManufacturer, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against InstanceGeneration, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against InstanceMetadataEndpointState, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against InstanceMetadataHttpTokensState, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against InstanceRefreshStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against LifecycleState, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against LocalStorage, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against LocalStorageType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against MetricStatistic, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against MetricType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against PredefinedLoadMetricType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against PredefinedMetricPairType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against PredefinedScalingMetricType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against PredictiveScalingMaxCapacityBreachBehavior, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against PredictiveScalingMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against RefreshStrategy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against ScaleInProtectedInstances, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against ScalingActivityStatusCode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against StandbyInstances, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against WarmPoolState, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
  • When writing a match expression against WarmPoolStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
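
Because these enums are non-exhaustive, the practical pattern is to match the variants you care about and end with a binding or wildcard arm. A minimal sketch using LifecycleState; the variant names shown are assumptions based on the service's lifecycle state values:

```rust
use aws_sdk_autoscaling::types::LifecycleState;

fn describe_lifecycle_state(state: &LifecycleState) -> &'static str {
    match state {
        LifecycleState::InService => "in service",
        LifecycleState::Pending => "launching",
        LifecycleState::Terminating => "shutting down",
        // Catches variants added in future SDK versions as well as values the
        // service returns that this SDK version does not yet model.
        other => {
            // as_str() exposes the raw string value received from the service.
            eprintln!("unhandled lifecycle state: {}", other.as_str());
            "other"
        }
    }
}
```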