With Application Auto Scaling, you can configure automatic scaling for the following resources:
- Amazon AppStream 2.0 fleets
- Amazon Aurora Replicas
- Amazon Comprehend document classification and entity recognizer endpoints
- Amazon DynamoDB tables and global secondary indexes throughput capacity
- Amazon ECS services
- Amazon ElastiCache replication groups (Redis OSS and Valkey) and Memcached clusters
- Amazon EMR clusters
- Amazon Keyspaces (for Apache Cassandra) tables
- Lambda function provisioned concurrency
- Amazon Managed Streaming for Apache Kafka broker storage
- Amazon Neptune clusters
- Amazon SageMaker endpoint variants
- Amazon SageMaker inference components
- Amazon SageMaker serverless endpoint provisioned concurrency
- Spot Fleets (Amazon EC2)
- Pool of WorkSpaces
- Custom resources provided by your own applications or services
To learn more about Application Auto Scaling, see the Application Auto Scaling User Guide.
§API Summary
The Application Auto Scaling service API includes three key sets of actions:
- Register and manage scalable targets - Register Amazon Web Services or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets.
- Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history.
- Suspend and resume scaling - Temporarily suspend and later resume automatic scaling by calling the RegisterScalableTarget API action for any Application Auto Scaling scalable target. You can suspend and resume (individually or in combination) scale-out activities that are triggered by a scaling policy, scale-in activities that are triggered by a scaling policy, and scheduled scaling.
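As a rough sketch of how the first and third of these look with this crate's fluent builders (scaling policies and scheduled actions are configured similarly with PutScalingPolicy and PutScheduledAction), the following registers an ECS service as a scalable target and then suspends its scaling activities. The cluster name, service name, capacity limits, and function name are placeholder values, not part of the upstream documentation.

use aws_sdk_applicationautoscaling as applicationautoscaling;
use aws_sdk_applicationautoscaling::types::{ScalableDimension, ServiceNamespace, SuspendedState};

async fn register_and_suspend(
    client: &applicationautoscaling::Client,
) -> Result<(), applicationautoscaling::Error> {
    // Register the ECS service's desired count as a scalable target,
    // with placeholder minimum and maximum capacity limits.
    client
        .register_scalable_target()
        .service_namespace(ServiceNamespace::Ecs)
        .resource_id("service/my-cluster/my-service")
        .scalable_dimension(ScalableDimension::EcsServiceDesiredCount)
        .min_capacity(1)
        .max_capacity(10)
        .send()
        .await?;

    // Temporarily suspend scale-out, scale-in, and scheduled scaling by
    // updating the same scalable target with a suspended state.
    client
        .register_scalable_target()
        .service_namespace(ServiceNamespace::Ecs)
        .resource_id("service/my-cluster/my-service")
        .scalable_dimension(ScalableDimension::EcsServiceDesiredCount)
        .suspended_state(
            SuspendedState::builder()
                .dynamic_scaling_out_suspended(true)
                .dynamic_scaling_in_suspended(true)
                .scheduled_scaling_suspended(true)
                .build(),
        )
        .send()
        .await?;

    Ok(())
}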
§Getting Started
Examples are available for many services and operations; check out the examples folder in GitHub.
The SDK provides one crate per AWS service. You must add Tokio as a dependency within your Rust project to execute asynchronous code. To add aws-sdk-applicationautoscaling to your project, add the following to your Cargo.toml file:
[dependencies]
aws-config = { version = "1.1.7", features = ["behavior-version-latest"] }
aws-sdk-applicationautoscaling = "1.71.0"
tokio = { version = "1", features = ["full"] }
Then in code, a client can be created with the following:
use aws_sdk_applicationautoscaling as applicationautoscaling;

#[::tokio::main]
async fn main() -> Result<(), applicationautoscaling::Error> {
    let config = aws_config::load_from_env().await;
    let client = aws_sdk_applicationautoscaling::Client::new(&config);

    // ... make some calls with the client

    Ok(())
}
See the client documentation for information on what calls can be made, and the inputs and outputs for each of those calls.
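For instance, a call to DescribeScalableTargets with the client created above might look like the following sketch; the choice of the ecs namespace is only an example.

use aws_sdk_applicationautoscaling as applicationautoscaling;
use aws_sdk_applicationautoscaling::types::ServiceNamespace;

#[::tokio::main]
async fn main() -> Result<(), applicationautoscaling::Error> {
    let config = aws_config::load_from_env().await;
    let client = applicationautoscaling::Client::new(&config);

    // List the scalable targets registered in the ECS namespace.
    let resp = client
        .describe_scalable_targets()
        .service_namespace(ServiceNamespace::Ecs)
        .send()
        .await?;

    for target in resp.scalable_targets() {
        println!(
            "{:?}: min {:?}, max {:?}",
            target.resource_id(),
            target.min_capacity(),
            target.max_capacity()
        );
    }

    Ok(())
}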
§Using the SDK
Until the SDK is released, we will be adding information about using the SDK to the Developer Guide. Feel free to suggest additional sections for the guide by opening an issue and describing what you are trying to do.
§Getting Help
- GitHub discussions - For ideas, RFCs & general questions
- GitHub issues - For bug reports & feature requests
- Generated Docs (latest version)
- Usage examples
§Crate Organization
The entry point for most customers will be Client, which exposes one method for each API offered by Application Auto Scaling. The return value of each of these methods is a “fluent builder”, where the different inputs for that API are added by builder-style function call chaining, followed by calling send() to get a Future that will result in either a successful output or a SdkError.
Some of these API inputs may be structs or enums to provide more complex structured information. These structs and enums live in types. There are some simpler types for representing data such as date times or binary blobs that live in primitives.
All types required to configure a client via the Config struct live in config.
The operation module has a submodule for every API, and in each submodule is the input, output, and error type for that API, as well as builders to construct each of those.
There is a top-level Error type that encompasses all the errors that the client can return. Any other error type can be converted to this Error type via the From trait.
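As a brief sketch of that flow, the helper below chains inputs on the DescribeScalingPolicies fluent builder, awaits send(), and relies on that From conversion so that ? turns the operation's SdkError into the top-level Error. The function name and the choice of the ecs namespace are illustrative only.

use aws_sdk_applicationautoscaling as applicationautoscaling;
use aws_sdk_applicationautoscaling::types::ServiceNamespace;

// Returning the crate's top-level Error lets `?` convert the
// SdkError produced by send() via the `From` impl described above.
async fn print_policy_names(
    client: &applicationautoscaling::Client,
) -> Result<(), applicationautoscaling::Error> {
    // Inputs are added by builder-style chaining; send() returns a
    // future that resolves to the output or an SdkError.
    let output = client
        .describe_scaling_policies()
        .service_namespace(ServiceNamespace::Ecs)
        .send()
        .await?;

    for policy in output.scaling_policies() {
        println!("{:?}", policy.policy_name());
    }
    Ok(())
}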
The other modules within this crate are not required for normal usage.
Modules§
- client
- Client for calling Application Auto Scaling.
- config
- Configuration for Application Auto Scaling.
- error
- Common errors and error handling utilities.
- meta
- Information about this crate.
- operation
- All operations that this crate can perform.
- primitives
- Primitives such as Blob or DateTime used by other types.
- types
- Data structures used by operation inputs/outputs.
Structs§
- Client
- Client for Application Auto Scaling
- Config
- Configuration for an aws_sdk_applicationautoscaling service client.
Enums§
- Error
- All possible error types for this service.