// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
pub use crate::operation::create_auto_scaling_configuration::_create_auto_scaling_configuration_output::CreateAutoScalingConfigurationOutputBuilder;
pub use crate::operation::create_auto_scaling_configuration::_create_auto_scaling_configuration_input::CreateAutoScalingConfigurationInputBuilder;
/// Fluent builder constructing a request to `CreateAutoScalingConfiguration`.
///
/// <p>Create an App Runner automatic scaling configuration resource. App Runner requires this resource when you create or update App Runner services and you require non-default auto scaling settings. You can share an auto scaling configuration across multiple services.</p>
/// <p>Create multiple revisions of a configuration by calling this action multiple times using the same <code>AutoScalingConfigurationName</code>. The call returns incremental <code>AutoScalingConfigurationRevision</code> values. When you create a service and configure an auto scaling configuration resource, the service uses the latest active revision of the auto scaling configuration by default. You can optionally configure the service to use a specific revision.</p>
/// <p>Configure a higher <code>MinSize</code> to increase the spread of your App Runner service over more Availability Zones in the Amazon Web Services Region. The tradeoff is a higher minimal cost.</p>
/// <p>Configure a lower <code>MaxSize</code> to control your cost. The tradeoff is lower responsiveness during peak demand.</p>
#[derive(std::clone::Clone, std::fmt::Debug)]
pub struct CreateAutoScalingConfigurationFluentBuilder {
    handle: std::sync::Arc<crate::client::Handle>,
    inner: crate::operation::create_auto_scaling_configuration::builders::CreateAutoScalingConfigurationInputBuilder,
}
impl CreateAutoScalingConfigurationFluentBuilder {
    /// Creates a new `CreateAutoScalingConfiguration`.
    pub(crate) fn new(handle: std::sync::Arc<crate::client::Handle>) -> Self {
        Self {
            handle,
            inner: Default::default(),
        }
    }
    /// Consume this builder, creating a customizable operation that can be modified before being
    /// sent. The operation's inner [http::Request] can be modified as well.
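    ///
    /// A minimal sketch of customizing the operation before it is sent. The header name and the use
    /// of `mutate_request` on the returned `CustomizableOperation` are assumptions for illustration,
    /// not part of the generated code:
    ///
    /// ```no_run
    /// # async fn example(client: &aws_sdk_apprunner::Client) -> Result<(), Box<dyn std::error::Error>> {
    /// let output = client
    ///     .create_auto_scaling_configuration()
    ///     .auto_scaling_configuration_name("my-config")
    ///     .customize()
    ///     .await?
    ///     // Assumed available in this SDK version: mutate the inner http::Request before dispatch.
    ///     .mutate_request(|req| {
    ///         req.headers_mut()
    ///             .insert("x-example-header", http::HeaderValue::from_static("demo"));
    ///     })
    ///     .send()
    ///     .await?;
    /// println!("{:?}", output.auto_scaling_configuration());
    /// # Ok(())
    /// # }
    /// ```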
    pub async fn customize(
        self,
    ) -> std::result::Result<
        crate::client::customize::CustomizableOperation<
            crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfiguration,
            aws_http::retry::AwsResponseRetryClassifier,
        >,
        aws_smithy_http::result::SdkError<
            crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationError,
        >,
    > {
        let handle = self.handle.clone();
        let operation = self
            .inner
            .build()
            .map_err(aws_smithy_http::result::SdkError::construction_failure)?
            .make_operation(&handle.conf)
            .await
            .map_err(aws_smithy_http::result::SdkError::construction_failure)?;
        Ok(crate::client::customize::CustomizableOperation { handle, operation })
    }
    /// Sends the request and returns the response.
    ///
    /// If an error occurs, an `SdkError` will be returned with additional details that
    /// can be matched against.
    ///
    /// By default, any retryable failures will be retried twice. Retry behavior
    /// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
    /// set when configuring the client.
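    ///
    /// A minimal error-handling sketch (the parameter values and the specific error check are
    /// illustrative assumptions; adjust to the error variants your application cares about):
    ///
    /// ```no_run
    /// # async fn example(client: &aws_sdk_apprunner::Client) {
    /// match client
    ///     .create_auto_scaling_configuration()
    ///     .auto_scaling_configuration_name("my-config")
    ///     .max_concurrency(50)
    ///     .min_size(2)
    ///     .max_size(10)
    ///     .send()
    ///     .await
    /// {
    ///     Ok(output) => {
    ///         // The output carries the created auto scaling configuration resource.
    ///         println!("created: {:?}", output.auto_scaling_configuration());
    ///     }
    ///     Err(sdk_err) => {
    ///         // Convert the transport-level SdkError into the operation's service error
    ///         // to inspect which App Runner error was returned.
    ///         let service_err = sdk_err.into_service_error();
    ///         if service_err.is_invalid_request_exception() {
    ///             eprintln!("invalid request: {service_err}");
    ///         } else {
    ///             eprintln!("call failed: {service_err}");
    ///         }
    ///     }
    /// }
    /// # }
    /// ```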
    pub async fn send(
        self,
    ) -> std::result::Result<
        crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationOutput,
        aws_smithy_http::result::SdkError<
            crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationError,
        >,
    > {
        let op = self
            .inner
            .build()
            .map_err(aws_smithy_http::result::SdkError::construction_failure)?
            .make_operation(&self.handle.conf)
            .await
            .map_err(aws_smithy_http::result::SdkError::construction_failure)?;
        self.handle.client.call(op).await
    }
/// <p>A name for the auto scaling configuration. When you use it for the first time in an Amazon Web Services Region, App Runner creates revision number <code>1</code> of this name. When you use the same name in subsequent calls, App Runner creates incremental revisions of the configuration.</p> <note>
/// <p>The name <code>DefaultConfiguration</code> is reserved (it's the configuration that App Runner uses if you don't provide a custome one). You can't use it to create a new auto scaling configuration, and you can't create a revision of it.</p>
/// <p>When you want to use your own auto scaling configuration for your App Runner service, <i>create a configuration with a different name</i>, and then provide it when you create or update your service.</p>
/// </note>
pub fn auto_scaling_configuration_name(
mut self,
input: impl Into<std::string::String>,
) -> Self {
self.inner = self.inner.auto_scaling_configuration_name(input.into());
self
}
/// <p>A name for the auto scaling configuration. When you use it for the first time in an Amazon Web Services Region, App Runner creates revision number <code>1</code> of this name. When you use the same name in subsequent calls, App Runner creates incremental revisions of the configuration.</p> <note>
/// <p>The name <code>DefaultConfiguration</code> is reserved (it's the configuration that App Runner uses if you don't provide a custome one). You can't use it to create a new auto scaling configuration, and you can't create a revision of it.</p>
/// <p>When you want to use your own auto scaling configuration for your App Runner service, <i>create a configuration with a different name</i>, and then provide it when you create or update your service.</p>
/// </note>
pub fn set_auto_scaling_configuration_name(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.inner = self.inner.set_auto_scaling_configuration_name(input);
self
}
/// <p>The maximum number of concurrent requests that you want an instance to process. If the number of concurrent requests exceeds this limit, App Runner scales up your service.</p>
/// <p>Default: <code>100</code> </p>
pub fn max_concurrency(mut self, input: i32) -> Self {
self.inner = self.inner.max_concurrency(input);
self
}
/// <p>The maximum number of concurrent requests that you want an instance to process. If the number of concurrent requests exceeds this limit, App Runner scales up your service.</p>
/// <p>Default: <code>100</code> </p>
pub fn set_max_concurrency(mut self, input: std::option::Option<i32>) -> Self {
self.inner = self.inner.set_max_concurrency(input);
self
}
/// <p>The minimum number of instances that App Runner provisions for your service. The service always has at least <code>MinSize</code> provisioned instances. Some of them actively serve traffic. The rest of them (provisioned and inactive instances) are a cost-effective compute capacity reserve and are ready to be quickly activated. You pay for memory usage of all the provisioned instances. You pay for CPU usage of only the active subset.</p>
/// <p>App Runner temporarily doubles the number of provisioned instances during deployments, to maintain the same capacity for both old and new code.</p>
/// <p>Default: <code>1</code> </p>
pub fn min_size(mut self, input: i32) -> Self {
self.inner = self.inner.min_size(input);
self
}
/// <p>The minimum number of instances that App Runner provisions for your service. The service always has at least <code>MinSize</code> provisioned instances. Some of them actively serve traffic. The rest of them (provisioned and inactive instances) are a cost-effective compute capacity reserve and are ready to be quickly activated. You pay for memory usage of all the provisioned instances. You pay for CPU usage of only the active subset.</p>
/// <p>App Runner temporarily doubles the number of provisioned instances during deployments, to maintain the same capacity for both old and new code.</p>
/// <p>Default: <code>1</code> </p>
pub fn set_min_size(mut self, input: std::option::Option<i32>) -> Self {
self.inner = self.inner.set_min_size(input);
self
}
/// <p>The maximum number of instances that your service scales up to. At most <code>MaxSize</code> instances actively serve traffic for your service.</p>
/// <p>Default: <code>25</code> </p>
pub fn max_size(mut self, input: i32) -> Self {
self.inner = self.inner.max_size(input);
self
}
/// <p>The maximum number of instances that your service scales up to. At most <code>MaxSize</code> instances actively serve traffic for your service.</p>
/// <p>Default: <code>25</code> </p>
pub fn set_max_size(mut self, input: std::option::Option<i32>) -> Self {
self.inner = self.inner.set_max_size(input);
self
}
/// Appends an item to `Tags`.
///
/// To override the contents of this collection use [`set_tags`](Self::set_tags).
///
/// <p>A list of metadata items that you can associate with your auto scaling configuration resource. A tag is a key-value pair.</p>
    pub fn tags(mut self, input: crate::types::Tag) -> Self {
        self.inner = self.inner.tags(input);
        self
    }
    /// <p>A list of metadata items that you can associate with your auto scaling configuration resource. A tag is a key-value pair.</p>
    pub fn set_tags(
        mut self,
        input: std::option::Option<std::vec::Vec<crate::types::Tag>>,
    ) -> Self {
        self.inner = self.inner.set_tags(input);
        self
    }
}