// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
pub use crate::operation::create_auto_scaling_configuration::_create_auto_scaling_configuration_output::CreateAutoScalingConfigurationOutputBuilder;
pub use crate::operation::create_auto_scaling_configuration::_create_auto_scaling_configuration_input::CreateAutoScalingConfigurationInputBuilder;
impl CreateAutoScalingConfigurationInputBuilder {
/// Sends a request with this input using the given client.
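/// # Example
///
/// A minimal sketch (marked `ignore`, so it is not compiled as a doc test). It assumes an
/// already-configured client for this crate named `client`; the configuration name shown is
/// illustrative only.
///
/// ```ignore
/// // Build the input separately, then hand it to an existing client to send.
/// let input_builder =
///     crate::operation::create_auto_scaling_configuration::builders::CreateAutoScalingConfigurationInputBuilder::default()
///         .auto_scaling_configuration_name("my-config");
/// let output = input_builder.send_with(&client).await?;
/// ```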
pub async fn send_with(
self,
client: &crate::Client,
) -> ::std::result::Result<
crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationOutput,
::aws_smithy_http::result::SdkError<
crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationError,
::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
>,
> {
let mut fluent_builder = client.create_auto_scaling_configuration();
fluent_builder.inner = self;
fluent_builder.send().await
}
}
/// Fluent builder constructing a request to `CreateAutoScalingConfiguration`.
///
/// <p>Create an App Runner automatic scaling configuration resource. App Runner requires this resource when you create or update App Runner services and you require non-default auto scaling settings. You can share an auto scaling configuration across multiple services.</p>
/// <p>Create multiple revisions of a configuration by calling this action multiple times using the same <code>AutoScalingConfigurationName</code>. The call returns incremental <code>AutoScalingConfigurationRevision</code> values. When you create a service and configure an auto scaling configuration resource, the service uses the latest active revision of the auto scaling configuration by default. You can optionally configure the service to use a specific revision.</p>
/// <p>Configure a higher <code>MinSize</code> to increase the spread of your App Runner service over more Availability Zones in the Amazon Web Services Region. The tradeoff is a higher minimal cost.</p>
/// <p>Configure a lower <code>MaxSize</code> to control your cost. The tradeoff is lower responsiveness during peak demand.</p>
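/// # Example
///
/// An illustrative sketch (marked `ignore`) of creating a configuration through the fluent
/// builder. The client construction is omitted, and the name and numeric values are
/// assumptions chosen for the example.
///
/// ```ignore
/// // Build and send the request in one fluent chain.
/// let output = client
///     .create_auto_scaling_configuration()
///     .auto_scaling_configuration_name("my-config")
///     .max_concurrency(50) // scale out when an instance exceeds 50 concurrent requests
///     .min_size(2)         // keep at least two provisioned instances
///     .max_size(10)        // never run more than ten active instances
///     .send()
///     .await?;
/// ```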
#[derive(::std::clone::Clone, ::std::fmt::Debug)]
pub struct CreateAutoScalingConfigurationFluentBuilder {
handle: ::std::sync::Arc<crate::client::Handle>,
inner: crate::operation::create_auto_scaling_configuration::builders::CreateAutoScalingConfigurationInputBuilder,
config_override: ::std::option::Option<crate::config::Builder>,
}
impl CreateAutoScalingConfigurationFluentBuilder {
/// Creates a new `CreateAutoScalingConfiguration`.
pub(crate) fn new(handle: ::std::sync::Arc<crate::client::Handle>) -> Self {
Self {
handle,
inner: ::std::default::Default::default(),
config_override: ::std::option::Option::None,
}
}
/// Access the CreateAutoScalingConfiguration input builder as a reference.
pub fn as_input(&self) -> &crate::operation::create_auto_scaling_configuration::builders::CreateAutoScalingConfigurationInputBuilder {
&self.inner
}
/// Sends the request and returns the response.
///
/// If an error occurs, an `SdkError` will be returned with additional details that
/// can be matched against.
///
/// By default, any retryable failures will be retried twice. Retry behavior
/// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
/// set when configuring the client.
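/// # Example
///
/// A sketch (marked `ignore`) of inspecting the returned error. It assumes a fluent builder
/// named `fluent_builder` obtained from `client.create_auto_scaling_configuration()`; the
/// error handling shown is illustrative, not prescriptive.
///
/// ```ignore
/// match fluent_builder.send().await {
///     Ok(output) => {
///         // The response carries the created auto scaling configuration.
///         println!("created: {:?}", output);
///     }
///     Err(sdk_err) => {
///         // Narrow the transport-level error down to the operation's error type when possible.
///         eprintln!("CreateAutoScalingConfiguration failed: {}", sdk_err.into_service_error());
///     }
/// }
/// ```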
pub async fn send(
self,
) -> ::std::result::Result<
crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationOutput,
::aws_smithy_http::result::SdkError<
crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationError,
::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
>,
> {
let input = self.inner.build().map_err(::aws_smithy_http::result::SdkError::construction_failure)?;
let runtime_plugins = crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfiguration::operation_runtime_plugins(
self.handle.runtime_plugins.clone(),
&self.handle.conf,
self.config_override,
);
crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfiguration::orchestrate(&runtime_plugins, input).await
}
/// Consumes this builder, creating a customizable operation that can be modified before being
/// sent.
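/// # Example
///
/// A sketch (marked `ignore`); it assumes a fluent builder named `fluent_builder`. What can be
/// changed on the returned customizable operation (for example, attaching interceptors)
/// depends on the SDK version.
///
/// ```ignore
/// let customizable = fluent_builder.customize().await?;
/// // Modify the operation here before it is sent.
/// let output = customizable.send().await?;
/// ```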
// TODO(enableNewSmithyRuntimeCleanup): Remove `async` and `Result` once we switch to orchestrator
pub async fn customize(
self,
) -> ::std::result::Result<
crate::client::customize::orchestrator::CustomizableOperation<
crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationOutput,
crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationError,
>,
::aws_smithy_http::result::SdkError<crate::operation::create_auto_scaling_configuration::CreateAutoScalingConfigurationError>,
> {
::std::result::Result::Ok(crate::client::customize::orchestrator::CustomizableOperation {
customizable_send: ::std::boxed::Box::new(move |config_override| {
::std::boxed::Box::pin(async { self.config_override(config_override).send().await })
}),
config_override: None,
interceptors: vec![],
runtime_plugins: vec![],
})
}
pub(crate) fn config_override(mut self, config_override: impl Into<crate::config::Builder>) -> Self {
self.set_config_override(Some(config_override.into()));
self
}
pub(crate) fn set_config_override(&mut self, config_override: Option<crate::config::Builder>) -> &mut Self {
self.config_override = config_override;
self
}
/// <p>A name for the auto scaling configuration. When you use it for the first time in an Amazon Web Services Region, App Runner creates revision number <code>1</code> of this name. When you use the same name in subsequent calls, App Runner creates incremental revisions of the configuration.</p> <note>
/// <p>The name <code>DefaultConfiguration</code> is reserved (it's the configuration that App Runner uses if you don't provide a custom one). You can't use it to create a new auto scaling configuration, and you can't create a revision of it.</p>
/// <p>When you want to use your own auto scaling configuration for your App Runner service, <i>create a configuration with a different name</i>, and then provide it when you create or update your service.</p>
/// </note>
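/// # Example
///
/// A sketch (marked `ignore`) of creating a new revision by reusing an existing name. The
/// name and value shown are assumptions for the example.
///
/// ```ignore
/// // Calling the operation again with a name that already exists creates the next revision (2, 3, ...).
/// let revised = client
///     .create_auto_scaling_configuration()
///     .auto_scaling_configuration_name("my-config")
///     .max_concurrency(80)
///     .send()
///     .await?;
/// ```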
pub fn auto_scaling_configuration_name(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
self.inner = self.inner.auto_scaling_configuration_name(input.into());
self
}
/// <p>A name for the auto scaling configuration. When you use it for the first time in an Amazon Web Services Region, App Runner creates revision number <code>1</code> of this name. When you use the same name in subsequent calls, App Runner creates incremental revisions of the configuration.</p> <note>
/// <p>The name <code>DefaultConfiguration</code> is reserved (it's the configuration that App Runner uses if you don't provide a custom one). You can't use it to create a new auto scaling configuration, and you can't create a revision of it.</p>
/// <p>When you want to use your own auto scaling configuration for your App Runner service, <i>create a configuration with a different name</i>, and then provide it when you create or update your service.</p>
/// </note>
pub fn set_auto_scaling_configuration_name(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
self.inner = self.inner.set_auto_scaling_configuration_name(input);
self
}
/// <p>A name for the auto scaling configuration. When you use it for the first time in an Amazon Web Services Region, App Runner creates revision number <code>1</code> of this name. When you use the same name in subsequent calls, App Runner creates incremental revisions of the configuration.</p> <note>
/// <p>The name <code>DefaultConfiguration</code> is reserved (it's the configuration that App Runner uses if you don't provide a custom one). You can't use it to create a new auto scaling configuration, and you can't create a revision of it.</p>
/// <p>When you want to use your own auto scaling configuration for your App Runner service, <i>create a configuration with a different name</i>, and then provide it when you create or update your service.</p>
/// </note>
pub fn get_auto_scaling_configuration_name(&self) -> &::std::option::Option<::std::string::String> {
self.inner.get_auto_scaling_configuration_name()
}
/// <p>The maximum number of concurrent requests that you want an instance to process. If the number of concurrent requests exceeds this limit, App Runner scales up your service.</p>
/// <p>Default: <code>100</code> </p>
pub fn max_concurrency(mut self, input: i32) -> Self {
self.inner = self.inner.max_concurrency(input);
self
}
/// <p>The maximum number of concurrent requests that you want an instance to process. If the number of concurrent requests exceeds this limit, App Runner scales up your service.</p>
/// <p>Default: <code>100</code> </p>
pub fn set_max_concurrency(mut self, input: ::std::option::Option<i32>) -> Self {
self.inner = self.inner.set_max_concurrency(input);
self
}
/// <p>The maximum number of concurrent requests that you want an instance to process. If the number of concurrent requests exceeds this limit, App Runner scales up your service.</p>
/// <p>Default: <code>100</code> </p>
pub fn get_max_concurrency(&self) -> &::std::option::Option<i32> {
self.inner.get_max_concurrency()
}
/// <p>The minimum number of instances that App Runner provisions for your service. The service always has at least <code>MinSize</code> provisioned instances. Some of them actively serve traffic. The rest of them (provisioned and inactive instances) are a cost-effective compute capacity reserve and are ready to be quickly activated. You pay for memory usage of all the provisioned instances. You pay for CPU usage of only the active subset.</p>
/// <p>App Runner temporarily doubles the number of provisioned instances during deployments, to maintain the same capacity for both old and new code.</p>
/// <p>Default: <code>1</code> </p>
pub fn min_size(mut self, input: i32) -> Self {
self.inner = self.inner.min_size(input);
self
}
/// <p>The minimum number of instances that App Runner provisions for your service. The service always has at least <code>MinSize</code> provisioned instances. Some of them actively serve traffic. The rest of them (provisioned and inactive instances) are a cost-effective compute capacity reserve and are ready to be quickly activated. You pay for memory usage of all the provisioned instances. You pay for CPU usage of only the active subset.</p>
/// <p>App Runner temporarily doubles the number of provisioned instances during deployments, to maintain the same capacity for both old and new code.</p>
/// <p>Default: <code>1</code> </p>
pub fn set_min_size(mut self, input: ::std::option::Option<i32>) -> Self {
self.inner = self.inner.set_min_size(input);
self
}
/// <p>The minimum number of instances that App Runner provisions for your service. The service always has at least <code>MinSize</code> provisioned instances. Some of them actively serve traffic. The rest of them (provisioned and inactive instances) are a cost-effective compute capacity reserve and are ready to be quickly activated. You pay for memory usage of all the provisioned instances. You pay for CPU usage of only the active subset.</p>
/// <p>App Runner temporarily doubles the number of provisioned instances during deployments, to maintain the same capacity for both old and new code.</p>
/// <p>Default: <code>1</code> </p>
pub fn get_min_size(&self) -> &::std::option::Option<i32> {
self.inner.get_min_size()
}
/// <p>The maximum number of instances that your service scales up to. At most <code>MaxSize</code> instances actively serve traffic for your service.</p>
/// <p>Default: <code>25</code> </p>
pub fn max_size(mut self, input: i32) -> Self {
self.inner = self.inner.max_size(input);
self
}
/// <p>The maximum number of instances that your service scales up to. At most <code>MaxSize</code> instances actively serve traffic for your service.</p>
/// <p>Default: <code>25</code> </p>
pub fn set_max_size(mut self, input: ::std::option::Option<i32>) -> Self {
self.inner = self.inner.set_max_size(input);
self
}
/// <p>The maximum number of instances that your service scales up to. At most <code>MaxSize</code> instances actively serve traffic for your service.</p>
/// <p>Default: <code>25</code> </p>
pub fn get_max_size(&self) -> &::std::option::Option<i32> {
self.inner.get_max_size()
}
/// Appends an item to `Tags`.
///
/// To override the contents of this collection use [`set_tags`](Self::set_tags).
///
/// <p>A list of metadata items that you can associate with your auto scaling configuration resource. A tag is a key-value pair.</p>
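/// # Example
///
/// A sketch (marked `ignore`) of appending a single tag; the key and value shown are
/// assumptions for the example.
///
/// ```ignore
/// let tag = crate::types::Tag::builder()
///     .key("environment")
///     .value("staging")
///     .build();
/// // `tags` appends one item per call; call it repeatedly to add more tags.
/// let fluent_builder = fluent_builder.tags(tag);
/// ```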
pub fn tags(mut self, input: crate::types::Tag) -> Self {
self.inner = self.inner.tags(input);
self
}
/// <p>A list of metadata items that you can associate with your auto scaling configuration resource. A tag is a key-value pair.</p>
pub fn set_tags(mut self, input: ::std::option::Option<::std::vec::Vec<crate::types::Tag>>) -> Self {
self.inner = self.inner.set_tags(input);
self
}
/// <p>A list of metadata items that you can associate with your auto scaling configuration resource. A tag is a key-value pair.</p>
pub fn get_tags(&self) -> &::std::option::Option<::std::vec::Vec<crate::types::Tag>> {
self.inner.get_tags()
}
}