Struct aws_sdk_ecs::client::fluent_builders::UpdateService
pub struct UpdateService { /* private fields */ }
Fluent builder constructing a request to UpdateService.
Updating the task placement strategies and constraints on an Amazon ECS service remains in preview and is a Beta Service as defined by and subject to the Beta Service Participation Service Terms located at https://aws.amazon.com/service-terms ("Beta Terms"). These Beta Terms apply to your participation in this preview.
Modifies the parameters of a service.
For services using the rolling update (ECS) deployment controller, you can update the desired count, deployment configuration, network configuration, load balancers, service registries, enable ECS managed tags option, propagate tags option, task placement constraints and strategies, and task definition. When you update any of these parameters, Amazon ECS starts new tasks with the new configuration.
For services using the blue/green (CODE_DEPLOY) deployment controller, only the desired count, deployment configuration, health check grace period, task placement constraints and strategies, enable ECS managed tags option, and propagate tags can be updated using this API. If the network configuration, platform version, task definition, or load balancer need to be updated, create a new CodeDeploy deployment. For more information, see CreateDeployment in the CodeDeploy API Reference.
For services using an external deployment controller, you can update only the desired count, task placement constraints and strategies, health check grace period, enable ECS managed tags option, and propagate tags option, using this API. If the launch type, load balancer, network configuration, platform version, or task definition need to be updated, create a new task set. For more information, see CreateTaskSet.
You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.
If you have updated the Docker image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses the minimum healthy percent and maximum percent parameters (in the service's deployment configuration) to determine the deployment strategy.
If your updated Docker image uses the same tag as what is in the existing task definition for your service (for example, my_image:latest), you don't need to create a new revision of your task definition. You can update the service using the forceNewDeployment option. The new tasks launched by the deployment pull the current image/tag combination from your repository when they start.
You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.
- If minimumHealthyPercent is below 100%, the scheduler can ignore desiredCount temporarily during a deployment. For example, if desiredCount is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer.
- The maximumPercent parameter represents an upper limit on the number of running tasks during a deployment. You can use it to define the deployment batch size. For example, if desiredCount is four tasks, a maximum of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout. After this, SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.
When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic:
- Determine which of the container instances in your cluster can support your service's task definition. For example, they have the required CPU, memory, ports, and container instance attributes.
- By default, the service scheduler attempts to balance tasks across Availability Zones in this manner even though you can choose a different placement strategy:
  - Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
  - Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.

When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:
- Sort the container instances by the largest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have two, container instances in either zone B or C are considered optimal for termination.
- Stop the task on a container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the largest number of running tasks for this service.
You must have a service-linked role when you update any of the following service properties. If you specified a custom IAM role when you created the service, Amazon ECS automatically replaces the roleARN associated with the service with the ARN of your service-linked role. For more information, see Service-linked roles in the Amazon Elastic Container Service Developer Guide.
- loadBalancers
- serviceRegistries
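A minimal end-to-end sketch of this builder follows. It assumes the tokio and aws-config crates for the async runtime and credential/region resolution, and uses placeholder cluster and service names; the deployment configuration values mirror the 50%/200% example above.

```rust
use aws_sdk_ecs::model::DeploymentConfiguration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Credentials and region come from the usual environment/profile chain.
    let config = aws_config::load_from_env().await;
    let client = aws_sdk_ecs::Client::new(&config);

    // Scale the (placeholder) service to four tasks, letting the scheduler stop
    // up to half of the old tasks while it starts replacements.
    let output = client
        .update_service()
        .cluster("my-cluster") // placeholder cluster name
        .service("my-service") // placeholder service name
        .desired_count(4)
        .deployment_configuration(
            DeploymentConfiguration::builder()
                .minimum_healthy_percent(50)
                .maximum_percent(200)
                .build(),
        )
        .send()
        .await?;

    // The output carries the updated Service description.
    println!("updated service: {:?}", output.service());
    Ok(())
}
```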
Implementations
impl UpdateService
pub async fn send(self) -> Result<UpdateServiceOutput, SdkError<UpdateServiceError>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
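As a sketch of matching against the returned error (placeholder names; the SdkError re-export path, variant shape, and the generated is_* helpers can differ between SDK releases):

```rust
use aws_sdk_ecs::{types::SdkError, Client};

async fn scale_service(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    match client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        .desired_count(2)
        .send()
        .await
    {
        Ok(_) => println!("update accepted"),
        // Modeled service errors can be inspected via the generated helpers.
        Err(SdkError::ServiceError { err, .. }) if err.is_service_not_found_exception() => {
            eprintln!("no such service in the target cluster");
        }
        Err(other) => return Err(other.into()),
    }
    Ok(())
}
```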
pub fn cluster(self, input: impl Into<String>) -> Self
The short name or full Amazon Resource Name (ARN) of the cluster that your service runs on. If you do not specify a cluster, the default cluster is assumed.
pub fn set_cluster(self, input: Option<String>) -> Self
The short name or full Amazon Resource Name (ARN) of the cluster that your service runs on. If you do not specify a cluster, the default cluster is assumed.
pub fn service(self, input: impl Into<String>) -> Self
The name of the service to update.
pub fn set_service(self, input: Option<String>) -> Self
The name of the service to update.
pub fn desired_count(self, input: i32) -> Self
The number of instantiations of the task to place and keep running in your service.
pub fn set_desired_count(self, input: Option<i32>) -> Self
The number of instantiations of the task to place and keep running in your service.
pub fn task_definition(self, input: impl Into<String>) -> Self
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used. If you modify the task definition with UpdateService, Amazon ECS spawns a task with the new version of the task definition and then stops an old task after the new version is running.
pub fn set_task_definition(self, input: Option<String>) -> Self
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used. If you modify the task definition with UpdateService, Amazon ECS spawns a task with the new version of the task definition and then stops an old task after the new version is running.
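For illustration, a sketch of rolling a service to a specific revision (hypothetical family:revision and names; assumes a configured aws_sdk_ecs::Client such as the one built in the first sketch above):

```rust
use aws_sdk_ecs::Client;

async fn roll_to_revision(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        // family:revision; a bare "webapp" would resolve to the latest ACTIVE revision
        .task_definition("webapp:42")
        .send()
        .await?;
    Ok(())
}
```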
pub fn capacity_provider_strategy(self, input: CapacityProviderStrategyItem) -> Self
Appends an item to capacityProviderStrategy.
To override the contents of this collection use set_capacity_provider_strategy.
The capacity provider strategy to update the service to use.
If the service uses the default capacity provider strategy for the cluster, the service can be updated to use one or more capacity providers as opposed to the default capacity provider strategy. However, when a service is using a capacity provider strategy that's not the default capacity provider strategy, the service can't be updated to use the cluster's default capacity provider strategy.
A capacity provider strategy consists of one or more capacity providers along with the base and weight to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE or UPDATING status can be used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
pub fn set_capacity_provider_strategy(self, input: Option<Vec<CapacityProviderStrategyItem>>) -> Self
The capacity provider strategy to update the service to use.
If the service uses the default capacity provider strategy for the cluster, the service can be updated to use one or more capacity providers as opposed to the default capacity provider strategy. However, when a service is using a capacity provider strategy that's not the default capacity provider strategy, the service can't be updated to use the cluster's default capacity provider strategy.
A capacity provider strategy consists of one or more capacity providers along with the base and weight to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE or UPDATING status can be used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
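A sketch of moving a Fargate service onto a mixed capacity provider strategy (placeholder cluster/service names; FARGATE and FARGATE_SPOT are the standard Fargate capacity providers named above):

```rust
use aws_sdk_ecs::{model::CapacityProviderStrategyItem, Client};

async fn use_fargate_spot(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        // Keep at least one task on on-demand Fargate (base = 1) ...
        .capacity_provider_strategy(
            CapacityProviderStrategyItem::builder()
                .capacity_provider("FARGATE")
                .base(1)
                .weight(1)
                .build(),
        )
        // ... and weight the remaining tasks 3:1 toward Spot.
        .capacity_provider_strategy(
            CapacityProviderStrategyItem::builder()
                .capacity_provider("FARGATE_SPOT")
                .weight(3)
                .build(),
        )
        .force_new_deployment(true) // roll running tasks onto the new strategy
        .send()
        .await?;
    Ok(())
}
```

The base applies to only one provider in a strategy; weight then splits any tasks beyond the base across the providers.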
pub fn deployment_configuration(self, input: DeploymentConfiguration) -> Self
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
pub fn set_deployment_configuration(self, input: Option<DeploymentConfiguration>) -> Self
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
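For example, a sketch that pairs the rolling-update percentages with the deployment circuit breaker (placeholder names; the circuit breaker applies only to the rolling update (ECS) deployment controller):

```rust
use aws_sdk_ecs::{
    model::{DeploymentCircuitBreaker, DeploymentConfiguration},
    Client,
};

async fn enable_circuit_breaker(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        .deployment_configuration(
            DeploymentConfiguration::builder()
                .minimum_healthy_percent(100)
                .maximum_percent(200)
                // Stop, and roll back, a deployment that can't reach a steady state.
                .deployment_circuit_breaker(
                    DeploymentCircuitBreaker::builder().enable(true).rollback(true).build(),
                )
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```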
pub fn network_configuration(self, input: NetworkConfiguration) -> Self
An object representing the network configuration for the service.
pub fn set_network_configuration(self, input: Option<NetworkConfiguration>) -> Self
An object representing the network configuration for the service.
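A sketch of replacing the awsvpc network configuration; the subnet and security group IDs, cluster, and service names are placeholders, and only rolling-update (ECS) services can change networking through this API, as noted above:

```rust
use aws_sdk_ecs::{
    model::{AssignPublicIp, AwsVpcConfiguration, NetworkConfiguration},
    Client,
};

async fn update_networking(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    // awsvpc configuration with placeholder subnet and security group IDs.
    let vpc = AwsVpcConfiguration::builder()
        .subnets("subnet-0123456789abcdef0")
        .security_groups("sg-0123456789abcdef0")
        .assign_public_ip(AssignPublicIp::Disabled)
        .build();

    client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        .network_configuration(NetworkConfiguration::builder().awsvpc_configuration(vpc).build())
        .send()
        .await?;
    Ok(())
}
```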
pub fn placement_constraints(self, input: PlacementConstraint) -> Self
Appends an item to placementConstraints.
To override the contents of this collection use set_placement_constraints.
An array of task placement constraint objects to update the service to use. If no value is specified, the existing placement constraints for the service will remain unchanged. If this value is specified, it will override any existing placement constraints defined for the service. To remove all existing placement constraints, specify an empty array.
You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
pub fn set_placement_constraints(self, input: Option<Vec<PlacementConstraint>>) -> Self
An array of task placement constraint objects to update the service to use. If no value is specified, the existing placement constraints for the service will remain unchanged. If this value is specified, it will override any existing placement constraints defined for the service. To remove all existing placement constraints, specify an empty array.
You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
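A sketch of replacing the service's constraints with a single memberOf constraint (placeholder names and a sample cluster-query expression; this assumes the generated builder exposes the type member as r#type):

```rust
use aws_sdk_ecs::{
    model::{PlacementConstraint, PlacementConstraintType},
    Client,
};

async fn constrain_to_zones(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        // Constraints passed here replace the service's existing constraints.
        .placement_constraints(
            PlacementConstraint::builder()
                .r#type(PlacementConstraintType::MemberOf)
                .expression("attribute:ecs.availability-zone in [us-east-1a, us-east-1b]")
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```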
pub fn placement_strategy(self, input: PlacementStrategy) -> Self
Appends an item to placementStrategy.
To override the contents of this collection use set_placement_strategy.
The task placement strategy objects to update the service to use. If no value is specified, the existing placement strategy for the service will remain unchanged. If this value is specified, it will override the existing placement strategy defined for the service. To remove an existing placement strategy, specify an empty object.
You can specify a maximum of five strategy rules for each service.
pub fn set_placement_strategy(self, input: Option<Vec<PlacementStrategy>>) -> Self
The task placement strategy objects to update the service to use. If no value is specified, the existing placement strategy for the service will remain unchanged. If this value is specified, it will override the existing placement strategy defined for the service. To remove an existing placement strategy, specify an empty object.
You can specify a maximum of five strategy rules for each service.
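A sketch that spreads tasks across Availability Zones and then binpacks on memory (placeholder names; again assuming the r#type builder method for the type member):

```rust
use aws_sdk_ecs::{
    model::{PlacementStrategy, PlacementStrategyType},
    Client,
};

async fn spread_then_binpack(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        // Spread across zones first ...
        .placement_strategy(
            PlacementStrategy::builder()
                .r#type(PlacementStrategyType::Spread)
                .field("attribute:ecs.availability-zone")
                .build(),
        )
        // ... then binpack on memory within each zone.
        .placement_strategy(
            PlacementStrategy::builder()
                .r#type(PlacementStrategyType::Binpack)
                .field("memory")
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```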
pub fn platform_version(self, input: impl Into<String>) -> Self
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If a platform version is not specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
pub fn set_platform_version(self, input: Option<String>) -> Self
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If a platform version is not specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
pub fn force_new_deployment(self, input: bool) -> Self
Determines whether to force a new deployment of the service. By default, deployments aren't forced. You can use this option to start a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination (my_image:latest) or to roll Fargate tasks onto a newer platform version.
pub fn set_force_new_deployment(self, input: Option<bool>) -> Self
Determines whether to force a new deployment of the service. By default, deployments aren't forced. You can use this option to start a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination (my_image:latest) or to roll Fargate tasks onto a newer platform version.
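A sketch of the my_image:latest scenario described above: no task-definition change, just a forced deployment so tasks re-pull the tag, and here the tasks are also rolled onto a newer Fargate platform version (placeholder names):

```rust
use aws_sdk_ecs::Client;

async fn force_redeploy(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        .platform_version("1.4.0") // Fargate services only
        .force_new_deployment(true)
        .send()
        .await?;
    Ok(())
}
```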
pub fn health_check_grace_period_seconds(self, input: i32) -> Self
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only valid if your service is configured to use a load balancer. If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds. During that time, the Amazon ECS service scheduler ignores the Elastic Load Balancing health check status. This grace period can prevent the ECS service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
pub fn set_health_check_grace_period_seconds(self, input: Option<i32>) -> Self
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only valid if your service is configured to use a load balancer. If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds. During that time, the Amazon ECS service scheduler ignores the Elastic Load Balancing health check status. This grace period can prevent the ECS service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
pub fn enable_execute_command(self, input: bool) -> Self
If true, this enables execute command functionality on all task containers.
If you do not want to override the value that was set when the service was created, you can set this to null when performing this action.
pub fn set_enable_execute_command(self, input: Option<bool>) -> Self
If true, this enables execute command functionality on all task containers.
If you do not want to override the value that was set when the service was created, you can set this to null when performing this action.
pub fn enable_ecs_managed_tags(self, input: bool) -> Self
Determines whether to turn on Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags.
pub fn set_enable_ecs_managed_tags(self, input: Option<bool>) -> Self
Determines whether to turn on Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags.
pub fn load_balancers(self, input: LoadBalancer) -> Self
Appends an item to loadBalancers.
To override the contents of this collection use set_load_balancers.
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
When you add, update, or remove a load balancer configuration, Amazon ECS starts new tasks with the updated Elastic Load Balancing configuration, and then stops the old tasks when the new tasks are running.
For services that use rolling updates, you can add, update, or remove Elastic Load Balancing target groups. You can update from a single target group to multiple target groups and from multiple target groups to a single target group.
For services that use blue/green deployments, you can update Elastic Load Balancing target groups by using CreateDeployment through CodeDeploy. Note that multiple target groups are not supported for blue/green deployments. For more information, see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services that use the external deployment controller, you can add, update, or remove load balancers by using CreateTaskSet. Note that multiple target groups are not supported for external deployments. For more information, see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
You can remove existing loadBalancers by passing an empty list.
pub fn set_load_balancers(self, input: Option<Vec<LoadBalancer>>) -> Self
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
When you add, update, or remove a load balancer configuration, Amazon ECS starts new tasks with the updated Elastic Load Balancing configuration, and then stops the old tasks when the new tasks are running.
For services that use rolling updates, you can add, update, or remove Elastic Load Balancing target groups. You can update from a single target group to multiple target groups and from multiple target groups to a single target group.
For services that use blue/green deployments, you can update Elastic Load Balancing target groups by using CreateDeployment through CodeDeploy. Note that multiple target groups are not supported for blue/green deployments. For more information, see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services that use the external deployment controller, you can add, update, or remove load balancers by using CreateTaskSet. Note that multiple target groups are not supported for external deployments. For more information, see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
You can remove existing loadBalancers by passing an empty list.
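A sketch of swapping a rolling-update service onto a different target group (the target group ARN, account ID, container name, and port are placeholders):

```rust
use aws_sdk_ecs::{model::LoadBalancer, Client};

async fn replace_target_group(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        .load_balancers(
            LoadBalancer::builder()
                // Placeholder Application Load Balancer target group ARN.
                .target_group_arn(
                    "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef",
                )
                .container_name("web") // placeholder name from the task definition
                .container_port(8080)
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```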
pub fn propagate_tags(self, input: PropagateTags) -> Self
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags.
pub fn set_propagate_tags(self, input: Option<PropagateTags>) -> Self
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags.
pub fn service_registries(self, input: ServiceRegistry) -> Self
Appends an item to serviceRegistries.
To override the contents of this collection use set_service_registries.
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
When you add, update, or remove the service registries configuration, Amazon ECS starts new tasks with the updated service registries configuration, and then stops the old tasks when the new tasks are running.
You can remove existing serviceRegistries by passing an empty list.
pub fn set_service_registries(self, input: Option<Vec<ServiceRegistry>>) -> Self
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
When you add, update, or remove the service registries configuration, Amazon ECS starts new tasks with the updated service registries configuration, and then stops the old tasks when the new tasks are running.
You can remove existing serviceRegistries by passing an empty list.
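A sketch of pointing the service at a (placeholder) AWS Cloud Map service registry; passing Some(Vec::new()) to set_service_registries instead would remove all registries, per the note above:

```rust
use aws_sdk_ecs::{model::ServiceRegistry, Client};

async fn update_service_discovery(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .update_service()
        .cluster("my-cluster") // placeholder
        .service("my-service") // placeholder
        .service_registries(
            ServiceRegistry::builder()
                // Placeholder AWS Cloud Map service ARN.
                .registry_arn(
                    "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-0123456789abcdef",
                )
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```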
Trait Implementations
impl Clone for UpdateService
fn clone(&self) -> UpdateService
Returns a copy of the value.
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
Auto Trait Implementations
impl !RefUnwindSafe for UpdateService
impl Send for UpdateService
impl Sync for UpdateService
impl Unpin for UpdateService
impl !UnwindSafe for UpdateService
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.