pub struct Builder { /* private fields */ }

A builder for ModifyCacheClusterInput.
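
For orientation, here is a minimal, hedged sketch of using this builder directly; the cluster identifier and values are hypothetical, and in most applications you reach the same setters through the fluent Client::modify_cache_cluster builder shown in the repository examples below.

use aws_sdk_elasticache::input::ModifyCacheClusterInput;

fn build_modify_input() -> Result<(), Box<dyn std::error::Error>> {
    // Every field is optional until build(); build() returns an error only if the
    // input cannot be constructed.
    let _input = ModifyCacheClusterInput::builder()
        .cache_cluster_id("my-memcached-cluster") // hypothetical cluster identifier
        .num_cache_nodes(5)                       // desired total node count
        .apply_immediately(true)                  // apply now rather than in the window
        .build()?;
    Ok(())
}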

Implementations§

The cluster identifier. This value is stored as a lowercase string.

Examples found in repository?
src/client.rs (line 8158)
        pub fn cache_cluster_id(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.cache_cluster_id(input.into());
            self
        }

The cluster identifier. This value is stored as a lowercase string.

Examples found in repository?
src/client.rs (line 8166)
        pub fn set_cache_cluster_id(
            mut self,
            input: std::option::Option<std::string::String>,
        ) -> Self {
            self.inner = self.inner.set_cache_cluster_id(input);
            self
        }

The number of cache nodes that the cluster should have. If the value for NumCacheNodes is greater than the sum of the number of current cache nodes and the number of cache nodes pending creation (which may be zero), more nodes are added. If the value is less than the number of existing cache nodes, nodes are removed. If the value is equal to the number of current cache nodes, any pending add or remove requests are canceled.

If you are removing cache nodes, you must use the CacheNodeIdsToRemove parameter to provide the IDs of the specific cache nodes to remove.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 40.

Adding or removing Memcached cache nodes can be applied immediately or as a pending operation (see ApplyImmediately).

A pending operation to modify the number of cache nodes in a cluster during its maintenance window, whether by adding or removing nodes in accordance with the scale out architecture, is not queued. The customer's latest request to add or remove nodes to the cluster overrides any previous pending operations to modify the number of cache nodes in the cluster. For example, a request to remove 2 nodes would override a previous pending operation to remove 3 nodes. Similarly, a request to add 2 nodes would override a previous pending operation to remove 3 nodes and vice versa. As Memcached cache nodes may now be provisioned in different Availability Zones with flexible cache node placement, a request to add nodes does not automatically override a previous pending operation to add nodes. The customer can modify the previous pending operation to add more nodes or explicitly cancel the pending request and retry the new request. To cancel pending operations to modify the number of cache nodes in a cluster, use the ModifyCacheCluster request and set NumCacheNodes equal to the number of cache nodes currently in the cluster.

Examples found in repository?
src/client.rs (line 8176)
        pub fn num_cache_nodes(mut self, input: i32) -> Self {
            self.inner = self.inner.num_cache_nodes(input);
            self
        }
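
As a concrete illustration of the cancellation rule above, a hedged sketch using the fluent client (the cluster identifier and current node count of 3 are hypothetical):

async fn cancel_pending_node_changes(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    // Setting NumCacheNodes equal to the current number of nodes cancels any
    // pending add or remove operations on the cluster.
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-memcached-cluster")
        .num_cache_nodes(3) // equals the number of nodes currently in the cluster
        .send()
        .await?;
    Ok(())
}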

The number of cache nodes that the cluster should have. If the value for NumCacheNodes is greater than the sum of the number of current cache nodes and the number of cache nodes pending creation (which may be zero), more nodes are added. If the value is less than the number of existing cache nodes, nodes are removed. If the value is equal to the number of current cache nodes, any pending add or remove requests are canceled.

If you are removing cache nodes, you must use the CacheNodeIdsToRemove parameter to provide the IDs of the specific cache nodes to remove.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 40.

Adding or removing Memcached cache nodes can be applied immediately or as a pending operation (see ApplyImmediately).

A pending operation to modify the number of cache nodes in a cluster during its maintenance window, whether by adding or removing nodes in accordance with the scale out architecture, is not queued. The customer's latest request to add or remove nodes to the cluster overrides any previous pending operations to modify the number of cache nodes in the cluster. For example, a request to remove 2 nodes would override a previous pending operation to remove 3 nodes. Similarly, a request to add 2 nodes would override a previous pending operation to remove 3 nodes and vice versa. As Memcached cache nodes may now be provisioned in different Availability Zones with flexible cache node placement, a request to add nodes does not automatically override a previous pending operation to add nodes. The customer can modify the previous pending operation to add more nodes or explicitly cancel the pending request and retry the new request. To cancel pending operations to modify the number of cache nodes in a cluster, use the ModifyCacheCluster request and set NumCacheNodes equal to the number of cache nodes currently in the cluster.

Examples found in repository?
src/client.rs (line 8186)
        pub fn set_num_cache_nodes(mut self, input: std::option::Option<i32>) -> Self {
            self.inner = self.inner.set_num_cache_nodes(input);
            self
        }

Appends an item to cache_node_ids_to_remove.

To override the contents of this collection use set_cache_node_ids_to_remove.

A list of cache node IDs to be removed. A node ID is a numeric identifier (0001, 0002, etc.). This parameter is only valid when NumCacheNodes is less than the existing number of cache nodes. The number of cache node IDs supplied in this parameter must match the difference between the existing number of cache nodes in the cluster or pending cache nodes, whichever is greater, and the value of NumCacheNodes in the request.

For example: If you have 3 active cache nodes, 7 pending cache nodes, and the number of cache nodes in this ModifyCacheCluster call is 5, you must list 2 (7 - 5) cache node IDs to remove.

Examples found in repository?
src/client.rs (line 8196)
        pub fn cache_node_ids_to_remove(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.cache_node_ids_to_remove(input.into());
            self
        }
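
A hedged sketch of the arithmetic above (3 active nodes, 7 pending, a target of 5, so 7 - 5 = 2 node IDs must be listed); the cluster identifier and node IDs are hypothetical:

async fn remove_two_nodes(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-memcached-cluster")
        .num_cache_nodes(5)
        // cache_node_ids_to_remove appends one ID per call.
        .cache_node_ids_to_remove("0006")
        .cache_node_ids_to_remove("0007")
        .send()
        .await?;
    Ok(())
}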

A list of cache node IDs to be removed. A node ID is a numeric identifier (0001, 0002, etc.). This parameter is only valid when NumCacheNodes is less than the existing number of cache nodes. The number of cache node IDs supplied in this parameter must match the difference between the existing number of cache nodes in the cluster or pending cache nodes, whichever is greater, and the value of NumCacheNodes in the request.

For example: If you have 3 active cache nodes, 7 pending cache nodes, and the number of cache nodes in this ModifyCacheCluster call is 5, you must list 2 (7 - 5) cache node IDs to remove.

Examples found in repository?
src/client.rs (line 8205)
        pub fn set_cache_node_ids_to_remove(
            mut self,
            input: std::option::Option<std::vec::Vec<std::string::String>>,
        ) -> Self {
            self.inner = self.inner.set_cache_node_ids_to_remove(input);
            self
        }

Specifies whether the new nodes in this Memcached cluster are all created in a single Availability Zone or created across multiple Availability Zones.

Valid values: single-az | cross-az.

This option is only supported for Memcached clusters.

You cannot specify single-az if the Memcached cluster already has cache nodes in different Availability Zones. If cross-az is specified, existing Memcached nodes remain in their current Availability Zone.

Only newly created nodes are located in different Availability Zones.

Examples found in repository?
src/client.rs (line 8215)
        pub fn az_mode(mut self, input: crate::model::AzMode) -> Self {
            self.inner = self.inner.az_mode(input);
            self
        }

Specifies whether the new nodes in this Memcached cluster are all created in a single Availability Zone or created across multiple Availability Zones.

Valid values: single-az | cross-az.

This option is only supported for Memcached clusters.

You cannot specify single-az if the Memcached cluster already has cache nodes in different Availability Zones. If cross-az is specified, existing Memcached nodes remain in their current Availability Zone.

Only newly created nodes are located in different Availability Zones.

Examples found in repository?
src/client.rs (line 8225)
        pub fn set_az_mode(mut self, input: std::option::Option<crate::model::AzMode>) -> Self {
            self.inner = self.inner.set_az_mode(input);
            self
        }

Appends an item to new_availability_zones.

To override the contents of this collection use set_new_availability_zones.

This option is only supported on Memcached clusters.

The list of Availability Zones where the new Memcached cache nodes are created.

This parameter is only valid when NumCacheNodes in the request is greater than the sum of the number of active cache nodes and the number of cache nodes pending creation (which may be zero). The number of Availability Zones supplied in this list must match the number of cache nodes being added in this request.

Scenarios:

  • Scenario 1: You have 3 active nodes and wish to add 2 nodes. Specify NumCacheNodes=5 (3 + 2) and optionally specify two Availability Zones for the two new nodes.

  • Scenario 2: You have 3 active nodes and 2 nodes pending creation (from the scenario 1 call) and want to add 1 more node. Specify NumCacheNodes=6 ((3 + 2) + 1) and optionally specify an Availability Zone for the new node.

  • Scenario 3: You want to cancel all pending operations. Specify NumCacheNodes=3 to cancel all pending operations.

The Availability Zone placement of nodes pending creation cannot be modified. If you wish to cancel any nodes pending creation, add 0 nodes by setting NumCacheNodes to the number of current nodes.

If cross-az is specified, existing Memcached nodes remain in their current Availability Zone. Only newly created nodes can be located in different Availability Zones. For guidance on how to move existing Memcached nodes to different Availability Zones, see the Availability Zone Considerations section of Cache Node Considerations for Memcached.

Impact of new add/remove requests upon pending requests

  • Scenario-1

    • Pending Action: Delete

    • New Request: Delete

    • Result: The new delete, pending or immediate, replaces the pending delete.

  • Scenario-2

    • Pending Action: Delete

    • New Request: Create

    • Result: The new create, pending or immediate, replaces the pending delete.

  • Scenario-3

    • Pending Action: Create

    • New Request: Delete

    • Result: The new delete, pending or immediate, replaces the pending create.

  • Scenario-4

    • Pending Action: Create

    • New Request: Create

    • Result: The new create is added to the pending create.

      Important: If the new create request is Apply Immediately - Yes, all creates are performed immediately. If the new create request is Apply Immediately - No, all creates are pending.

Examples found in repository?
src/client.rs (line 8275)
        pub fn new_availability_zones(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.new_availability_zones(input.into());
            self
        }
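
A hedged sketch of Scenario 1 above, combining az_mode with new_availability_zones: 3 active nodes plus 2 new ones spread across zones. The cluster identifier and zone names are hypothetical, and the AzMode::CrossAz variant spelling follows the crate's generated model as I understand it.

use aws_sdk_elasticache::model::AzMode;

async fn add_two_nodes_across_azs(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-memcached-cluster")
        .az_mode(AzMode::CrossAz)             // allow new nodes in multiple Availability Zones
        .num_cache_nodes(5)                   // 3 existing + 2 new
        .new_availability_zones("us-west-2a") // one zone per node being added
        .new_availability_zones("us-west-2b")
        .apply_immediately(true)
        .send()
        .await?;
    Ok(())
}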

This option is only supported on Memcached clusters.

The list of Availability Zones where the new Memcached cache nodes are created.

This parameter is only valid when NumCacheNodes in the request is greater than the sum of the number of active cache nodes and the number of cache nodes pending creation (which may be zero). The number of Availability Zones supplied in this list must match the number of cache nodes being added in this request.

Scenarios:

  • Scenario 1: You have 3 active nodes and wish to add 2 nodes. Specify NumCacheNodes=5 (3 + 2) and optionally specify two Availability Zones for the two new nodes.

  • Scenario 2: You have 3 active nodes and 2 nodes pending creation (from the scenario 1 call) and want to add 1 more node. Specify NumCacheNodes=6 ((3 + 2) + 1) and optionally specify an Availability Zone for the new node.

  • Scenario 3: You want to cancel all pending operations. Specify NumCacheNodes=3 to cancel all pending operations.

The Availability Zone placement of nodes pending creation cannot be modified. If you wish to cancel any nodes pending creation, add 0 nodes by setting NumCacheNodes to the number of current nodes.

If cross-az is specified, existing Memcached nodes remain in their current Availability Zone. Only newly created nodes can be located in different Availability Zones. For guidance on how to move existing Memcached nodes to different Availability Zones, see the Availability Zone Considerations section of Cache Node Considerations for Memcached.

Impact of new add/remove requests upon pending requests

  • Scenario-1

    • Pending Action: Delete

    • New Request: Delete

    • Result: The new delete, pending or immediate, replaces the pending delete.

  • Scenario-2

    • Pending Action: Delete

    • New Request: Create

    • Result: The new create, pending or immediate, replaces the pending delete.

  • Scenario-3

    • Pending Action: Create

    • New Request: Delete

    • Result: The new delete, pending or immediate, replaces the pending create.

  • Scenario-4

    • Pending Action: Create

    • New Request: Create

    • Result: The new create is added to the pending create.

      Important: If the new create request is Apply Immediately - Yes, all creates are performed immediately. If the new create request is Apply Immediately - No, all creates are pending.

Examples found in repository?
src/client.rs (line 8324)
        pub fn set_new_availability_zones(
            mut self,
            input: std::option::Option<std::vec::Vec<std::string::String>>,
        ) -> Self {
            self.inner = self.inner.set_new_availability_zones(input);
            self
        }

Appends an item to cache_security_group_names.

To override the contents of this collection use set_cache_security_group_names.

A list of cache security group names to authorize on this cluster. This change is asynchronously applied as soon as possible.

You can use this parameter only with clusters that are created outside of an Amazon Virtual Private Cloud (Amazon VPC).

Constraints: Must contain no more than 255 alphanumeric characters. Must not be "Default".

Examples found in repository?
src/client.rs (line 8335)
        pub fn cache_security_group_names(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.cache_security_group_names(input.into());
            self
        }

A list of cache security group names to authorize on this cluster. This change is asynchronously applied as soon as possible.

You can use this parameter only with clusters that are created outside of an Amazon Virtual Private Cloud (Amazon VPC).

Constraints: Must contain no more than 255 alphanumeric characters. Must not be "Default".

Examples found in repository?
src/client.rs (line 8345)
        pub fn set_cache_security_group_names(
            mut self,
            input: std::option::Option<std::vec::Vec<std::string::String>>,
        ) -> Self {
            self.inner = self.inner.set_cache_security_group_names(input);
            self
        }

Appends an item to security_group_ids.

To override the contents of this collection use set_security_group_ids.

Specifies the VPC Security Groups associated with the cluster.

This parameter can be used only with clusters that are created in an Amazon Virtual Private Cloud (Amazon VPC).

Examples found in repository?
src/client.rs (line 8355)
        pub fn security_group_ids(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.security_group_ids(input.into());
            self
        }

Specifies the VPC Security Groups associated with the cluster.

This parameter can be used only with clusters that are created in an Amazon Virtual Private Cloud (Amazon VPC).

Examples found in repository?
src/client.rs (line 8364)
        pub fn set_security_group_ids(
            mut self,
            input: std::option::Option<std::vec::Vec<std::string::String>>,
        ) -> Self {
            self.inner = self.inner.set_security_group_ids(input);
            self
        }
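
A hedged sketch of replacing the VPC security groups on a cluster; the group IDs are hypothetical, and clusters created outside a VPC would use cache_security_group_names instead:

async fn update_security_groups(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-memcached-cluster")
        // security_group_ids appends one ID per call.
        .security_group_ids("sg-0123456789abcdef0")
        .security_group_ids("sg-0abcdef1234567890")
        .send()
        .await?;
    Ok(())
}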

Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.

Valid values for ddd are:

  • sun

  • mon

  • tue

  • wed

  • thu

  • fri

  • sat

Example: sun:23:00-mon:01:30

Examples found in repository?
src/client.rs (line 8383)
        pub fn preferred_maintenance_window(
            mut self,
            input: impl Into<std::string::String>,
        ) -> Self {
            self.inner = self.inner.preferred_maintenance_window(input.into());
            self
        }
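
A hedged sketch of the window format, using the example range from the documentation above (the cluster identifier is hypothetical):

async fn set_maintenance_window(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-memcached-cluster")
        // ddd:hh24:mi-ddd:hh24:mi in UTC; the window must cover at least 60 minutes.
        .preferred_maintenance_window("sun:23:00-mon:01:30")
        .send()
        .await?;
    Ok(())
}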

Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.

Valid values for ddd are:

  • sun

  • mon

  • tue

  • wed

  • thu

  • fri

  • sat

Example: sun:23:00-mon:01:30

Examples found in repository?
src/client.rs (line 8402)
        pub fn set_preferred_maintenance_window(
            mut self,
            input: std::option::Option<std::string::String>,
        ) -> Self {
            self.inner = self.inner.set_preferred_maintenance_window(input);
            self
        }

The Amazon Resource Name (ARN) of the Amazon SNS topic to which notifications are sent.

The Amazon SNS topic owner must be the same as the cluster owner.

Examples found in repository?
src/client.rs (line 8409)
        pub fn notification_topic_arn(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.notification_topic_arn(input.into());
            self
        }

The Amazon Resource Name (ARN) of the Amazon SNS topic to which notifications are sent.

The Amazon SNS topic owner must be the same as the cluster owner.

Examples found in repository?
src/client.rs (line 8419)
        pub fn set_notification_topic_arn(
            mut self,
            input: std::option::Option<std::string::String>,
        ) -> Self {
            self.inner = self.inner.set_notification_topic_arn(input);
            self
        }

The name of the cache parameter group to apply to this cluster. This change is asynchronously applied as soon as possible for parameters when the ApplyImmediately parameter is specified as true for this request.

Examples found in repository?
src/client.rs (line 8424)
        pub fn cache_parameter_group_name(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.cache_parameter_group_name(input.into());
            self
        }

The name of the cache parameter group to apply to this cluster. This change is asynchronously applied as soon as possible for parameters when the ApplyImmediately parameter is specified as true for this request.

Examples found in repository?
src/client.rs (line 8432)
        pub fn set_cache_parameter_group_name(
            mut self,
            input: std::option::Option<std::string::String>,
        ) -> Self {
            self.inner = self.inner.set_cache_parameter_group_name(input);
            self
        }

The status of the Amazon SNS notification topic. Notifications are sent only if the status is active.

Valid values: active | inactive

Examples found in repository?
src/client.rs (line 8438)
        pub fn notification_topic_status(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.notification_topic_status(input.into());
            self
        }
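
A hedged sketch that enables event notifications by pairing notification_topic_arn with notification_topic_status; the topic ARN is hypothetical, and the topic owner must match the cluster owner:

async fn enable_sns_notifications(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-memcached-cluster")
        .notification_topic_arn("arn:aws:sns:us-west-2:123456789012:elasticache-events")
        .notification_topic_status("active") // notifications are sent only while "active"
        .send()
        .await?;
    Ok(())
}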

The status of the Amazon SNS notification topic. Notifications are sent only if the status is active.

Valid values: active | inactive

Examples found in repository?
src/client.rs (line 8447)
        pub fn set_notification_topic_status(
            mut self,
            input: std::option::Option<std::string::String>,
        ) -> Self {
            self.inner = self.inner.set_notification_topic_status(input);
            self
        }

If true, this parameter causes the modifications in this request and any pending modifications to be applied, asynchronously and as soon as possible, regardless of the PreferredMaintenanceWindow setting for the cluster.

If false, changes to the cluster are applied on the next maintenance reboot, or the next failure reboot, whichever occurs first.

If you perform a ModifyCacheCluster before a pending modification is applied, the pending modification is replaced by the newer modification.

Valid values: true | false

Default: false

Examples found in repository?
src/client.rs (line 8457)
        pub fn apply_immediately(mut self, input: bool) -> Self {
            self.inner = self.inner.apply_immediately(input);
            self
        }
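
A hedged sketch of a deferred change: with apply_immediately(false) (the default), this node-type scale-up waits for the next maintenance reboot or failure reboot. The cluster identifier and node type are hypothetical examples.

async fn scale_up_during_maintenance_window(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-memcached-cluster")
        .cache_node_type("cache.m6g.large") // example target node type
        .apply_immediately(false)           // defer to the maintenance window
        .send()
        .await?;
    Ok(())
}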

If true, this parameter causes the modifications in this request and any pending modifications to be applied, asynchronously and as soon as possible, regardless of the PreferredMaintenanceWindow setting for the cluster.

If false, changes to the cluster are applied on the next maintenance reboot, or the next failure reboot, whichever occurs first.

If you perform a ModifyCacheCluster before a pending modification is applied, the pending modification is replaced by the newer modification.

Valid values: true | false

Default: false

Examples found in repository?
src/client.rs (line 8467)
        pub fn set_apply_immediately(mut self, input: std::option::Option<bool>) -> Self {
            self.inner = self.inner.set_apply_immediately(input);
            self
        }

The upgraded version of the cache engine to be run on the cache nodes.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version), but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cluster and create it anew with the earlier engine version.

Examples found in repository?
src/client.rs (line 8473)
        pub fn engine_version(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.engine_version(input.into());
            self
        }
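
A hedged sketch of an engine upgrade; remember that downgrades require deleting and recreating the cluster. The cluster identifier and version string are hypothetical, so substitute a version that is valid for your engine.

async fn upgrade_engine_version(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-memcached-cluster")
        .engine_version("1.6.17") // example target version
        .apply_immediately(true)  // upgrade now rather than in the maintenance window
        .send()
        .await?;
    Ok(())
}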

The upgraded version of the cache engine to be run on the cache nodes.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version), but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cluster and create it anew with the earlier engine version.

Examples found in repository?
src/client.rs (line 8482)
        pub fn set_engine_version(
            mut self,
            input: std::option::Option<std::string::String>,
        ) -> Self {
            self.inner = self.inner.set_engine_version(input);
            self
        }

If you are running Redis engine version 6.0 or later, set this parameter to true if you want to opt in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions.

Examples found in repository?
src/client.rs (line 8487)
        pub fn auto_minor_version_upgrade(mut self, input: bool) -> Self {
            self.inner = self.inner.auto_minor_version_upgrade(input);
            self
        }

If you are running Redis engine version 6.0 or later, set this parameter to true if you want to opt in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions.

Examples found in repository?
src/client.rs (line 8492)
        pub fn set_auto_minor_version_upgrade(mut self, input: std::option::Option<bool>) -> Self {
            self.inner = self.inner.set_auto_minor_version_upgrade(input);
            self
        }

The number of days for which ElastiCache retains automatic cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

Examples found in repository?
src/client.rs (line 8499)
        pub fn snapshot_retention_limit(mut self, input: i32) -> Self {
            self.inner = self.inner.snapshot_retention_limit(input);
            self
        }

The number of days for which ElastiCache retains automatic cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

Examples found in repository?
src/client.rs (line 8506)
        pub fn set_snapshot_retention_limit(mut self, input: std::option::Option<i32>) -> Self {
            self.inner = self.inner.set_snapshot_retention_limit(input);
            self
        }

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your cluster.

Examples found in repository?
src/client.rs (line 8511)
        pub fn snapshot_window(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.snapshot_window(input.into());
            self
        }
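
A hedged sketch that pairs snapshot_retention_limit with snapshot_window for a Redis cluster (the identifier is hypothetical; setting the retention limit to 0 would turn backups off):

async fn configure_daily_backups(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-redis-cluster")
        .snapshot_retention_limit(5)    // keep automatic snapshots for 5 days
        .snapshot_window("05:00-06:00") // daily snapshot start window, in UTC
        .send()
        .await?;
    Ok(())
}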

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your cluster.

Examples found in repository?
src/client.rs (line 8519)
        pub fn set_snapshot_window(
            mut self,
            input: std::option::Option<std::string::String>,
        ) -> Self {
            self.inner = self.inner.set_snapshot_window(input);
            self
        }

A valid cache node type that you want to scale this cluster up to.

Examples found in repository?
src/client.rs (line 8524)
        pub fn cache_node_type(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.cache_node_type(input.into());
            self
        }

A valid cache node type that you want to scale this cluster up to.

Examples found in repository?
src/client.rs (line 8532)
        pub fn set_cache_node_type(
            mut self,
            input: std::option::Option<std::string::String>,
        ) -> Self {
            self.inner = self.inner.set_cache_node_type(input);
            self
        }

Reserved parameter. The password used to access a password-protected server. This parameter must be specified with the auth-token-update parameter. Password constraints:

  • Must be only printable ASCII characters

  • Must be at least 16 characters and no more than 128 characters in length

  • Cannot contain any of the following characters: '/', '"', '@', or '%'

For more information, see AUTH password in the Redis AUTH command documentation.

Examples found in repository?
src/client.rs (line 8543)
        pub fn auth_token(mut self, input: impl Into<std::string::String>) -> Self {
            self.inner = self.inner.auth_token(input.into());
            self
        }

Reserved parameter. The password used to access a password-protected server. This parameter must be specified with the auth-token-update parameter. Password constraints:

  • Must be only printable ASCII characters

  • Must be at least 16 characters and no more than 128 characters in length

  • Cannot contain any of the following characters: '/', '"', '@', or '%'

For more information, see AUTH password in the Redis AUTH command documentation.

Examples found in repository?
src/client.rs (line 8554)
        pub fn set_auth_token(mut self, input: std::option::Option<std::string::String>) -> Self {
            self.inner = self.inner.set_auth_token(input);
            self
        }

Specifies the strategy to use to update the AUTH token. This parameter must be specified with the auth-token parameter. Possible values:

  • Rotate

  • Set

For more information, see Authenticating Users with Redis AUTH.

Examples found in repository?
src/client.rs (line 8567)
        pub fn auth_token_update_strategy(
            mut self,
            input: crate::model::AuthTokenUpdateStrategyType,
        ) -> Self {
            self.inner = self.inner.auth_token_update_strategy(input);
            self
        }
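
A hedged sketch of rotating the AUTH token by pairing auth_token with auth_token_update_strategy; the token value and cluster identifier are hypothetical, and the AuthTokenUpdateStrategyType::Rotate variant spelling follows the crate's generated model as I understand it.

use aws_sdk_elasticache::model::AuthTokenUpdateStrategyType;

async fn rotate_auth_token(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    client
        .modify_cache_cluster()
        .cache_cluster_id("my-redis-cluster")
        // 16-128 printable ASCII characters, excluding '/', '"', '@', and '%'.
        .auth_token("an-example-token-of-sufficient-length")
        .auth_token_update_strategy(AuthTokenUpdateStrategyType::Rotate)
        .apply_immediately(true)
        .send()
        .await?;
    Ok(())
}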

Specifies the strategy to use to update the AUTH token. This parameter must be specified with the auth-token parameter. Possible values:

  • Rotate

  • Set

For more information, see Authenticating Users with Redis AUTH.

Examples found in repository?
src/client.rs (line 8580)
        pub fn set_auth_token_update_strategy(
            mut self,
            input: std::option::Option<crate::model::AuthTokenUpdateStrategyType>,
        ) -> Self {
            self.inner = self.inner.set_auth_token_update_strategy(input);
            self
        }

Appends an item to log_delivery_configurations.

To override the contents of this collection use set_log_delivery_configurations.

Specifies the destination, format and type of the logs.

Examples found in repository?
src/client.rs (line 8592)
        pub fn log_delivery_configurations(
            mut self,
            input: crate::model::LogDeliveryConfigurationRequest,
        ) -> Self {
            self.inner = self.inner.log_delivery_configurations(input);
            self
        }
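
A hedged sketch that ships the Redis slow log to CloudWatch Logs. The log group name and cluster identifier are hypothetical, and the model builders and enum variants shown (DestinationDetails, CloudWatchLogsDestinationDetails, LogType::SlowLog, DestinationType::CloudWatchLogs, LogFormat::Json) are my reading of the crate's generated model module, so verify the exact names against the version you use.

use aws_sdk_elasticache::model::{
    CloudWatchLogsDestinationDetails, DestinationDetails, DestinationType,
    LogDeliveryConfigurationRequest, LogFormat, LogType,
};

async fn enable_slow_log_delivery(
    client: &aws_sdk_elasticache::Client,
) -> Result<(), aws_sdk_elasticache::Error> {
    // Destination: a CloudWatch Logs log group (the name is hypothetical).
    let destination = DestinationDetails::builder()
        .cloud_watch_logs_details(
            CloudWatchLogsDestinationDetails::builder()
                .log_group("/elasticache/my-redis-cluster/slow-log")
                .build(),
        )
        .build();

    // One LogDeliveryConfigurationRequest per log type; this one enables the slow log as JSON.
    let slow_log = LogDeliveryConfigurationRequest::builder()
        .log_type(LogType::SlowLog)
        .destination_type(DestinationType::CloudWatchLogs)
        .destination_details(destination)
        .log_format(LogFormat::Json)
        .enabled(true)
        .build();

    client
        .modify_cache_cluster()
        .cache_cluster_id("my-redis-cluster")
        .log_delivery_configurations(slow_log)
        .send()
        .await?;
    Ok(())
}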

Specifies the destination, format and type of the logs.

Examples found in repository?
src/client.rs (line 8602)
        pub fn set_log_delivery_configurations(
            mut self,
            input: std::option::Option<
                std::vec::Vec<crate::model::LogDeliveryConfigurationRequest>,
            >,
        ) -> Self {
            self.inner = self.inner.set_log_delivery_configurations(input);
            self
        }

Consumes the builder and constructs a ModifyCacheClusterInput.

Examples found in repository?
src/client.rs (line 8125)
        pub async fn customize(
            self,
        ) -> std::result::Result<
            crate::operation::customize::CustomizableOperation<
                crate::operation::ModifyCacheCluster,
                aws_http::retry::AwsResponseRetryClassifier,
            >,
            aws_smithy_http::result::SdkError<crate::error::ModifyCacheClusterError>,
        > {
            let handle = self.handle.clone();
            let operation = self
                .inner
                .build()
                .map_err(aws_smithy_http::result::SdkError::construction_failure)?
                .make_operation(&handle.conf)
                .await
                .map_err(aws_smithy_http::result::SdkError::construction_failure)?;
            Ok(crate::operation::customize::CustomizableOperation { handle, operation })
        }

        /// Sends the request and returns the response.
        ///
        /// If an error occurs, an `SdkError` will be returned with additional details that
        /// can be matched against.
        ///
        /// By default, any retryable failures will be retried twice. Retry behavior
        /// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
        /// set when configuring the client.
        pub async fn send(
            self,
        ) -> std::result::Result<
            crate::output::ModifyCacheClusterOutput,
            aws_smithy_http::result::SdkError<crate::error::ModifyCacheClusterError>,
        > {
            let op = self
                .inner
                .build()
                .map_err(aws_smithy_http::result::SdkError::construction_failure)?
                .make_operation(&self.handle.conf)
                .await
                .map_err(aws_smithy_http::result::SdkError::construction_failure)?;
            self.handle.client.call(op).await
        }
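
Putting it together, a hedged end-to-end sketch that builds the request through the fluent client and sends it; it assumes the aws-config crate for configuration loading, the tokio runtime, and default credential/region resolution, and the cluster identifier is hypothetical.

use aws_sdk_elasticache::Client;

#[tokio::main]
async fn main() -> Result<(), aws_sdk_elasticache::Error> {
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);

    let output = client
        .modify_cache_cluster()
        .cache_cluster_id("my-memcached-cluster")
        .num_cache_nodes(4)
        .apply_immediately(true)
        .send()
        .await?;

    // The response echoes the cluster, including any still-pending modifications
    // (accessor names assumed from the generated output/model types).
    if let Some(cluster) = output.cache_cluster() {
        println!("cluster status: {:?}", cluster.cache_cluster_status());
    }
    Ok(())
}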

Trait Implementations§

Returns a copy of the value. Read more
Performs copy-assignment from source. Read more
Formats the value using the given formatter. Read more
Returns the “default value” for a type. Read more
This method tests for self and other values to be equal, and is used by ==. Read more
This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason. Read more

Auto Trait Implementations§

Blanket Implementations§

Gets the TypeId of self. Read more
Immutably borrows from an owned value. Read more
Mutably borrows from an owned value. Read more

Returns the argument unchanged.

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
Instruments this type with the current Span, returning an Instrumented wrapper. Read more

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Should always be Self
The resulting type after obtaining ownership.
Creates owned data from borrowed data, usually by cloning. Read more
Uses borrowed data to replace owned data, usually by cloning. Read more
The type returned in the event of a conversion error.
Performs the conversion.
The type returned in the event of a conversion error.
Performs the conversion.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more