aws_sdk_s3/operation/delete_objects/builders.rs
// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
pub use crate::operation::delete_objects::_delete_objects_output::DeleteObjectsOutputBuilder;

pub use crate::operation::delete_objects::_delete_objects_input::DeleteObjectsInputBuilder;

impl crate::operation::delete_objects::builders::DeleteObjectsInputBuilder {
    /// Sends a request with this input using the given client.
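    ///
    /// A minimal usage sketch, assuming an already-configured client is in scope; the bucket
    /// name and object key below are placeholders.
    ///
    /// ```no_run
    /// # async fn example(client: &aws_sdk_s3::Client) -> Result<(), Box<dyn std::error::Error>> {
    /// use aws_sdk_s3::operation::delete_objects::DeleteObjectsInput;
    /// use aws_sdk_s3::types::{Delete, ObjectIdentifier};
    ///
    /// // Build the input separately, then send it with an existing client.
    /// let object = ObjectIdentifier::builder().key("example-key").build()?;
    /// let delete = Delete::builder().objects(object).build()?;
    /// let output = DeleteObjectsInput::builder()
    ///     .bucket("amzn-s3-demo-bucket") // placeholder bucket name
    ///     .delete(delete)
    ///     .send_with(client)
    ///     .await?;
    /// println!("deleted {} object(s)", output.deleted().len());
    /// # Ok(())
    /// # }
    /// ```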
    pub async fn send_with(
        self,
        client: &crate::Client,
    ) -> ::std::result::Result<
        crate::operation::delete_objects::DeleteObjectsOutput,
        ::aws_smithy_runtime_api::client::result::SdkError<
            crate::operation::delete_objects::DeleteObjectsError,
            ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
        >,
    > {
        let mut fluent_builder = client.delete_objects();
        fluent_builder.inner = self;
        fluent_builder.send().await
    }
}
/// Fluent builder constructing a request to `DeleteObjects`.
///
/// <p>This operation enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this operation provides a suitable alternative to sending individual delete requests, reducing per-request overhead.</p>
/// <p>The request can contain a list of up to 1,000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete operation and returns the result of that delete, success or failure, in the response. If the object specified in the request isn't found, Amazon S3 confirms the deletion by returning the result as deleted.</p><note>
/// <ul>
/// <li>
/// <p><b>Directory buckets</b> - S3 Versioning isn't enabled or supported for directory buckets.</p></li>
/// <li>
/// <p><b>Directory buckets</b> - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format <code>https://<i>amzn-s3-demo-bucket</i>.s3express-<i>zone-id</i>.<i>region-code</i>.amazonaws.com/<i>key-name</i> </code>. Path-style requests are not supported. For more information about endpoints in Availability Zones, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html">Regional and Zonal endpoints for directory buckets in Availability Zones</a> in the <i>Amazon S3 User Guide</i>. For more information about endpoints in Local Zones, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html">Concepts for directory buckets in Local Zones</a> in the <i>Amazon S3 User Guide</i>.</p></li>
/// </ul>
/// </note>
/// <p>The operation supports two modes for the response: verbose and quiet. By default, the operation uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete operation encountered an error. For a successful deletion in quiet mode, the operation does not return any information about the delete in the response body.</p>
/// <p>When performing this action on an MFA Delete enabled bucket with a request that attempts to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete">MFA Delete</a> in the <i>Amazon S3 User Guide</i>.</p><note>
/// <p><b>Directory buckets</b> - MFA delete is not supported by directory buckets.</p>
/// </note>
/// <dl>
/// <dt>
/// Permissions
/// </dt>
/// <dd>
/// <ul>
/// <li>
/// <p><b>General purpose bucket permissions</b> - The following permissions are required in your policies when your <code>DeleteObjects</code> request includes specific headers.</p>
/// <ul>
/// <li>
/// <p><b> <code>s3:DeleteObject</code> </b> - To delete an object from a bucket, you must always specify the <code>s3:DeleteObject</code> permission.</p></li>
/// <li>
/// <p><b> <code>s3:DeleteObjectVersion</code> </b> - To delete a specific version of an object from a versioning-enabled bucket, you must specify the <code>s3:DeleteObjectVersion</code> permission.</p><note>
/// <p>If the <code>s3:DeleteObject</code> or <code>s3:DeleteObjectVersion</code> permissions are explicitly denied in your bucket policy, attempts to delete any unversioned objects result in a <code>403 Access Denied</code> error.</p>
/// </note></li>
/// </ul></li>
/// <li>
/// <p><b>Directory bucket permissions</b> - To grant access to this API operation on a directory bucket, we recommend that you use the <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html"> <code>CreateSession</code> </a> API operation for session-based authorization. Specifically, you grant the <code>s3express:CreateSession</code> permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the <code>CreateSession</code> API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another <code>CreateSession</code> API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create a session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html"> <code>CreateSession</code> </a>.</p></li>
/// </ul>
/// </dd>
/// <dt>
/// Content-MD5 request header
/// </dt>
/// <dd>
/// <ul>
/// <li>
/// <p><b>General purpose bucket</b> - The Content-MD5 request header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.</p></li>
/// <li>
/// <p><b>Directory bucket</b> - The Content-MD5 request header or an additional checksum request header (including <code>x-amz-checksum-crc32</code>, <code>x-amz-checksum-crc32c</code>, <code>x-amz-checksum-sha1</code>, or <code>x-amz-checksum-sha256</code>) is required for all Multi-Object Delete requests.</p></li>
/// </ul>
/// </dd>
/// <dt>
/// HTTP Host header syntax
/// </dt>
/// <dd>
/// <p><b>Directory buckets </b> - The HTTP Host header syntax is <code> <i>Bucket-name</i>.s3express-<i>zone-id</i>.<i>region-code</i>.amazonaws.com</code>.</p>
/// </dd>
/// </dl>
/// <p>The following operations are related to <code>DeleteObjects</code>:</p>
/// <ul>
/// <li>
/// <p><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html">CreateMultipartUpload</a></p></li>
/// <li>
/// <p><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html">UploadPart</a></p></li>
/// <li>
/// <p><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html">CompleteMultipartUpload</a></p></li>
/// <li>
/// <p><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html">ListParts</a></p></li>
/// <li>
/// <p><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html">AbortMultipartUpload</a></p></li>
/// </ul><important>
/// <p>You must URL encode any signed header values that contain spaces. For example, if your header value is <code>my file.txt</code>, containing two spaces after <code>my</code>, you must URL encode this value to <code>my%20%20file.txt</code>.</p>
/// </important>
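///
/// A minimal end-to-end sketch, assuming default credential and region resolution via `aws_config`; the bucket name and object keys are placeholders.
///
/// ```no_run
/// # async fn example() -> Result<(), Box<dyn std::error::Error>> {
/// use aws_sdk_s3::types::{Delete, ObjectIdentifier};
///
/// let config = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;
/// let client = aws_sdk_s3::Client::new(&config);
///
/// // Collect up to 1,000 keys into a single Delete payload.
/// let objects = ["logs/2024-01-01.txt", "logs/2024-01-02.txt"]
///     .iter()
///     .map(|key| ObjectIdentifier::builder().key(*key).build())
///     .collect::<Result<Vec<_>, _>>()?;
/// let delete = Delete::builder().set_objects(Some(objects)).quiet(true).build()?;
///
/// let output = client
///     .delete_objects()
///     .bucket("amzn-s3-demo-bucket") // placeholder bucket name
///     .delete(delete)
///     .send()
///     .await?;
/// // In quiet mode, only keys that failed to delete are reported.
/// for error in output.errors() {
///     println!("failed to delete {:?}: {:?}", error.key(), error.code());
/// }
/// # Ok(())
/// # }
/// ```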
#[derive(::std::clone::Clone, ::std::fmt::Debug)]
pub struct DeleteObjectsFluentBuilder {
    handle: ::std::sync::Arc<crate::client::Handle>,
    inner: crate::operation::delete_objects::builders::DeleteObjectsInputBuilder,
    config_override: ::std::option::Option<crate::config::Builder>,
}
impl
    crate::client::customize::internal::CustomizableSend<
        crate::operation::delete_objects::DeleteObjectsOutput,
        crate::operation::delete_objects::DeleteObjectsError,
    > for DeleteObjectsFluentBuilder
{
    fn send(
        self,
        config_override: crate::config::Builder,
    ) -> crate::client::customize::internal::BoxFuture<
        crate::client::customize::internal::SendResult<
            crate::operation::delete_objects::DeleteObjectsOutput,
            crate::operation::delete_objects::DeleteObjectsError,
        >,
    > {
        ::std::boxed::Box::pin(async move { self.config_override(config_override).send().await })
    }
}
impl DeleteObjectsFluentBuilder {
    /// Creates a new `DeleteObjectsFluentBuilder`.
    pub(crate) fn new(handle: ::std::sync::Arc<crate::client::Handle>) -> Self {
        Self {
            handle,
            inner: ::std::default::Default::default(),
            config_override: ::std::option::Option::None,
        }
    }
    /// Access the DeleteObjects as a reference.
    pub fn as_input(&self) -> &crate::operation::delete_objects::builders::DeleteObjectsInputBuilder {
        &self.inner
    }
    /// Sends the request and returns the response.
    ///
    /// If an error occurs, an `SdkError` will be returned with additional details that
    /// can be matched against.
    ///
    /// By default, any retryable failures will be retried twice. Retry behavior
    /// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
    /// set when configuring the client.
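    ///
    /// For example, retry behavior could be adjusted when constructing the client; a hedged
    /// sketch, assuming the `aws_config` defaults are otherwise acceptable:
    ///
    /// ```no_run
    /// # async fn example() -> aws_sdk_s3::Client {
    /// use aws_sdk_s3::config::retry::RetryConfig;
    ///
    /// let sdk_config = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;
    /// let s3_config = aws_sdk_s3::config::Builder::from(&sdk_config)
    ///     // Allow up to 5 attempts (1 initial call + 4 retries) instead of the default.
    ///     .retry_config(RetryConfig::standard().with_max_attempts(5))
    ///     .build();
    /// aws_sdk_s3::Client::from_conf(s3_config)
    /// # }
    /// ```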
    pub async fn send(
        self,
    ) -> ::std::result::Result<
        crate::operation::delete_objects::DeleteObjectsOutput,
        ::aws_smithy_runtime_api::client::result::SdkError<
            crate::operation::delete_objects::DeleteObjectsError,
            ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
        >,
    > {
        let input = self
            .inner
            .build()
            .map_err(::aws_smithy_runtime_api::client::result::SdkError::construction_failure)?;
        let runtime_plugins = crate::operation::delete_objects::DeleteObjects::operation_runtime_plugins(
            self.handle.runtime_plugins.clone(),
            &self.handle.conf,
            self.config_override,
        );
        crate::operation::delete_objects::DeleteObjects::orchestrate(&runtime_plugins, input).await
    }

    /// Consumes this builder, creating a customizable operation that can be modified before being sent.
    pub fn customize(
        self,
    ) -> crate::client::customize::CustomizableOperation<
        crate::operation::delete_objects::DeleteObjectsOutput,
        crate::operation::delete_objects::DeleteObjectsError,
        Self,
    > {
        crate::client::customize::CustomizableOperation::new(self)
    }
    pub(crate) fn config_override(mut self, config_override: impl ::std::convert::Into<crate::config::Builder>) -> Self {
        self.set_config_override(::std::option::Option::Some(config_override.into()));
        self
    }

    pub(crate) fn set_config_override(&mut self, config_override: ::std::option::Option<crate::config::Builder>) -> &mut Self {
        self.config_override = config_override;
        self
    }
    /// <p>The bucket name containing the objects to delete.</p>
    /// <p><b>Directory buckets</b> - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format <code> <i>Bucket-name</i>.s3express-<i>zone-id</i>.<i>region-code</i>.amazonaws.com</code>. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format <code> <i>bucket-base-name</i>--<i>zone-id</i>--x-s3</code> (for example, <code> <i>amzn-s3-demo-bucket</i>--<i>usw2-az1</i>--x-s3</code>). For information about bucket naming restrictions, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-bucket-naming-rules.html">Directory bucket naming rules</a> in the <i>Amazon S3 User Guide</i>.</p>
    /// <p><b>Access points</b> - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form <i>AccessPointName</i>-<i>AccountId</i>.s3-accesspoint.<i>Region</i>.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html">Using access points</a> in the <i>Amazon S3 User Guide</i>.</p><note>
    /// <p>Object Lambda access points are not supported by directory buckets.</p>
    /// </note>
    /// <p><b>S3 on Outposts</b> - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form <code> <i>AccessPointName</i>-<i>AccountId</i>.<i>outpostID</i>.s3-outposts.<i>Region</i>.amazonaws.com</code>. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/S3onOutposts.html">What is S3 on Outposts?</a> in the <i>Amazon S3 User Guide</i>.</p>
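    ///
    /// For example, either a bucket name or an access point ARN may be passed here; both values below are placeholders:
    ///
    /// ```no_run
    /// # fn example(builder: aws_sdk_s3::operation::delete_objects::builders::DeleteObjectsFluentBuilder) {
    /// // General purpose bucket by name.
    /// let _ = builder.bucket("amzn-s3-demo-bucket");
    /// // Or an access point ARN in place of the bucket name:
    /// // builder.bucket("arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point")
    /// # }
    /// ```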
    pub fn bucket(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
        self.inner = self.inner.bucket(input.into());
        self
    }
    /// <p>The bucket name containing the objects to delete.</p>
    /// <p><b>Directory buckets</b> - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format <code> <i>Bucket-name</i>.s3express-<i>zone-id</i>.<i>region-code</i>.amazonaws.com</code>. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format <code> <i>bucket-base-name</i>--<i>zone-id</i>--x-s3</code> (for example, <code> <i>amzn-s3-demo-bucket</i>--<i>usw2-az1</i>--x-s3</code>). For information about bucket naming restrictions, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-bucket-naming-rules.html">Directory bucket naming rules</a> in the <i>Amazon S3 User Guide</i>.</p>
    /// <p><b>Access points</b> - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form <i>AccessPointName</i>-<i>AccountId</i>.s3-accesspoint.<i>Region</i>.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html">Using access points</a> in the <i>Amazon S3 User Guide</i>.</p><note>
    /// <p>Object Lambda access points are not supported by directory buckets.</p>
    /// </note>
    /// <p><b>S3 on Outposts</b> - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form <code> <i>AccessPointName</i>-<i>AccountId</i>.<i>outpostID</i>.s3-outposts.<i>Region</i>.amazonaws.com</code>. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/S3onOutposts.html">What is S3 on Outposts?</a> in the <i>Amazon S3 User Guide</i>.</p>
    pub fn set_bucket(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
        self.inner = self.inner.set_bucket(input);
        self
    }
    /// <p>The bucket name containing the objects to delete.</p>
    /// <p><b>Directory buckets</b> - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format <code> <i>Bucket-name</i>.s3express-<i>zone-id</i>.<i>region-code</i>.amazonaws.com</code>. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format <code> <i>bucket-base-name</i>--<i>zone-id</i>--x-s3</code> (for example, <code> <i>amzn-s3-demo-bucket</i>--<i>usw2-az1</i>--x-s3</code>). For information about bucket naming restrictions, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-bucket-naming-rules.html">Directory bucket naming rules</a> in the <i>Amazon S3 User Guide</i>.</p>
    /// <p><b>Access points</b> - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form <i>AccessPointName</i>-<i>AccountId</i>.s3-accesspoint.<i>Region</i>.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html">Using access points</a> in the <i>Amazon S3 User Guide</i>.</p><note>
    /// <p>Object Lambda access points are not supported by directory buckets.</p>
    /// </note>
    /// <p><b>S3 on Outposts</b> - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form <code> <i>AccessPointName</i>-<i>AccountId</i>.<i>outpostID</i>.s3-outposts.<i>Region</i>.amazonaws.com</code>. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/S3onOutposts.html">What is S3 on Outposts?</a> in the <i>Amazon S3 User Guide</i>.</p>
    pub fn get_bucket(&self) -> &::std::option::Option<::std::string::String> {
        self.inner.get_bucket()
    }
    /// <p>Container for the request.</p>
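    ///
    /// A short sketch of building the <code>Delete</code> payload; the key and version ID below are placeholders:
    ///
    /// ```no_run
    /// # fn example() -> Result<aws_sdk_s3::types::Delete, Box<dyn std::error::Error>> {
    /// use aws_sdk_s3::types::{Delete, ObjectIdentifier};
    ///
    /// // One ObjectIdentifier per key; a version ID is only needed for versioned buckets.
    /// let object = ObjectIdentifier::builder()
    ///     .key("example-key")
    ///     .version_id("example-version-id")
    ///     .build()?;
    /// Ok(Delete::builder().objects(object).quiet(true).build()?)
    /// # }
    /// ```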
    pub fn delete(mut self, input: crate::types::Delete) -> Self {
        self.inner = self.inner.delete(input);
        self
    }
    /// <p>Container for the request.</p>
    pub fn set_delete(mut self, input: ::std::option::Option<crate::types::Delete>) -> Self {
        self.inner = self.inner.set_delete(input);
        self
    }
    /// <p>Container for the request.</p>
    pub fn get_delete(&self) -> &::std::option::Option<crate::types::Delete> {
        self.inner.get_delete()
    }
    /// <p>The concatenation of the authentication device's serial number, a space, and the value that is displayed on your authentication device. Required to permanently delete a versioned object if versioning is configured with MFA delete enabled.</p>
    /// <p>When performing the <code>DeleteObjects</code> operation on an MFA delete enabled bucket, which attempts to delete the specified versioned objects, you must include an MFA token. If you don't provide an MFA token, the entire request will fail, even if there are non-versioned objects that you are trying to delete. If you provide an invalid token, whether there are versioned object keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete"> MFA Delete</a> in the <i>Amazon S3 User Guide</i>.</p><note>
    /// <p>This functionality is not supported for directory buckets.</p>
    /// </note>
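    ///
    /// The header value is the device serial number and the current token separated by a single space; the values below are placeholders:
    ///
    /// ```no_run
    /// # fn example(builder: aws_sdk_s3::operation::delete_objects::builders::DeleteObjectsFluentBuilder) {
    /// // "<device serial number> <authentication code>"
    /// let _ = builder.mfa("arn:aws:iam::123456789012:mfa/example-user 123456");
    /// # }
    /// ```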
    pub fn mfa(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
        self.inner = self.inner.mfa(input.into());
        self
    }
    /// <p>The concatenation of the authentication device's serial number, a space, and the value that is displayed on your authentication device. Required to permanently delete a versioned object if versioning is configured with MFA delete enabled.</p>
    /// <p>When performing the <code>DeleteObjects</code> operation on an MFA delete enabled bucket, which attempts to delete the specified versioned objects, you must include an MFA token. If you don't provide an MFA token, the entire request will fail, even if there are non-versioned objects that you are trying to delete. If you provide an invalid token, whether there are versioned object keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete"> MFA Delete</a> in the <i>Amazon S3 User Guide</i>.</p><note>
    /// <p>This functionality is not supported for directory buckets.</p>
    /// </note>
    pub fn set_mfa(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
        self.inner = self.inner.set_mfa(input);
        self
    }
    /// <p>The concatenation of the authentication device's serial number, a space, and the value that is displayed on your authentication device. Required to permanently delete a versioned object if versioning is configured with MFA delete enabled.</p>
    /// <p>When performing the <code>DeleteObjects</code> operation on an MFA delete enabled bucket, which attempts to delete the specified versioned objects, you must include an MFA token. If you don't provide an MFA token, the entire request will fail, even if there are non-versioned objects that you are trying to delete. If you provide an invalid token, whether there are versioned object keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete"> MFA Delete</a> in the <i>Amazon S3 User Guide</i>.</p><note>
    /// <p>This functionality is not supported for directory buckets.</p>
    /// </note>
    pub fn get_mfa(&self) -> &::std::option::Option<::std::string::String> {
        self.inner.get_mfa()
    }
    /// <p>Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html">Downloading Objects in Requester Pays Buckets</a> in the <i>Amazon S3 User Guide</i>.</p><note>
    /// <p>This functionality is not supported for directory buckets.</p>
    /// </note>
    pub fn request_payer(mut self, input: crate::types::RequestPayer) -> Self {
        self.inner = self.inner.request_payer(input);
        self
    }
    /// <p>Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html">Downloading Objects in Requester Pays Buckets</a> in the <i>Amazon S3 User Guide</i>.</p><note>
    /// <p>This functionality is not supported for directory buckets.</p>
    /// </note>
    pub fn set_request_payer(mut self, input: ::std::option::Option<crate::types::RequestPayer>) -> Self {
        self.inner = self.inner.set_request_payer(input);
        self
    }
    /// <p>Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html">Downloading Objects in Requester Pays Buckets</a> in the <i>Amazon S3 User Guide</i>.</p><note>
    /// <p>This functionality is not supported for directory buckets.</p>
    /// </note>
    pub fn get_request_payer(&self) -> &::std::option::Option<crate::types::RequestPayer> {
        self.inner.get_request_payer()
    }
    /// <p>Specifies whether you want to delete this object even if it has a Governance-type Object Lock in place. To use this header, you must have the <code>s3:BypassGovernanceRetention</code> permission.</p><note>
    /// <p>This functionality is not supported for directory buckets.</p>
    /// </note>
    pub fn bypass_governance_retention(mut self, input: bool) -> Self {
        self.inner = self.inner.bypass_governance_retention(input);
        self
    }
    /// <p>Specifies whether you want to delete this object even if it has a Governance-type Object Lock in place. To use this header, you must have the <code>s3:BypassGovernanceRetention</code> permission.</p><note>
    /// <p>This functionality is not supported for directory buckets.</p>
    /// </note>
    pub fn set_bypass_governance_retention(mut self, input: ::std::option::Option<bool>) -> Self {
        self.inner = self.inner.set_bypass_governance_retention(input);
        self
    }
    /// <p>Specifies whether you want to delete this object even if it has a Governance-type Object Lock in place. To use this header, you must have the <code>s3:BypassGovernanceRetention</code> permission.</p><note>
    /// <p>This functionality is not supported for directory buckets.</p>
    /// </note>
    pub fn get_bypass_governance_retention(&self) -> &::std::option::Option<bool> {
        self.inner.get_bypass_governance_retention()
    }
    /// <p>The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code <code>403 Forbidden</code> (access denied).</p>
    pub fn expected_bucket_owner(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
        self.inner = self.inner.expected_bucket_owner(input.into());
        self
    }
    /// <p>The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code <code>403 Forbidden</code> (access denied).</p>
    pub fn set_expected_bucket_owner(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
        self.inner = self.inner.set_expected_bucket_owner(input);
        self
    }
    /// <p>The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code <code>403 Forbidden</code> (access denied).</p>
    pub fn get_expected_bucket_owner(&self) -> &::std::option::Option<::std::string::String> {
        self.inner.get_expected_bucket_owner()
    }
    /// <p>Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding <code>x-amz-checksum-<i>algorithm</i> </code> or <code>x-amz-trailer</code> header sent. Otherwise, Amazon S3 fails the request with the HTTP status code <code>400 Bad Request</code>.</p>
    /// <p>For the <code>x-amz-checksum-<i>algorithm</i> </code> header, replace <code> <i>algorithm</i> </code> with the supported algorithm from the following list:</p>
    /// <ul>
    /// <li>
    /// <p><code>CRC32</code></p></li>
    /// <li>
    /// <p><code>CRC32C</code></p></li>
    /// <li>
    /// <p><code>CRC64NVME</code></p></li>
    /// <li>
    /// <p><code>SHA1</code></p></li>
    /// <li>
    /// <p><code>SHA256</code></p></li>
    /// </ul>
    /// <p>For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html">Checking object integrity</a> in the <i>Amazon S3 User Guide</i>.</p>
    /// <p>If the individual checksum value you provide through <code>x-amz-checksum-<i>algorithm</i> </code> doesn't match the checksum algorithm you set through <code>x-amz-sdk-checksum-algorithm</code>, Amazon S3 fails the request with a <code>BadDigest</code> error.</p>
    /// <p>If you provide an individual checksum, Amazon S3 ignores any provided <code>ChecksumAlgorithm</code> parameter.</p>
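    ///
    /// For example, to ask the SDK to compute a CRC32 checksum for the request payload (a minimal sketch; which checksums are accepted depends on the bucket type):
    ///
    /// ```no_run
    /// # fn example(builder: aws_sdk_s3::operation::delete_objects::builders::DeleteObjectsFluentBuilder) {
    /// use aws_sdk_s3::types::ChecksumAlgorithm;
    ///
    /// let _ = builder.checksum_algorithm(ChecksumAlgorithm::Crc32);
    /// # }
    /// ```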
    pub fn checksum_algorithm(mut self, input: crate::types::ChecksumAlgorithm) -> Self {
        self.inner = self.inner.checksum_algorithm(input);
        self
    }
    /// <p>Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding <code>x-amz-checksum-<i>algorithm</i> </code> or <code>x-amz-trailer</code> header sent. Otherwise, Amazon S3 fails the request with the HTTP status code <code>400 Bad Request</code>.</p>
    /// <p>For the <code>x-amz-checksum-<i>algorithm</i> </code> header, replace <code> <i>algorithm</i> </code> with the supported algorithm from the following list:</p>
    /// <ul>
    /// <li>
    /// <p><code>CRC32</code></p></li>
    /// <li>
    /// <p><code>CRC32C</code></p></li>
    /// <li>
    /// <p><code>CRC64NVME</code></p></li>
    /// <li>
    /// <p><code>SHA1</code></p></li>
    /// <li>
    /// <p><code>SHA256</code></p></li>
    /// </ul>
    /// <p>For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html">Checking object integrity</a> in the <i>Amazon S3 User Guide</i>.</p>
    /// <p>If the individual checksum value you provide through <code>x-amz-checksum-<i>algorithm</i> </code> doesn't match the checksum algorithm you set through <code>x-amz-sdk-checksum-algorithm</code>, Amazon S3 fails the request with a <code>BadDigest</code> error.</p>
    /// <p>If you provide an individual checksum, Amazon S3 ignores any provided <code>ChecksumAlgorithm</code> parameter.</p>
    pub fn set_checksum_algorithm(mut self, input: ::std::option::Option<crate::types::ChecksumAlgorithm>) -> Self {
        self.inner = self.inner.set_checksum_algorithm(input);
        self
    }
    /// <p>Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding <code>x-amz-checksum-<i>algorithm</i> </code> or <code>x-amz-trailer</code> header sent. Otherwise, Amazon S3 fails the request with the HTTP status code <code>400 Bad Request</code>.</p>
    /// <p>For the <code>x-amz-checksum-<i>algorithm</i> </code> header, replace <code> <i>algorithm</i> </code> with the supported algorithm from the following list:</p>
    /// <ul>
    /// <li>
    /// <p><code>CRC32</code></p></li>
    /// <li>
    /// <p><code>CRC32C</code></p></li>
    /// <li>
    /// <p><code>CRC64NVME</code></p></li>
    /// <li>
    /// <p><code>SHA1</code></p></li>
    /// <li>
    /// <p><code>SHA256</code></p></li>
    /// </ul>
    /// <p>For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html">Checking object integrity</a> in the <i>Amazon S3 User Guide</i>.</p>
    /// <p>If the individual checksum value you provide through <code>x-amz-checksum-<i>algorithm</i> </code> doesn't match the checksum algorithm you set through <code>x-amz-sdk-checksum-algorithm</code>, Amazon S3 fails the request with a <code>BadDigest</code> error.</p>
    /// <p>If you provide an individual checksum, Amazon S3 ignores any provided <code>ChecksumAlgorithm</code> parameter.</p>
    pub fn get_checksum_algorithm(&self) -> &::std::option::Option<crate::types::ChecksumAlgorithm> {
        self.inner.get_checksum_algorithm()
    }
}