#[non_exhaustive]
pub struct DataRepositoryConfigurationBuilder { /* private fields */ }
A builder for DataRepositoryConfiguration.
Implementations
impl DataRepositoryConfigurationBuilder
pub fn lifecycle(self, input: DataRepositoryLifecycle) -> Self

Describes the state of the file system's S3 durable data repository, if it is configured with an S3 repository. The lifecycle can have the following values:

- CREATING - The data repository configuration between the FSx file system and the linked S3 data repository is being created. The data repository is unavailable.
- AVAILABLE - The data repository is available for use.
- MISCONFIGURED - Amazon FSx cannot automatically import updates from the S3 bucket until the data repository configuration is corrected. For more information, see Troubleshooting a Misconfigured linked S3 bucket.
- UPDATING - The data repository is undergoing a customer-initiated update and availability may be impacted.
- FAILED - The data repository is in a terminal state that cannot be recovered.
pub fn set_lifecycle(self, input: Option<DataRepositoryLifecycle>) -> Self

Describes the state of the file system's S3 durable data repository, if it is configured with an S3 repository. The lifecycle can have the following values:

- CREATING - The data repository configuration between the FSx file system and the linked S3 data repository is being created. The data repository is unavailable.
- AVAILABLE - The data repository is available for use.
- MISCONFIGURED - Amazon FSx cannot automatically import updates from the S3 bucket until the data repository configuration is corrected. For more information, see Troubleshooting a Misconfigured linked S3 bucket.
- UPDATING - The data repository is undergoing a customer-initiated update and availability may be impacted.
- FAILED - The data repository is in a terminal state that cannot be recovered.
pub fn get_lifecycle(&self) -> &Option<DataRepositoryLifecycle>

Describes the state of the file system's S3 durable data repository, if it is configured with an S3 repository. The lifecycle can have the following values:

- CREATING - The data repository configuration between the FSx file system and the linked S3 data repository is being created. The data repository is unavailable.
- AVAILABLE - The data repository is available for use.
- MISCONFIGURED - Amazon FSx cannot automatically import updates from the S3 bucket until the data repository configuration is corrected. For more information, see Troubleshooting a Misconfigured linked S3 bucket.
- UPDATING - The data repository is undergoing a customer-initiated update and availability may be impacted.
- FAILED - The data repository is in a terminal state that cannot be recovered.
pub fn import_path(self, input: impl Into<String>) -> Self

The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example s3://import-bucket/optional-prefix. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.
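The s3://bucket/optional-prefix shape described above can be illustrated with a small, self-contained sketch. This helper is not part of the SDK; it only shows how such a path splits into a bucket name and an optional key prefix:

```rust
// Illustrative helper, not an SDK API: split an "s3://bucket/optional-prefix"
// import path into its bucket and optional key prefix.
fn split_import_path(path: &str) -> Option<(&str, Option<&str>)> {
    let rest = path.strip_prefix("s3://")?;
    match rest.split_once('/') {
        // A prefix follows the bucket name; only keys under it are loaded.
        Some((bucket, prefix)) if !prefix.is_empty() => Some((bucket, Some(prefix))),
        // A trailing slash with nothing after it means no prefix.
        Some((bucket, _)) => Some((bucket, None)),
        // No slash at all: the whole bucket is the data repository.
        None => Some((rest, None)),
    }
}

fn main() {
    assert_eq!(
        split_import_path("s3://import-bucket/optional-prefix"),
        Some(("import-bucket", Some("optional-prefix")))
    );
    assert_eq!(split_import_path("s3://import-bucket"), Some(("import-bucket", None)));
}
```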
pub fn set_import_path(self, input: Option<String>) -> Self

The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example s3://import-bucket/optional-prefix. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.
pub fn get_import_path(&self) -> &Option<String>

The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example s3://import-bucket/optional-prefix. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.
pub fn export_path(self, input: impl Into<String>) -> Self
The export path to the Amazon S3 bucket (and prefix) that you are using to store new and changed Lustre file system files in S3.
pub fn set_export_path(self, input: Option<String>) -> Self
The export path to the Amazon S3 bucket (and prefix) that you are using to store new and changed Lustre file system files in S3.
pub fn get_export_path(&self) -> &Option<String>
The export path to the Amazon S3 bucket (and prefix) that you are using to store new and changed Lustre file system files in S3.
pub fn imported_file_chunk_size(self, input: i32) -> Self
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.
The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
pub fn set_imported_file_chunk_size(self, input: Option<i32>) -> Self
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.
The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
pub fn get_imported_file_chunk_size(&self) -> &Option<i32>
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.
The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
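The chunk-size arithmetic above can be sketched in a standalone snippet (not an SDK call, and a simplification that assumes ideal striping): a file needs roughly ceil(file size / chunk size) stripes, capped by the number of disks in the file system.

```rust
// Standalone sketch (not an SDK API): estimate how many disks a file
// imported with a given chunk size would be striped across.
fn stripe_count(file_size_mib: u64, chunk_size_mib: u64, total_disks: u64) -> u64 {
    // Each stripe holds at most `chunk_size_mib`, so a file needs
    // ceil(file_size / chunk_size) stripes, capped by the disk count.
    let needed = (file_size_mib + chunk_size_mib - 1) / chunk_size_mib;
    needed.min(total_disks).max(1)
}

fn main() {
    // With the default 1,024 MiB chunk size, a 4 GiB file spans 4 disks.
    assert_eq!(stripe_count(4_096, 1_024, 16), 4);
    // A very large file is capped by the total number of disks.
    assert_eq!(stripe_count(100_000, 1_024, 16), 16);
}
```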
pub fn auto_import_policy(self, input: AutoImportPolicyType) -> Self

Describes the file system's linked S3 data repository's AutoImportPolicy. The AutoImportPolicy configures how Amazon FSx keeps your file and directory listings up to date as you add or modify objects in your linked S3 bucket. AutoImportPolicy can have the following values:

- NONE - (Default) AutoImport is off. Amazon FSx only updates file and directory listings from the linked S3 bucket when the file system is created. FSx does not update file and directory listings for any new or changed objects after choosing this option.
- NEW - AutoImport is on. Amazon FSx automatically imports directory listings of any new objects added to the linked S3 bucket that do not currently exist in the FSx file system.
- NEW_CHANGED - AutoImport is on. Amazon FSx automatically imports file and directory listings of any new objects added to the S3 bucket and any existing objects that are changed in the S3 bucket after you choose this option.
- NEW_CHANGED_DELETED - AutoImport is on. Amazon FSx automatically imports file and directory listings of any new objects added to the S3 bucket, any existing objects that are changed in the S3 bucket, and any objects that were deleted in the S3 bucket.
pub fn set_auto_import_policy(self, input: Option<AutoImportPolicyType>) -> Self

Describes the file system's linked S3 data repository's AutoImportPolicy. The AutoImportPolicy configures how Amazon FSx keeps your file and directory listings up to date as you add or modify objects in your linked S3 bucket. AutoImportPolicy can have the following values:

- NONE - (Default) AutoImport is off. Amazon FSx only updates file and directory listings from the linked S3 bucket when the file system is created. FSx does not update file and directory listings for any new or changed objects after choosing this option.
- NEW - AutoImport is on. Amazon FSx automatically imports directory listings of any new objects added to the linked S3 bucket that do not currently exist in the FSx file system.
- NEW_CHANGED - AutoImport is on. Amazon FSx automatically imports file and directory listings of any new objects added to the S3 bucket and any existing objects that are changed in the S3 bucket after you choose this option.
- NEW_CHANGED_DELETED - AutoImport is on. Amazon FSx automatically imports file and directory listings of any new objects added to the S3 bucket, any existing objects that are changed in the S3 bucket, and any objects that were deleted in the S3 bucket.
pub fn get_auto_import_policy(&self) -> &Option<AutoImportPolicyType>

Describes the file system's linked S3 data repository's AutoImportPolicy. The AutoImportPolicy configures how Amazon FSx keeps your file and directory listings up to date as you add or modify objects in your linked S3 bucket. AutoImportPolicy can have the following values:

- NONE - (Default) AutoImport is off. Amazon FSx only updates file and directory listings from the linked S3 bucket when the file system is created. FSx does not update file and directory listings for any new or changed objects after choosing this option.
- NEW - AutoImport is on. Amazon FSx automatically imports directory listings of any new objects added to the linked S3 bucket that do not currently exist in the FSx file system.
- NEW_CHANGED - AutoImport is on. Amazon FSx automatically imports file and directory listings of any new objects added to the S3 bucket and any existing objects that are changed in the S3 bucket after you choose this option.
- NEW_CHANGED_DELETED - AutoImport is on. Amazon FSx automatically imports file and directory listings of any new objects added to the S3 bucket, any existing objects that are changed in the S3 bucket, and any objects that were deleted in the S3 bucket.
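The four policy values above can be summarized in a small stand-alone sketch. The enums here are local stand-ins, not the SDK's AutoImportPolicyType; the sketch only maps S3 object events to whether each policy picks them up:

```rust
// Local stand-in enums, not the aws_sdk_fsx types.
enum AutoImportPolicyType { None, New, NewChanged, NewChangedDeleted }
enum S3Event { Created, Changed, Deleted }

// Whether a given policy imports listings for a given S3 object event,
// per the value descriptions above.
fn imports(policy: AutoImportPolicyType, event: S3Event) -> bool {
    match (policy, event) {
        // NONE: listings update only at file-system creation time.
        (AutoImportPolicyType::None, _) => false,
        // NEW: only newly added objects are imported.
        (AutoImportPolicyType::New, S3Event::Created) => true,
        (AutoImportPolicyType::New, _) => false,
        // NEW_CHANGED: new and changed objects, but not deletions.
        (AutoImportPolicyType::NewChanged, S3Event::Deleted) => false,
        (AutoImportPolicyType::NewChanged, _) => true,
        // NEW_CHANGED_DELETED: all three event kinds are imported.
        (AutoImportPolicyType::NewChangedDeleted, _) => true,
    }
}

fn main() {
    assert!(!imports(AutoImportPolicyType::None, S3Event::Created));
    assert!(imports(AutoImportPolicyType::New, S3Event::Created));
    assert!(!imports(AutoImportPolicyType::NewChanged, S3Event::Deleted));
    assert!(imports(AutoImportPolicyType::NewChangedDeleted, S3Event::Deleted));
}
```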
pub fn failure_details(self, input: DataRepositoryFailureDetails) -> Self

Provides detailed information about the data repository if its Lifecycle is set to MISCONFIGURED or FAILED.
pub fn set_failure_details(self, input: Option<DataRepositoryFailureDetails>) -> Self

Provides detailed information about the data repository if its Lifecycle is set to MISCONFIGURED or FAILED.
pub fn get_failure_details(&self) -> &Option<DataRepositoryFailureDetails>

Provides detailed information about the data repository if its Lifecycle is set to MISCONFIGURED or FAILED.
pub fn build(self) -> DataRepositoryConfiguration

Consumes the builder and constructs a DataRepositoryConfiguration.
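The setter/set_/get_/build convention documented above can be sketched with a self-contained miniature. These are local stand-in types carrying only three of the fields, not the aws_sdk_fsx ones:

```rust
// Self-contained stand-ins mirroring the SDK's builder convention;
// the real types live in the aws_sdk_fsx crate.
#[derive(Debug, Clone, PartialEq)]
struct DataRepositoryConfiguration {
    import_path: Option<String>,
    export_path: Option<String>,
    imported_file_chunk_size: Option<i32>,
}

#[derive(Default)]
struct DataRepositoryConfigurationBuilder {
    import_path: Option<String>,
    export_path: Option<String>,
    imported_file_chunk_size: Option<i32>,
}

impl DataRepositoryConfigurationBuilder {
    // `field(value)` wraps in Some; `set_field(option)` stores as-is;
    // `get_field()` borrows -- the three-method shape documented above.
    fn import_path(mut self, input: impl Into<String>) -> Self {
        self.import_path = Some(input.into());
        self
    }
    fn export_path(mut self, input: impl Into<String>) -> Self {
        self.export_path = Some(input.into());
        self
    }
    fn imported_file_chunk_size(mut self, input: i32) -> Self {
        self.imported_file_chunk_size = Some(input);
        self
    }
    // Consumes the builder and constructs the configuration.
    fn build(self) -> DataRepositoryConfiguration {
        DataRepositoryConfiguration {
            import_path: self.import_path,
            export_path: self.export_path,
            imported_file_chunk_size: self.imported_file_chunk_size,
        }
    }
}

fn main() {
    let config = DataRepositoryConfigurationBuilder::default()
        .import_path("s3://import-bucket/optional-prefix")
        .export_path("s3://import-bucket/export")
        .imported_file_chunk_size(1024)
        .build();
    assert_eq!(config.imported_file_chunk_size, Some(1024));
}
```

Unset fields simply stay None in the built value, which is why every setter also has a set_ variant taking an Option.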
Trait Implementations
impl Clone for DataRepositoryConfigurationBuilder

fn clone(&self) -> DataRepositoryConfigurationBuilder

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Default for DataRepositoryConfigurationBuilder
fn default() -> DataRepositoryConfigurationBuilder
impl PartialEq for DataRepositoryConfigurationBuilder

fn eq(&self, other: &DataRepositoryConfigurationBuilder) -> bool

Tests for self and other values to be equal, and is used by ==.