Struct aws_sdk_fsx::types::DataRepositoryAssociation

#[non_exhaustive]
pub struct DataRepositoryAssociation {
    pub association_id: Option<String>,
    pub resource_arn: Option<String>,
    pub file_system_id: Option<String>,
    pub lifecycle: Option<DataRepositoryLifecycle>,
    pub failure_details: Option<DataRepositoryFailureDetails>,
    pub file_system_path: Option<String>,
    pub data_repository_path: Option<String>,
    pub batch_import_meta_data_on_create: Option<bool>,
    pub imported_file_chunk_size: Option<i32>,
    pub s3: Option<S3DataRepositoryConfiguration>,
    pub tags: Option<Vec<Tag>>,
    pub creation_time: Option<DateTime>,
    pub file_cache_id: Option<String>,
    pub file_cache_path: Option<String>,
    pub data_repository_subdirectories: Option<Vec<String>>,
    pub nfs: Option<NfsDataRepositoryConfiguration>,
}

The configuration of a data repository association that links an Amazon FSx for Lustre file system to an Amazon S3 bucket, or an Amazon File Cache resource to an Amazon S3 bucket or an NFS file system. The data repository association configuration object is returned in the response of the following operations:

  • CreateDataRepositoryAssociation

  • UpdateDataRepositoryAssociation

  • DescribeDataRepositoryAssociations

Data repository associations are supported on Amazon File Cache resources and on all FSx for Lustre 2.12 and 2.15 file systems, excluding the scratch_1 deployment type.
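
For orientation, here is a minimal sketch of retrieving these objects with the crate's fluent client. It assumes the tokio and aws-config crates with default credentials, and that the DescribeDataRepositoryAssociations output exposes an associations() slice accessor (typical for this SDK, but not documented on this page):

```rust
use aws_sdk_fsx::Client;

#[tokio::main]
async fn main() -> Result<(), aws_sdk_fsx::Error> {
    // Load region and credentials from the environment (assumed default setup).
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);

    // DescribeDataRepositoryAssociations returns DataRepositoryAssociation
    // values in its output.
    let output = client.describe_data_repository_associations().send().await?;

    for assoc in output.associations() {
        println!(
            "{:?} -> {:?} ({:?})",
            assoc.file_system_path(),
            assoc.data_repository_path(),
            assoc.lifecycle(),
        );
    }
    Ok(())
}
```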

Fields (Non-exhaustive)

This struct is marked as non-exhaustive
Non-exhaustive structs could have additional fields added in future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.
association_id: Option<String>

The system-generated, unique ID of the data repository association.

resource_arn: Option<String>

The Amazon Resource Name (ARN) for a given resource. ARNs uniquely identify Amazon Web Services resources. We require an ARN when you need to specify a resource unambiguously across all of Amazon Web Services. For more information, see Amazon Resource Names (ARNs) in the Amazon Web Services General Reference.

file_system_id: Option<String>

The globally unique ID of the file system, assigned by Amazon FSx.

lifecycle: Option<DataRepositoryLifecycle>

Describes the state of a data repository association. The lifecycle can have the following values:

  • CREATING - The data repository association between the file system or cache and the data repository is being created. The data repository is unavailable.

  • AVAILABLE - The data repository association is available for use.

  • MISCONFIGURED - The data repository association is misconfigured. Until the configuration is corrected, automatic import and automatic export will not work (only for Amazon FSx for Lustre).

  • UPDATING - The data repository association is undergoing a customer-initiated update that might affect its availability.

  • DELETING - The data repository association is undergoing a customer-initiated deletion.

  • FAILED - The data repository association is in a terminal state that cannot be recovered.
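
The following is a minimal sketch of branching on these lifecycle values; the DataRepositoryLifecycle variant names are assumed to follow the crate's usual PascalCase mapping of the service values above:

```rust
use aws_sdk_fsx::types::{DataRepositoryAssociation, DataRepositoryLifecycle};

// Hypothetical helper: summarize the association's state for logging. The
// variant names below are assumed to mirror the service values listed above.
fn describe_state(assoc: &DataRepositoryAssociation) -> &'static str {
    match assoc.lifecycle() {
        Some(DataRepositoryLifecycle::Creating) => "being created; repository unavailable",
        Some(DataRepositoryLifecycle::Available) => "available for use",
        Some(DataRepositoryLifecycle::Misconfigured) => "misconfigured; auto import/export paused",
        Some(DataRepositoryLifecycle::Updating) => "customer-initiated update in progress",
        Some(DataRepositoryLifecycle::Deleting) => "customer-initiated deletion in progress",
        Some(DataRepositoryLifecycle::Failed) => "failed (terminal)",
        // The enum is non-exhaustive, so newer or unknown values need a catch-all.
        _ => "unknown lifecycle state",
    }
}
```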

failure_details: Option<DataRepositoryFailureDetails>

Provides detailed information about the data repository if its Lifecycle is set to MISCONFIGURED or FAILED.

file_system_path: Option<String>

A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2.

This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory.

If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify "/" as the file system path for the first data repository associated with a file system.

data_repository_path: Option<String>

The path to the data repository that will be linked to the cache or file system.

  • For Amazon File Cache, the path can be an NFS data repository that will be linked to the cache. The path can be in one of two formats:

    • If you are not using the DataRepositorySubdirectories parameter, the path is to an NFS Export directory (or one of its subdirectories) in the format nfs://nfs-domain-name/exportpath. You can therefore link a single NFS Export to a single data repository association.

    • If you are using the DataRepositorySubdirectories parameter, the path is the domain name of the NFS file system in the format nfs://filer-domain-name, which indicates the root of the subdirectories specified with the DataRepositorySubdirectories parameter.

  • For Amazon File Cache, the path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/.

  • For Amazon FSx for Lustre, the path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/.

batch_import_meta_data_on_create: Option<bool>

A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to true.

BatchImportMetaDataOnCreate is not supported for data repositories linked to an Amazon File Cache resource.

imported_file_chunk_size: Option<i32>

For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache.

The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
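
As a rough illustration of the relationship described above (the striping model here is an assumption made for the example, not an official formula):

```rust
// Illustrative model only: a file is laid out in chunks of
// `imported_file_chunk_size` MiB, and the number of disks it can be striped
// across is capped by the disks that make up the file system or cache.
fn approx_stripe_count(file_size_mib: u64, chunk_size_mib: u64, total_disks: u64) -> u64 {
    let chunks = file_size_mib.div_ceil(chunk_size_mib); // ceiling division
    chunks.clamp(1, total_disks)
}

fn main() {
    // With the default 1,024 MiB chunk size, a 10 GiB (10,240 MiB) file on a
    // file system backed by 24 disks would span 10 disks under this model.
    assert_eq!(approx_stripe_count(10_240, 1_024, 24), 10);
}
```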

s3: Option<S3DataRepositoryConfiguration>

The configuration for an Amazon S3 data repository linked to an Amazon FSx for Lustre file system with a data repository association.

tags: Option<Vec<Tag>>

A list of Tag values, with a maximum of 50 elements.

creation_time: Option<DateTime>

The time that the resource was created, in seconds (since 1970-01-01T00:00:00Z), also known as Unix time.

file_cache_id: Option<String>

The globally unique ID of the Amazon File Cache resource.

file_cache_path: Option<String>

A path on the Amazon File Cache that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the path is required. Two data repository associations cannot have overlapping cache paths. For example, if a data repository is associated with cache path /ns1/, then you cannot link another data repository with cache path /ns1/ns2.

This path specifies the directory in your cache where files will be exported from. This cache directory can be linked to only one data repository (S3 or NFS) and no other data repository can be linked to the directory.

The cache path can only be set to root (/) on an NFS DRA when DataRepositorySubdirectories is specified. If you specify root (/) as the cache path, you can create only one DRA on the cache.

The cache path cannot be set to root (/) for an S3 DRA.
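
A hypothetical helper that checks these documented cache-path rules on the client side, for illustration only (it is not part of the SDK):

```rust
use aws_sdk_fsx::types::DataRepositoryAssociation;

// Hypothetical client-side check of the cache-path rules stated above: root
// ("/") is only valid for an NFS DRA that uses DataRepositorySubdirectories,
// and never for an S3 DRA; any other path must start with a forward slash.
fn cache_path_rules_ok(assoc: &DataRepositoryAssociation) -> bool {
    match assoc.file_cache_path() {
        Some("/") => assoc.nfs().is_some() && !assoc.data_repository_subdirectories().is_empty(),
        Some(path) => path.starts_with('/'),
        None => true,
    }
}
```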

data_repository_subdirectories: Option<Vec<String>>

For Amazon File Cache, a list of NFS Exports that will be linked with an NFS data repository association. All the subdirectories must be on a single NFS file system. The Export paths are in the format /exportpath1. To use this parameter, you must configure DataRepositoryPath as the domain name of the NFS file system. The NFS file system domain name in effect is the root of the subdirectories. Note that DataRepositorySubdirectories is not supported for S3 data repositories.

nfs: Option<NfsDataRepositoryConfiguration>

The configuration for an NFS data repository linked to an Amazon File Cache resource with a data repository association.

Implementations

impl DataRepositoryAssociation

pub fn association_id(&self) -> Option<&str>

The system-generated, unique ID of the data repository association.

pub fn resource_arn(&self) -> Option<&str>

The Amazon Resource Name (ARN) for a given resource. ARNs uniquely identify Amazon Web Services resources. We require an ARN when you need to specify a resource unambiguously across all of Amazon Web Services. For more information, see Amazon Resource Names (ARNs) in the Amazon Web Services General Reference.

pub fn file_system_id(&self) -> Option<&str>

The globally unique ID of the file system, assigned by Amazon FSx.

pub fn lifecycle(&self) -> Option<&DataRepositoryLifecycle>

Describes the state of a data repository association. The lifecycle can have the following values:

  • CREATING - The data repository association between the file system or cache and the data repository is being created. The data repository is unavailable.

  • AVAILABLE - The data repository association is available for use.

  • MISCONFIGURED - The data repository association is misconfigured. Until the configuration is corrected, automatic import and automatic export will not work (only for Amazon FSx for Lustre).

  • UPDATING - The data repository association is undergoing a customer-initiated update that might affect its availability.

  • DELETING - The data repository association is undergoing a customer-initiated deletion.

  • FAILED - The data repository association is in a terminal state that cannot be recovered.

pub fn failure_details(&self) -> Option<&DataRepositoryFailureDetails>

Provides detailed information about the data repository if its Lifecycle is set to MISCONFIGURED or FAILED.

pub fn file_system_path(&self) -> Option<&str>

A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2.

This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory.

If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify "/" as the file system path for the first data repository associated with a file system.

pub fn data_repository_path(&self) -> Option<&str>

The path to the data repository that will be linked to the cache or file system.

  • For Amazon File Cache, the path can be an NFS data repository that will be linked to the cache. The path can be in one of two formats:

    • If you are not using the DataRepositorySubdirectories parameter, the path is to an NFS Export directory (or one of its subdirectories) in the format nfs://nfs-domain-name/exportpath. You can therefore link a single NFS Export to a single data repository association.

    • If you are using the DataRepositorySubdirectories parameter, the path is the domain name of the NFS file system in the format nfs://filer-domain-name, which indicates the root of the subdirectories specified with the DataRepositorySubdirectories parameter.

  • For Amazon File Cache, the path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/.

  • For Amazon FSx for Lustre, the path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/.

pub fn batch_import_meta_data_on_create(&self) -> Option<bool>

A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to true.

BatchImportMetaDataOnCreate is not supported for data repositories linked to an Amazon File Cache resource.

pub fn imported_file_chunk_size(&self) -> Option<i32>

For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache.

The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

pub fn s3(&self) -> Option<&S3DataRepositoryConfiguration>

The configuration for an Amazon S3 data repository linked to an Amazon FSx for Lustre file system with a data repository association.

pub fn tags(&self) -> &[Tag]

A list of Tag values, with a maximum of 50 elements.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .tags.is_none().
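
A short sketch of that distinction, using the public tags field documented above:

```rust
use aws_sdk_fsx::types::DataRepositoryAssociation;

// The accessor yields an empty slice when the field was absent, so inspect
// the public `tags` field to tell "not sent" apart from "sent but empty".
fn tag_summary(assoc: &DataRepositoryAssociation) -> String {
    if assoc.tags.is_none() {
        return "tags not returned".to_string();
    }
    format!("{} tag(s)", assoc.tags().len())
}
```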

pub fn creation_time(&self) -> Option<&DateTime>

The time that the resource was created, in seconds (since 1970-01-01T00:00:00Z), also known as Unix time.

pub fn file_cache_id(&self) -> Option<&str>

The globally unique ID of the Amazon File Cache resource.

pub fn file_cache_path(&self) -> Option<&str>

A path on the Amazon File Cache that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the path is required. Two data repository associations cannot have overlapping cache paths. For example, if a data repository is associated with cache path /ns1/, then you cannot link another data repository with cache path /ns1/ns2.

This path specifies the directory in your cache where files will be exported from. This cache directory can be linked to only one data repository (S3 or NFS) and no other data repository can be linked to the directory.

The cache path can only be set to root (/) on an NFS DRA when DataRepositorySubdirectories is specified. If you specify root (/) as the cache path, you can create only one DRA on the cache.

The cache path cannot be set to root (/) for an S3 DRA.

pub fn data_repository_subdirectories(&self) -> &[String]

For Amazon File Cache, a list of NFS Exports that will be linked with an NFS data repository association. All the subdirectories must be on a single NFS file system. The Export paths are in the format /exportpath1. To use this parameter, you must configure DataRepositoryPath as the domain name of the NFS file system. The NFS file system domain name in effect is the root of the subdirectories. Note that DataRepositorySubdirectories is not supported for S3 data repositories.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .data_repository_subdirectories.is_none().

pub fn nfs(&self) -> Option<&NfsDataRepositoryConfiguration>

The configuration for an NFS data repository linked to an Amazon File Cache resource with a data repository association.

impl DataRepositoryAssociation

pub fn builder() -> DataRepositoryAssociationBuilder

Creates a new builder-style object to manufacture DataRepositoryAssociation.
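
A minimal builder sketch. Setter names are assumed to mirror the documented field names, and build() is assumed to return the struct directly because every field is optional:

```rust
use aws_sdk_fsx::types::DataRepositoryAssociation;

fn main() {
    // All fields are optional, so only the values of interest are set here.
    let assoc = DataRepositoryAssociation::builder()
        .file_system_id("fs-0123456789abcdef0")
        .file_system_path("/ns1/")
        .data_repository_path("s3://myBucket/myPrefix/")
        .batch_import_meta_data_on_create(true)
        .build();

    assert_eq!(assoc.file_system_path(), Some("/ns1/"));
    assert_eq!(assoc.batch_import_meta_data_on_create(), Some(true));
}
```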

Trait Implementations

impl Clone for DataRepositoryAssociation

fn clone(&self) -> DataRepositoryAssociation

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for DataRepositoryAssociation

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl PartialEq for DataRepositoryAssociation

fn eq(&self, other: &DataRepositoryAssociation) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for DataRepositoryAssociation

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.