#[non_exhaustive]
pub struct CreateLocationHdfsInput {
pub subdirectory: Option<String>,
pub name_nodes: Option<Vec<HdfsNameNode>>,
pub block_size: Option<i32>,
pub replication_factor: Option<i32>,
pub kms_key_provider_uri: Option<String>,
pub qop_configuration: Option<QopConfiguration>,
pub authentication_type: Option<HdfsAuthenticationType>,
pub simple_user: Option<String>,
pub kerberos_principal: Option<String>,
pub kerberos_keytab: Option<Blob>,
pub kerberos_krb5_conf: Option<Blob>,
pub agent_arns: Option<Vec<String>>,
pub tags: Option<Vec<TagListEntry>>,
}
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. It cannot be constructed in external crates using the traditional Struct { .. } syntax; it cannot be matched against without a wildcard ..; and struct update syntax will not work.
subdirectory: Option<String>
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
name_nodes: Option<Vec<HdfsNameNode>>
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
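Because only one NameNode is allowed, the Vec holds a single entry in practice. A minimal sketch of constructing one, assuming the usual generated builder (the hostname, port, and module path are illustrative and may vary by SDK version):

use aws_sdk_datasync::types::HdfsNameNode;

// Placeholder endpoint; 8020 is the conventional NameNode RPC port.
// build() is fallible here because hostname and port are required members.
let name_node = HdfsNameNode::builder()
    .hostname("namenode.example.com")
    .port(8020)
    .build()
    .expect("hostname and port are set");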
block_size: Option<i32>
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
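As a quick check on the constraint, the 128 MiB default is itself a multiple of 512:

// 128 MiB in bytes: 128 * 1024 * 1024 = 134_217_728, and
// 134_217_728 / 512 = 262_144 with no remainder.
let default_block_size: i32 = 128 * 1024 * 1024;
assert_eq!(default_block_size % 512, 0);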
replication_factor: Option<i32>
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
kms_key_provider_uri: Option<String>
The URI of the HDFS cluster's Key Management Server (KMS).
qop_configuration: Option<QopConfiguration>
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
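A minimal sketch of setting both values explicitly, assuming the HdfsRpcProtection and HdfsDataTransferProtection enums generated for this API (the module path may vary by SDK version):

use aws_sdk_datasync::types::{
    HdfsDataTransferProtection, HdfsRpcProtection, QopConfiguration,
};

// Setting both sides explicitly avoids relying on the one-implies-the-other
// defaulting rule described above.
let qop = QopConfiguration::builder()
    .rpc_protection(HdfsRpcProtection::Privacy)
    .data_transfer_protection(HdfsDataTransferProtection::Privacy)
    .build();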
authentication_type: Option<HdfsAuthenticationType>
The type of authentication used to determine the identity of the user.
simple_user: Option<String>
The user name used to identify the client on the host operating system.
If SIMPLE is specified for AuthenticationType, this parameter is required.
kerberos_principal: Option<String>
The Kerberos principal with access to the files and folders on the HDFS cluster.
If KERBEROS is specified for AuthenticationType, this parameter is required.
kerberos_keytab: Option<Blob>
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
kerberos_krb5_conf: Option<Blob>
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
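In the Rust SDK these two fields take raw bytes rather than base64 text. A minimal sketch of loading them, with hypothetical file paths (the Blob re-export path may differ across SDK versions):

use aws_sdk_datasync::primitives::Blob;

// Hypothetical paths. Unlike the CLI's base64 convention, Blob::new
// takes the raw file bytes directly.
let keytab = Blob::new(std::fs::read("/etc/security/datasync.keytab").expect("keytab readable"));
let krb5_conf = Blob::new(std::fs::read("/etc/krb5.conf").expect("krb5.conf readable"));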
agent_arns: Option<Vec<String>>
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
tags: Option<Vec<TagListEntry>>
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
Implementations
impl CreateLocationHdfsInput
pub fn subdirectory(&self) -> Option<&str>
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
pub fn name_nodes(&self) -> &[HdfsNameNode]
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .name_nodes.is_none().
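The accessor cannot distinguish an unset field from an empty one; a brief sketch, where input is assumed to be an existing CreateLocationHdfsInput value:

// The accessor flattens None into an empty slice, so an unset field and an
// explicitly empty Vec look identical through it.
let _nodes: &[HdfsNameNode] = input.name_nodes();

// Inspect the raw field when the unset/empty distinction matters.
if input.name_nodes.is_none() {
    // no value was sent for this field
}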
pub fn block_size(&self) -> Option<i32>
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
pub fn replication_factor(&self) -> Option<i32>
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
pub fn kms_key_provider_uri(&self) -> Option<&str>
The URI of the HDFS cluster's Key Management Server (KMS).
pub fn qop_configuration(&self) -> Option<&QopConfiguration>
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
pub fn authentication_type(&self) -> Option<&HdfsAuthenticationType>
The type of authentication used to determine the identity of the user.
pub fn simple_user(&self) -> Option<&str>
The user name used to identify the client on the host operating system.
If SIMPLE is specified for AuthenticationType, this parameter is required.
pub fn kerberos_principal(&self) -> Option<&str>
The Kerberos principal with access to the files and folders on the HDFS cluster.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn kerberos_keytab(&self) -> Option<&Blob>
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn kerberos_krb5_conf(&self) -> Option<&Blob>
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn agent_arns(&self) -> &[String]
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .agent_arns.is_none().
pub fn tags(&self) -> &[TagListEntry]
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .tags.is_none().
impl CreateLocationHdfsInput
pub fn builder() -> CreateLocationHdfsInputBuilder
Creates a new builder-style object to manufacture CreateLocationHdfsInput.
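Since the struct is non-exhaustive, the builder is the only way to construct it. A minimal end-to-end sketch for a SIMPLE-authenticated location; every value is a placeholder, the module paths assume the current SDK layout, and both build() calls are treated as fallible because required members are enforced:

use aws_sdk_datasync::operation::create_location_hdfs::CreateLocationHdfsInput;
use aws_sdk_datasync::types::{HdfsAuthenticationType, HdfsNameNode};

// Placeholder NameNode endpoint and agent ARN throughout.
let name_node = HdfsNameNode::builder()
    .hostname("namenode.example.com")
    .port(8020)
    .build()
    .expect("hostname and port are set");

let input = CreateLocationHdfsInput::builder()
    .subdirectory("/data/ingest")
    .name_nodes(name_node) // appends the single allowed NameNode
    .authentication_type(HdfsAuthenticationType::Simple)
    .simple_user("hdfs-user")
    // agent_arns appends one ARN per call.
    .agent_arns("arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0")
    .build()
    .expect("required members are set");

In application code you would more often use the fluent builder on the client (client.create_location_hdfs()), which exposes the same setters and sends the request directly.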
Trait Implementations
impl Clone for CreateLocationHdfsInput
fn clone(&self) -> CreateLocationHdfsInput
fn clone_from(&mut self, source: &Self)
impl Debug for CreateLocationHdfsInput
impl PartialEq for CreateLocationHdfsInput
impl StructuralPartialEq for CreateLocationHdfsInput
Auto Trait Implementations
impl Freeze for CreateLocationHdfsInput
impl RefUnwindSafe for CreateLocationHdfsInput
impl Send for CreateLocationHdfsInput
impl Sync for CreateLocationHdfsInput
impl Unpin for CreateLocationHdfsInput
impl UnwindSafe for CreateLocationHdfsInput
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
unsafe fn clone_to_uninit(&self, dst: *mut T)
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.