#[non_exhaustive]
pub struct CreateLocationHdfsInput {
    pub subdirectory: Option<String>,
    pub name_nodes: Option<Vec<HdfsNameNode>>,
    pub block_size: Option<i32>,
    pub replication_factor: Option<i32>,
    pub kms_key_provider_uri: Option<String>,
    pub qop_configuration: Option<QopConfiguration>,
    pub authentication_type: Option<HdfsAuthenticationType>,
    pub simple_user: Option<String>,
    pub kerberos_principal: Option<String>,
    pub kerberos_keytab: Option<Blob>,
    pub kerberos_krb5_conf: Option<Blob>,
    pub agent_arns: Option<Vec<String>>,
    pub tags: Option<Vec<TagListEntry>>,
}

Fields (Non-exhaustive)

This struct is marked as non-exhaustive
Non-exhaustive structs could have additional fields added in the future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.
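
For example, external code that inspects an existing input has to leave room for fields added in later releases, either through the accessor methods below or through a rest pattern. A minimal sketch using only the documented public fields (the helper name is hypothetical):

// Because the struct is non-exhaustive, a pattern match from an external
// crate must include the `..` rest pattern; new instances are created
// through the builder (see builder() below).
fn log_hdfs_target(input: &CreateLocationHdfsInput) {
    let CreateLocationHdfsInput { subdirectory, agent_arns, .. } = input;
    println!("subdirectory = {:?}, agents = {:?}", subdirectory, agent_arns);
}
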
subdirectory: Option<String>

A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.

name_nodes: Option<Vec<HdfsNameNode>>

The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
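
As a hedged sketch, a single NameNode entry might be built as follows; the HdfsNameNode builder and its hostname/port setters are assumptions based on the DataSync NameNode shape, and the endpoint values are placeholders:

// Assumed builder and setter names; hostname and port are placeholders.
// Depending on the SDK version, build() may return the value directly or
// a Result that needs to be unwrapped.
let name_node = HdfsNameNode::builder()
    .hostname("namenode.example.com")
    .port(8020)
    .build();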

block_size: Option<i32>

The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
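
For instance, the 128 MiB default expressed in bytes satisfies the 512-byte alignment requirement (a small arithmetic check, not SDK code; the service performs its own validation):

// 128 MiB in bytes; the documented constraint is a multiple of 512 bytes.
let block_size: i32 = 128 * 1024 * 1024; // 134_217_728
assert_eq!(block_size % 512, 0);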

replication_factor: Option<i32>

The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.

kms_key_provider_uri: Option<String>

The URI of the HDFS cluster's Key Management Server (KMS).

qop_configuration: Option<QopConfiguration>

The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
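
A hedged sketch of supplying an explicit QOP configuration; the QopConfiguration builder and the HdfsRpcProtection / HdfsDataTransferProtection variants shown here are assumptions inferred from the documented field and value names, so verify them against your SDK version:

// Assumed types and setter names. Both protections are set to the same
// value here, mirroring the documented behavior that setting one makes
// the other assume the same value.
let qop = QopConfiguration::builder()
    .rpc_protection(HdfsRpcProtection::Privacy)
    .data_transfer_protection(HdfsDataTransferProtection::Privacy)
    .build();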

authentication_type: Option<HdfsAuthenticationType>

The type of authentication used to determine the identity of the user.

simple_user: Option<String>

The user name used to identify the client on the host operating system.

If SIMPLE is specified for AuthenticationType, this parameter is required.

kerberos_principal: Option<String>

The Kerberos principal with access to the files and folders on the HDFS cluster.

If KERBEROS is specified for AuthenticationType, this parameter is required.

kerberos_keytab: Option<Blob>

The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.

If KERBEROS is specified for AuthenticationType, this parameter is required.
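
When constructing the input in Rust, the keytab bytes can be read from disk and wrapped in a Blob. This is a sketch: the path is hypothetical, the Blob re-export path varies between SDK versions, and the SDK is assumed to handle any wire-level encoding of the raw bytes:

use aws_smithy_types::Blob;

// Hypothetical keytab path; Blob::new wraps the raw bytes read from disk.
fn load_keytab() -> std::io::Result<Blob> {
    Ok(Blob::new(std::fs::read("/etc/security/keytabs/datasync.keytab")?))
}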

kerberos_krb5_conf: Option<Blob>

The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.

If KERBEROS is specified for AuthenticationType, this parameter is required.
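
Taken together, the SIMPLE and KERBEROS requirements above can be checked client-side before calling the service. This sketch uses only the accessors documented below; the Simple/Kerberos variant names and the helper itself are assumptions based on the documented authentication types:

// Hypothetical pre-flight check mirroring the documented requirements.
fn check_authentication(input: &CreateLocationHdfsInput) -> Result<(), String> {
    match input.authentication_type() {
        Some(HdfsAuthenticationType::Simple) if input.simple_user().is_none() => {
            Err("SIMPLE authentication requires simple_user".to_string())
        }
        Some(HdfsAuthenticationType::Kerberos)
            if input.kerberos_principal().is_none()
                || input.kerberos_keytab().is_none()
                || input.kerberos_krb5_conf().is_none() =>
        {
            Err("KERBEROS authentication requires kerberos_principal, kerberos_keytab, and kerberos_krb5_conf".to_string())
        }
        // Other or unrecognized authentication types: nothing to check here.
        _ => Ok(()),
    }
}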

agent_arns: Option<Vec<String>>

The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.

tags: Option<Vec<TagListEntry>>

The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
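
As a hedged example of the naming recommendation, a tag with the key Name could be attached; the TagListEntry builder and setter names are assumptions based on the documented type, and the value is a placeholder:

// Assumed builder API for TagListEntry; depending on the SDK version,
// build() may return the entry directly or a Result.
let name_tag = TagListEntry::builder()
    .key("Name")
    .value("my-hdfs-location")
    .build();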

Implementations

impl CreateLocationHdfsInput

pub fn subdirectory(&self) -> Option<&str>

A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.

pub fn name_nodes(&self) -> Option<&[HdfsNameNode]>

The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.

pub fn block_size(&self) -> Option<i32>

The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).

pub fn replication_factor(&self) -> Option<i32>

The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.

pub fn kms_key_provider_uri(&self) -> Option<&str>

The URI of the HDFS cluster's Key Management Server (KMS).

pub fn qop_configuration(&self) -> Option<&QopConfiguration>

The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.

pub fn authentication_type(&self) -> Option<&HdfsAuthenticationType>

The type of authentication used to determine the identity of the user.

pub fn simple_user(&self) -> Option<&str>

The user name used to identify the client on the host operating system.

If SIMPLE is specified for AuthenticationType, this parameter is required.

pub fn kerberos_principal(&self) -> Option<&str>

The Kerberos principal with access to the files and folders on the HDFS cluster.

If KERBEROS is specified for AuthenticationType, this parameter is required.

pub fn kerberos_keytab(&self) -> Option<&Blob>

The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.

If KERBEROS is specified for AuthenticationType, this parameter is required.

pub fn kerberos_krb5_conf(&self) -> Option<&Blob>

The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.

If KERBEROS is specified for AuthenticationType, this parameter is required.

pub fn agent_arns(&self) -> Option<&[String]>

The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.

pub fn tags(&self) -> Option<&[TagListEntry]>

The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.

impl CreateLocationHdfsInput

pub fn builder() -> CreateLocationHdfsInputBuilder

Creates a new builder-style object to manufacture CreateLocationHdfsInput.
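
A hedged end-to-end sketch of assembling an input with this builder. The setter names are assumed to mirror the documented field names, list fields are assumed to take repeated singular calls, the subdirectory and ARN are placeholders, and whether build() returns the input directly or a Result depends on the SDK version:

// Illustrative only; the NameNode, QOP, and authentication settings
// described above would be added in the same way.
let input = CreateLocationHdfsInput::builder()
    .subdirectory("/datasync/exports")   // hypothetical HDFS subdirectory
    .block_size(128 * 1024 * 1024)       // documented default: 128 MiB
    .replication_factor(3)               // documented default: 3 DataNodes
    .agent_arns("arn:aws:datasync:us-east-2:111122223333:agent/agent-EXAMPLE")
    .build();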

Trait Implementations

impl Clone for CreateLocationHdfsInput

fn clone(&self) -> CreateLocationHdfsInput

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for CreateLocationHdfsInput

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl PartialEq<CreateLocationHdfsInput> for CreateLocationHdfsInput

fn eq(&self, other: &CreateLocationHdfsInput) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for CreateLocationHdfsInput

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where T: 'static + ?Sized

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T where T: ?Sized

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T where U: From<T>

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> Same<T> for T

type Output = T

Should always be Self.

impl<T> ToOwned for T where T: Clone

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T where U: Into<T>

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T where U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.