Struct aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsInputBuilder
#[non_exhaustive]
pub struct CreateLocationHdfsInputBuilder { /* private fields */ }
A builder for CreateLocationHdfsInput.
Implementations
impl CreateLocationHdfsInputBuilder
pub fn subdirectory(self, input: impl Into<String>) -> Self
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
pub fn set_subdirectory(self, input: Option<String>) -> Self
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
pub fn get_subdirectory(&self) -> &Option<String>
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
pub fn name_nodes(self, input: HdfsNameNode) -> Self
Appends an item to name_nodes.
To override the contents of this collection use set_name_nodes.
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
pub fn set_name_nodes(self, input: Option<Vec<HdfsNameNode>>) -> Self
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
pub fn get_name_nodes(&self) -> &Option<Vec<HdfsNameNode>>
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
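A minimal sketch of appending a NameNode, assuming HdfsNameNode lives in aws_sdk_datasync::types with hostname and port setters on its own builder and a build() that returns a Result, and that BuildError is re-exported at aws_sdk_datasync::error; the host and port values are placeholders.

use aws_sdk_datasync::error::BuildError;
use aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsInputBuilder;
use aws_sdk_datasync::types::HdfsNameNode;

fn with_name_node(builder: CreateLocationHdfsInputBuilder) -> Result<CreateLocationHdfsInputBuilder, BuildError> {
    // Placeholder endpoint; point this at your cluster's primary NameNode RPC address.
    let name_node = HdfsNameNode::builder()
        .hostname("namenode.example.internal")
        .port(8020)
        .build()?; // assumed to validate the required hostname/port and return Result<_, BuildError>
    // name_nodes() appends a single entry; DataSync accepts only one NameNode per location.
    Ok(builder.name_nodes(name_node))
}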
pub fn block_size(self, input: i32) -> Self
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
pub fn set_block_size(self, input: Option<i32>) -> Self
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
pub fn get_block_size(&self) -> &Option<i32>
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
pub fn replication_factor(self, input: i32) -> Self
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
pub fn set_replication_factor(self, input: Option<i32>) -> Self
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
pub fn get_replication_factor(&self) -> &Option<i32>
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
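A short sketch of setting these write-tuning parameters; the values are illustrative only (a 256 MiB block size, which is a multiple of 512 bytes, and replication to two DataNodes instead of the default three).

use aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsInputBuilder;

fn tuned_builder() -> CreateLocationHdfsInputBuilder {
    CreateLocationHdfsInputBuilder::default()
        .block_size(256 * 1024 * 1024) // 256 MiB, a multiple of 512 bytes
        .replication_factor(2)         // replicate to two DataNodes instead of the default three
}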
pub fn kms_key_provider_uri(self, input: impl Into<String>) -> Self
The URI of the HDFS cluster's Key Management Server (KMS).
pub fn set_kms_key_provider_uri(self, input: Option<String>) -> Self
The URI of the HDFS cluster's Key Management Server (KMS).
pub fn get_kms_key_provider_uri(&self) -> &Option<String>
The URI of the HDFS cluster's Key Management Server (KMS).
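A sketch of pointing the location at an external Hadoop KMS; the kms://<protocol>@<host>:<port>/kms URI style, host, and port below are illustrative assumptions, not values confirmed by this page.

use aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsInputBuilder;

fn with_kms(builder: CreateLocationHdfsInputBuilder) -> CreateLocationHdfsInputBuilder {
    // Placeholder KMS endpoint in the Hadoop KMS URI style.
    builder.kms_key_provider_uri("kms://https@hdfs-kms.example.internal:9600/kms")
}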
pub fn qop_configuration(self, input: QopConfiguration) -> Self
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
pub fn set_qop_configuration(self, input: Option<QopConfiguration>) -> Self
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
pub fn get_qop_configuration(&self) -> &Option<QopConfiguration>
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
pub fn authentication_type(self, input: HdfsAuthenticationType) -> Self
The type of authentication used to determine the identity of the user. This field is required.
pub fn set_authentication_type(self, input: Option<HdfsAuthenticationType>) -> Self
The type of authentication used to determine the identity of the user.
pub fn get_authentication_type(&self) -> &Option<HdfsAuthenticationType>
The type of authentication used to determine the identity of the user.
pub fn simple_user(self, input: impl Into<String>) -> Self
The user name used to identify the client on the host operating system.
If SIMPLE is specified for AuthenticationType, this parameter is required.
pub fn set_simple_user(self, input: Option<String>) -> Self
The user name used to identify the client on the host operating system.
If SIMPLE is specified for AuthenticationType, this parameter is required.
pub fn get_simple_user(&self) -> &Option<String>
The user name used to identify the client on the host operating system.
If SIMPLE is specified for AuthenticationType, this parameter is required.
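A sketch of the SIMPLE authentication path, assuming HdfsAuthenticationType::Simple is the corresponding enum variant; the user name is a placeholder.

use aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsInputBuilder;
use aws_sdk_datasync::types::HdfsAuthenticationType;

fn simple_auth_builder() -> CreateLocationHdfsInputBuilder {
    CreateLocationHdfsInputBuilder::default()
        .authentication_type(HdfsAuthenticationType::Simple)
        .simple_user("hdfs-user") // required when AuthenticationType is SIMPLE; placeholder name
}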
pub fn kerberos_principal(self, input: impl Into<String>) -> Self
The Kerberos principal with access to the files and folders on the HDFS cluster.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn set_kerberos_principal(self, input: Option<String>) -> Self
The Kerberos principal with access to the files and folders on the HDFS cluster.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn get_kerberos_principal(&self) -> &Option<String>
The Kerberos principal with access to the files and folders on the HDFS cluster.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn kerberos_keytab(self, input: Blob) -> Self
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn set_kerberos_keytab(self, input: Option<Blob>) -> Self
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn get_kerberos_keytab(&self) -> &Option<Blob>
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn kerberos_krb5_conf(self, input: Blob) -> Self
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn set_kerberos_krb5_conf(self, input: Option<Blob>) -> Self
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn get_kerberos_krb5_conf(&self) -> &Option<Blob>
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
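A sketch of the KERBEROS path, assuming Blob is re-exported at aws_sdk_datasync::primitives and HdfsAuthenticationType::Kerberos is the matching variant; the principal and file paths are placeholders, and both files are passed to the SDK as raw bytes wrapped in a Blob.

use aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsInputBuilder;
use aws_sdk_datasync::primitives::Blob;
use aws_sdk_datasync::types::HdfsAuthenticationType;

fn kerberos_builder() -> std::io::Result<CreateLocationHdfsInputBuilder> {
    // Placeholder paths; the SDK takes the raw file contents, not a path.
    let keytab = Blob::new(std::fs::read("/etc/security/keytabs/datasync.keytab")?);
    let krb5_conf = Blob::new(std::fs::read("/etc/krb5.conf")?);

    Ok(CreateLocationHdfsInputBuilder::default()
        .authentication_type(HdfsAuthenticationType::Kerberos)
        .kerberos_principal("datasync/namenode.example.internal@EXAMPLE.COM") // placeholder principal
        .kerberos_keytab(keytab)
        .kerberos_krb5_conf(krb5_conf))
}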
pub fn agent_arns(self, input: impl Into<String>) -> Self
Appends an item to agent_arns.
To override the contents of this collection use set_agent_arns.
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
pub fn set_agent_arns(self, input: Option<Vec<String>>) -> Self
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
pub fn get_agent_arns(&self) -> &Option<Vec<String>>
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
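A short sketch of appending an agent ARN; the ARN below is a placeholder, and agent_arns() can be called repeatedly to add more agents.

use aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsInputBuilder;

fn with_agent(builder: CreateLocationHdfsInputBuilder) -> CreateLocationHdfsInputBuilder {
    // Placeholder agent ARN; each call appends one entry to the collection.
    builder.agent_arns("arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0")
}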
pub fn tags(self, input: TagListEntry) -> Self
Appends an item to tags.
To override the contents of this collection use set_tags.
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
pub fn set_tags(self, input: Option<Vec<TagListEntry>>) -> Self
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
pub fn get_tags(&self) -> &Option<Vec<TagListEntry>>
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
pub fn build(self) -> Result<CreateLocationHdfsInput, BuildError>
Consumes the builder and constructs a CreateLocationHdfsInput.
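A minimal sketch of finishing the builder; the field value is a placeholder, BuildError is assumed to be re-exported at aws_sdk_datasync::error, and build() is assumed to surface missing or invalid required fields as that error.

use aws_sdk_datasync::error::BuildError;
use aws_sdk_datasync::operation::create_location_hdfs::CreateLocationHdfsInput;
use aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsInputBuilder;

fn example_input() -> Result<CreateLocationHdfsInput, BuildError> {
    // Placeholder subdirectory; in practice set the NameNode, agents, and authentication too.
    CreateLocationHdfsInputBuilder::default()
        .subdirectory("/ingest")
        .build()
}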
impl CreateLocationHdfsInputBuilder
pub async fn send_with(self, client: &Client) -> Result<CreateLocationHdfsOutput, SdkError<CreateLocationHdfsError, HttpResponse>>
Sends a request with this input using the given client.
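A sketch of sending the prepared builder with an existing DataSync client; the conversion of the SdkError into aws_sdk_datasync::Error via ? and the location_arn() accessor on the output are assumptions about the generated types.

use aws_sdk_datasync::Client;
use aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsInputBuilder;

async fn create_hdfs_location(
    client: &Client,
    builder: CreateLocationHdfsInputBuilder,
) -> Result<(), aws_sdk_datasync::Error> {
    // send_with() builds the input and dispatches CreateLocationHdfs in one step.
    let output = builder.send_with(client).await?;
    // location_arn() is assumed to return the ARN of the newly created location.
    println!("created HDFS location: {:?}", output.location_arn());
    Ok(())
}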
Trait Implementations
impl Clone for CreateLocationHdfsInputBuilder
fn clone(&self) -> CreateLocationHdfsInputBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Default for CreateLocationHdfsInputBuilder
fn default() -> CreateLocationHdfsInputBuilder
impl PartialEq for CreateLocationHdfsInputBuilder
fn eq(&self, other: &CreateLocationHdfsInputBuilder) -> bool
This method tests for self and other values to be equal, and is used by ==.