Struct aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsFluentBuilder
pub struct CreateLocationHdfsFluentBuilder { /* private fields */ }
Fluent builder constructing a request to CreateLocationHdfs.
Creates an endpoint for a Hadoop Distributed File System (HDFS).
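For orientation, a minimal sketch of driving this builder end to end with SIMPLE authentication. The hostname, port, user name, subdirectory, and agent ARN are placeholders, and build() on generated types is assumed to return a Result, as in recent SDK releases:

use aws_sdk_datasync::types::{HdfsAuthenticationType, HdfsNameNode};

async fn create_hdfs_location(
    client: &aws_sdk_datasync::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    // Only one NameNode is accepted in this collection.
    let name_node = HdfsNameNode::builder()
        .hostname("namenode.example.com") // placeholder hostname
        .port(8020)                       // placeholder RPC port
        .build()?;

    let output = client
        .create_location_hdfs()
        .subdirectory("/data")
        .name_nodes(name_node)
        .authentication_type(HdfsAuthenticationType::Simple)
        .simple_user("hdfs-user") // required when AuthenticationType is SIMPLE
        .agent_arns("arn:aws:datasync:us-east-2:111122223333:agent/agent-0123456789abcdef0")
        .send()
        .await?;

    println!("location ARN: {:?}", output.location_arn());
    Ok(())
}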
Implementations
impl CreateLocationHdfsFluentBuilder
pub fn as_input(&self) -> &CreateLocationHdfsInputBuilder
Access the CreateLocationHdfs as a reference.
pub async fn send(self) -> Result<CreateLocationHdfsOutput, SdkError<CreateLocationHdfsError, HttpResponse>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
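As a sketch of tuning that behavior, a client might be built with a custom RetryConfig. This assumes the separate aws_config crate for shared configuration; the attempt count is an arbitrary example:

use aws_sdk_datasync::config::retry::RetryConfig;

async fn client_with_more_retries() -> aws_sdk_datasync::Client {
    let shared = aws_config::load_from_env().await;
    let conf = aws_sdk_datasync::config::Builder::from(&shared)
        // Allow five total attempts instead of the default described above.
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .build();
    aws_sdk_datasync::Client::from_conf(conf)
}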
pub async fn customize(self) -> Result<CustomizableOperation<CreateLocationHdfsOutput, CreateLocationHdfsError, Self>, SdkError<CreateLocationHdfsError>>
Consumes this builder, creating a customizable operation that can be modified before being sent.
pub fn subdirectory(self, input: impl Into<String>) -> Self
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
pub fn set_subdirectory(self, input: Option<String>) -> Self
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
pub fn get_subdirectory(&self) -> &Option<String>
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
pub fn name_nodes(self, input: HdfsNameNode) -> Self
Appends an item to NameNodes. To override the contents of this collection use set_name_nodes.
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
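A sketch of the append-versus-override distinction; the client and node values here are hypothetical:

use aws_sdk_datasync::types::HdfsNameNode;

fn single_name_node(
    client: &aws_sdk_datasync::Client,
    node: HdfsNameNode,
) -> aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsFluentBuilder {
    // name_nodes(node) would append one entry; set_name_nodes(Some(vec![node]))
    // replaces the entire collection, which also guarantees a single NameNode.
    client.create_location_hdfs().set_name_nodes(Some(vec![node]))
}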
pub fn set_name_nodes(self, input: Option<Vec<HdfsNameNode>>) -> Self
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
pub fn get_name_nodes(&self) -> &Option<Vec<HdfsNameNode>>
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
pub fn block_size(self, input: i32) -> Self
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
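As a worked check of those numbers, the 128 MiB default expressed in bytes already satisfies the multiple-of-512 rule (the client value is hypothetical):

// 128 MiB = 128 * 1024 * 1024 = 134_217_728 bytes; 134_217_728 / 512 = 262_144.
let default_block_size: i32 = 128 * 1024 * 1024;
assert_eq!(default_block_size % 512, 0);
let builder = client.create_location_hdfs().block_size(default_block_size);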
pub fn set_block_size(self, input: Option<i32>) -> Self
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
pub fn get_block_size(&self) -> &Option<i32>
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
pub fn replication_factor(self, input: i32) -> Self
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
pub fn set_replication_factor(self, input: Option<i32>) -> Self
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
pub fn get_replication_factor(&self) -> &Option<i32>
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
pub fn kms_key_provider_uri(self, input: impl Into<String>) -> Self
The URI of the HDFS cluster's Key Management Server (KMS).
pub fn set_kms_key_provider_uri(self, input: Option<String>) -> Self
The URI of the HDFS cluster's Key Management Server (KMS).
pub fn get_kms_key_provider_uri(&self) -> &Option<String>
The URI of the HDFS cluster's Key Management Server (KMS).
pub fn qop_configuration(self, input: QopConfiguration) -> Self
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
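A sketch of pinning both settings explicitly rather than relying on the PRIVACY default; the client is hypothetical and the enum variant names are assumed from the service model:

use aws_sdk_datasync::types::{HdfsDataTransferProtection, HdfsRpcProtection, QopConfiguration};

// QopConfiguration has no required members, so build() returns the struct directly.
let qop = QopConfiguration::builder()
    .rpc_protection(HdfsRpcProtection::Privacy)
    .data_transfer_protection(HdfsDataTransferProtection::Privacy)
    .build();
let builder = client.create_location_hdfs().qop_configuration(qop);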
pub fn set_qop_configuration(self, input: Option<QopConfiguration>) -> Self
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
pub fn get_qop_configuration(&self) -> &Option<QopConfiguration>
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
pub fn authentication_type(self, input: HdfsAuthenticationType) -> Self
The type of authentication used to determine the identity of the user.
pub fn set_authentication_type(self, input: Option<HdfsAuthenticationType>) -> Self
The type of authentication used to determine the identity of the user.
pub fn get_authentication_type(&self) -> &Option<HdfsAuthenticationType>
The type of authentication used to determine the identity of the user.
pub fn simple_user(self, input: impl Into<String>) -> Self
The user name used to identify the client on the host operating system.
If SIMPLE is specified for AuthenticationType, this parameter is required.
pub fn set_simple_user(self, input: Option<String>) -> Self
The user name used to identify the client on the host operating system.
If SIMPLE is specified for AuthenticationType, this parameter is required.
pub fn get_simple_user(&self) -> &Option<String>
The user name used to identify the client on the host operating system.
If SIMPLE is specified for AuthenticationType, this parameter is required.
pub fn kerberos_principal(self, input: impl Into<String>) -> Self
The Kerberos principal with access to the files and folders on the HDFS cluster.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn set_kerberos_principal(self, input: Option<String>) -> Self
The Kerberos principal with access to the files and folders on the HDFS cluster.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn get_kerberos_principal(&self) -> &Option<String>
The Kerberos principal with access to the files and folders on the HDFS cluster.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn kerberos_keytab(self, input: Blob) -> Self
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
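A sketch of loading a keytab from disk for the Kerberos flow; the path and principal are placeholders. In the Rust SDK a Blob carries raw bytes and is base64-encoded during serialization, so no manual encoding step should be needed:

use aws_sdk_datasync::primitives::Blob;
use aws_sdk_datasync::types::HdfsAuthenticationType;

fn kerberos_builder(
    client: &aws_sdk_datasync::Client,
) -> std::io::Result<aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsFluentBuilder> {
    // Read the keytab file as raw bytes; the path is a placeholder.
    let keytab = std::fs::read("/etc/security/keytabs/datasync.keytab")?;
    Ok(client
        .create_location_hdfs()
        .authentication_type(HdfsAuthenticationType::Kerberos)
        .kerberos_principal("primary/instance@EXAMPLE.COM") // placeholder principal
        .kerberos_keytab(Blob::new(keytab)))
}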
pub fn set_kerberos_keytab(self, input: Option<Blob>) -> Self
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn get_kerberos_keytab(&self) -> &Option<Blob>
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn kerberos_krb5_conf(self, input: Blob) -> Self
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn set_kerberos_krb5_conf(self, input: Option<Blob>) -> Self
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn get_kerberos_krb5_conf(&self) -> &Option<Blob>
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
pub fn agent_arns(self, input: impl Into<String>) -> Self
Appends an item to AgentArns. To override the contents of this collection use set_agent_arns.
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
pub fn set_agent_arns(self, input: Option<Vec<String>>) -> Self
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
pub fn get_agent_arns(&self) -> &Option<Vec<String>>
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
pub fn tags(self, input: TagListEntry) -> Self
Appends an item to Tags. To override the contents of this collection use set_tags.
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
pub fn set_tags(self, input: Option<Vec<TagListEntry>>) -> Self
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
pub fn get_tags(&self) -> &Option<Vec<TagListEntry>>
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
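A sketch of tagging the location; the client, key, and value are placeholders, and build() is assumed to return a Result because the tag key is a required member in recent SDK releases:

use aws_sdk_datasync::types::TagListEntry;

fn tagged_builder(
    client: &aws_sdk_datasync::Client,
) -> Result<aws_sdk_datasync::operation::create_location_hdfs::builders::CreateLocationHdfsFluentBuilder, Box<dyn std::error::Error>> {
    let tag = TagListEntry::builder()
        .key("Name")               // placeholder tag key
        .value("my-hdfs-location") // placeholder tag value
        .build()?;
    Ok(client.create_location_hdfs().tags(tag))
}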
Trait Implementations
impl Clone for CreateLocationHdfsFluentBuilder
fn clone(&self) -> CreateLocationHdfsFluentBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.