Struct rusoto_dms::KafkaSettings

pub struct KafkaSettings {
    pub broker: Option<String>,
    pub include_control_details: Option<bool>,
    pub include_null_and_empty: Option<bool>,
    pub include_partition_value: Option<bool>,
    pub include_table_alter_operations: Option<bool>,
    pub include_transaction_details: Option<bool>,
    pub message_format: Option<String>,
    pub message_max_bytes: Option<i64>,
    pub partition_include_schema_table: Option<bool>,
    pub sasl_password: Option<String>,
    pub sasl_username: Option<String>,
    pub security_protocol: Option<String>,
    pub ssl_ca_certificate_arn: Option<String>,
    pub ssl_client_certificate_arn: Option<String>,
    pub ssl_client_key_arn: Option<String>,
    pub ssl_client_key_password: Option<String>,
    pub topic: Option<String>,
}

Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data.
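Every field is optional and the struct implements Default (see Trait Implementations below), so a value can be built with struct-update syntax. A minimal sketch; the broker address is the placeholder from the field docs:

use rusoto_dms::KafkaSettings;

fn main() {
    // Fields left unset fall back to the service-side defaults
    // described under Fields below.
    let settings = KafkaSettings {
        broker: Some("ec2-12-345-678-901.compute-1.amazonaws.com:2345".to_string()),
        ..Default::default()
    };
    println!("{:?}", settings);
}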

Fields

broker: Option<String>

A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for AWS Database Migration Service in the AWS Database Migration Service User Guide.

include_control_details: Option<bool>

Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

include_null_and_empty: Option<bool>

Includes NULL and empty columns in records migrated to the endpoint. The default is false.

include_partition_value: Option<bool>

Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is false.

include_table_alter_operations: Option<bool>

Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

include_transaction_details: Option<bool>

Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
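The five include_* flags above are independent, and each defaults to false. A sketch that enables the full set of control and transaction metadata:

use rusoto_dms::KafkaSettings;

// Turn on every optional piece of control and transaction metadata
// in the Kafka message output.
fn verbose_kafka_settings() -> KafkaSettings {
    KafkaSettings {
        include_control_details: Some(true),
        include_null_and_empty: Some(true),
        include_partition_value: Some(true),
        include_table_alter_operations: Some(true),
        include_transaction_details: Some(true),
        ..Default::default()
    }
}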

message_format: Option<String>

The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
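As a sketch, selecting the single-line format; the DMS API accepts the lowercase identifiers "json" (the default) and "json-unformatted" for this String-typed field:

use rusoto_dms::KafkaSettings;

// Emit each record as one unformatted JSON line.
fn unformatted_json_settings() -> KafkaSettings {
    KafkaSettings {
        message_format: Some("json-unformatted".to_string()),
        ..Default::default()
    }
}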

message_max_bytes: Option<i64>

The maximum size in bytes for records created on the endpoint. The default is 1,000,000.

partition_include_schema_table: Option<bool>

Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

sasl_password: Option<String>

The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.

sasl_username: Option<String>

The secure username you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.

security_protocol: Option<String>

Sets a secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
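A sketch of a SASL-SSL configuration; the credential values are hypothetical placeholders supplied by the caller:

use rusoto_dms::KafkaSettings;

// sasl-ssl requires both SaslUsername and SaslPassword to be set.
fn sasl_ssl_settings(username: &str, password: &str) -> KafkaSettings {
    KafkaSettings {
        security_protocol: Some("sasl-ssl".to_string()),
        sasl_username: Some(username.to_string()),
        sasl_password: Some(password.to_string()),
        ..Default::default()
    }
}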

ssl_ca_certificate_arn: Option<String>

The Amazon Resource Name (ARN) for the private certificate authority (CA) certificate that AWS DMS uses to securely connect to your Kafka target endpoint.

ssl_client_certificate_arn: Option<String>

The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.

ssl_client_key_arn: Option<String>

The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.

ssl_client_key_password: Option<String>

The password for the client private key used to securely connect to a Kafka target endpoint.

topic: Option<String>

The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
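Putting it together: a minimal sketch of attaching these settings to a new target endpoint, assuming an async rusoto_dms client and a Tokio runtime. The endpoint identifier, broker, and topic values are placeholders, and CreateEndpointMessage carries the settings in its kafka_settings field.

use rusoto_core::Region;
use rusoto_dms::{
    CreateEndpointMessage, DatabaseMigrationService, DatabaseMigrationServiceClient,
    KafkaSettings,
};

#[tokio::main]
async fn main() {
    let client = DatabaseMigrationServiceClient::new(Region::UsEast1);

    let request = CreateEndpointMessage {
        // Placeholder identifier for the new endpoint.
        endpoint_identifier: "my-kafka-target".to_string(),
        endpoint_type: "target".to_string(),
        engine_name: "kafka".to_string(),
        kafka_settings: Some(KafkaSettings {
            broker: Some("broker-hostname-or-ip:9092".to_string()),
            topic: Some("dms-migration-topic".to_string()),
            ..Default::default()
        }),
        ..Default::default()
    };

    match client.create_endpoint(request).await {
        Ok(output) => println!("created endpoint: {:?}", output.endpoint),
        Err(e) => eprintln!("create_endpoint failed: {}", e),
    }
}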

Trait Implementations

Clone: clone returns a copy of the value; clone_from performs copy-assignment from source.

Debug: fmt formats the value using the given formatter.

Default: default returns the "default value" for the type.

Deserialize<'de>: deserialize deserializes this value from the given Serde deserializer.

PartialEq: eq tests for self and other values to be equal, and is used by ==; ne tests for !=.

Serialize: serialize serializes this value into the given Serde serializer.

Auto Trait Implementations

Blanket Implementations

Any: type_id gets the TypeId of self.

Borrow<T>: borrow immutably borrows from an owned value.

BorrowMut<T>: borrow_mut mutably borrows from an owned value.

From<T>: from performs the conversion.

Instrument: instrument instruments this type with the provided Span, returning an Instrumented wrapper; in_current_span instruments this type with the current Span, returning an Instrumented wrapper.

Into<U>: into performs the conversion.

Same<T>: Output should always be Self.

ToOwned: Owned is the resulting type after obtaining ownership; to_owned creates owned data from borrowed data, usually by cloning; clone_into (a nightly-only experimental API, toowned_clone_into) uses borrowed data to replace owned data, usually by cloning.

TryFrom<U>: Error is the type returned in the event of a conversion error; try_from performs the conversion.

TryInto<U>: Error is the type returned in the event of a conversion error; try_into performs the conversion.