Struct aws_sdk_databasemigration::model::KafkaSettings
#[non_exhaustive]
pub struct KafkaSettings {
pub broker: Option<String>,
pub topic: Option<String>,
pub message_format: Option<MessageFormatValue>,
pub include_transaction_details: Option<bool>,
pub include_partition_value: Option<bool>,
pub partition_include_schema_table: Option<bool>,
pub include_table_alter_operations: Option<bool>,
pub include_control_details: Option<bool>,
pub message_max_bytes: Option<i32>,
pub include_null_and_empty: Option<bool>,
pub security_protocol: Option<KafkaSecurityProtocol>,
pub ssl_client_certificate_arn: Option<String>,
pub ssl_client_key_arn: Option<String>,
pub ssl_client_key_password: Option<String>,
pub ssl_ca_certificate_arn: Option<String>,
pub sasl_username: Option<String>,
pub sasl_password: Option<String>,
pub no_hex_prefix: Option<bool>,
}
Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data information.
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future versions. Therefore, they cannot be constructed in external crates using the traditional Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.
broker: Option<String>
A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.
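The comma-separated broker string can be assembled from host/port pairs. A minimal self-contained sketch; the broker_list helper is hypothetical and not part of the SDK:

```rust
/// Hypothetical helper (not part of the SDK): joins (host, port) pairs into
/// the comma-separated "broker-hostname-or-ip:port" list that `broker` expects.
fn broker_list(brokers: &[(&str, u16)]) -> String {
    brokers
        .iter()
        .map(|(host, port)| format!("{}:{}", host, port))
        .collect::<Vec<_>>()
        .join(",")
}

fn main() {
    let brokers = broker_list(&[
        ("ec2-12-345-678-901.compute-1.amazonaws.com", 2345),
        ("ec2-98-765-432-109.compute-1.amazonaws.com", 9092),
    ]);
    println!("{}", brokers);
}
```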
topic: Option<String>
The topic to which you migrate the data. If you don't specify a topic, DMS specifies "kafka-default-topic" as the migration topic.
message_format: Option<MessageFormatValue>
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
include_transaction_details: Option<bool>
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
include_partition_value: Option<bool>
Shows the partition value within the Kafka message output unless the partition type is schema-table-type. The default is false.
partition_include_schema_table: Option<bool>
Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.
include_table_alter_operations: Option<bool>
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.
include_control_details: Option<bool>
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.
message_max_bytes: Option<i32>
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
include_null_and_empty: Option<bool>
Includes NULL and empty columns for records migrated to the endpoint. The default is false.
security_protocol: Option<KafkaSecurityProtocol>
Sets a secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. Using sasl-ssl requires SaslUsername and SaslPassword.
ssl_client_certificate_arn: Option<String>
The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
ssl_client_key_arn: Option<String>
The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
ssl_client_key_password: Option<String>
The password for the client private key used to securely connect to a Kafka target endpoint.
ssl_ca_certificate_arn: Option<String>
The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
sasl_username: Option<String>
The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
sasl_password: Option<String>
The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
no_hex_prefix: Option<bool>
Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
Implementations
Creates a new builder-style object to manufacture KafkaSettings.
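As a sketch of how these settings fit together via the fluent builder. The broker address, topic, and credentials below are placeholder values, and exact builder method and enum variant names should be checked against the crate version in use:

```rust
use aws_sdk_databasemigration::model::{
    KafkaSecurityProtocol, KafkaSettings, MessageFormatValue,
};

// Sketch: configure a Kafka target endpoint that authenticates with SASL-SSL
// and emits single-line JSON records with transaction details included.
let settings: KafkaSettings = KafkaSettings::builder()
    .broker("ec2-12-345-678-901.compute-1.amazonaws.com:2345")
    .topic("dms-migration-topic")
    .message_format(MessageFormatValue::JsonUnformatted)
    .security_protocol(KafkaSecurityProtocol::SaslSsl)
    .sasl_username("dms-user")
    .sasl_password("dms-password")
    .include_transaction_details(true)
    .build();
```

Because sasl-ssl is selected, both sasl_username and sasl_password are set, matching the requirement described for security_protocol above.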
Trait Implementations
This method tests for self and other values to be equal, and is used by ==.
This method tests for !=.
Auto Trait Implementations
impl RefUnwindSafe for KafkaSettings
impl Send for KafkaSettings
impl Sync for KafkaSettings
impl Unpin for KafkaSettings
impl UnwindSafe for KafkaSettings
Blanket Implementations
Mutably borrows from an owned value.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.