Struct aws_sdk_glue::types::JdbcConnectorOptions
#[non_exhaustive]
pub struct JdbcConnectorOptions {
pub filter_predicate: Option<String>,
pub partition_column: Option<String>,
pub lower_bound: Option<i64>,
pub upper_bound: Option<i64>,
pub num_partitions: Option<i64>,
pub job_bookmark_keys: Option<Vec<String>>,
pub job_bookmark_keys_sort_order: Option<String>,
pub data_type_mapping: Option<HashMap<JdbcDataType, GlueRecordType>>,
}
Additional connection options for the connector.
Fields (Non-exhaustive)§
This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future versions. This means that this struct cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.
filter_predicate: Option<String>
Extra condition clause to filter data from source. For example:
BillingCity='Mountain View'
When using a query instead of a table name, you should validate that the query works with the specified filterPredicate.
partition_column: Option<String>
The name of an integer column that is used for partitioning. This option works only when it's included with lowerBound, upperBound, and numPartitions. This option works the same way as in the Spark SQL JDBC reader.
lower_bound: Option<i64>
The minimum value of partitionColumn that is used to decide partition stride.
upper_bound: Option<i64>
The maximum value of partitionColumn that is used to decide partition stride.
num_partitions: Option<i64>
The number of partitions. This value, along with lowerBound (inclusive) and upperBound (exclusive), form partition strides for generated WHERE clause expressions that are used to split the partitionColumn.
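The stride arithmetic described above can be sketched in plain Rust. This mirrors the Spark SQL JDBC reader's scheme (first partition unbounded below, last unbounded above) but is an illustration, not the SDK's internal code; the column name id and the bounds are hypothetical, and even division with num >= 2 is assumed:

```rust
// Sketch: how lowerBound, upperBound, and numPartitions combine into
// WHERE-clause predicates, mirroring the Spark SQL JDBC reader's scheme.
// Assumes (upper - lower) divides evenly by num and num >= 2.
fn partition_predicates(column: &str, lower: i64, upper: i64, num: i64) -> Vec<String> {
    let stride = (upper - lower) / num;
    (0..num)
        .map(|i| {
            let lo = lower + i * stride;
            if i == 0 {
                // First partition catches everything below the first boundary.
                format!("{column} < {}", lo + stride)
            } else if i == num - 1 {
                // Last partition catches everything from its lower boundary up.
                format!("{column} >= {lo}")
            } else {
                format!("{column} >= {lo} AND {column} < {}", lo + stride)
            }
        })
        .collect()
}

fn main() {
    for p in partition_predicates("id", 0, 100, 4) {
        println!("{p}");
    }
}
```

With lowerBound 0, upperBound 100, and numPartitions 4, this yields four strides of 25, e.g. `id < 25` and `id >= 25 AND id < 50`.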
job_bookmark_keys: Option<Vec<String>>
The name of the job bookmark keys on which to sort.
job_bookmark_keys_sort_order: Option<String>
Specifies an ascending or descending sort order.
data_type_mapping: Option<HashMap<JdbcDataType, GlueRecordType>>
Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option "dataTypeMapping":{"FLOAT":"STRING"} maps data fields of JDBC type FLOAT into the Java String type by calling the ResultSet.getString() method of the driver, and uses it to build the Glue record. The ResultSet object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.
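The shape of that mapping can be sketched with plain string keys and values standing in for the JdbcDataType and GlueRecordType enums; this is an illustration of the documented "dataTypeMapping" option, not the SDK's actual types:

```rust
use std::collections::HashMap;

fn main() {
    // Plain strings stand in for the JdbcDataType / GlueRecordType enums.
    let mut mapping: HashMap<&str, &str> = HashMap::new();

    // Read JDBC FLOAT columns via ResultSet.getString() and store them
    // as strings in the Glue record, matching the documented example
    // "dataTypeMapping":{"FLOAT":"STRING"}.
    mapping.insert("FLOAT", "STRING");

    assert_eq!(mapping.get("FLOAT"), Some(&"STRING"));
}
```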
Implementations§
impl JdbcConnectorOptions
pub fn filter_predicate(&self) -> Option<&str>
Extra condition clause to filter data from source. For example:
BillingCity='Mountain View'
When using a query instead of a table name, you should validate that the query works with the specified filterPredicate.
pub fn partition_column(&self) -> Option<&str>
The name of an integer column that is used for partitioning. This option works only when it's included with lowerBound, upperBound, and numPartitions. This option works the same way as in the Spark SQL JDBC reader.
pub fn lower_bound(&self) -> Option<i64>
The minimum value of partitionColumn that is used to decide partition stride.
pub fn upper_bound(&self) -> Option<i64>
The maximum value of partitionColumn that is used to decide partition stride.
pub fn num_partitions(&self) -> Option<i64>
The number of partitions. This value, along with lowerBound (inclusive) and upperBound (exclusive), form partition strides for generated WHERE clause expressions that are used to split the partitionColumn.
pub fn job_bookmark_keys(&self) -> &[String]
The name of the job bookmark keys on which to sort.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .job_bookmark_keys.is_none().
pub fn job_bookmark_keys_sort_order(&self) -> Option<&str>
Specifies an ascending or descending sort order.
pub fn data_type_mapping(&self) -> Option<&HashMap<JdbcDataType, GlueRecordType>>
Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option "dataTypeMapping":{"FLOAT":"STRING"} maps data fields of JDBC type FLOAT into the Java String type by calling the ResultSet.getString() method of the driver, and uses it to build the Glue record. The ResultSet object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.
impl JdbcConnectorOptions
pub fn builder() -> JdbcConnectorOptionsBuilder
Creates a new builder-style object to manufacture JdbcConnectorOptions.
Trait Implementations§
impl Clone for JdbcConnectorOptions
fn clone(&self) -> JdbcConnectorOptions
Returns a copy of the value.
1.0.0 · fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for JdbcConnectorOptions
impl PartialEq for JdbcConnectorOptions
fn eq(&self, other: &JdbcConnectorOptions) -> bool
Tests for self and other values to be equal, and is used by ==.