Struct aws_sdk_firehose::model::parquet_ser_de::Builder
#[non_exhaustive]
pub struct Builder { /* private fields */ }
A builder for ParquetSerDe
Implementations
impl Builder
pub fn block_size_bytes(self, input: i32) -> Self
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
pub fn set_block_size_bytes(self, input: Option<i32>) -> Self
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
pub fn page_size_bytes(self, input: i32) -> Self
The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
pub fn set_page_size_bytes(self, input: Option<i32>) -> Self
The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
pub fn compression(self, input: ParquetCompression) -> Self
The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
pub fn set_compression(self, input: Option<ParquetCompression>) -> Self
The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
pub fn enable_dictionary_compression(self, input: bool) -> Self
Indicates whether to enable dictionary compression.
pub fn set_enable_dictionary_compression(self, input: Option<bool>) -> Self
Indicates whether to enable dictionary compression.
pub fn max_padding_bytes(self, input: i32) -> Self
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
pub fn set_max_padding_bytes(self, input: Option<i32>) -> Self
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
pub fn writer_version(self, input: ParquetWriterVersion) -> Self
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
pub fn set_writer_version(self, input: Option<ParquetWriterVersion>) -> Self
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
pub fn build(self) -> ParquetSerDe
Consumes the builder and constructs a ParquetSerDe.
Trait Implementations
impl StructuralPartialEq for Builder
Auto Trait Implementations
impl RefUnwindSafe for Builder
impl Send for Builder
impl Sync for Builder
impl Unpin for Builder
impl UnwindSafe for Builder
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> ToOwned for T where
    T: Clone,
type Owned = T
The resulting type after obtaining ownership.
fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
    S: Into<Dispatch>,
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.