Struct deltalake::datafusion::common::config::ParquetOptions

pub struct ParquetOptions {
    pub enable_page_index: bool,
    pub pruning: bool,
    pub skip_metadata: bool,
    pub metadata_size_hint: Option<usize>,
    pub pushdown_filters: bool,
    pub reorder_filters: bool,
    pub data_pagesize_limit: usize,
    pub write_batch_size: usize,
    pub writer_version: String,
    pub compression: Option<String>,
    pub dictionary_enabled: Option<bool>,
    pub dictionary_page_size_limit: usize,
    pub statistics_enabled: Option<String>,
    pub max_statistics_size: Option<usize>,
    pub max_row_group_size: usize,
    pub created_by: String,
    pub column_index_truncate_length: Option<usize>,
    pub data_page_row_count_limit: usize,
    pub encoding: Option<String>,
    pub bloom_filter_enabled: bool,
    pub bloom_filter_fpp: Option<f64>,
    pub bloom_filter_ndv: Option<u64>,
    pub allow_single_file_parallelism: bool,
    pub maximum_parallel_row_group_writers: usize,
    pub maximum_buffered_record_batches_per_stream: usize,
}

Options related to parquet files

See also: SessionConfig
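Because every field is public and the struct implements Default, the options can be built directly with struct-update syntax. A minimal configuration sketch, assuming the deltalake crate is a dependency and re-exports this module at the path shown above (the specific overrides are illustrative, not recommendations):

```rust
// Sketch: override a few reader/writer knobs, keep defaults for the rest.
use deltalake::datafusion::common::config::ParquetOptions;

fn main() {
    let opts = ParquetOptions {
        pushdown_filters: true,          // apply filters during decoding ("late materialization")
        reorder_filters: true,           // let the reader reorder predicates heuristically
        max_row_group_size: 512 * 1024,  // smaller row groups than the 1M-row default
        ..Default::default()
    };
    assert!(opts.pushdown_filters);
}
```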

Fields§

§enable_page_index: bool

If true, reads the Parquet data page level metadata (the Page Index), if present, to reduce the I/O and number of rows decoded.

§pruning: bool

If true, the parquet reader attempts to skip entire row groups based on the predicate in the query and the metadata (min/max values) stored in the parquet file.

§skip_metadata: bool

If true, the parquet reader skips the optional embedded metadata that may be in the file Schema. This setting can help avoid schema conflicts when querying multiple parquet files with schemas containing compatible types but different metadata.

§metadata_size_hint: Option<usize>

If specified, the parquet reader will try to fetch the last size_hint bytes of the parquet file optimistically. If not specified, two reads are required: one read to fetch the 8-byte parquet footer and another to fetch the metadata length encoded in the footer.

§pushdown_filters: bool

If true, filter expressions are applied during the parquet decoding operation to reduce the number of rows decoded. This optimization is sometimes called “late materialization”.

§reorder_filters: bool

If true, filter expressions evaluated during the parquet decoding operation will be reordered heuristically to minimize the cost of evaluation. If false, the filters are applied in the same order as written in the query.

§data_pagesize_limit: usize

Sets best effort maximum size of data page in bytes

§write_batch_size: usize

Sets write_batch_size in bytes

§writer_version: String

Sets the parquet writer version. Valid values are “1.0” and “2.0”.

§compression: Option<String>

Sets the default parquet compression codec. Valid values are: uncompressed, snappy, gzip(level), lzo, brotli(level), lz4, zstd(level), and lz4_raw. These values are not case sensitive. If NULL, uses the default parquet writer setting.

§dictionary_enabled: Option<bool>

Sets if dictionary encoding is enabled. If NULL, uses default parquet writer setting

§dictionary_page_size_limit: usize

Sets best effort maximum dictionary page size, in bytes

§statistics_enabled: Option<String>

Sets if statistics are enabled for any column. Valid values are: “none”, “chunk”, and “page”. These values are not case sensitive. If NULL, uses the default parquet writer setting.

§max_statistics_size: Option<usize>

Sets max statistics size for any column. If NULL, uses default parquet writer setting

§max_row_group_size: usize

Target maximum number of rows in each row group (defaults to 1M rows). Writing larger row groups requires more memory to write, but can get better compression and be faster to read.

§created_by: String

Sets “created by” property

§column_index_truncate_length: Option<usize>

Sets column index truncate length

§data_page_row_count_limit: usize

Sets best effort maximum number of rows in data page

§encoding: Option<String>

Sets the default encoding for any column. Valid values are: plain, plain_dictionary, rle, bit_packed, delta_binary_packed, delta_length_byte_array, delta_byte_array, rle_dictionary, and byte_stream_split. These values are not case sensitive. If NULL, uses the default parquet writer setting.

§bloom_filter_enabled: bool

Sets if bloom filter is enabled for any column

§bloom_filter_fpp: Option<f64>

Sets bloom filter false positive probability. If NULL, uses default parquet writer setting

§bloom_filter_ndv: Option<u64>

Sets bloom filter number of distinct values. If NULL, uses default parquet writer setting

§allow_single_file_parallelism: bool

Controls whether DataFusion will attempt to speed up writing parquet files by serializing them in parallel. Each column in each row group in each output file is serialized in parallel, leveraging a maximum possible core count of n_files * n_row_groups * n_columns.

§maximum_parallel_row_group_writers: usize

By default, the parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame.

§maximum_buffered_record_batches_per_stream: usize

By default, the parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame.
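The tuning advice above can be sketched as a single options value; a hedged example trading memory for write throughput, assuming the deltalake crate is a dependency (the numbers are illustrative, not tuned defaults):

```rust
// Sketch: favor write throughput over memory when the source data is already
// in memory (e.g. a cached data frame), per the field documentation above.
use deltalake::datafusion::common::config::ParquetOptions;

fn main() {
    let opts = ParquetOptions {
        allow_single_file_parallelism: true,
        maximum_parallel_row_group_writers: 4,           // more row groups serialized at once
        maximum_buffered_record_batches_per_stream: 32,  // larger per-stream buffer
        ..Default::default()
    };
    assert!(opts.allow_single_file_parallelism);
}
```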

Trait Implementations§

impl Clone for ParquetOptions

fn clone(&self) -> ParquetOptions

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl ConfigField for ParquetOptions
fn set(&mut self, key: &str, value: &str) -> Result<(), DataFusionError>
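Since set takes string keys and values, it is convenient when options arrive from user-supplied configuration. A hedged sketch, assuming the keys mirror the field names listed on this page (verify against your DataFusion version):

```rust
// Sketch: string-based configuration via the ConfigField trait.
use deltalake::datafusion::common::config::{ConfigField, ParquetOptions};

fn main() {
    let mut opts = ParquetOptions::default();
    // Assumption: keys match the field names documented above.
    opts.set("pushdown_filters", "true").unwrap();
    opts.set("max_row_group_size", "524288").unwrap();
    assert!(opts.pushdown_filters);
}
```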

fn visit<V>(&self, v: &mut V, key_prefix: &str, _description: &'static str)
where V: Visit,

impl Debug for ParquetOptions
fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter.

impl Default for ParquetOptions
fn default() -> ParquetOptions

Returns the “default value” for a type.

impl PartialEq for ParquetOptions
fn eq(&self, other: &ParquetOptions) -> bool

This method tests for self and other values to be equal, and is used by ==.
fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl TryFrom<&ParquetOptions> for ParquetOptions
type Error = DataFusionError

The type returned in the event of a conversion error.

fn try_from(value: &ParquetOptions) -> Result<ParquetOptions, <ParquetOptions as TryFrom<&ParquetOptions>>::Error>

Performs the conversion.

impl StructuralPartialEq for ParquetOptions

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

fn vzip(self) -> V

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.

impl<T> Allocation for T
where T: RefUnwindSafe + Send + Sync,

impl<T> Ungil for T
where T: Send,