Module datafusion::datasource::physical_plan
Execution plans that read file formats
Re-exports§
pub use self::parquet::ParquetExec;
pub use self::parquet::ParquetFileMetrics;
pub use self::parquet::ParquetFileReaderFactory;
Modules§
parquet
Execution plan for reading Parquet files
Structs§
- Execution plan for scanning Arrow data source
- Execution plan for scanning Avro data source
- A Config for CsvOpener
- Execution plan for scanning a CSV file
- A FileOpener that opens a CSV file and yields a FileOpenFuture
- Repartition input files into target_partitions partitions, if total file size exceeds repartition_file_min_size
- A single file or part of a file that should be read, along with its schema, statistics
- The base configurations to provide when creating a physical plan for any given file format.
- The base configurations to provide when creating a physical plan for writing to any given file format.
- A stream that iterates record batch by record batch, file over file.
- A FileOpener that opens a JSON file and yields a FileOpenFuture
- Execution plan for scanning NdJson data source
Enums§
- Describes the behavior of the FileStream if file opening or scanning fails
Traits§
- Generic API for opening a file using an ObjectStore and resolving to a stream of RecordBatch
Functions§
- Convert a type to a type suitable for use as a ListingTable partition column. Returns Dictionary(UInt16, val_type), which is a reasonable trade off between supporting a large number of partition values and space efficiency.
- Convert a ScalarValue of partition columns to a type, as described in the documentation of wrap_partition_type_in_dict, which can wrap the types.
Type Aliases§
- A fallible future that resolves to a stream of RecordBatch