Module datafusion::datasource::physical_plan
Execution plans that read file formats
Re-exports
pub use self::parquet::ParquetExec;
pub use self::parquet::ParquetFileMetrics;
pub use self::parquet::ParquetFileReaderFactory;
Modules
- parquet — Execution plan for reading Parquet files
Structs
- Execution plan for scanning an Arrow data source
- Execution plan for scanning an Avro data source
- A config for `CsvOpener`
- Execution plan for scanning a CSV file
- A `FileOpener` that opens a CSV file and yields a `FileOpenFuture`
- A single file or part of a file that should be read, along with its schema and statistics
- The base configurations to provide when creating a physical plan for any given file format.
- The base configurations to provide when creating a physical plan for writing to any given file format.
- A stream that iterates record batch by record batch, file over file.
- A `FileOpener` that opens a JSON file and yields a `FileOpenFuture`
- Execution plan for scanning an NdJson data source
- The SchemaMapping struct holds a mapping from the file schema to the table schema and any necessary type conversions that need to be applied.
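The `FileStream` entry above describes iteration "record batch by record batch, file over file". As a toy illustration of that flattening order (plain vectors stand in for files and Arrow record batches; this is not the DataFusion API):

```rust
fn main() {
    // Each inner Vec stands in for the record batches of one file.
    let files = vec![
        vec!["f0-b0", "f0-b1"], // batches of the first file
        vec!["f1-b0"],          // batches of the second file
    ];

    // Flatten in order: all batches of file 0, then all batches of file 1.
    let stream: Vec<&str> = files.into_iter().flatten().collect();
    assert_eq!(stream, ["f0-b0", "f0-b1", "f1-b0"]);
    println!("{stream:?}");
}
```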
Enums
- Describes the behavior of the `FileStream` if file opening or scanning fails
Traits
- Generic API for opening a file using an `ObjectStore` and resolving to a stream of `RecordBatch`
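A simplified, synchronous stand-in for the `FileOpener` idea: given a path, either fail to open the file or yield its batches in order. The real trait is async, reads through an `ObjectStore`, and resolves to a stream of Arrow `RecordBatch`es; `Batch` and `InMemoryOpener` below are hypothetical names introduced for this sketch.

```rust
// Hypothetical stand-in for an Arrow RecordBatch.
type Batch = Vec<i64>;

// Simplified, synchronous sketch of the FileOpener shape: open a file,
// returning either an iterator of its batches or an error.
trait FileOpener {
    fn open(&self, path: &str) -> Result<Box<dyn Iterator<Item = Batch>>, String>;
}

// Hypothetical opener that serves batches from memory instead of storage.
struct InMemoryOpener;

impl FileOpener for InMemoryOpener {
    fn open(&self, path: &str) -> Result<Box<dyn Iterator<Item = Batch>>, String> {
        if path.is_empty() {
            return Err("cannot open file: empty path".into());
        }
        // Pretend the file contains two batches.
        Ok(Box::new(vec![vec![1, 2], vec![3]].into_iter()))
    }
}

fn main() {
    let opener = InMemoryOpener;
    let batches: Vec<Batch> = opener.open("data.csv").unwrap().collect();
    assert_eq!(batches, vec![vec![1, 2], vec![3]]);
    assert!(opener.open("").is_err());
    println!("read {} batches", batches.len());
}
```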
Functions
- Get all of the `PartitionedFile` to be scanned for an `ExecutionPlan`
- Convert a type to a type suitable for use as a `ListingTable` partition column. Returns `Dictionary(UInt16, val_type)`, which is a reasonable trade-off between the number of distinct partition values supported and space efficiency.
- Convert a `ScalarValue` of partition columns to a type, as described in the documentation of `wrap_partition_type_in_dict`, which can wrap the types.
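Based on the description above, `wrap_partition_type_in_dict` wraps a partition column's value type as `Dictionary(UInt16, val_type)`. A minimal sketch of that wrapping, using a stripped-down stand-in for Arrow's `DataType` so the snippet is self-contained:

```rust
// Stripped-down stand-in for arrow's DataType, for illustration only.
#[derive(Debug, Clone, PartialEq)]
enum DataType {
    Utf8,
    UInt16,
    Dictionary(Box<DataType>, Box<DataType>),
}

// Sketch: wrap a partition column's value type as Dictionary(UInt16, val_type),
// so each distinct partition value is stored once and referenced by a u16 key.
fn wrap_partition_type_in_dict(val_type: DataType) -> DataType {
    DataType::Dictionary(Box::new(DataType::UInt16), Box::new(val_type))
}

fn main() {
    let wrapped = wrap_partition_type_in_dict(DataType::Utf8);
    assert_eq!(
        wrapped,
        DataType::Dictionary(Box::new(DataType::UInt16), Box::new(DataType::Utf8))
    );
    println!("{wrapped:?}");
}
```

Dictionary encoding pays off here because a partition column holds the same value for every row read from a given file, so storing the value once per batch is far cheaper than repeating it per row.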
Type Definitions
- A fallible future that resolves to a stream of `RecordBatch`