Module pipeline

Vortex crate containing vectorized operator processing.

This module contains experiments into pipelined data processing within Vortex.

Arrays (and eventually Layouts) will be convertible into a Kernel that can then be exported into a ViewMut one chunk of N elements at a time. This allows us to keep compute largely within the L1 cache, as well as to write out canonical data into externally provided buffers.
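
As a rough, self-contained illustration of this push-based, chunk-at-a-time flow, the sketch below uses simplified stand-ins: the `Kernel` trait, `ViewMut` struct, `step` method, and the value of `N` here are assumptions made for the example, not the crate's actual definitions.

```rust
// A simplified stand-in for the push-based model described above; these names
// and shapes are assumptions for the sketch, not the crate's real definitions.

const N: usize = 1024; // hypothetical number of elements per pipeline step

/// Stand-in for an externally provided, mutable output view.
struct ViewMut<'a> {
    out: &'a mut [i64],
}

/// Stand-in for a kernel that pushes canonical data one chunk at a time.
trait Kernel {
    /// Write the next chunk of up to `N` elements into `view`, returning how
    /// many elements were produced (0 means the kernel is exhausted).
    fn step(&mut self, view: &mut ViewMut<'_>) -> usize;
}

/// Toy kernel that decodes a run-length-encoded value into canonical chunks.
struct RleKernel {
    value: i64,
    remaining: usize,
}

impl Kernel for RleKernel {
    fn step(&mut self, view: &mut ViewMut<'_>) -> usize {
        let n = self.remaining.min(N).min(view.out.len());
        view.out[..n].fill(self.value);
        self.remaining -= n;
        n
    }
}

fn main() {
    let mut kernel = RleKernel { value: 7, remaining: 2500 };
    let mut buffer = vec![0i64; N]; // externally provided output buffer
    let mut total = 0;
    loop {
        let produced = kernel.step(&mut ViewMut { out: &mut buffer });
        if produced == 0 {
            break;
        }
        total += produced;
    }
    assert_eq!(total, 2500);
}
```

Because the caller supplies the buffer, canonical output can be written directly into memory owned by an external consumer, which is the point of the externally provided buffers mentioned above.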

Each chunk is represented in a canonical physical form, as determined by the logical vortex_dtype::DType of the array. This provides a predictable basis on which to perform compute. Unlike DuckDB and other vectorized systems, we force a single canonical representation instead of supporting multiple encodings, because compute push-down is applied a priori to the logical representation.
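
A minimal sketch of that idea follows, assuming a handful of hypothetical logical types and canonical physical forms; the real vortex_dtype::DType and the module's VType are richer than this, and the concrete mappings are assumptions.

```rust
// Placeholder logical and physical types; the real vortex_dtype::DType and the
// module's VType are richer, and the concrete mappings below are assumptions.

/// Simplified logical type.
enum DType {
    Bool,
    Int32,
    Float64,
    Utf8,
}

/// Simplified physical "vector type": the single canonical in-memory form that
/// a chunk of the corresponding logical type takes inside the pipeline.
enum VType {
    BitMask,    // booleans as a packed bit mask
    I32,        // 32-bit integers as a plain i32 buffer
    F64,        // 64-bit floats as a plain f64 buffer
    BinaryView, // strings as offset/length views over a byte buffer
}

/// Each logical type maps to exactly one physical form, so kernels are written
/// against a small, fixed set of representations rather than per encoding.
fn vtype_of(dtype: &DType) -> VType {
    match dtype {
        DType::Bool => VType::BitMask,
        DType::Int32 => VType::I32,
        DType::Float64 => VType::F64,
        DType::Utf8 => VType::BinaryView,
    }
}

fn main() {
    for dtype in [DType::Bool, DType::Int32, DType::Float64, DType::Utf8] {
        let _physical = vtype_of(&dtype);
    }
    assert!(matches!(vtype_of(&DType::Int32), VType::I32));
}
```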

It is a work-in-progress and is not yet used in production.

Modules§

bits
vec
Vectors contain owned fixed-size canonical arrays of elements.
view

Structs§

KernelContext
Context passed to kernels during execution, providing access to vectors.

Enums§

VType
Defines the “vector type”, a physical type describing the data that’s held in the vector.

Constants§

N
The number of elements in each step of a Vortex evaluation operator.
N_WORDS
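
One plausible reading of these two constants, sketched below, is that N_WORDS is the number of 64-bit words needed to hold an N-bit mask over a chunk; that relationship is an assumption for this example, as are the concrete values.

```rust
// Assumed relationship between the two constants: an N-bit mask over a chunk
// stored as N_WORDS 64-bit words. Both values here are illustrative only.

const N: usize = 1024;         // hypothetical number of elements per step
const N_WORDS: usize = N / 64; // assumed: one u64 word per 64 mask bits

/// Count how many of a chunk's N elements are selected by the mask.
fn count_selected(mask: &[u64; N_WORDS]) -> usize {
    mask.iter().map(|word| word.count_ones() as usize).sum()
}

fn main() {
    let mut mask = [0u64; N_WORDS];
    mask[0] = 0b1011; // select elements 0, 1, and 3 of the chunk
    assert_eq!(count_selected(&mask), 3);
}
```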

Traits§

BindContext
The context used when binding an operator for execution.
Element
A trait to identify canonical vector types.
Kernel
An operator provides a push-based way to emit a stream of canonical data.
PipelinedOperator

Type Aliases§

BatchId
The ID of the batch input to use.
VectorId
The ID of the vector to use.
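
To tie the items above together, here is a hedged sketch of how BatchId and VectorId might address vectors held by a KernelContext; the usize aliases, field names, and the `input` method are illustrative assumptions rather than the module's real API.

```rust
// Illustrative stand-ins only: the real KernelContext, BatchId, and VectorId
// are defined by this module; the usize aliases and methods here are assumed.

type BatchId = usize;  // identifies which batch of inputs an operator reads
type VectorId = usize; // identifies a vector held by the context

/// Simplified canonical vector: a fixed-size chunk of i64 values.
struct Vector {
    data: Vec<i64>,
}

/// Simplified execution context that hands vectors to kernels by ID.
struct KernelContext {
    batches: Vec<Vec<VectorId>>, // each batch lists the vectors it contains
    vectors: Vec<Vector>,
}

impl KernelContext {
    /// Resolve a (batch, position) pair to a concrete vector.
    fn input(&self, batch: BatchId, position: usize) -> &Vector {
        let id: VectorId = self.batches[batch][position];
        &self.vectors[id]
    }
}

fn main() {
    let ctx = KernelContext {
        batches: vec![vec![1]], // batch 0 refers to vector 1
        vectors: vec![
            Vector { data: vec![0; 4] },
            Vector { data: vec![1, 2, 3, 4] },
        ],
    };
    // A kernel bound to batch 0 would resolve its first input like this:
    assert_eq!(ctx.input(0, 0).data, vec![1, 2, 3, 4]);
}
```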