pub struct PiecewiseMergeJoinExec {
pub buffered: Arc<dyn ExecutionPlan>,
pub streamed: Arc<dyn ExecutionPlan>,
pub on: (Arc<dyn PhysicalExpr>, Arc<dyn PhysicalExpr>),
pub operator: Operator,
pub join_type: JoinType,
/* private fields */
}
PiecewiseMergeJoinExec is a join execution plan that evaluates a single range filter and shows much
better performance for these workloads than NestedLoopJoin.
The physical planner will choose this join when there is only one comparison filter: a binary
expression using Operator::Lt, Operator::LtEq, Operator::Gt, or Operator::GtEq.
Examples:
col0 < colb, col0 <= colb, col0 > colb, col0 >= colb
§Execution Plan Inputs
For PiecewiseMergeJoin we label the right input as the `streamed` side and the left input as the
`buffered` side.
PiecewiseMergeJoin requires sorted input on the buffered side and sorts streamed record batches
itself during processing. The sorted input must be ascending or descending depending on the operator.
§Algorithms
Classic joins are processed differently compared to existence joins.
§Classic Joins (Inner, Full, Left, Right)
For classic joins we buffer the left (build) side and stream the right (probe) side. Both sides are sorted so that we can iterate from index 0 to the end on each side. This ordering ensures that when we find the first matching pair of rows, we can emit the current streamed row joined with all remaining buffered rows from the match position onward, without rescanning earlier buffered rows.
For < and <= operators, both inputs are sorted in ascending order, while for > and >= operators
they are sorted in descending order. This choice ensures that the pointer on the buffered side can advance
monotonically as we stream new batches from the streamed side.
The streamed side may arrive unsorted, so this operator sorts each incoming batch in memory before
processing. The buffered side is required to be globally sorted; the plan declares this requirement
in required_input_ordering, which allows the optimizer to automatically insert a SortExec on that side if needed.
By the time this operator runs, the buffered side is guaranteed to be in the proper order.
The pseudocode for the algorithm looks like this:
for stream_row in stream_batch:
    for buffer_row in buffer_batch:
        if compare(stream_row, buffer_row):
            output stream_row X buffer_batch[buffer_row:]
            break
The algorithm uses the streamed side (the larger input) to drive the loop: each streamed row scans the buffered side only until its first match. Because every match emits a whole run of buffered rows at once, output handling can be vectorized for better performance.
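To make this concrete, here is a minimal, self-contained Rust sketch of the same loop over two ascending-sorted i32 columns with a < predicate. The function name and plain-vector types are illustrative only; the actual operator works on Arrow record batches and carries the buffered pointer across streamed batches.

// Sketch: classic-join merge loop for `stream < buffer`, both inputs ascending.
fn piecewise_merge_lt(streamed: &[i32], buffered: &[i32]) -> Vec<(i32, i32)> {
    let mut out = Vec::new();
    // The buffered pointer only moves forward because both sides are sorted.
    let mut buf_idx = 0;
    for &s in streamed {
        // Advance to the first buffered row satisfying `s < b`.
        while buf_idx < buffered.len() && s >= buffered[buf_idx] {
            buf_idx += 1;
        }
        // Every buffered row from `buf_idx` onward also satisfies the predicate,
        // so the whole tail can be emitted (and vectorized) at once.
        for &b in &buffered[buf_idx..] {
            out.push((s, b));
        }
    }
    out
}

fn main() {
    // Mirrors the worked example below: 100 matches 4 rows, 200 matches 2, 500 none.
    let matches = piecewise_merge_lt(&[100, 200, 500], &[100, 200, 200, 300, 400]);
    assert_eq!(matches.len(), 6);
}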
Here is an example:
We perform a JoinType::Left with these two batches, the operator being Operator::Lt (<). For each
row on the streamed side we advance a pointer on the buffered side until the condition is met. Once we
reach the matching row (in this case, row 1 on the streamed side has its first match at row 2 on the
buffered side; 100 < 200 is true), we can emit that row and every row after it. We can emit the rows like
this because the batch is sorted in ascending order, so every subsequent row also satisfies the condition:
the values only get larger.
SQL statement:
SELECT *
FROM (VALUES (100), (200), (500)) AS streamed(a)
LEFT JOIN (VALUES (100), (200), (200), (300), (400)) AS buffered(b)
ON streamed.a < buffered.b;
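Worked out by hand, this query yields seven rows: 100 pairs with 200, 200, 300, and 400; 200 pairs with 300 and 400; and 500 has no match, so it is emitted with a NULL buffered value (this is a LEFT join).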
Processing Row 1:
Sorted Buffered Side Sorted Streamed Side
┌──────────────────┐ ┌──────────────────┐
1 │ 100 │ 1 │ 100 │
├──────────────────┤ ├──────────────────┤
2 │ 200 │ ─┐ 2 │ 200 │
├──────────────────┤ │ For row 1 on streamed side with ├──────────────────┤
3 │        200       │  │   value 100, we emit rows 2 - 5   3 │        500       │
├──────────────────┤ │ as matches when the operator is └──────────────────┘
4 │        300       │  │   `Operator::Lt` (<), emitting all
├──────────────────┤ │ rows after the first match (row
5 │        400       │ ─┘   2 on the buffered side; 100 < 200)
└──────────────────┘
Processing Row 2:
By sorting the streamed side, we know the buffered pointer never needs to move backwards: probing for
streamed row 2 resumes where row 1's scan left off.
Sorted Buffered Side Sorted Streamed Side
┌──────────────────┐ ┌──────────────────┐
1 │ 100 │ 1 │ 100 │
├──────────────────┤ ├──────────────────┤
2 │ 200 │ <- Start here when probing for the 2 │ 200 │
├──────────────────┤ streamed side row 2. ├──────────────────┤
3 │ 200 │ 3 │ 500 │
├──────────────────┤ └──────────────────┘
4 │ 300 │
├──────────────────┤
5 │ 400 │
└──────────────────┘
§Existence Joins (Semi, Anti, Mark)
Existence joins are orders of magnitude faster with a PiecewiseMergeJoin, as we only need the min/max
value of the streamed side to determine all matches on the buffered side. By putting the side we need
to mark onto the sorted buffered side, we can emit all of these matches at once.
For less-than operators (<, <=) the buffered input must be sorted in ascending order, and vice versa for
greater-than (>, >=) operators. A SortExec is used to enforce sorting on the buffered side; the streamed
side does not need to be sorted because only its min/max is required.
For Left Semi, Anti, and Mark joins we swap the inputs so that the marked side is on the buffered side.
The pseudocode for the algorithm looks like this:
// Using the example of a less than `<` operation
let min = min_batch(streamed_batch)
for buffer_row in buffer_batch:
    if min < buffer_row:
        output buffer_batch[buffer_row:]
        break
We only need to find the min/max value and then iterate through the buffered side once.
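Here is a minimal, self-contained Rust sketch of that marking pass for streamed.a < buffered.b over an ascending-sorted buffered column. Names are illustrative only, and it uses partition_point (a binary search) in place of the linear pointer advance described above; the real operator produces its marks over Arrow record batches.

// Sketch: existence-join marking for `streamed.a < buffered.b`.
fn mark_lt(streamed: &[i32], buffered_sorted: &[i32]) -> Vec<bool> {
    let mut marks = vec![false; buffered_sorted.len()];
    // With `<`, only the minimum streamed value matters.
    if let Some(min) = streamed.iter().copied().min() {
        // First buffered index with `min < b`; because the buffered side is
        // ascending, every row from there on also satisfies the predicate.
        let first = buffered_sorted.partition_point(|&b| b <= min);
        for m in &mut marks[first..] {
            *m = true;
        }
    }
    marks
}

fn main() {
    // Mirrors the worked example below: min = 200 marks rows 4-5 (300 and 400).
    let marks = mark_lt(&[500, 200, 300], &[100, 200, 200, 300, 400]);
    assert_eq!(marks, vec![false, false, false, true, true]);
}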
Here is an example:
We perform a JoinType::LeftSemi with these two batches, the operator being Operator::Lt (<). Because
the operator is Operator::Lt, we find the minimum value on the streamed side; in this case it is 200.
We can then advance a pointer from the start of the buffered side until we find the first value that
satisfies the predicate. All rows after that first matched value also satisfy the condition 200 < x, so
we can mark all of those rows as matched.
SQL statement:
SELECT *
FROM (VALUES (500), (200), (300)) AS streamed(a)
LEFT SEMI JOIN (VALUES (100), (200), (200), (300), (400)) AS buffered(b)
ON streamed.a < buffered.b;
Sorted Buffered Side Unsorted Streamed Side
┌──────────────────┐ ┌──────────────────┐
1 │ 100 │ 1 │ 500 │
├──────────────────┤ ├──────────────────┤
2 │ 200 │ 2 │ 200 │
├──────────────────┤ ├──────────────────┤
3 │ 200 │ 3 │ 300 │
├──────────────────┤ └──────────────────┘
4 │ 300 │ ─┐
├──────────────────┤        │  We emit matches for rows 4 - 5
5 │ 400 │ ─┘ on the buffered side.
└──────────────────┘
min value: 200
For both types of joins, the buffered side must be sorted ascending for Operator::Lt (<) or
Operator::LtEq (<=) and descending for Operator::Gt (>) or Operator::GtEq (>=).
§Partitioning Logic
Piecewise Merge Join requires a single partition on the buffered side plus a round-robin partitioned streamed side. A counter on the buffered side coordinates when all streamed partitions have finished execution, which allows the remaining unmatched rows to be processed for Left and Full joins. The last partition to finish execution is responsible for outputting the unmatched rows.
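As a sketch of that coordination pattern (illustrative only, not the operator's actual code; the type and method names are hypothetical), a shared atomic counter lets exactly one streamed partition observe that it finished last:

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// Hypothetical helper shared between all streamed partitions.
struct StreamedDoneCounter {
    remaining: AtomicUsize,
}

impl StreamedDoneCounter {
    fn new(num_partitions: usize) -> Arc<Self> {
        Arc::new(Self { remaining: AtomicUsize::new(num_partitions) })
    }

    // Returns true for exactly one caller: the last partition to finish,
    // which then emits the unmatched buffered rows for Left/Full joins.
    fn finish(&self) -> bool {
        self.remaining.fetch_sub(1, Ordering::AcqRel) == 1
    }
}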
§Performance Explanation (cost)
Piecewise Merge Join is used over Nested Loop Join due to its superior performance. Here is the breakdown:
R: Buffered Side S: Streamed Side
§Piecewise Merge Join (PWMJ)
§Classic Join:
Requires sorting the streamed side; within each streamed batch the buffered pointer only moves forward,
so the buffered side is scanned at most once per batch rather than once per row.
Complexity: O(sort(S) + num_of_batches(|S|) * scan(R)).
§Mark Join:
Computes the min/max of the streamed keys in a single pass (no sort is needed), then scans the buffered
side only within that range.
Complexity: O(|S| + scan(R[range])).
§Nested Loop Join
Compares every row from S with every row from R.
Complexity: O(|S| * |R|).
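For a rough sense of scale (hypothetical figures, not a benchmark): with |S| = 1,000,000 streamed rows arriving in 100 batches and |R| = 100,000 buffered rows, a nested loop join performs about |S| * |R| = 10^11 comparisons, while the classic-join PWMJ pays roughly |S| log |S| ≈ 2 * 10^7 comparisons for sorting plus 100 * |R| = 10^7 for the buffered scans, several orders of magnitude less work.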
§Further Reference Material
DuckDB blog on Range Joins: Range Joins in DuckDB
Fields§
§buffered: Arc<dyn ExecutionPlan>
Left buffered execution plan
§streamed: Arc<dyn ExecutionPlan>
Right streamed execution plan
§on: (Arc<dyn PhysicalExpr>, Arc<dyn PhysicalExpr>)
The two expressions being compared
§operator: Operator
Comparison operator in the range predicate
§join_type: JoinType
How the join is performed
Implementations§
impl PiecewiseMergeJoinExec
pub fn try_new( buffered: Arc<dyn ExecutionPlan>, streamed: Arc<dyn ExecutionPlan>, on: (Arc<dyn PhysicalExpr>, Arc<dyn PhysicalExpr>), operator: Operator, join_type: JoinType, num_partitions: usize, ) -> Result<Self>
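A hedged usage sketch for try_new, written inside a function returning Result. It assumes buffered and streamed are already-built Arc<dyn ExecutionPlan> values whose schemas contain columns "b" and "a" respectively, that on pairs the buffered-side expression with the streamed-side expression in that order, and that the usual datafusion re-export paths apply.

use datafusion::common::JoinType;
use datafusion::logical_expr::Operator;
use datafusion::physical_expr::expressions::col;

let on = (
    col("b", &buffered.schema())?, // assumption: evaluated against the buffered (left) input
    col("a", &streamed.schema())?, // assumption: evaluated against the streamed (right) input
);
let join = PiecewiseMergeJoinExec::try_new(
    buffered,
    streamed,
    on,
    Operator::Lt,
    JoinType::Inner,
    1, // num_partitions
)?;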
pub fn buffered(&self) -> &Arc<dyn ExecutionPlan>
Reference to the buffered side execution plan
pub fn streamed(&self) -> &Arc<dyn ExecutionPlan>
Reference to the streamed side execution plan
pub fn sort_options(&self) -> &SortOptions
Reference to the sort options
pub fn probe_side(join_type: &JoinType) -> JoinSide
Get the probe side (streamed side) for the PiecewiseMergeJoin. In the current implementation, the probe side is determined by the join type.
pub fn compute_properties( buffered: &Arc<dyn ExecutionPlan>, streamed: &Arc<dyn ExecutionPlan>, schema: SchemaRef, join_type: JoinType, join_on: &(PhysicalExprRef, PhysicalExprRef), ) -> Result<PlanProperties>
pub fn swap_inputs(&self) -> Result<Arc<dyn ExecutionPlan>>
Trait Implementations§
impl Debug for PiecewiseMergeJoinExec
impl DisplayAs for PiecewiseMergeJoinExec
impl ExecutionPlan for PiecewiseMergeJoinExec
fn as_any(&self) -> &dyn Any
Returns the execution plan as Any so that it can be downcast to a specific implementation.
fn properties(&self) -> &PlanProperties
Returns the properties of this ExecutionPlan, such as output ordering(s) and partitioning information.
fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>>
Get a list of children ExecutionPlans that act as inputs to this plan. The returned list will be empty for leaf nodes such as scans, will contain a single value for unary nodes, or two values for binary nodes (such as joins).
fn required_input_distribution(&self) -> Vec<Distribution>
Specifies the data distribution requirements for all the children of this ExecutionPlan. By default it is [Distribution::UnspecifiedDistribution] for each child.
fn required_input_ordering(&self) -> Vec<Option<OrderingRequirements>>
Specifies the ordering required for all of the children of this ExecutionPlan.
fn with_new_children(self: Arc<Self>, children: Vec<Arc<dyn ExecutionPlan>>) -> Result<Arc<dyn ExecutionPlan>>
Returns a new ExecutionPlan where all existing children were replaced by the given children, in order.
fn execute(&self, partition: usize, context: Arc<TaskContext>) -> Result<SendableRecordBatchStream>
Begin execution of partition, returning a Stream of RecordBatches.
fn static_name() -> &'static str where Self: Sized
Short name for the ExecutionPlan; like name but can be called without an instance.
fn check_invariants(&self, check: InvariantLevel) -> Result<()>
fn maintains_input_order(&self) -> Vec<bool>
Returns false if this ExecutionPlan's implementation may reorder rows within or between partitions.
fn benefits_from_input_partitioning(&self) -> Vec<bool>
Specifies whether the ExecutionPlan benefits from increased parallelization at its input for each child.
fn reset_state(self: Arc<Self>) -> Result<Arc<dyn ExecutionPlan>>
fn repartitioned(&self, _target_partitions: usize, _config: &ConfigOptions) -> Result<Option<Arc<dyn ExecutionPlan>>>
If supported, attempts to repartition this ExecutionPlan to produce target_partitions partitions.
fn metrics(&self) -> Option<MetricsSet>
Returns the current set of Metrics for this ExecutionPlan. If no Metrics are available, returns None.
fn statistics(&self) -> Result<Statistics>
Deprecated: use the partition_statistics method instead. Returns statistics for this ExecutionPlan node. If statistics are not available, should return Statistics::new_unknown (the default), not an error.
fn partition_statistics(&self, partition: Option<usize>) -> Result<Statistics>
Returns statistics for a specific partition of this ExecutionPlan node. If statistics are not available, should return Statistics::new_unknown (the default), not an error. If partition is None, it returns statistics for the entire plan.
fn supports_limit_pushdown(&self) -> bool
fn with_fetch(&self, _limit: Option<usize>) -> Option<Arc<dyn ExecutionPlan>>
Returns a fetching variant of this ExecutionPlan node, if it supports fetch limits. Returns None otherwise.
fn fetch(&self) -> Option<usize>
Returns the fetch count for the operator; None means there is no fetch.
fn cardinality_effect(&self) -> CardinalityEffect
fn try_swapping_with_projection(&self, _projection: &ProjectionExec) -> Result<Option<Arc<dyn ExecutionPlan>>>
Attempts to push down the given projection into this ExecutionPlan.
fn gather_filters_for_pushdown(&self, _phase: FilterPushdownPhase, parent_filters: Vec<Arc<dyn PhysicalExpr>>, _config: &ConfigOptions) -> Result<FilterDescription>
fn handle_child_pushdown_result(&self, _phase: FilterPushdownPhase, child_pushdown_result: ChildPushdownResult, _config: &ConfigOptions) -> Result<FilterPushdownPropagation<Arc<dyn ExecutionPlan>>>
Handles the result of a child pushdown from ExecutionPlan::gather_filters_for_pushdown. It allows the current node to process the results of filter pushdown from its children, deciding whether to absorb filters, modify the plan, or pass filters back up to its parent.
Auto Trait Implementations§
impl !Freeze for PiecewiseMergeJoinExec
impl !RefUnwindSafe for PiecewiseMergeJoinExec
impl Send for PiecewiseMergeJoinExec
impl Sync for PiecewiseMergeJoinExec
impl Unpin for PiecewiseMergeJoinExec
impl !UnwindSafe for PiecewiseMergeJoinExec
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.