Crate datafusion

DataFusion is an extensible query execution framework that uses Apache Arrow as its in-memory format.

DataFusion supports both an SQL and a DataFrame API for building logical query plans, as well as a query optimizer and an execution engine capable of parallel execution against partitioned data sources (CSV and Parquet) using threads.

Below is an example of how to execute a query against data stored in a CSV file using a DataFrame:


use datafusion::prelude::*;
use arrow::record_batch::RecordBatch;

// note: `collect` is async, so this must run inside an async context
// (e.g. a `#[tokio::main]` function returning `datafusion::error::Result<()>`)
let mut ctx = ExecutionContext::new();

// create the dataframe
let df = ctx.read_csv("tests/example.csv", CsvReadOptions::new())?;

// create a plan
let df = df.filter(col("a").lt_eq(col("b")))?
           .aggregate(vec![col("a")], vec![min(col("b"))])?
           .limit(100)?;

// execute the plan
let results: Vec<RecordBatch> = df.collect().await?;

// format the results
let pretty_results = arrow::util::pretty::pretty_format_batches(&results)?;

let expected = vec![
    "+---+--------+",
    "| a | MIN(b) |",
    "+---+--------+",
    "| 1 | 2      |",
    "+---+--------+"
];

assert_eq!(pretty_results.trim().lines().collect::<Vec<_>>(), expected);

and how to execute a query against a CSV using SQL:


use datafusion::prelude::*;
use arrow::record_batch::RecordBatch;

let mut ctx = ExecutionContext::new();

ctx.register_csv("example", "tests/example.csv", CsvReadOptions::new())?;

// create a plan
let df = ctx.sql("SELECT a, MIN(b) FROM example GROUP BY a LIMIT 100")?;

// execute the plan
let results: Vec<RecordBatch> = df.collect().await?;

// format the results
let pretty_results = arrow::util::pretty::pretty_format_batches(&results)?;

let expected = vec![
    "+---+--------+",
    "| a | MIN(b) |",
    "+---+--------+",
    "| 1 | 2      |",
    "+---+--------+"
];

assert_eq!(pretty_results.trim().lines().collect::<Vec<_>>(), expected);

Parse, Plan, Optimize, Execute

DataFusion is a fully fledged query engine capable of performing complex operations. Specifically, when DataFusion receives an SQL query, it passes through a number of distinct steps until a result is obtained. Broadly, they are:

  1. The query string is parsed into an Abstract Syntax Tree (AST) using sqlparser.
  2. The planner SqlToRel converts SQL expressions in the AST into logical expressions (Exprs).
  3. The planner SqlToRel converts SQL nodes in the AST into a LogicalPlan.
  4. OptimizerRules are applied to the LogicalPlan to optimize it.
  5. The LogicalPlan is converted into an ExecutionPlan by a PhysicalPlanner.
  6. The ExecutionPlan is executed against data through the ExecutionContext.

When using the DataFrame API, steps 1-3 are skipped, since the DataFrame builds the LogicalPlan directly.

Phases 1-5 are typically cheap compared to phase 6, and thus DataFusion puts a lot of effort into ensuring that phase 6 runs efficiently and without errors.
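These phases can also be driven individually through the ExecutionContext. Below is a minimal sketch of the same pipeline run step by step, assuming the tests/example.csv file from the examples above (create_logical_plan, optimize, and create_physical_plan are the ExecutionContext methods for this; exact signatures may vary between versions):

use datafusion::prelude::*;
use datafusion::physical_plan::collect;

let mut ctx = ExecutionContext::new();
ctx.register_csv("example", "tests/example.csv", CsvReadOptions::new())?;

// steps 1-3: parse the SQL string and plan it into a LogicalPlan
let logical_plan = ctx.create_logical_plan("SELECT a, MIN(b) FROM example GROUP BY a")?;

// step 4: apply the optimizer rules
let optimized_plan = ctx.optimize(&logical_plan)?;

// step 5: convert the LogicalPlan into an ExecutionPlan
let physical_plan = ctx.create_physical_plan(&optimized_plan)?;

// step 6: execute the ExecutionPlan, gathering all partitions into one Vec<RecordBatch>
let results = collect(physical_plan).await?;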

DataFusion’s planning is divided into two main parts: logical planning and physical planning.

Logical plan

Logical planning yields logical plans and logical expressions. These are schema-aware representations of statements whose result is independent of how it is physically executed.

A LogicalPlan is a Directed Acyclic Graph (DAG) of other LogicalPlans, and each node contains logical expressions (Exprs). All of these are located in the module logical_plan.
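For example, the DataFrame used earlier wraps exactly such a plan. A small sketch of inspecting it, assuming the same tests/example.csv file:

use datafusion::prelude::*;

let mut ctx = ExecutionContext::new();
let df = ctx.read_csv("tests/example.csv", CsvReadOptions::new())?;
let df = df.filter(col("a").lt_eq(col("b")))?;

// to_logical_plan returns the underlying LogicalPlan; its nodes
// (here, a Filter on top of a TableScan) form the DAG described above
let plan = df.to_logical_plan();
println!("{}", plan.display_indent());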

Physical plan

A physical plan (ExecutionPlan) is a plan that can be executed against data. Unlike a logical plan, the physical plan carries concrete information about how the calculation should be performed (e.g. which Rust functions are used) and how data should be loaded into memory.

ExecutionPlan uses the Arrow format as its in-memory representation of data, through the arrow crate. We recommend going through its documentation for details on how the data is physically represented.

An ExecutionPlan is composed of nodes (which implement the trait ExecutionPlan), and each node is composed of physical expressions (PhysicalExpr) or aggregate expressions (AggregateExpr). All of these are located in the module physical_plan.

Broadly speaking,

  • an ExecutionPlan receives a partition number and asynchronously returns an iterator over RecordBatch
  • a PhysicalExpr receives a RecordBatch and returns an Array
  • an AggregateExpr receives a series of RecordBatches and returns a RecordBatch of a single row (*)

(*) Technically, it aggregates the results on each partition and then merges the results into a single partition.
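A sketch of this in code, reusing the step-by-step pipeline from above (schema and output_partitioning are methods on the ExecutionPlan trait):

use datafusion::prelude::*;
use datafusion::physical_plan::collect;

let mut ctx = ExecutionContext::new();
ctx.register_csv("example", "tests/example.csv", CsvReadOptions::new())?;
let plan = ctx.create_logical_plan("SELECT a, b FROM example")?;
let plan = ctx.optimize(&plan)?;
let exec = ctx.create_physical_plan(&plan)?;

// every node reports its output schema and how its output is partitioned
println!("schema: {:?}", exec.schema());
println!("partitions: {}", exec.output_partitioning().partition_count());

// collect executes every partition and gathers the resulting batches
let batches = collect(exec).await?;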

The following physical nodes are currently implemented:

  • Projection: ProjectionExec
  • Filter: FilterExec
  • Hash and grouped aggregations: HashAggregateExec
  • Sort: SortExec
  • Limit: LocalLimitExec and GlobalLimitExec
  • Scan a CSV: CsvExec
  • Scan a Parquet file: ParquetExec
  • Scan from memory: MemoryExec
  • Explain the plan: ExplainExec

Customize

DataFusion allows users to

  • extend the planner to use user-defined logical and physical nodes (QueryPlanner)
  • declare and use user-defined scalar functions (ScalarUDF)
  • declare and use user-defined aggregate functions (AggregateUDF)

You can find examples of each of them in the examples section.
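As an illustration, below is a minimal sketch of declaring and registering a scalar UDF that doubles a float64 column. It uses the make_scalar_function and create_udf helpers; the exact signatures (e.g. a volatility argument) have varied between DataFusion versions:

use std::sync::Arc;
use arrow::array::{ArrayRef, Float64Array};
use arrow::datatypes::DataType;
use datafusion::prelude::*;
use datafusion::physical_plan::functions::make_scalar_function;

// the UDF body: take the first argument as a Float64Array and double each value
let double = make_scalar_function(|args: &[ArrayRef]| {
    let values = args[0].as_any().downcast_ref::<Float64Array>().unwrap();
    let doubled: Float64Array = values.iter().map(|v| v.map(|v| v * 2.0)).collect();
    Ok(Arc::new(doubled) as ArrayRef)
});

let mut ctx = ExecutionContext::new();
ctx.register_udf(create_udf(
    "double",
    vec![DataType::Float64],
    Arc::new(DataType::Float64),
    double,
));

// the UDF can now be used from SQL, e.g. SELECT double(a) FROM example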

Modules

catalog

This module contains interfaces and default implementations of table namespacing concepts, including catalogs and schemas.

dataframe

DataFrame API for building and executing query plans.

datasource

DataFusion data sources

error

DataFusion error types

execution

DataFusion query execution

logical_plan

This module provides a logical query plan enum that can describe queries. Logical query plans can be created from a SQL statement or built programmatically via the Table API.

optimizer

This module contains a query optimizer that operates against a logical plan and applies some simple rules to a logical plan, such as “Projection Push Down” and “Type Coercion”.
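For instance, the effect of “Projection Push Down” can be observed by printing a plan before and after optimization; a sketch using the optimize method shown earlier:

use datafusion::prelude::*;

let mut ctx = ExecutionContext::new();
ctx.register_csv("example", "tests/example.csv", CsvReadOptions::new())?;

let plan = ctx.create_logical_plan("SELECT a FROM example WHERE a > 1")?;
println!("before: {}", plan.display_indent());

// projection push down should narrow the TableScan to only the columns used
let optimized = ctx.optimize(&plan)?;
println!("after: {}", optimized.display_indent());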

physical_optimizer

This module contains a query optimizer that operates against a physical plan and applies rules to a physical plan, such as “Repartition”.

physical_plan

Traits for physical query plan, supporting parallel execution for partitioned relations.

prelude

A “prelude” for users of the datafusion crate.

scalar

This module provides ScalarValue, an enum that can be used for storage of single elements.
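A brief sketch of using it (to_array converts a ScalarValue into a one-element Arrow array):

use datafusion::scalar::ScalarValue;

let v = ScalarValue::Float64(Some(1.5));

// convert the scalar into a single-element Arrow array
let array = v.to_array();
assert_eq!(array.len(), 1);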

sql

This module provides a SQL parser that translates SQL queries into an abstract syntax tree (AST), and a SQL query planner that creates a logical plan from the AST.
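The parser can also be used on its own; a sketch using DFParser (the statements it returns are what SqlToRel then converts into a LogicalPlan):

use datafusion::sql::parser::DFParser;

// parse a SQL string into DataFusion's AST representation
let statements = DFParser::parse_sql("SELECT a, MIN(b) FROM example GROUP BY a")?;
assert_eq!(statements.len(), 1);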

variable

Variable provider

Macros

binary_array_op

The binary_array_op macro includes types that extend beyond the primitive, such as Utf8 strings.

binary_array_op_scalar

The binary_array_op_scalar macro includes types that extend beyond the primitive, such as Utf8 strings.