DataFusion in Python
This is a Python library that binds to the Apache Arrow in-memory query engine, DataFusion.
DataFusion's Python bindings can be used as a foundation for building new data systems in Python. Here are some examples:
- Dask SQL uses DataFusion's Python bindings for SQL parsing, query planning, and logical plan optimizations, and then transpiles the logical plan to Dask operations for execution.
- DataFusion Ballista is a distributed SQL query engine that extends DataFusion's Python bindings for distributed use cases.
- DataFusion Ray is another distributed query engine that uses DataFusion's Python bindings.
Features
- Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
- Queries are optimized using DataFusion's query optimizer.
- Execute user-defined Python code from SQL (see the sketch after this list).
- Exchange data with Pandas and other DataFrame libraries that support PyArrow.
- Serialize and deserialize query plans in Substrait format.
- Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
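As a quick sketch of the user-defined-code feature (a minimal example; the table `t` and column `a` are made up, while the `udf` wrapper and `register_udf` call follow the project's Python API):

```python
import pyarrow as pa
from datafusion import SessionContext, udf

ctx = SessionContext()
ctx.from_pydict({"a": [1, None, 3]}, "t")  # small in-memory table

# A scalar UDF receives and returns PyArrow arrays, one batch at a time
def is_null(array: pa.Array) -> pa.Array:
    return array.is_null()

is_null_udf = udf(is_null, [pa.int64()], pa.bool_(), "stable")
ctx.register_udf(is_null_udf)

# The UDF is now callable by name from SQL
ctx.sql("SELECT is_null(a) FROM t").show()
```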
Example Usage
The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.
The Parquet file used in this example contains NYC yellow taxi trip records, which can be downloaded from the NYC Taxi & Limousine Commission's trip record data page.
```python
from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context (file name assumed from the download page above)
ctx.register_parquet("taxi", "yellow_tripdata_2021-01.parquet")

# Execute SQL
df = ctx.sql("select passenger_count, count(*) from taxi group by passenger_count order by passenger_count")

# Convert to Pandas
pandas_df = df.to_pandas()

# Create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig("chart.png")
```
This produces a bar chart of trip counts grouped by passenger count, saved as chart.png.
Registering a DataFrame as a View
You can use SessionContext's `register_view` method to convert a DataFrame into a view and register it with the context.
```python
from datafusion import SessionContext, col, literal

# Create a DataFusion context
ctx = SessionContext()

# Create sample data
data = {"a": [1, 2, 3, 4, 5], "b": [10, 20, 30, 40, 50]}

# Create a DataFrame from the dictionary
df = ctx.from_pydict(data, "my_table")

# Filter the DataFrame (for example, keep rows where a > 2)
df_filtered = df.filter(col("a") > literal(2))

# Register the DataFrame as a view with the context
ctx.register_view("view1", df_filtered)

# Now run a SQL query against the registered view
df_view = ctx.sql("SELECT * FROM view1")

# Collect the results
results = df_view.collect()

# Convert results to a list of dictionaries for display
result_dicts = [batch.to_pydict() for batch in results]
print(result_dicts)
```
This will output:
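```text
[{'a': [3, 4, 5], 'b': [30, 40, 50]}]
```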
Configuration
It is possible to configure runtime settings (memory and disk) and session configuration settings when creating a context.
```python
from datafusion import RuntimeEnvBuilder, SessionConfig, SessionContext

# Configure the runtime: OS disk manager plus a fair spill pool (values illustrative)
runtime = RuntimeEnvBuilder().with_disk_manager_os().with_fair_spill_pool(10000000)

# Configure the session (settings shown here are illustrative)
config = (
    SessionConfig()
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)

ctx = SessionContext(config, runtime)
```
Refer to the API documentation for more information.
Printing the context will show the current configuration settings.
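For example, with the `ctx` created above:

```python
# Printing the SessionContext displays its current settings
print(ctx)
```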
Extensions
For information about how to extend DataFusion Python, please see the extensions page of the online documentation.
More Examples
See examples for more information.
Executing Queries with DataFusion
- Query a Parquet file using SQL
- Query a Parquet file using the DataFrame API
- Run a SQL query and store the results in a Pandas DataFrame
- Run a SQL query with a Python user-defined function (UDF)
- Run a SQL query with a Python user-defined aggregation function (UDAF)
- Query PyArrow Data
- Create dataframe
- Export dataframe
Running User-Defined Python Code
- Register a Python UDF with DataFusion
- Register a Python UDAF with DataFusion
Substrait Support
- Serialize query plans using Substrait
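As a rough sketch of the Substrait round trip (method names follow the `datafusion.substrait` module as documented; treat the details as assumptions rather than a definitive reference):

```python
from datafusion import SessionContext
from datafusion import substrait as ss

ctx = SessionContext()
ctx.from_pydict({"a": [1, 2, 3]}, "t")

# Serialize a SQL query to a Substrait plan, then to bytes
plan = ss.Serde.serialize_to_plan("SELECT a FROM t", ctx)
plan_bytes = plan.encode()

# Deserialize the bytes and execute the recovered logical plan
plan = ss.Serde.deserialize_bytes(plan_bytes)
logical_plan = ss.Consumer.from_substrait_plan(ctx, plan)
df = ctx.create_dataframe_from_logical_plan(logical_plan)
```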
How to install
uv
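Add the package to a `uv`-managed project with:

```bash
uv add datafusion
```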
Pip

```bash
pip install datafusion
# or
python -m pip install datafusion
```
Conda
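The package is also available from conda-forge:

```bash
conda install -c conda-forge datafusion
```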
You can verify the installation by running:
```python
>>> import datafusion
>>> datafusion.__version__
```
How to develop
This assumes that you have Rust and Cargo installed. We use the workflow recommended by pyo3 and maturin. The Maturin tools used in this workflow can be installed either via `uv` or `pip`. Both approaches should offer the same experience. It is recommended to use `uv` since it has significant performance improvements over `pip`.
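For example, either of the following gets you Maturin (a sketch; use whichever tool you have standardized on):

```bash
# install maturin as a standalone tool with uv
uv tool install maturin
# or install it into the active environment with pip
python -m pip install maturin
```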
Bootstrap (`uv`):
By default `uv` will attempt to build the datafusion python package. For our development we prefer to build manually. This means that when creating your virtual environment using `uv sync` you need to pass in the additional `--no-install-package datafusion`, and for `uv run` commands the additional parameter `--no-project`.
```bash
# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# create the virtual environment
uv sync --dev --no-install-package datafusion
# activate the environment
source .venv/bin/activate
```
Bootstrap (`pip`):
```bash
# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# prepare development environment (used to build wheel / install in development)
python3 -m venv venv
# activate the venv
source venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies (at minimum, maturin for building)
python -m pip install maturin
```
The tests rely on test data in git submodules.
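To fetch them:

```bash
# pull down the test data referenced by the submodules
git submodule update --init
```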
Whenever Rust code changes (your changes or via `git pull`):
```bash
# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest
```
Alternatively, if you are using `uv` you can do the following without needing to activate the virtual environment:
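```bash
# build and install the extension, then run the tests;
# the flags mirror the uv sync note above
uv run --no-project maturin develop --uv
uv run --no-project pytest .
```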
Running & Installing pre-commit hooks
`datafusion-python` takes advantage of pre-commit to assist developers with code linting and to help reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional for the developer but certainly helpful for keeping PRs clean and concise.
Our pre-commit hooks can be installed by running `pre-commit install`, which will install the configurations in your DATAFUSION_PYTHON_ROOT/.github directory and run each time you perform a commit, failing the commit if an offending lint is found and allowing you to make changes locally before pushing.
The pre-commit hooks can also be run ad hoc, without installing them, by simply running `pre-commit run --all-files`.
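In short:

```bash
# install the hooks so they run on every commit
pre-commit install
# or run them once across the whole tree
pre-commit run --all-files
```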
Running linters without using pre-commit
There are scripts in `ci/scripts` for running Rust and Python linters.
```bash
./ci/scripts/python_lint.sh
./ci/scripts/rust_clippy.sh
./ci/scripts/rust_fmt.sh
./ci/scripts/rust_toml_fmt.sh
```
How to update dependencies
To change test dependencies, change the `pyproject.toml` and re-sync your environment. With the `uv` workflow above, for example:
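```bash
# rebuild the environment after editing pyproject.toml
uv sync --dev --no-install-package datafusion
```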