Ballista: Distributed Scheduler for Apache Arrow DataFusion
Ballista is a distributed compute platform primarily implemented in Rust, and powered by Apache Arrow and DataFusion. It is built on an architecture that allows other programming languages (such as Python, C++, and Java) to be supported as first-class citizens without paying a penalty for serialization costs.
The foundational technologies in Ballista are:
- Apache Arrow memory model and compute kernels for efficient processing of data.
- Apache Arrow Flight Protocol for efficient data transfer between processes.
- Google Protocol Buffers for serializing query plans.
- Docker for packaging up executors along with user-defined code.
Ballista can be deployed as a standalone cluster and also supports Kubernetes. In either case, the scheduler can be configured to use etcd as a backing store to (eventually) provide redundancy in the case of a scheduler failure.
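For example, assuming a local etcd instance listening on its default port 2379, the scheduler could be pointed at it along the following lines. The --config-backend and --etcd-urls flag names here are assumptions based on common Ballista scheduler options; check ballista-scheduler --help for the version you are running.

```bash
# sketch: use etcd rather than the default in-memory state store
RUST_LOG=info ballista-scheduler --config-backend etcd --etcd-urls localhost:2379
```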
Rust Version Compatibility
This crate is tested with the latest stable version of Rust. We do not currently test against older versions of the Rust compiler.
Starting a cluster
There are numerous ways to start a Ballista cluster, including support for Docker and Kubernetes. For full documentation, refer to the DataFusion User Guide.
A simple way to start a local cluster for testing purposes is to use cargo to install the scheduler and executor crates.
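Assuming the scheduler and executor are published as the ballista-scheduler and ballista-executor crates, installing them could look like this:

```bash
cargo install ballista-scheduler
cargo install ballista-executor
```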
With these crates installed, it is now possible to start a scheduler process.
```bash
RUST_LOG=info ballista-scheduler
```
The scheduler will bind to port 50050 by default.
Next, start an executor process in a new terminal session with the specified concurrency level.
```bash
RUST_LOG=info ballista-executor -c 4
```
The executor will bind to port 50051 by default. Additional executors can be started by manually specifying a bind port. For example:
```bash
RUST_LOG=info ballista-executor --bind-port 50052 -c 4
```
Executing a query
Ballista provides a BallistaContext as a starting point for creating queries. DataFrames can be created
by invoking the read_csv, read_parquet, and sql methods.
To build a simple Ballista example, add the following dependencies to your Cargo.toml file:

```toml
[dependencies]
ballista = "0.6"
datafusion = "7.0"
tokio = "1.0"
```
The following example runs a simple aggregate SQL query against a CSV file from the New York Taxi and Limousine Commission data set.
```rust
use ballista::prelude::*;
use datafusion::arrow::util::pretty;
use datafusion::prelude::CsvReadOptions;

#[tokio::main]
async fn main() -> Result<()> {
    // create configuration and connect to the Ballista scheduler
    let config = BallistaConfig::builder()
        .set("ballista.shuffle.partitions", "4")
        .build()?;
    let ctx = BallistaContext::remote("localhost", 50050, &config).await?;

    // register a CSV file as a table (adjust the path to a local copy of the data set)
    ctx.register_csv(
        "tripdata",
        "/path/to/yellow_tripdata_2020-01.csv",
        CsvReadOptions::new(),
    )
    .await?;

    // execute an aggregate query
    let df = ctx
        .sql(
            "SELECT passenger_count, MIN(fare_amount), MAX(fare_amount), AVG(fare_amount), SUM(fare_amount)
             FROM tripdata
             GROUP BY passenger_count",
        )
        .await?;

    // collect the results and print them to stdout
    let results = df.collect().await?;
    pretty::print_batches(&results)?;
    Ok(())
}
```
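A similar aggregate can also be expressed through the DataFrame API mentioned above. The following is a minimal sketch, assuming the same local scheduler and a local copy of the CSV file; the aggregate and show methods come from DataFusion's DataFrame API, and the exact set of methods available depends on the DataFusion version in use.

```rust
use ballista::prelude::*;
use datafusion::prelude::{col, max, min, CsvReadOptions};

#[tokio::main]
async fn main() -> Result<()> {
    // connect to the Ballista scheduler on the default port
    let config = BallistaConfig::builder().build()?;
    let ctx = BallistaContext::remote("localhost", 50050, &config).await?;

    // build the query with DataFrame methods instead of SQL
    let df = ctx
        .read_csv("/path/to/yellow_tripdata_2020-01.csv", CsvReadOptions::new())
        .await?
        .aggregate(
            vec![col("passenger_count")],
            vec![min(col("fare_amount")), max(col("fare_amount"))],
        )?;

    // execute the plan and print the results
    df.show().await?;
    Ok(())
}
```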
More examples can be found in the arrow-datafusion repository.