
Welcome to arrow2’s documentation. Thanks for checking it out!

This is a library for efficient in-memory data operations using the Arrow in-memory format. It is a bottom-up rewrite of the official arrow crate, designed with soundness and type safety in mind.

Check out the guide for an introduction. Below is an example of some of the things you can do with it:

use std::sync::Arc;

use arrow2::array::*;
use arrow2::compute::arithmetics;
use arrow2::error::Result;
use arrow2::io::parquet::write::*;
use arrow2::record_batch::RecordBatch;

fn main() -> Result<()> {
    // declare arrays
    let a = Int32Array::from(&[Some(1), None, Some(3)]);
    let b = Int32Array::from(&[Some(2), None, Some(6)]);

    // compute (probably the fastest implementation of a nullable op you can find out there)
    let c = arithmetics::basic::mul_scalar(&a, &2);
    assert_eq!(c, b);

    // declare records
    let batch = RecordBatch::try_from_iter([
        ("c1", Arc::new(a) as Arc<dyn Array>),
        ("c2", Arc::new(b) as Arc<dyn Array>),
    ])?;
    // with metadata
    println!("{:?}", batch.schema());

    // write to parquet (probably the fastest implementation of writing to parquet out there)
    let schema = batch.schema().clone();

    let options = WriteOptions {
        write_statistics: true,
        compression: Compression::Snappy,
        version: Version::V1,
    };

    let row_groups = RowGroupIterator::try_new(
        vec![Ok(batch)].into_iter(),
        &schema,
        options,
        vec![Encoding::Plain, Encoding::Plain],
    )?;

    // anything implementing `std::io::Write` works
    let mut file = vec![];

    let parquet_schema = row_groups.parquet_schema().clone();
    let _ = write_file(
        &mut file,
        row_groups,
        &schema,
        parquet_schema,
        options,
        None,
    )?;

    Ok(())
}

Cargo features

This crate has a significant number of cargo features to reduce compilation time and the number of dependencies. The feature "full" activates most functionality, such as:

  • io_ipc: to interact with the Arrow IPC format
  • io_ipc_compression: to read and write compressed Arrow IPC (v2)
  • io_csv: to read and write CSV
  • io_json: to read and write JSON
  • io_flight: to read and write to Arrow’s Flight protocol
  • io_parquet: to read and write parquet
  • io_parquet_compression: to read and write compressed parquet
  • io_print: to write batches to formatted ASCII tables
  • compute: to operate on arrays (addition, sum, sort, etc.)
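
To enable a subset of these features, a downstream crate lists them in its Cargo.toml. The snippet below is only a sketch: the selected features are illustrative and the version is a placeholder for whichever arrow2 release you depend on:

[dependencies]
# illustrative feature selection; replace "*" with the desired arrow2 release
arrow2 = { version = "*", features = ["io_parquet", "io_parquet_compression", "compute"] }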

The feature simd (not part of full) produces more explicit SIMD instructions via packed_simd, but requires the nightly channel.

The feature cache_aligned uses a custom allocator instead of Vec, which may be more performant but is not interoperable with Vec.

Modules

array: Contains the Array and MutableArray trait objects declaring arrays, as well as concrete arrays (such as Utf8Array and MutableUtf8Array).
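
A minimal sketch of the mutable-to-immutable workflow, assuming the usual push/into conversion on the mutable variant (the values are arbitrary):

use arrow2::array::{Array, MutableUtf8Array, Utf8Array};

fn main() {
    // grow an array incrementally with the mutable variant
    let mut mutable = MutableUtf8Array::<i32>::new();
    mutable.push(Some("hello"));
    mutable.push::<&str>(None);
    mutable.push(Some("arrow2"));

    // freeze it into an immutable, shareable Utf8Array
    let array: Utf8Array<i32> = mutable.into();
    assert_eq!(array.len(), 3);
    assert_eq!(array.value(0), "hello");
    assert!(array.is_null(1));
}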

bitmap: Contains Bitmap and MutableBitmap, containers of bool.
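
A small sketch of the same mutable-to-immutable pattern for bitmaps, assuming MutableBitmap exposes push and converts into Bitmap (the bits are arbitrary):

use arrow2::bitmap::{Bitmap, MutableBitmap};

fn main() {
    // set bits one by one
    let mut mutable = MutableBitmap::new();
    mutable.push(true);
    mutable.push(false);
    mutable.push(true);

    // freeze into an immutable Bitmap
    let bitmap: Bitmap = mutable.into();
    assert_eq!(bitmap.len(), 3);
    assert!(bitmap.get_bit(0));
    assert!(!bitmap.get_bit(1));
}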

buffer: Contains Buffer and MutableBuffer, containers for all Arrow physical types (e.g. i32, f64).
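
A sketch of building a Buffer from its mutable counterpart, assuming the extend_from_slice/into conversion (the i32 values are arbitrary):

use arrow2::buffer::{Buffer, MutableBuffer};

fn main() {
    // accumulate physical values in a mutable buffer
    let mut mutable = MutableBuffer::<i32>::new();
    mutable.extend_from_slice(&[1, 2, 3]);

    // freeze into an immutable Buffer, which is cheap to clone and share
    let buffer: Buffer<i32> = mutable.into();
    assert_eq!(buffer.as_slice(), &[1, 2, 3]);
}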

compute: Contains a wide range of compute operations (e.g. arithmetics, aggregate, filter, comparison, and sort).
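
As an illustration beyond the arithmetics shown in the crate example, the sketch below assumes a filter function in compute::filter (enabled by the compute feature) and uses arbitrary values:

use arrow2::array::{Array, BooleanArray, Int32Array};
use arrow2::compute::filter::filter;
use arrow2::error::Result;

fn main() -> Result<()> {
    let values = Int32Array::from(&[Some(1), None, Some(3)]);
    let mask = BooleanArray::from(&[Some(true), Some(false), Some(true)]);

    // keep only the slots where the mask is true
    let filtered = filter(&values, &mask)?;
    let filtered = filtered.as_any().downcast_ref::<Int32Array>().unwrap();
    assert_eq!(filtered, &Int32Array::from(&[Some(1), Some(3)]));
    Ok(())
}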

datatypes: Contains all metadata, such as PhysicalType, DataType, Field and Schema.
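
A small sketch describing two columns with Field and Schema; the names and types are arbitrary, and Schema::new is assumed to take a Vec<Field> as in the arrow crate:

use arrow2::datatypes::{DataType, Field, Schema};

fn main() {
    // a Field is a named, nullability-aware DataType; a Schema is an ordered collection of Fields
    let c1 = Field::new("c1", DataType::Int32, true);
    let c2 = Field::new("c2", DataType::Utf8, false);
    let schema = Schema::new(vec![c1, c2]);
    assert_eq!(schema.fields().len(), 2);
}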

error: Defines ArrowError, representing all errors returned by this crate.
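
A sketch of a fallible helper returning this crate's Result; the function name is hypothetical and the InvalidArgumentError variant is assumed to be available:

use arrow2::error::{ArrowError, Result};

// hypothetical validation helper that surfaces failures as ArrowError
fn require_non_empty(values: &[i32]) -> Result<()> {
    if values.is_empty() {
        return Err(ArrowError::InvalidArgumentError(
            "expected at least one value".to_string(),
        ));
    }
    Ok(())
}

fn main() -> Result<()> {
    require_non_empty(&[1, 2, 3])
}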

ffi: Contains FFI bindings to import and export Array via Arrow’s C Data Interface.

io: Contains modules to interface with other formats such as csv, parquet, json, ipc, print and avro.

scalar: Contains the Scalar trait object representing individual items of Arrays, as well as concrete implementations such as BooleanScalar.
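
A minimal sketch with the concrete BooleanScalar, assuming a BooleanScalar::new constructor taking an Option<bool>:

use arrow2::scalar::{BooleanScalar, Scalar};

fn main() {
    // a scalar is a single, possibly null, value with an Arrow logical type
    let valid = BooleanScalar::new(Some(true));
    let null = BooleanScalar::new(None);
    assert!(valid.is_valid());
    assert!(!null.is_valid());
}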

temporal_conversions: Conversion methods for dates and times.
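
For example, a sketch assuming a timestamp_ms_to_datetime helper that maps milliseconds since the unix epoch to a chrono datetime:

use arrow2::temporal_conversions::timestamp_ms_to_datetime;

fn main() {
    // interpret a physical i64 as milliseconds since 1970-01-01T00:00:00
    let datetime = timestamp_ms_to_datetime(1_500_000_000_000);
    println!("{}", datetime);
}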

types: Traits and implementations to handle all types used in this crate.
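
A sketch of writing code generic over the physical types via the NativeType trait; the helper first_value is hypothetical:

use arrow2::array::{Array, Int32Array, PrimitiveArray};
use arrow2::types::NativeType;

// hypothetical helper generic over any Arrow physical type (i32, f64, ...):
// returns slot 0 if the array is non-empty and the slot is valid
fn first_value<T: NativeType>(array: &PrimitiveArray<T>) -> Option<T> {
    if array.len() == 0 || array.is_null(0) {
        None
    } else {
        Some(array.value(0))
    }
}

fn main() {
    let ints = Int32Array::from(&[Some(7), None]);
    assert_eq!(first_value(&ints), Some(7));
}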

util: Misc utilities used in different places in the crate.