Crate shardio


Serialize large streams of Serialize-able structs to disk from multiple threads, with a customizable on-disk sort order. Data is written to sorted chunks. When reading, shardio merges the chunks on the fly into a single sorted view. You can also process disjoint subsets of the sorted data independently.

Additionally, you can iterate through the items in the order they were written to disk; in general they will not follow the sort order. Such an iterator skips the merge-sort step and so avoids the memory overhead of keeping multiple items buffered for the merge.

use serde::{Deserialize, Serialize};
use shardio::*;
use std::fs::File;
use anyhow::Error;

#[derive(Clone, Eq, PartialEq, Serialize, Deserialize, PartialOrd, Ord, Debug)]
struct DataStruct {
    a: u64,
    b: u32,
}

fn main() -> Result<(), Error> {
    let filename = "test.shardio";
    {
        // Open a shardio output file
        // Parameters here control buffering, and the size of disk chunks
        // which affect how many disk chunks need to be read to
        // satisfy a range query when reading.
        // In this example the 'built-in' sort order given by #[derive(Ord)]
        // is used.
        let mut writer: ShardWriter<DataStruct> =
            ShardWriter::new(filename, 64, 256, 1<<16)?;

        // Get a handle to send data to the file
        let mut sender = writer.get_sender();

        // Generate some test data
        for i in 0..(2 << 16) {
            sender.send(DataStruct { a: (i % 25) as u64, b: (i % 100) as u32 })?;
        }

        // done sending items
        sender.finished();

        // Write errors are accessible by calling the finish() method
        writer.finish()?;
    }
    // Open finished file & test chunked reads
    let reader = ShardReader::<DataStruct>::open(filename)?;

    let mut all_items = Vec::new();

    // Shardio will divide the key space into 5 roughly equally sized chunks.
    // These chunks can be processed serially, in parallel in different threads,
    // or on different machines.
    let chunks = reader.make_chunks(5, &Range::all());

    for c in chunks {
        // Iterate over all the data in chunk c.
        let mut range_iter = reader.iter_range(&c)?;
        for i in range_iter {
            all_items.push(i?);
        }
    }

    // Data will be returned in sorted order
    let mut all_items_sorted = all_items.clone();
    all_items_sorted.sort();
    assert_eq!(all_items, all_items_sorted);

    // If you want to iterate through the items in unsorted order,
    let unsorted_items: Vec<_> = UnsortedShardReader::<DataStruct>::open(filename)?.collect();
    // you will get the items in the order they were written to disk.
    assert_eq!(unsorted_items.len(), all_items.len());

    Ok(())
}

Re-exports

pub use crate::range::Range;


Modules

helper
Helper methods

Structs

Range
Represent a range of key space


DefaultSort
Marker struct for sorting types that implement Ord in the order defined by their Ord impl. This sort order is used by default when writing, unless an alternative sort order is provided.

MergeIterator
Iterator over merged shardio files

ShardIter
Iterator of items from a single shardio reader

ShardReader
Read from a collection of shardio files. The input data is merged on the fly to give a single sorted view of the combined dataset. The input files must have been created with the same sort order S as they are read with.

ShardSender
A handle used to send data to a ShardWriter. Each thread that is producing data needs its own ShardSender, which can be obtained with the get_sender method of ShardWriter. ShardSender implements Clone.

ShardWriter
Write a stream of data items of type T to disk, in the sort order defined by S.

UnsortedShardReader
Read from a collection of shardio files in the order in which items were written, without regard to the sort order.


Constants

SIZE_OF_SHARD_ITER
The size (in bytes) of a ShardIter object (mostly buffers)


Traits

SortKey
Specify a key function from data items of type T to a sort key of type Key. Implement this trait to create a custom sort order. The function sort_key returns a Cow so that we can abstract over Owned or Borrowed data.