Crate hff

The high-level wrapper around the supporting HFF crates.

§Examples

use hff_std::*;

// Creating the content can use the builder:
let content = hff([
    table((Ecc::new("Prime"), Ecc::new("Second")))
    // Metadata and chunks can be pulled from many types of source data.
    .metadata("Each table can have metadata.").unwrap()
    // Tables can have chunks.
    .chunks([
        chunk((Ecc::new("AChunk"), Ecc::INVALID), "Each table can have 0..n chunks of data.").unwrap()
    ])
    // Tables can have child tables.
    .children([
        table((Ecc::new("Child1"), Ecc::INVALID))
        .metadata("Unique to this table.").unwrap()
        .chunks([
            chunk((Ecc::new("ThisFile"), Ecc::new("Copy")), "more data").unwrap()
        ])
    ]),
    // And there can be multiple tables at the root.
    table((Ecc::new("Child2"), Ecc::INVALID))
]);

// The results can be packaged into an output stream.
// This can be anything which supports the std::io::Write trait.
let mut buffer = vec![];
content.write::<NE>(IdType::Ecc2, "Test", &mut buffer).unwrap();

// Hff can be read back from anything which supports the std::io::Read
// trait.  In this case we also read all the data into a cache in memory.
// The cache is simply an array with Read+Seek implemented on top of a
// Vec<u8>.
let hff = read(&mut buffer.as_slice()).unwrap();

// The Hff instance contains the structure of the content and can be
// iterated in multiple ways.  Here, we'll use the depth first iterator
// just to see all the content.
for (depth, table) in hff.depth_first() {
    // Print information about the table.
    let metadata = hff.read(&table).unwrap_or(&[0; 0]);
    println!("{}: {:?} ({})",
        depth,
        table.identifier(),
        std::str::from_utf8(metadata).unwrap()
    );

    // Iterate the chunks.
    for chunk in table.chunks() {
        println!("{}", std::str::from_utf8(hff.read(&chunk).unwrap()).unwrap());
    }
}

§In Progress

  • Depth first iterator through tables.
  • More metadata/chunk data source types. Most things which can be turned into Vec exist now, along with a Read trait source for anything which can be pulled in immediately at runtime, and finally std::path::{Path, PathBuf} to pull data from a file.
  • Yet more metadata/chunk data source types. Compression is done and uses lzma, chosen for the desired balance of performance versus compression ratio. Pass in a tuple of (level, any valid data source), where level is 0-9; this and the file-backed source above are sketched just after this list.
  • Utility types for metadata. For instance, a simple key=value string map and a simple array of strings.
  • Change the table builder to allow multiple tables at the ‘root’ level. Currently the builder expects a single outer table to contain all others. This is a holdover from a prior format structure which was removed.
  • After fixing the table builder, implement the lazy header variation so compressed chunks do not have to be stored in memory prior to writing.
  • Remove the development testing and write better and more complete tests.
  • Better examples.
  • Async-std implementation of the reader.
  • Async-std implementation of the writer.
  • Tokio implementation of the reader.
  • Tokio implementation of the writer.
  • Mmap, io_uring and whatever other variations make sense in the long run.
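
A rough sketch of the file-backed and compressed chunk sources described above. This is an illustration built on assumptions, not code taken from the crate: the (level, data source) tuple and the Path source follow the notes in this list, while the identifiers, the path, and the builder shape simply mirror the example at the top of this page.

use hff_std::*;
use std::path::Path;

// Hypothetical table layout; only the two chunk source types are the point.
let content = hff([
    table((Ecc::new("Assets"), Ecc::INVALID))
    .chunks([
        // Compressed source: a tuple of (level, any valid data source), level 0-9.
        chunk((Ecc::new("Lzma"), Ecc::INVALID), (9, "compress this text")).unwrap(),
        // File-backed source: the data is pulled from the path (placeholder path).
        chunk((Ecc::new("AFile"), Ecc::INVALID), Path::new("data/example.bin")).unwrap()
    ])
]);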

Modules§

  • This crate provides convenience methods for encoding and decoding numbers in either big-endian or little-endian order.
  • Implements the basic reader/writer functionality for HFF.
  • Higher level wrapping over the structure in order to support reading hff files.
  • Simple helpers for working with Hff data, primarily metadata.
  • Generate and parse universally unique identifiers (UUIDs).
  • Higher level support for writing hff files.

Structs§

  • Specifies a chunk of data within the file.
  • Act as a ReadSeek IO object for purposes of having an entire HFF in memory at one time.
  • 8 character code (see the sketch after this list).
  • The file header.
  • An identifier for the tables and chunks.
  • A table entry in the file format. Tables are 48 bytes in length when stored.
  • Version of the file format.
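
A small illustration of the identifier pieces listed above, restating usage already shown in the builder example at the top of the page: tables and chunks are addressed by a pair of eight character codes, with Ecc::INVALID standing in when the secondary code is unused. The variable names below are placeholders.

use hff_std::*;

// Primary/secondary eight character codes, exactly as used by the builder
// example at the top of the page; Ecc::INVALID marks an unused secondary.
let primary = Ecc::new("AChunk");
let secondary = Ecc::INVALID;
// The table and chunk builder functions take this pair directly.
let _pair = (primary, secondary);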

Enums§

  • Runtime endianness values.
  • Common error type.
  • Identifier type as specified in the hff header. This has no impact on behavior at all; it is only a hint to the end user about how to use/view the IDs.

Constants§

Traits§

  • ByteOrder describes types that can serialize integers as bytes.
  • Information about the metadata or chunk data contained within the source.

Type Aliases§

  • Big Endian.
  • Little Endian.
  • Native Endian (used in the sketch after this list).
  • Opposing Endian.
  • The standard result type used in the crate.
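
The endian aliases above are the type parameter the writer takes. As a sketch of where that parameter plugs in, the lines below repeat the write call from the example at the top of the page using the native-endian alias; it is assumed the big- and little-endian aliases listed above select the stored byte order in the same way.

use hff_std::*;

// Build a trivial container and write it with the native-endian alias,
// as in the example at the top of the page.
let content = hff([table((Ecc::new("Only"), Ecc::INVALID))]);
let mut buffer = vec![];
content.write::<NE>(IdType::Ecc2, "Test", &mut buffer).unwrap();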