Function run 

pub fn run(
    disk: Option<PathBuf>,
    memory: Option<PathBuf>,
    output: PathBuf,
    compression: String,
    encrypt: bool,
    train_dict: bool,
    block_size: u32,
    cdc: bool,
    min_chunk: u32,
    avg_chunk: u32,
    max_chunk: u32,
    silent: bool,
) -> Result<()>

Execute the pack command to create a Hexz snapshot archive.

This command creates a .st snapshot file from disk and/or memory dump files. It supports compression (LZ4 or Zstd), optional encryption, deduplication, content-defined chunking (CDC), and dictionary training for improved compression.

§Workflow

The packing process follows these steps:

  1. Password prompt (if encryption enabled): Prompts for a password and derives the encryption key
  2. Dictionary training (if enabled): Samples blocks and trains a Zstd compression dictionary
  3. Chunking: Splits input file(s) into blocks using fixed-size or CDC chunking
  4. Compression: Compresses each block using the selected algorithm and optional dictionary
  5. Deduplication: Hashes compressed blocks and eliminates duplicates (enabled by default; see the sketch after this list)
  6. Index building: Constructs the master index with page entries and block metadata
  7. Header writing: Serializes the header with the format version, offsets, and feature flags
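
To make step 5 concrete, here is a minimal sketch of content-hash deduplication. It assumes a blake3-style hash over each compressed block; the hash the format actually uses is not documented here.

use std::collections::HashMap;

// Keep one copy of each distinct compressed block and record, for every
// input block, the index of its unique representative.
// blake3 is an assumed stand-in for whatever hash the format really uses.
fn dedup_blocks(blocks: Vec<Vec<u8>>) -> (Vec<Vec<u8>>, Vec<usize>) {
    let mut seen: HashMap<[u8; 32], usize> = HashMap::new();
    let mut unique: Vec<Vec<u8>> = Vec::new();
    let mut refs = Vec::with_capacity(blocks.len());
    for block in blocks {
        let key = *blake3::hash(&block).as_bytes();
        let next = unique.len();
        let idx = *seen.entry(key).or_insert_with(|| {
            unique.push(block);
            next
        });
        refs.push(idx);
    }
    (unique, refs)
}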

§Arguments

  • disk - Optional path to disk image file (raw or qcow2)
  • memory - Optional path to memory dump file
  • output - Output path for the .st snapshot file
  • compression - Compression algorithm: “lz4” (fast) or “zstd” (balanced)
  • encrypt - Enable AES-256-GCM encryption (prompts for password)
  • train_dict - Train a Zstd dictionary for improved compression ratios
  • block_size - Block size in bytes (default: 64 KiB)
  • cdc - Enable content-defined chunking for variable-sized blocks (see the chunking sketch after this list)
  • min_chunk - Minimum chunk size for CDC (default: 16 KiB)
  • avg_chunk - Average chunk size for CDC (default: 64 KiB)
  • max_chunk - Maximum chunk size for CDC (default: 128 KiB)
  • silent - Suppress progress output
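
As an illustration of how the three CDC bounds interact, the sketch below uses the fastcdc crate with the documented defaults; whether this tool uses that crate internally is an assumption, but the min/avg/max semantics are the same.

use fastcdc::v2020::FastCDC;

fn main() -> std::io::Result<()> {
    let data = std::fs::read("disk.img")?;
    // Cut points fall between 16 KiB and 128 KiB, averaging ~64 KiB,
    // mirroring the documented CDC defaults.
    for chunk in FastCDC::new(&data, 16_384, 65_536, 131_072) {
        println!("offset={:>10} length={:>7}", chunk.offset, chunk.length);
    }
    Ok(())
}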

§Performance Characteristics

  • LZ4: ~500 MB/s compression throughput
  • Zstd level 3: ~200 MB/s compression throughput
  • Deduplication overhead: ~5-10% additional time for hashing
  • Dictionary training: 2-5 seconds for typical datasets
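
As a rough sanity check with these figures, packing a 10 GiB image with Zstd at ~200 MB/s takes on the order of a minute of compression time, plus roughly 5-10% more for deduplication hashing and a few seconds of dictionary training.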

§Example

use std::path::PathBuf;

// Pack a disk image with Zstd compression and dictionary training
pack::run(
    Some(PathBuf::from("disk.img")),
    None,                            // no memory dump
    PathBuf::from("snapshot.st"),
    "zstd".to_string(),
    false,   // no encryption
    true,    // train dictionary
    65536,   // 64 KiB blocks
    false,   // fixed-size blocks (CDC disabled)
    16384,   // min chunk: 16 KiB (only used with CDC)
    65536,   // avg chunk: 64 KiB (only used with CDC)
    131072,  // max chunk: 128 KiB (only used with CDC)
    false,   // show progress
).expect("failed to create snapshot");
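
For comparison, a hedged sketch of the same call with CDC and encryption enabled; the memory-dump path is hypothetical, and treating block_size as ignored under CDC is an assumption.

// Pack disk and memory with content-defined chunking and encryption.
pack::run(
    Some(PathBuf::from("disk.img")),
    Some(PathBuf::from("memory.dmp")),  // hypothetical memory dump path
    PathBuf::from("snapshot.st"),
    "zstd".to_string(),
    true,    // encrypt (AES-256-GCM, prompts for a password)
    false,   // no dictionary training
    65536,   // block size (assumed ignored when CDC is active)
    true,    // enable CDC
    16384,   // min chunk: 16 KiB
    65536,   // avg chunk: 64 KiB
    131072,  // max chunk: 128 KiB
    true,    // silent: suppress progress output
).expect("failed to create snapshot");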