Crate copybook_codec_memory

Memory management utilities for streaming record processing

This module provides bounded memory usage patterns, reusable scratch buffers, and ordered parallel processing with sequence tracking.

§Overview

The memory module implements performance-critical memory management patterns for high-throughput COBOL data processing. It provides three key capabilities:

  1. Scratch buffer reuse (ScratchBuffers) - Eliminate allocations in hot paths
  2. Deterministic parallel processing (WorkerPool, SequenceRing) - Maintain record order
  3. Bounded memory usage (StreamingProcessor) - Process multi-GB files with <256 MiB RAM

§Performance Impact

These utilities enable copybook-rs to achieve:

  • 205 MiB/s throughput on DISPLAY-heavy workloads (baseline: 2025-09-30)
  • 58 MiB/s throughput on COMP-3-heavy workloads
  • <256 MiB steady-state memory for multi-GB file processing
  • Deterministic output with parallel processing (1-8+ worker threads)

§Examples

§Basic Scratch Buffer Usage

use copybook_codec_memory::ScratchBuffers;

let mut scratch = ScratchBuffers::new();

// Use buffers for processing
scratch.digit_buffer.push(5);
scratch.byte_buffer.extend_from_slice(b"data");
scratch.string_buffer.push_str("text");

// Clear for reuse (no deallocation)
scratch.clear();
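The `clear()`-for-reuse pattern works because `Vec::clear` drops the elements but keeps the allocation, so the next record's data lands in already-allocated memory. A std-only sketch (independent of `ScratchBuffers`) demonstrates this guarantee:

```rust
/// Clear a buffer for reuse; returns (len_after, capacity_after).
/// For u8 elements, clear() is O(1): no drop glue runs, nothing is freed.
fn clear_for_reuse(buf: &mut Vec<u8>) -> (usize, usize) {
    buf.clear();
    (buf.len(), buf.capacity())
}

fn main() {
    let mut buf: Vec<u8> = Vec::with_capacity(1024);
    buf.extend_from_slice(b"record payload");
    let cap_before = buf.capacity();

    let (len, cap) = clear_for_reuse(&mut buf);

    assert_eq!(len, 0);              // length resets
    assert_eq!(cap, cap_before);     // allocation survives for the next record
    println!("capacity retained: {cap}");
}
```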

§Parallel Processing with Deterministic Output

use copybook_codec_memory::{WorkerPool, ScratchBuffers};

// Create worker pool with 4 threads
let mut pool = WorkerPool::new(
    4,   // num_workers
    100, // channel_capacity
    50,  // max_window_size
    |input: Vec<u8>, _scratch: &mut ScratchBuffers| -> String {
        // Process input (runs in parallel)
        String::from_utf8_lossy(&input).to_string()
    },
);

// Collect chunks so we know the count
// (`get_data_chunks()` stands in for your data source)
let chunks: Vec<Vec<u8>> = get_data_chunks();
let num_chunks = chunks.len();

// Submit work
for chunk in chunks {
    pool.submit(chunk).unwrap();
}

// Receive exactly num_chunks results in order, so we never wait for a
// result that was not submitted
for _ in 0..num_chunks {
    let result = pool.recv_ordered().unwrap().unwrap();
    println!("{}", result);
}

pool.shutdown().unwrap();

§Memory-Bounded Streaming

use copybook_codec_memory::StreamingProcessor;

let mut processor = StreamingProcessor::with_default_limit(); // 256 MiB

// `records` stands in for your iterator of raw record buffers
for record in records {
    // Check memory pressure before processing
    if processor.is_memory_pressure() {
        // Flush buffers or throttle input
    }

    // Track memory usage
    processor.update_memory_usage(record.len() as isize);
    processor.record_processed(record.len());
}

// Get statistics
let stats = processor.stats();
println!("Processed {} records", stats.records_processed);
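The memory accounting loop above can be sketched with plain std types. This is an illustrative reimplementation, not the crate's actual `StreamingProcessor`; the field names, the 90% pressure threshold, and the saturating-subtract behavior are assumptions:

```rust
/// Hypothetical memory-accounting sketch (not the crate's real type).
struct Accounting {
    current_bytes: usize,
    limit_bytes: usize,
    records_processed: u64,
}

impl Accounting {
    fn new(limit_bytes: usize) -> Self {
        Self { current_bytes: 0, limit_bytes, records_processed: 0 }
    }

    /// Apply a signed byte delta (allocation or release), saturating at zero.
    fn update(&mut self, delta: isize) {
        if delta >= 0 {
            self.current_bytes += delta as usize;
        } else {
            self.current_bytes = self.current_bytes.saturating_sub((-delta) as usize);
        }
    }

    fn record_processed(&mut self) {
        self.records_processed += 1;
    }

    /// Report pressure once usage crosses 90% of the limit (assumed threshold).
    fn is_memory_pressure(&self) -> bool {
        self.current_bytes >= self.limit_bytes / 10 * 9
    }
}

fn main() {
    let mut acc = Accounting::new(256 * 1024 * 1024); // 256 MiB cap
    acc.update(240 * 1024 * 1024);                    // buffers fill up
    acc.record_processed();
    assert!(acc.is_memory_pressure());                // time to flush/throttle
    acc.update(-(150 * 1024 * 1024));                 // buffers flushed
    assert!(!acc.is_memory_pressure());
    println!("records: {}", acc.records_processed);
}
```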

Structs§

ScratchBuffers - Reusable scratch buffers for worker threads
SequenceRing - Bounded channel with sequence tracking for ordered emission
SequenceRingStats - Statistics about sequence ring operation
SequencedRecord - Record with sequence ID for ordered processing
StreamingProcessor - Memory-bounded streaming processor
StreamingProcessorStats - Statistics about streaming processor operation
WorkerPool - Worker pool for parallel record processing with bounded memory
WorkerPoolStats - Statistics about worker pool operation

Type Aliases§

DigitBuffer - Small vector for digit buffers (≤32 bytes on stack)