Memory management utilities for streaming record processing
This module provides bounded memory usage patterns, reusable scratch buffers, and ordered parallel processing with sequence tracking.
§Overview
The memory module implements performance-critical memory management patterns for high-throughput COBOL data processing. It provides three key capabilities:
- Scratch buffer reuse (ScratchBuffers) - Eliminate allocations in hot paths
- Deterministic parallel processing (WorkerPool, SequenceRing) - Maintain record order
- Bounded memory usage (StreamingProcessor) - Process multi-GB files with <256 MiB RAM
§Performance Impact
These utilities enable copybook-rs to achieve:
- 205 MiB/s throughput on DISPLAY-heavy workloads (baseline: 2025-09-30)
- 58 MiB/s throughput on COMP-3-heavy workloads
- <256 MiB steady-state memory for multi-GB file processing
- Deterministic output with parallel processing (1-8+ worker threads)
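The deterministic-output guarantee rests on a reordering step: results carry the sequence ID of their input, out-of-order arrivals are buffered, and a result is emitted only when its ID is the next one expected. The sketch below illustrates that idea with a std min-heap; it is not the crate's `SequenceRing` API, and `emit_in_order` is a hypothetical helper.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Buffer out-of-order (sequence_id, result) pairs in a min-heap and emit
// each result only once its sequence ID is contiguous with what has
// already been emitted.
fn emit_in_order(arrivals: Vec<(u64, &'static str)>) -> Vec<&'static str> {
    let mut pending: BinaryHeap<Reverse<(u64, &'static str)>> = BinaryHeap::new();
    let mut next = 0u64;
    let mut out = Vec::new();
    for item in arrivals {
        pending.push(Reverse(item));
        // Drain every result whose sequence ID is now the next expected one.
        while pending.peek().map_or(false, |Reverse((seq, _))| *seq == next) {
            let Reverse((_, val)) = pending.pop().unwrap();
            out.push(val);
            next += 1;
        }
    }
    out
}

fn main() {
    // Workers finish in scrambled order; emission still follows input order.
    let arrivals = vec![(2, "c"), (0, "a"), (1, "b"), (3, "d")];
    assert_eq!(emit_in_order(arrivals), vec!["a", "b", "c", "d"]);
}
```

Because the heap is bounded by the reorder window (`max_window_size` in the worker-pool example below), memory stays bounded even when one slow worker stalls the sequence.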
§Examples
§Basic Scratch Buffer Usage
use copybook_codec_memory::ScratchBuffers;
let mut scratch = ScratchBuffers::new();
// Use buffers for processing
scratch.digit_buffer.push(5);
scratch.byte_buffer.extend_from_slice(b"data");
scratch.string_buffer.push_str("text");
// Clear for reuse (no deallocation)
scratch.clear();
§Parallel Processing with Deterministic Output
use copybook_codec_memory::{WorkerPool, ScratchBuffers};
// Create worker pool with 4 threads
let mut pool = WorkerPool::new(
4, // num_workers
100, // channel_capacity
50, // max_window_size
|input: Vec<u8>, _scratch: &mut ScratchBuffers| -> String {
// Process input (runs in parallel)
String::from_utf8_lossy(&input).to_string()
},
);
// Collect chunks so we know the count
let chunks: Vec<Vec<u8>> = get_data_chunks();
let num_chunks = chunks.len();
// Submit work
for chunk in chunks {
pool.submit(chunk).unwrap();
}
// Receive exactly num_chunks results in order (non-blocking pattern)
for _ in 0..num_chunks {
let result = pool.recv_ordered().unwrap().unwrap();
println!("{}", result);
}
pool.shutdown().unwrap();
§Memory-Bounded Streaming
use copybook_codec_memory::StreamingProcessor;
let mut processor = StreamingProcessor::with_default_limit(); // 256 MiB
for record in records {
// Check memory pressure before processing
if processor.is_memory_pressure() {
// Flush buffers or throttle input
}
// Track memory usage
processor.update_memory_usage(record.len() as isize);
processor.record_processed(record.len());
}
// Get statistics
let stats = processor.stats();
println!("Processed {} records", stats.records_processed);
Structs§
- ScratchBuffers - Reusable scratch buffers for worker threads
- SequenceRing - Bounded channel with sequence tracking for ordered emission
- SequenceRingStats - Statistics about sequence ring operation
- SequencedRecord - Record with sequence ID for ordered processing
- StreamingProcessor - Memory-bounded streaming processor
- StreamingProcessorStats - Statistics about streaming processor operation
- WorkerPool - Worker pool for parallel record processing with bounded memory
- WorkerPoolStats - Statistics about worker pool operation
Type Aliases§
- DigitBuffer - Small vector for digit buffers (≤32 bytes on stack)