
Constant MAX_DECOMPRESSED_BYTES 

pub const MAX_DECOMPRESSED_BYTES: u64 = 5_368_709_120; // 5 GiB

v0.8.6 #89: the maximum decompressed payload size honoured at the decompress entry point by every codec. Manifests claiming a larger original_size are rejected before any allocation as forged or corrupted, so a malicious manifest cannot drive Vec::with_capacity(huge) into an out-of-memory condition (memory-DoS) before the CRC check ever runs.
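The guard described above can be sketched as follows. This is an illustrative assumption, not the crate's actual API: the Manifest struct, DecompressError enum, and alloc_output function are hypothetical names invented for the example; only the constant's value and the check-before-allocate ordering come from this documentation.

```rust
// 5 GiB ceiling, as documented for MAX_DECOMPRESSED_BYTES.
const MAX_DECOMPRESSED_BYTES: u64 = 5_368_709_120;

// Hypothetical manifest type carrying the claimed decompressed size.
struct Manifest {
    original_size: u64,
}

// Hypothetical error type; the real crate's error is not shown here.
#[derive(Debug)]
enum DecompressError {
    // Manifest claims an original size above the codec-wide ceiling:
    // treated as forged/corrupted and rejected pre-allocation.
    OversizedManifest { claimed: u64 },
}

fn alloc_output(manifest: &Manifest) -> Result<Vec<u8>, DecompressError> {
    // Guard BEFORE Vec::with_capacity, so a malicious manifest cannot
    // trigger a multi-GiB allocation (memory-DoS) ahead of the CRC check.
    if manifest.original_size > MAX_DECOMPRESSED_BYTES {
        return Err(DecompressError::OversizedManifest {
            claimed: manifest.original_size,
        });
    }
    Ok(Vec::with_capacity(manifest.original_size as usize))
}

fn main() {
    // A plausible manifest passes and gets its output buffer.
    assert!(alloc_output(&Manifest { original_size: 1024 }).is_ok());
    // A manifest above the ceiling is rejected with no allocation.
    assert!(alloc_output(&Manifest { original_size: MAX_DECOMPRESSED_BYTES + 1 }).is_err());
}
```

The important property is the ordering: the comparison costs nothing, while an unguarded with_capacity on an attacker-controlled size commits the allocator before any integrity check runs.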

Was nvcomp::MAX_DECOMPRESSED_BYTES (v0.8.5 #83); promoted to s4_codec::MAX_DECOMPRESSED_BYTES so the CPU codecs (CpuZstd / CpuGzip) share exactly the same ceiling. Before the guard was promoted out of the GPU-only module, the CPU codecs called Vec::with_capacity(manifest.original_size) unguarded, and the continuous fuzz farm hit an OOM in cpu_zstd_decompress_bolero within minutes (issue #89).

Rationale for 5 GiB: it matches AWS S3's documented ceiling for a single PUT Object request. Larger payloads must use multipart upload, whose individual parts are themselves capped at 5 GiB. Real S4 chunks are bounded by the same ceiling end-to-end, so a manifest whose original_size exceeds it cannot have come from a well-formed S4 PUT.
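The 5 GiB figure works out exactly to the constant's value; a quick arithmetic check (illustrative only):

```rust
fn main() {
    // 5 GiB = 5 * 1024^3 bytes, matching MAX_DECOMPRESSED_BYTES.
    const FIVE_GIB: u64 = 5 * 1024 * 1024 * 1024;
    assert_eq!(FIVE_GIB, 5_368_709_120);
    println!("{FIVE_GIB}");
}
```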