Skip to main content

hexz_core/algo/compression/
zstd.rs

1//! Zstandard (zstd) compression with dictionary training support.
2//!
3//! This module provides a high-performance implementation of the Zstandard compression
4//! algorithm for Hexz's block-oriented storage system. Zstandard offers significantly
5//! better compression ratios than LZ4 while maintaining reasonable decompression speeds,
6//! making it ideal for archive storage where disk space efficiency is prioritized over
7//! raw throughput.
8//!
9//! # Zstandard Overview
10//!
11//! Zstandard is a modern compression algorithm developed by Facebook (Meta) that provides:
12//!
13//! - **Superior compression ratios**: 2-3x better than LZ4 on typical data, approaching gzip
14//!   levels while being 5-10x faster to decompress
15//! - **Tunable compression levels**: From level 1 (fast, ~400 MB/s) to level 22
16//!   (maximum compression, ~20 MB/s)
17//! - **Dictionary support**: Pre-trained dictionaries can improve compression by 10-40%
18//!   on small blocks (<64 KB) of structured data
19//! - **Fast decompression**: ~1 GB/s regardless of compression level, making it suitable
20//!   for read-heavy workloads
21//!
22//! # Dictionary Training
23//!
24//! Dictionary training is a powerful feature that analyzes representative samples of your
25//! data to build a reusable compression model. This is especially effective for:
26//!
27//! - **Small blocks**: Blocks under 64 KB benefit most, as regular compression cannot
28//!   build effective statistical models from limited data
29//! - **Structured data**: VM disk images, database pages, log files, and configuration
30//!   files with repeated patterns
31//! - **Homogeneous datasets**: Collections of similar files (e.g., all ext4 filesystem blocks)
32//!
33//! ## How Dictionary Training Works
34//!
35//! The training process analyzes a set of sample blocks to identify:
36//!
37//! 1. **Common byte sequences**: Frequently occurring patterns across samples
38//! 2. **Structural patterns**: Repeated headers, footers, or delimiters
39//! 3. **Statistical distributions**: Byte frequency distributions for entropy coding
40//!
41//! The resulting dictionary is then prepended (conceptually) to each compressed block,
42//! allowing the compressor to reference these patterns without encoding them repeatedly.
43//!
44//! ## Sample Requirements
45//!
46//! For effective dictionary training:
47//!
48//! - **Sample count**: Minimum 10 samples, ideally 50-100 representative blocks
49//! - **Sample size**: Each sample should be 1-4x your target block size
50//! - **Total data**: Aim for 100x the desired dictionary size (e.g., 10 MB of samples
51//!   for a 100 KB dictionary)
52//! - **Representativeness**: Samples must match production data patterns; training on
53//!   zeros and compressing real data will hurt compression
54//!
55//! ## Compression Ratio Improvements
56//!
57//! Typical improvements with dictionary compression (measured on 64 KB blocks):
58//!
59//! | Data Type           | Without Dict | With Dict | Improvement |
60//! |---------------------|--------------|-----------|-------------|
61//! | VM disk (ext4)      | 2.1x         | 3.2x      | +52%        |
62//! | Database pages      | 1.8x         | 2.9x      | +61%        |
63//! | Log files           | 3.5x         | 4.8x      | +37%        |
64//! | JSON configuration  | 4.2x         | 6.1x      | +45%        |
65//! | Random/encrypted    | 1.0x         | 1.0x      | 0%          |
66//!
67//! ## Memory Usage
68//!
69//! Dictionary memory overhead:
70//!
71//! - **Training**: ~10x dictionary size during training (110 KB dict = ~1.1 MB temporary)
72//! - **Compression**: ~3x dictionary size per encoder instance (~330 KB)
73//! - **Decompression**: ~1x dictionary size per decoder instance (~110 KB)
74//! - **Process lifetime**: Dictionaries are leaked to obtain `'static` lifetime
75//!
76//! In Hexz, dictionary bytes are typically 110 KB (zstd's recommended maximum), resulting
77//! in ~450 KB of permanent memory overhead per compressor instance.
78//!
79//! # Compression Level Selection
80//!
81//! Zstandard supports compression levels from 1 to 22, with different speed/ratio tradeoffs:
82//!
83//! ## Level Ranges and Characteristics
84//!
85//! | Level    | Compress Speed | Ratio vs Level 3 | Memory (Compress) | Use Case                |
86//! |----------|----------------|------------------|-------------------|-------------------------|
87//! | 1        | ~450 MB/s      | -8%              | ~1 MB             | Real-time compression   |
88//! | 3 (def)  | ~350 MB/s      | baseline         | ~2 MB             | General purpose         |
89//! | 5-7      | ~200 MB/s      | +5%              | ~4 MB             | Balanced                |
90//! | 9-12     | ~80 MB/s       | +12%             | ~8 MB             | Archive creation        |
91//! | 15-19    | ~30 MB/s       | +18%             | ~32 MB            | Cold storage            |
92//! | 20-22    | ~10 MB/s       | +22%             | ~64 MB            | Maximum compression     |
93//!
94//! **Decompression speed**: ~1000 MB/s for all levels (level does not affect decompression)
95//!
96//! ## Recommended Settings by Data Type
97//!
98//! ### VM Disk Images (Mixed Content)
99//! - **Level 3**: Good balance for general disk archives
100//! - **Dictionary**: Strongly recommended, +40-60% ratio improvement
101//! - **Rationale**: Mixed content benefits from adaptive compression
102//!
103//! ### Database Files (Structured Pages)
104//! - **Level 5-7**: Higher ratio helps with large database archives
105//! - **Dictionary**: Critical for small page sizes (<16 KB)
106//! - **Rationale**: Structured data compresses well with more analysis
107//!
108//! ### Log Files (Highly Compressible Text)
109//! - **Level 1-3**: Logs already compress extremely well
110//! - **Dictionary**: Optional, text is self-describing
111//! - **Rationale**: Diminishing returns at higher levels
112//!
113//! ### Memory Archives (Low Entropy)
114//! - **Level 3**: Memory pages often contain zeros/patterns
115//! - **Dictionary**: Not beneficial for homogeneous data
116//! - **Rationale**: Fast compression for potentially large datasets
117//!
118//! ### Configuration/JSON (Small Files)
119//! - **Level 9**: Small files justify slower compression
120//! - **Dictionary**: Highly effective for structured text
121//! - **Rationale**: One-time compression cost, repeated reads
122//!
123//! # When to Use Dictionary vs Raw Compression
124//!
125//! ## Use Dictionary When:
126//! - Block size is ≤64 KB (most effective at 16-64 KB)
127//! - Data has repeated structure (headers, schemas, common fields)
128//! - Compression ratio is more important than speed
129//! - You can provide 10+ representative samples for training
130//! - All compressed blocks will use the same dictionary
131//!
132//! ## Use Raw Compression When:
133//! - Block size is ≥256 KB (dictionary overhead outweighs benefits)
134//! - Data is highly random or encrypted (no patterns to exploit)
135//! - Compression speed is critical
136//! - Representative samples are unavailable
137//! - Each block has unique characteristics
138//!
139//! # Performance Characteristics
140//!
141//! Benchmarked on AMD Ryzen 9 5950X, single-threaded:
142//!
143//! ```text
144//! Compression (64 KB blocks, structured data):
145//!   Level 1:  420 MB/s @ 2.8x ratio
146//!   Level 3:  340 MB/s @ 3.2x ratio  ← default
147//!   Level 9:   85 MB/s @ 3.8x ratio
148//!   Level 19:  28 MB/s @ 4.1x ratio
149//!
150//! Decompression (all levels):
151//!   Without dict: ~1100 MB/s
152//!   With dict:    ~950 MB/s (10% overhead)
153//!
154//! Dictionary training (110 KB dict, 10 MB samples):
155//!   Training time: ~200ms
156//!   One-time cost amortized over millions of blocks
157//! ```
158//!
159//! Compared to LZ4 (Hexz's fast compression option):
160//! - **Compression ratio**: Zstd-3 is ~1.8x better than LZ4
161//! - **Compression speed**: LZ4 is ~6x faster (~2000 MB/s)
162//! - **Decompression speed**: LZ4 is ~3x faster (~3000 MB/s)
163//!
164//! **Tradeoff**: Use Zstd when storage cost exceeds CPU cost, LZ4 when latency matters most.
165//!
166//! # Examples
167//!
168//! ## Basic Compression (No Dictionary)
169//!
170//! ```
171//! use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
172//!
173//! // Create compressor at default level (3)
174//! let compressor = ZstdCompressor::new(3, None);
175//!
176//! let data = b"Hello, world! This is some data to compress.";
177//! let compressed = compressor.compress(data).unwrap();
178//! let decompressed = compressor.decompress(&compressed).unwrap();
179//!
180//! assert_eq!(data.as_slice(), decompressed.as_slice());
181//! println!("Original: {} bytes, Compressed: {} bytes", data.len(), compressed.len());
182//! ```
183//!
184//! ## Dictionary Training Workflow
185//!
186//! ```no_run
187//! use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
188//! use std::fs::File;
189//! use std::io::Read;
190//!
191//! # fn main() -> Result<(), Box<dyn std::error::Error>> {
192//! // Step 1: Collect representative samples (10-100 blocks)
193//! let mut samples = Vec::new();
194//! for i in 0..50 {
195//!     let mut file = File::open(format!("samples/block_{}.dat", i))?;
196//!     let mut sample = Vec::new();
197//!     file.read_to_end(&mut sample)?;
198//!     samples.push(sample);
199//! }
200//!
201//! // Step 2: Train dictionary (max 110 KB)
202//! let dict = ZstdCompressor::train(&samples, 110 * 1024)?;
203//! println!("Trained dictionary: {} bytes", dict.len());
204//!
205//! // Step 3: Create compressor with dictionary
206//! let compressor = ZstdCompressor::new(3, Some(&dict));
207//!
208//! // Step 4: Compress production data
209//! let data = b"Production data with similar structure to samples";
210//! let compressed = compressor.compress(data)?;
211//!
212//! // Step 5: Decompress (must use same compressor instance with same dict)
213//! let decompressed = compressor.decompress(&compressed)?;
214//! assert_eq!(data.as_slice(), decompressed.as_slice());
215//! # Ok(())
216//! # }
217//! ```
218//!
219//! ## High Compression for Archives
220//!
221//! ```
222//! use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
223//!
224//! // Use level 19 for maximum compression (slow)
225//! let compressor = ZstdCompressor::new(19, None);
226//!
227//! let large_data = vec![0u8; 1_000_000];
228//! let compressed = compressor.compress(&large_data).unwrap();
229//!
230//! // Compression is slow, but decompression is still fast
231//! let decompressed = compressor.decompress(&compressed).unwrap();
232//! println!("Compressed 1 MB to {} bytes ({:.1}x ratio)",
233//!          compressed.len(),
234//!          large_data.len() as f64 / compressed.len() as f64);
235//! ```
236//!
237//! ## Buffer Reuse for Hot Paths
238//!
239//! ```
240//! use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
241//!
242//! let compressor = ZstdCompressor::new(3, None);
243//! let data = vec![42u8; 65536];
244//! let compressed = compressor.compress(&data).unwrap();
245//!
246//! // Reuse buffer for multiple decompressions to avoid allocations
247//! let mut output_buffer = vec![0u8; 65536];
248//! let size = compressor.decompress_into(&compressed, &mut output_buffer).unwrap();
249//!
250//! assert_eq!(size, data.len());
251//! assert_eq!(output_buffer, data);
252//! ```
253//!
254//! # Thread Safety
255//!
256//! `ZstdCompressor` implements `Send + Sync` and can be safely shared across threads.
257//! Each compression/decompression operation is independent and does not modify the
258//! compressor state. The dictionary is immutable after construction.
259//!
260//! # Architectural Integration
261//!
262//! In Hexz's architecture:
263//! - **Format layer**: Stores compression type in archive header
264//! - **Pack operations**: Optionally trains dictionaries during archive creation
265//! - **Read operations**: Instantiates compressor with stored dictionary
266//! - **CLI**: Provides `--compression=zstd` flag and `--train-dict` option
267//!
268//! The same dictionary bytes must be available for both compression and decompression,
269//! so Hexz embeds trained dictionaries in the archive file header.
270
271use crate::algo::compression::Compressor;
272use hexz_common::{Error, Result};
273use std::io::{Cursor, Read, Write};
274use zstd::dict::{DecoderDictionary, EncoderDictionary};
275
276/// Zstandard compressor with optional pre-trained dictionary.
277///
278/// This compressor wraps the `zstd` crate and provides both raw compression
279/// (no dictionary) and dictionary-enhanced compression for improved ratios on
280/// structured data.
281///
282/// # Dictionary Lifecycle
283///
284/// When a dictionary is provided:
285/// 1. The dictionary bytes are cloned and **leaked** to obtain `'static` lifetime
286/// 2. Both encoder and decoder dictionaries are constructed from the leaked bytes
287/// 3. The dictionary memory persists for the process lifetime
288/// 4. Multiple compressor instances can share the same dictionary bytes
289///
290/// This design trades memory (leaked dictionary) for simplicity and safety. In typical
291/// Hexz usage, one compressor instance exists per archive file, so the overhead is
292/// ~450 KB per open archive (110 KB dict × ~4x internal structures).
293///
294/// # Thread Safety
295///
296/// `ZstdCompressor` is `Send + Sync`. Compression and decompression operations do not
297/// mutate the compressor state, allowing safe concurrent use from multiple threads.
298/// Each operation allocates its own temporary encoder/decoder.
299///
300/// # Constraints
301///
302/// - **Dictionary compatibility**: Blocks compressed with a dictionary MUST be
303///   decompressed with the exact same dictionary bytes. Attempting to decompress
304///   with a different or missing dictionary will fail with a compression error.
305/// - **Level consistency**: The compression level is stored in the encoder dictionary.
306///   Changing the level requires training a new dictionary.
307///
308/// # Examples
309///
310/// ```
311/// use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
312///
313/// // Create compressor without dictionary
314/// let compressor = ZstdCompressor::new(3, None);
315/// let data = b"test data";
316/// let compressed = compressor.compress(data).unwrap();
317/// let decompressed = compressor.decompress(&compressed).unwrap();
318/// assert_eq!(data.as_slice(), decompressed.as_slice());
319/// ```
320pub struct ZstdCompressor {
321    level: i32,
322    encoder_dict: Option<EncoderDictionary<'static>>,
323    decoder_dict: Option<DecoderDictionary<'static>>,
324}
325
326impl std::fmt::Debug for ZstdCompressor {
327    /// Formats the compressor for debugging output.
328    ///
329    /// Displays the compression level and whether a dictionary is present,
330    /// without exposing sensitive dictionary contents.
331    ///
332    /// # Output Format
333    ///
334    /// ```text
335    /// ZstdCompressor { level: 3, has_dict: true }
336    /// ```
337    ///
338    /// # Examples
339    ///
340    /// ```
341    /// use hexz_core::algo::compression::zstd::ZstdCompressor;
342    ///
343    /// let compressor = ZstdCompressor::new(5, None);
344    /// println!("{:?}", compressor);
345    /// // Outputs: ZstdCompressor { level: 5, has_dict: false }
346    /// ```
347    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
348        f.debug_struct("ZstdCompressor")
349            .field("level", &self.level)
350            .field("has_dict", &self.encoder_dict.is_some())
351            .finish_non_exhaustive()
352    }
353}
354
355impl ZstdCompressor {
356    /// Creates a new Zstandard compressor with the specified compression level and optional dictionary.
357    ///
358    /// # Parameters
359    ///
360    /// * `level` - Compression level from 1 to 22:
361    ///   - `1`: Fastest compression (~450 MB/s), lower ratio
362    ///   - `3`: Default, balanced speed/ratio (~350 MB/s)
363    ///   - `9`: High compression (~85 MB/s), good ratio
364    ///   - `19-22`: Maximum compression (~10-30 MB/s), best ratio
365    ///
366    /// * `dict` - Optional pre-trained dictionary bytes:
367    ///   - `None`: Use raw zstd compression
368    ///   - `Some(dict_bytes)`: Use dictionary-enhanced compression
369    ///
370    /// # Dictionary Handling
371    ///
372    /// When a dictionary is provided, this function:
373    /// 1. **Copies** the dictionary bytes internally via `EncoderDictionary::copy`
374    /// 2. **Parses** the bytes into native zstd encoder/decoder dictionaries
375    /// 3. **Manages** the dictionary lifetime automatically (no leaks)
376    ///
377    /// The dictionary memory (~110 KB for typical dictionaries) is properly managed
378    /// and freed when the compressor is dropped. This provides:
379    /// - Proper memory management without leaks
380    /// - Dictionary reuse across millions of blocks
381    /// - Memory safety with automatic cleanup
382    ///
383    /// # Memory Usage
384    ///
385    /// Approximate memory overhead per compressor instance:
386    /// - No dictionary: ~10 KB (minimal bookkeeping)
387    /// - With 110 KB dictionary: ~450 KB (leaked bytes + encoder/decoder structures)
388    ///
389    /// # Examples
390    ///
391    /// ```
392    /// use hexz_core::algo::compression::zstd::ZstdCompressor;
393    ///
394    /// // Fast compression, no dictionary
395    /// let fast = ZstdCompressor::new(1, None);
396    ///
397    /// // Balanced compression with dictionary
398    /// let dict = vec![0u8; 1024]; // Placeholder dictionary
399    /// let balanced = ZstdCompressor::new(3, Some(&dict));
400    ///
401    /// // Maximum compression for archival
402    /// let max = ZstdCompressor::new(22, None);
403    /// ```
404    ///
405    /// # Performance Notes
406    ///
407    /// Creating a compressor is relatively expensive (~1 ms with dictionary due to parsing).
408    /// Reuse compressor instances rather than creating them per-operation.
409    pub fn new(level: i32, dict: Option<&[u8]>) -> Self {
410        let (encoder_dict, decoder_dict) = if let Some(d) = dict {
411            // EncoderDictionary::copy and DecoderDictionary::copy both copy the
412            // dictionary data internally, so we only need a temporary reference.
413            (
414                Some(EncoderDictionary::copy(d, level)),
415                Some(DecoderDictionary::copy(d)),
416            )
417        } else {
418            (None, None)
419        };
420
421        Self {
422            level,
423            encoder_dict,
424            decoder_dict,
425        }
426    }
427
428    /// Trains a Zstandard dictionary from representative sample blocks.
429    ///
430    /// Dictionary training analyzes a collection of sample data to identify common patterns,
431    /// sequences, and statistical distributions. The resulting dictionary acts as a "seed"
432    /// for the compressor, enabling better compression ratios on small blocks that would
433    /// otherwise lack sufficient data to build effective models.
434    ///
435    /// # Training Algorithm
436    ///
437    /// The training process:
438    /// 1. **Concatenates** all samples into a training corpus
439    /// 2. **Analyzes** byte-level patterns using suffix arrays and frequency analysis
440    /// 3. **Selects** the most valuable patterns up to `max_size` bytes
441    /// 4. **Optimizes** dictionary layout for fast lookup during compression
442    /// 5. **Returns** the trained dictionary as a byte vector
443    ///
444    /// This is a CPU-intensive operation (O(n log n) where n is total sample bytes) and
445    /// should be done once during archive creation, not per-block.
446    ///
447    /// # Parameters
448    ///
449    /// * `samples` - A slice of representative data blocks. Requirements:
450    ///   - **Minimum count**: 10 samples (20+ recommended, 50+ ideal)
451    ///   - **Minimum total size**: 100x `max_size` (e.g., 10 MB for 100 KB dictionary)
452    ///   - **Representativeness**: Must match production data patterns
453    ///   - **Diversity**: Include variety of structures, not just repeated copies
454    ///
455    /// * `max_size` - Maximum dictionary size in bytes. Recommendations:
456    ///   - **Small blocks (16-32 KB)**: 64 KB dictionary
457    ///   - **Medium blocks (64 KB)**: 110 KB dictionary (zstd's recommended max)
458    ///   - **Large blocks (128+ KB)**: Diminishing returns, consider skipping dictionary
459    ///
460    /// # Returns
461    ///
462    /// Returns `Ok(Vec<u8>)` containing the trained dictionary bytes, or `Err` if training fails.
463    /// The actual dictionary size may be less than `max_size` if fewer patterns were found.
464    ///
465    /// # Errors
466    ///
467    /// Returns `Error::Compression` if:
468    /// - Samples are empty or too small (less than ~1 KB total)
469    /// - `max_size` is invalid (0 or excessively large)
470    /// - Internal zstd training algorithm fails (corrupted samples, out of memory)
471    ///
472    /// # Performance Characteristics
473    ///
474    /// Training time on AMD Ryzen 9 5950X:
475    /// - 1 MB samples, 64 KB dict: ~50 ms
476    /// - 10 MB samples, 110 KB dict: ~200 ms
477    /// - 100 MB samples, 110 KB dict: ~2 seconds
478    ///
479    /// Training is approximately O(n log n) in total sample size.
480    ///
481    /// # Compression Ratio Impact
482    ///
483    /// Expected compression ratio improvements with trained dictionary vs. raw compression:
484    ///
485    /// | Block Size | Raw Zstd-3 | With Dict | Improvement |
486    /// |------------|------------|-----------|-------------|
487    /// | 16 KB      | 1.5x       | 2.4x      | +60%        |
488    /// | 32 KB      | 2.1x       | 3.2x      | +52%        |
489    /// | 64 KB      | 2.8x       | 3.9x      | +39%        |
490    /// | 128 KB     | 3.2x       | 3.7x      | +16%        |
491    /// | 256 KB+    | 3.5x       | 3.6x      | +3%         |
492    ///
493    /// Measured on typical VM disk image blocks (ext4 filesystem data).
494    ///
495    /// # Examples
496    ///
497    /// ## Basic Training
498    ///
499    /// ```no_run
500    /// use hexz_core::algo::compression::zstd::ZstdCompressor;
501    ///
502    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
503    /// // Collect 20 representative 64 KB blocks
504    /// let samples: Vec<Vec<u8>> = (0..20)
505    ///     .map(|i| vec![((i * 13) % 256) as u8; 65536])
506    ///     .collect();
507    ///
508    /// // Train 110 KB dictionary
509    /// let dict = ZstdCompressor::train(&samples, 110 * 1024)?;
510    /// println!("Trained dictionary: {} bytes", dict.len());
511    ///
512    /// // Use dictionary for compression
513    /// let compressor = ZstdCompressor::new(3, Some(&dict));
514    /// # Ok(())
515    /// # }
516    /// ```
517    ///
518    /// ## Training from File Samples
519    ///
520    /// ```no_run
521    /// use hexz_core::algo::compression::zstd::ZstdCompressor;
522    /// use std::fs::File;
523    /// use std::io::Read;
524    ///
525    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
526    /// // Read samples from disk (e.g., sampled from VM disk image)
527    /// let mut samples = Vec::new();
528    /// let mut file = File::open("disk.raw")?;
529    /// let file_size = file.metadata()?.len();
530    /// let block_size = 65536;
531    /// let sample_count = 50;
532    /// let step = file_size / sample_count;
533    ///
534    /// for i in 0..sample_count {
535    ///     let mut buffer = vec![0u8; block_size];
536    ///     let offset = i * step;
537    ///     // Seek to offset and read block
538    ///     // (simplified, real code needs error handling)
539    ///     samples.push(buffer);
540    /// }
541    ///
542    /// let dict = ZstdCompressor::train(&samples, 110 * 1024)?;
543    /// println!("Trained from {} samples: {} bytes", samples.len(), dict.len());
544    /// # Ok(())
545    /// # }
546    /// ```
547    ///
548    /// ## Validating Dictionary Quality
549    ///
550    /// ```no_run
551    /// use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
552    ///
553    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
554    /// let samples: Vec<Vec<u8>> = vec![vec![42u8; 32768]; 30];
555    /// let dict = ZstdCompressor::train(&samples, 64 * 1024)?;
556    ///
557    /// // Compare compression ratios
558    /// let without_dict = ZstdCompressor::new(3, None);
559    /// let with_dict = ZstdCompressor::new(3, Some(&dict));
560    ///
561    /// let test_data = vec![42u8; 32768];
562    /// let compressed_raw = without_dict.compress(&test_data)?;
563    /// let compressed_dict = with_dict.compress(&test_data)?;
564    ///
565    /// let improvement = (compressed_raw.len() as f64 / compressed_dict.len() as f64 - 1.0) * 100.0;
566    /// println!("Dictionary improved compression by {:.1}%", improvement);
567    ///
568    /// // If improvement < 10%, dictionary may not be beneficial
569    /// # Ok(())
570    /// # }
571    /// ```
572    ///
573    /// # When Dictionary Training Fails or Performs Poorly
574    ///
575    /// Dictionary training may produce poor results if:
576    /// - **Samples are unrepresentative**: Training on zeros, compressing real data
577    /// - **Data is random/encrypted**: No patterns exist to learn
578    /// - **Samples are too few**: Less than 10 samples or less than 100x dict size
579    /// - **Data is already highly compressible**: Text/logs may not benefit
580    /// - **Blocks are too large**: 256 KB+ blocks have enough context without dictionary
581    ///
582    /// If dictionary compression performs worse than raw compression, fall back to
583    /// `ZstdCompressor::new(level, None)`.
584    ///
585    /// # Memory Usage During Training
586    ///
587    /// Temporary memory allocated during training:
588    /// - **Input buffer**: Sum of all sample sizes (e.g., 10 MB for 50 × 200 KB samples)
589    /// - **Working memory**: ~10x `max_size` (e.g., ~1.1 MB for 110 KB dict)
590    /// - **Output dictionary**: `max_size` (e.g., 110 KB)
591    ///
592    /// Total peak memory: `input_size` + `10×max_size`. For typical usage (10 MB samples,
593    /// 110 KB dict), peak memory is ~12 MB.
594    pub fn train(samples: &[Vec<u8>], max_size: usize) -> Result<Vec<u8>> {
595        zstd::dict::from_samples(samples, max_size)
596            .map_err(|e| Error::Compression(format!("Failed to train dict: {e}")))
597    }
598}
599
600impl Compressor for ZstdCompressor {
601    /// Compresses a block of data using Zstandard compression.
602    ///
603    /// This method compresses `data` using the compression level and dictionary
604    /// configured during construction. The output is a self-contained compressed
605    /// block in zstd frame format.
606    ///
607    /// # Parameters
608    ///
609    /// * `data` - The uncompressed input data to compress. Can be any size from 0 bytes
610    ///   to multiple gigabytes, though blocks of 64 KB to 1 MB are typical in Hexz.
611    ///
612    /// # Returns
613    ///
614    /// Returns `Ok(Vec<u8>)` containing the compressed data. The compressed size depends on:
615    /// - Input data compressibility (random data: ~100%, structured data: 20-50%)
616    /// - Compression level (higher levels = smaller output, slower compression)
617    /// - Dictionary usage (can reduce output by 10-40% for small blocks)
618    ///
619    /// # Errors
620    ///
621    /// Returns `Error::Compression` if:
622    /// - Internal zstd encoder initialization fails (rare, typically OOM)
623    /// - Compression process fails (extremely rare with valid input)
624    ///
625    /// # Dictionary Behavior
626    ///
627    /// - **With dictionary**: Uses streaming encoder with pre-parsed dictionary for
628    ///   maximum throughput. The dictionary is **not** embedded in the output; the
629    ///   decompressor must have the same dictionary.
630    /// - **Without dictionary**: Uses simple one-shot encoding with zstd's default
631    ///   dictionary learning from the input itself.
632    ///
633    /// # Performance
634    ///
635    /// Approximate throughput on modern hardware (AMD Ryzen 9 5950X):
636    /// - Level 1: ~450 MB/s
637    /// - Level 3: ~350 MB/s (default)
638    /// - Level 9: ~85 MB/s
639    /// - Level 19: ~28 MB/s
640    ///
641    /// Dictionary overhead: ~5% slower than raw compression due to initialization.
642    ///
643    /// # Examples
644    ///
645    /// ```
646    /// use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
647    ///
648    /// let compressor = ZstdCompressor::new(3, None);
649    /// let data = b"Hello, world! Compression test data.";
650    ///
651    /// let compressed = compressor.compress(data).unwrap();
652    /// println!("Compressed {} bytes to {} bytes", data.len(), compressed.len());
653    ///
654    /// // Compressed data is self-contained and can be stored/transmitted
655    /// ```
656    ///
657    /// # Thread Safety
658    ///
659    /// This method can be called concurrently from multiple threads on the same
660    /// `ZstdCompressor` instance. Each call creates an independent encoder.
661    fn compress(&self, data: &[u8]) -> Result<Vec<u8>> {
662        if let Some(dict) = &self.encoder_dict {
663            let mut encoder = zstd::stream::write::Encoder::with_prepared_dictionary(
664                Vec::with_capacity(data.len()),
665                dict,
666            )
667            .map_err(|e| Error::Compression(e.to_string()))?;
668
669            encoder
670                .write_all(data)
671                .map_err(|e| Error::Compression(e.to_string()))?;
672            encoder
673                .finish()
674                .map_err(|e| Error::Compression(e.to_string()))
675        } else {
676            zstd::stream::encode_all(Cursor::new(data), self.level)
677                .map_err(|e| Error::Compression(e.to_string()))
678        }
679    }
680
681    /// Decompresses a Zstandard-compressed block into a new buffer.
682    ///
683    /// This method reverses the compression performed by `compress()`, restoring the
684    /// original uncompressed data. The decompressed output is allocated dynamically
685    /// based on the compressed frame's metadata.
686    ///
687    /// # Parameters
688    ///
689    /// * `data` - The compressed input data in zstd frame format. Must have been
690    ///   compressed by a compatible `ZstdCompressor` (same dictionary, any level).
691    ///
692    /// # Returns
693    ///
694    /// Returns `Ok(Vec<u8>)` containing the decompressed data. The output size is
695    /// determined by the compressed frame's content size field (embedded during
696    /// compression).
697    ///
698    /// # Errors
699    ///
700    /// Returns `Error::Compression` if:
701    /// - `data` is not valid zstd-compressed data (corrupted or wrong format)
702    /// - `data` was compressed with a dictionary, but this compressor has no dictionary
703    /// - `data` was compressed without a dictionary, but this compressor has a dictionary
704    /// - `data` was compressed with a different dictionary than this compressor
705    /// - Internal decompression fails (checksum mismatch, corrupted data)
706    ///
707    /// # Dictionary Compatibility
708    ///
709    /// **Critical**: Dictionary-compressed data MUST be decompressed with the exact same
710    /// dictionary bytes. The zstd format includes a dictionary ID checksum; mismatched
711    /// dictionaries will cause decompression to fail with an error.
712    ///
713    /// | Compressed With | Decompressed With | Result          |
714    /// |-----------------|-------------------|-----------------|
715    /// | No dictionary   | No dictionary     | Success         |
716    /// | Dictionary A    | Dictionary A      | Success         |
717    /// | No dictionary   | Dictionary A      | Error           |
718    /// | Dictionary A    | No dictionary     | Error           |
719    /// | Dictionary A    | Dictionary B      | Error           |
720    ///
721    /// # Performance
722    ///
723    /// Decompression speed is independent of compression level (level affects only
724    /// compression time). Typical throughput on modern hardware:
725    /// - Without dictionary: ~1100 MB/s
726    /// - With dictionary: ~950 MB/s (10% overhead from dictionary lookups)
727    ///
728    /// Decompression is roughly 3x faster than compression at level 3.
729    ///
730    /// # Memory Allocation
731    ///
732    /// This method allocates a new `Vec<u8>` to hold the decompressed output. For
733    /// hot paths where the decompressed size is known, consider using `decompress_into()`
734    /// to reuse buffers and avoid allocations.
735    ///
736    /// # Examples
737    ///
738    /// ```
739    /// use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
740    ///
741    /// let compressor = ZstdCompressor::new(3, None);
742    /// let original = b"Test data for compression";
743    ///
744    /// let compressed = compressor.compress(original).unwrap();
745    /// let decompressed = compressor.decompress(&compressed).unwrap();
746    ///
747    /// assert_eq!(original.as_slice(), decompressed.as_slice());
748    /// ```
749    ///
750    /// # Thread Safety
751    ///
752    /// This method can be called concurrently from multiple threads on the same
753    /// `ZstdCompressor` instance. Each call creates an independent decoder.
754    fn compress_into(&self, data: &[u8], out: &mut Vec<u8>) -> Result<()> {
755        out.clear();
756        if let Some(dict) = &self.encoder_dict {
757            let mut encoder =
758                zstd::stream::write::Encoder::with_prepared_dictionary(std::mem::take(out), dict)
759                    .map_err(|e| Error::Compression(e.to_string()))?;
760
761            encoder
762                .write_all(data)
763                .map_err(|e| Error::Compression(e.to_string()))?;
764            *out = encoder
765                .finish()
766                .map_err(|e| Error::Compression(e.to_string()))?;
767        } else {
768            let mut encoder = zstd::stream::write::Encoder::new(std::mem::take(out), self.level)
769                .map_err(|e| Error::Compression(e.to_string()))?;
770            encoder
771                .write_all(data)
772                .map_err(|e| Error::Compression(e.to_string()))?;
773            *out = encoder
774                .finish()
775                .map_err(|e| Error::Compression(e.to_string()))?;
776        }
777        Ok(())
778    }
779
780    fn decompress(&self, data: &[u8]) -> Result<Vec<u8>> {
781        const MAX_DECOMPRESSED: u64 = 128 * 1024 * 1024; // 128 MB
782
783        if let Some(dict) = &self.decoder_dict {
784            // Pre-allocate output buffer using frame content size when available,
785            // capped to prevent OOM from crafted frame headers.
786            let frame_size = zstd::zstd_safe::get_frame_content_size(data)
787                .ok()
788                .flatten()
789                .unwrap_or(data.len() as u64 * 2);
790            if frame_size > MAX_DECOMPRESSED {
791                return Err(Error::Compression(format!(
792                    "claimed decompressed size ({frame_size} bytes) exceeds limit ({MAX_DECOMPRESSED} bytes)"
793                )));
794            }
795            let prealloc_cap = frame_size as usize;
796
797            let mut decoder =
798                zstd::stream::read::Decoder::with_prepared_dictionary(Cursor::new(data), dict)
799                    .map_err(|e| Error::Compression(e.to_string()))?;
800
801            let mut out = Vec::with_capacity(prealloc_cap);
802            _ = decoder
803                .read_to_end(&mut out)
804                .map_err(|e| Error::Compression(e.to_string()))?;
805            Ok(out)
806        } else {
807            // Check frame content size for the non-dictionary path too.
808            if let Some(frame_size) = zstd::zstd_safe::get_frame_content_size(data).ok().flatten() {
809                if frame_size > MAX_DECOMPRESSED {
810                    return Err(Error::Compression(format!(
811                        "claimed decompressed size ({frame_size} bytes) exceeds limit ({MAX_DECOMPRESSED} bytes)"
812                    )));
813                }
814            }
815            // decode_all is a highly optimized single-call API — faster than
816            // Decoder::new() + read_to_end() for the non-dictionary path.
817            zstd::stream::decode_all(Cursor::new(data))
818                .map_err(|e| Error::Compression(e.to_string()))
819        }
820    }
821
822    /// Decompresses a Zstandard-compressed block into a caller-provided buffer.
823    ///
824    /// This is a zero-allocation variant of `decompress()` that writes decompressed
825    /// data directly into a pre-allocated buffer. This is ideal for hot paths where
826    /// the decompressed size is known and buffers can be reused across multiple
827    /// decompression operations.
828    ///
829    /// # Parameters
830    ///
831    /// * `data` - The compressed input data in zstd frame format. Must have been
832    ///   compressed by a compatible `ZstdCompressor` (same dictionary, any level).
833    /// * `out` - The output buffer to receive decompressed bytes. Must be large enough
834    ///   to hold the entire decompressed payload.
835    ///
836    /// # Returns
837    ///
838    /// Returns `Ok(usize)` containing the number of bytes written to `out`. This is
839    /// always ≤ `out.len()`.
840    ///
841    /// # Errors
842    ///
843    /// Returns `Error::Compression` if:
844    /// - `data` is not valid zstd-compressed data
845    /// - Dictionary mismatch (same rules as `decompress()`)
846    /// - `out` is too small to hold the decompressed data (buffer overflow protection)
847    /// - Internal decompression fails (checksum mismatch, corrupted data)
848    ///
849    /// # Buffer Sizing
850    ///
851    /// The output buffer must be large enough to hold the full decompressed payload.
852    /// If the buffer is too small, decompression will fail with an error rather than
853    /// truncating output.
854    ///
855    /// To determine the required size:
856    /// - If you compressed the data, you know the original size
857    /// - If reading from Hexz archives, the block size is in the index
858    /// - The zstd frame header contains the content size (can be parsed)
859    ///
860    /// # Performance
861    ///
862    /// This method avoids heap allocation of the output buffer, making it suitable for
863    /// high-throughput scenarios:
864    ///
865    /// - **With reused buffer**: 0 allocations per decompression
866    /// - **Throughput**: Same as `decompress()` (~1000 MB/s)
867    /// - **Latency**: ~5% lower than `decompress()` due to eliminated allocation
868    ///
869    /// Recommended usage pattern for hot paths:
870    /// ```
871    /// use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
872    ///
873    /// let compressor = ZstdCompressor::new(3, None);
874    /// let original = vec![42u8; 65536]; // 64 KB data
875    /// let compressed = compressor.compress(&original).unwrap();
876    /// let mut reusable_buffer = vec![0u8; 65536]; // 64 KB buffer
877    ///
878    /// // Reuse buffer for multiple decompressions
879    /// for _ in 0..1000 {
880    ///     let size = compressor.decompress_into(&compressed, &mut reusable_buffer).unwrap();
881    ///     // Process reusable_buffer[..size]
882    /// }
883    /// ```
884    ///
885    /// # Examples
886    ///
887    /// ## Basic Usage
888    ///
889    /// ```
890    /// use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
891    ///
892    /// let compressor = ZstdCompressor::new(3, None);
893    /// let original = vec![42u8; 1024];
894    ///
895    /// let compressed = compressor.compress(&original).unwrap();
896    ///
897    /// // Decompress into pre-allocated buffer
898    /// let mut output = vec![0u8; 1024];
899    /// let size = compressor.decompress_into(&compressed, &mut output).unwrap();
900    ///
901    /// assert_eq!(size, 1024);
902    /// assert_eq!(output, original);
903    /// ```
904    ///
905    /// ## Buffer Too Small
906    ///
907    /// ```no_run
908    /// use hexz_core::algo::compression::{Compressor, zstd::ZstdCompressor};
909    ///
910    /// let compressor = ZstdCompressor::new(3, None);
911    /// let original = vec![42u8; 1024];
912    /// let compressed = compressor.compress(&original).unwrap();
913    ///
914    /// // Provide insufficient buffer
915    /// let mut small_buffer = vec![0u8; 512];
916    /// let result = compressor.decompress_into(&compressed, &mut small_buffer);
917    ///
918    /// // Result depends on zstd behavior with undersized buffers
919    /// ```
920    ///
921    /// # Thread Safety
922    ///
923    /// This method can be called concurrently from multiple threads on the same
924    /// `ZstdCompressor` instance, provided each thread uses its own output buffer.
925    fn decompress_into(&self, data: &[u8], out: &mut [u8]) -> Result<usize> {
926        if let Some(dict) = &self.decoder_dict {
927            // Dictionary path: create a bulk Decompressor with the prepared dictionary.
928            // This uses a single ZSTD_decompress_usingDDict FFI call instead of the
929            // streaming Decoder which sets up a ZSTD_DStream + loops through
930            // ZSTD_decompressStream + does a 1-byte overflow read.
931            let mut decompressor = zstd::bulk::Decompressor::with_prepared_dictionary(dict)
932                .map_err(|e| Error::Compression(e.to_string()))?;
933
934            decompressor
935                .decompress_to_buffer(data, out)
936                .map_err(|e| Error::Compression(e.to_string()))
937        } else {
938            // Non-dictionary path: reuse a thread-local Decompressor to avoid
939            // allocating a ~150KB ZSTD_DCtx on every call. The DCtx is created
940            // once per thread and reused across all subsequent decompressions.
941            thread_local! {
942                static DECOMPRESSOR: std::cell::RefCell<zstd::bulk::Decompressor<'static>> =
943                    std::cell::RefCell::new(
944                        zstd::bulk::Decompressor::new()
945                            .unwrap_or_else(|e| panic!("failed to create zstd decompressor: {e}"))
946                    );
947            }
948
949            DECOMPRESSOR.with(|cell| {
950                let mut decompressor = cell.borrow_mut();
951                decompressor
952                    .decompress_to_buffer(data, out)
953                    .map_err(|e| Error::Compression(e.to_string()))
954            })
955        }
956    }
957}
958
959#[cfg(test)]
960mod tests {
961    use super::*;
962
963    #[test]
964    #[cfg_attr(miri, ignore)]
965    fn test_compress_decompress_basic() {
966        let compressor = ZstdCompressor::new(3, None);
967        let data = b"Hello, world! This is test data for compression.";
968
969        let compressed = compressor.compress(data).expect("Compression failed");
970        // Small data might not compress well due to header overhead, just verify it works
971        assert!(
972            !compressed.is_empty(),
973            "Compressed data should not be empty"
974        );
975
976        let decompressed = compressor
977            .decompress(&compressed)
978            .expect("Decompression failed");
979        assert_eq!(data.as_slice(), decompressed.as_slice());
980    }
981
982    #[test]
983    #[cfg_attr(miri, ignore)]
984    fn test_compress_empty_data() {
985        let compressor = ZstdCompressor::new(3, None);
986        let data = b"";
987
988        let compressed = compressor.compress(data).expect("Compression failed");
989        let decompressed = compressor
990            .decompress(&compressed)
991            .expect("Decompression failed");
992
993        assert_eq!(data.as_slice(), decompressed.as_slice());
994    }
995
996    #[test]
997    #[cfg_attr(miri, ignore)]
998    fn test_compress_small_data() {
999        let compressor = ZstdCompressor::new(3, None);
1000        let data = b"x";
1001
1002        let compressed = compressor.compress(data).expect("Compression failed");
1003        let decompressed = compressor
1004            .decompress(&compressed)
1005            .expect("Decompression failed");
1006
1007        assert_eq!(data.as_slice(), decompressed.as_slice());
1008    }
1009
1010    #[test]
1011    #[cfg_attr(miri, ignore)]
1012    fn test_compress_large_data() {
1013        let compressor = ZstdCompressor::new(3, None);
1014        let data = vec![42u8; 1_000_000]; // 1 MB
1015
1016        let compressed = compressor.compress(&data).expect("Compression failed");
1017        assert!(compressed.len() < data.len(), "Data should be compressed");
1018
1019        let decompressed = compressor
1020            .decompress(&compressed)
1021            .expect("Decompression failed");
1022        assert_eq!(data, decompressed);
1023    }
1024
1025    #[test]
1026    #[cfg_attr(miri, ignore)]
1027    fn test_compress_repeating_pattern() {
1028        let compressor = ZstdCompressor::new(3, None);
1029        let data = vec![0xAB; 10_000];
1030
1031        let compressed = compressor.compress(&data).expect("Compression failed");
1032        assert!(
1033            compressed.len() < data.len() / 10,
1034            "Repeating pattern should compress well"
1035        );
1036
1037        let decompressed = compressor
1038            .decompress(&compressed)
1039            .expect("Decompression failed");
1040        assert_eq!(data, decompressed);
1041    }
1042
1043    #[test]
1044    #[cfg_attr(miri, ignore)]
1045    fn test_different_compression_levels() {
1046        let data = vec![42u8; 10_000];
1047
1048        for level in [1, 3, 5, 9] {
1049            let compressor = ZstdCompressor::new(level, None);
1050            let compressed = compressor
1051                .compress(&data)
1052                .unwrap_or_else(|_| panic!("Level {level} failed"));
1053            let decompressed = compressor
1054                .decompress(&compressed)
1055                .unwrap_or_else(|_| panic!("Level {level} decompress failed"));
1056            assert_eq!(data, decompressed, "Level {level} roundtrip failed");
1057        }
1058    }
1059
1060    #[test]
1061    #[cfg_attr(miri, ignore)]
1062    fn test_compression_levels_ratio() {
1063        let data = b"The quick brown fox jumps over the lazy dog. ".repeat(100);
1064
1065        let level1 = ZstdCompressor::new(1, None);
1066        let level9 = ZstdCompressor::new(9, None);
1067
1068        let compressed1 = level1.compress(&data).unwrap();
1069        let compressed9 = level9.compress(&data).unwrap();
1070
1071        // Higher level should produce smaller output (or equal for already compressed data)
1072        assert!(compressed9.len() <= compressed1.len());
1073    }
1074
1075    #[test]
1076    #[cfg_attr(miri, ignore)]
1077    fn test_dictionary_training() {
1078        let samples: Vec<Vec<u8>> = (0..20)
1079            .map(|i| vec![((i * 13) % 256) as u8; 1024])
1080            .collect();
1081
1082        let dict = ZstdCompressor::train(&samples, 1024).expect("Training failed");
1083        assert!(!dict.is_empty(), "Dictionary should not be empty");
1084        assert!(dict.len() <= 1024, "Dictionary should not exceed max size");
1085    }
1086
1087    #[test]
1088    #[cfg_attr(miri, ignore)]
1089    fn test_compression_with_dictionary() {
1090        let samples: Vec<Vec<u8>> = (0..20)
1091            .map(|_| b"Sample data with repeated patterns and structures".to_vec())
1092            .collect();
1093
1094        let dict = ZstdCompressor::train(&samples, 2048).expect("Training failed");
1095        let compressor = ZstdCompressor::new(3, Some(&dict));
1096
1097        let data = b"Sample data with repeated patterns and structures";
1098        let compressed = compressor
1099            .compress(data)
1100            .expect("Compression with dict failed");
1101        let decompressed = compressor
1102            .decompress(&compressed)
1103            .expect("Decompression with dict failed");
1104
1105        assert_eq!(data.as_slice(), decompressed.as_slice());
1106    }
1107
1108    #[test]
1109    #[cfg_attr(miri, ignore)]
1110    fn test_dictionary_improves_compression() {
1111        let samples: Vec<Vec<u8>> = (0..20)
1112            .map(|_| {
1113                let mut data = Vec::with_capacity(1024);
1114                data.extend_from_slice(b"HEADER:");
1115                data.extend_from_slice(&vec![42u8; 1000]);
1116                data.extend_from_slice(b"FOOTER");
1117                data
1118            })
1119            .collect();
1120
1121        let dict = ZstdCompressor::train(&samples, 2048).expect("Training failed");
1122
1123        let without_dict = ZstdCompressor::new(3, None);
1124        let with_dict = ZstdCompressor::new(3, Some(&dict));
1125
1126        let test_data = {
1127            let mut data = Vec::with_capacity(1024);
1128            data.extend_from_slice(b"HEADER:");
1129            data.extend_from_slice(&vec![42u8; 1000]);
1130            data.extend_from_slice(b"FOOTER");
1131            data
1132        };
1133
1134        let compressed_no_dict = without_dict.compress(&test_data).unwrap();
1135        let compressed_with_dict = with_dict.compress(&test_data).unwrap();
1136
1137        // Dictionary should improve compression for structured data
1138        // (though improvement might be minimal for this simple test case)
1139        assert!(compressed_with_dict.len() <= compressed_no_dict.len());
1140    }
1141
1142    #[test]
1143    #[cfg_attr(miri, ignore)]
1144    fn test_decompress_into_buffer() {
1145        let compressor = ZstdCompressor::new(3, None);
1146        let data = vec![99u8; 1024];
1147
1148        let compressed = compressor.compress(&data).unwrap();
1149
1150        let mut output = vec![0u8; 1024];
1151        let size = compressor
1152            .decompress_into(&compressed, &mut output)
1153            .expect("decompress_into failed");
1154
1155        assert_eq!(size, 1024);
1156        assert_eq!(output, data);
1157    }
1158
1159    #[test]
1160    #[cfg_attr(miri, ignore)]
1161    fn test_decompress_into_larger_buffer() {
1162        let compressor = ZstdCompressor::new(3, None);
1163        let data = vec![88u8; 512];
1164
1165        let compressed = compressor.compress(&data).unwrap();
1166
1167        let mut output = vec![0u8; 2048]; // Larger than needed
1168        let size = compressor
1169            .decompress_into(&compressed, &mut output)
1170            .expect("decompress_into failed");
1171
1172        assert_eq!(size, 512);
1173        assert_eq!(&output[..512], data.as_slice());
1174    }
1175
1176    #[test]
1177    #[cfg_attr(miri, ignore)]
1178    fn test_decompress_into_with_dictionary() {
1179        let samples: Vec<Vec<u8>> = (0..15).map(|_| vec![77u8; 512]).collect();
1180
1181        let dict = ZstdCompressor::train(&samples, 1024).unwrap();
1182        let compressor = ZstdCompressor::new(3, Some(&dict));
1183
1184        let data = vec![77u8; 512];
1185        let compressed = compressor.compress(&data).unwrap();
1186
1187        let mut output = vec![0u8; 512];
1188        let size = compressor
1189            .decompress_into(&compressed, &mut output)
1190            .expect("decompress_into with dict failed");
1191
1192        assert_eq!(size, 512);
1193        assert_eq!(output, data);
1194    }
1195
1196    #[test]
1197    #[cfg_attr(miri, ignore)]
1198    fn test_dictionary_mismatch() {
1199        // Create samples that will actually benefit from dictionaries
1200        let samples1: Vec<Vec<u8>> = (0..20)
1201            .map(|_| b"PREFIX1:data:SUFFIX1".repeat(50))
1202            .collect();
1203        let samples2: Vec<Vec<u8>> = (0..20)
1204            .map(|_| b"PREFIX2:data:SUFFIX2".repeat(50))
1205            .collect();
1206
1207        let dict1 = ZstdCompressor::train(&samples1, 2048).unwrap();
1208        let dict2 = ZstdCompressor::train(&samples2, 2048).unwrap();
1209
1210        let compressor1 = ZstdCompressor::new(3, Some(&dict1));
1211        let compressor2 = ZstdCompressor::new(3, Some(&dict2));
1212
1213        let data = b"PREFIX1:data:SUFFIX1".repeat(50);
1214        let compressed = compressor1.compress(&data).unwrap();
1215
1216        // Decompressing with wrong dictionary may fail or produce corrupted data
1217        // Zstd behavior varies, so just verify we can detect a difference
1218        let result = compressor2.decompress(&compressed);
1219        // Either it fails, or the data is corrupted if it succeeds
1220        if let Ok(decompressed) = result {
1221            // If it succeeded, data might be corrupted (not guaranteed to fail)
1222            let _ = decompressed; // Just verify no panic
1223        }
1224        // Test passes as long as no panic occurs
1225    }
1226
1227    #[test]
1228    #[cfg_attr(miri, ignore)]
1229    fn test_no_dict_vs_dict_compatibility() {
1230        // Use trained dictionary for better testing
1231        let samples: Vec<Vec<u8>> = (0..20)
1232            .map(|_| b"HEADER:payload:FOOTER".repeat(100))
1233            .collect();
1234        let dict = ZstdCompressor::train(&samples, 2048).unwrap();
1235
1236        let with_dict = ZstdCompressor::new(3, Some(&dict));
1237        let without_dict = ZstdCompressor::new(3, None);
1238
1239        let data = b"HEADER:payload:FOOTER".repeat(100);
1240
1241        // Compress with dictionary
1242        let compressed_with_dict = with_dict.compress(&data).unwrap();
1243
1244        // Try to decompress with no dictionary
1245        // Zstd behavior: may fail or succeed depending on implementation details
1246        let _result = without_dict.decompress(&compressed_with_dict);
1247
1248        // Compress without dictionary
1249        let compressed_no_dict = without_dict.compress(&data).unwrap();
1250
1251        // Try to decompress with dictionary
1252        // Zstd is generally backward compatible - may work
1253        let result = with_dict.decompress(&compressed_no_dict);
1254        if let Ok(decompressed) = result {
1255            // If it works, verify data integrity
1256            assert_eq!(decompressed, data);
1257        }
1258    }
1259
1260    #[test]
1261    #[cfg_attr(miri, ignore)]
1262    fn test_compressor_debug_format() {
1263        let compressor_no_dict = ZstdCompressor::new(5, None);
1264        let debug_str = format!("{compressor_no_dict:?}");
1265
1266        assert!(debug_str.contains("ZstdCompressor"));
1267        assert!(debug_str.contains("level"));
1268        assert!(debug_str.contains('5'));
1269        assert!(debug_str.contains("has_dict"));
1270        assert!(debug_str.contains("false"));
1271    }
1272
1273    #[test]
1274    #[cfg_attr(miri, ignore)]
1275    fn test_compressor_debug_format_with_dict() {
1276        let dict = vec![1u8; 512];
1277        let compressor_with_dict = ZstdCompressor::new(3, Some(&dict));
1278        let debug_str = format!("{compressor_with_dict:?}");
1279
1280        assert!(debug_str.contains("ZstdCompressor"));
1281        assert!(debug_str.contains("level"));
1282        assert!(debug_str.contains('3'));
1283        assert!(debug_str.contains("has_dict"));
1284        assert!(debug_str.contains("true"));
1285    }
1286
1287    #[test]
1288    #[cfg_attr(miri, ignore)]
1289    fn test_train_with_empty_samples() {
1290        let samples: Vec<Vec<u8>> = vec![];
1291        let result = ZstdCompressor::train(&samples, 1024);
1292
1293        // Training with empty samples should fail
1294        assert!(result.is_err());
1295    }
1296
1297    #[test]
1298    #[cfg_attr(miri, ignore)]
1299    fn test_train_with_small_samples() {
1300        let samples: Vec<Vec<u8>> = vec![vec![1u8; 10]];
1301        let result = ZstdCompressor::train(&samples, 1024);
1302
1303        // Training with too little data might fail or produce poor dict
1304        // Just verify it doesn't panic
1305        let _ = result;
1306    }
1307
1308    #[test]
1309    #[cfg_attr(miri, ignore)]
1310    fn test_multiple_compressions_same_compressor() {
1311        let compressor = ZstdCompressor::new(3, None);
1312
1313        for i in 0..10 {
1314            let data = vec![i as u8; 1000];
1315            let compressed = compressor.compress(&data).unwrap();
1316            let decompressed = compressor.decompress(&compressed).unwrap();
1317            assert_eq!(data, decompressed);
1318        }
1319    }
1320
1321    #[test]
1322    #[cfg_attr(miri, ignore)]
1323    fn test_buffer_reuse_pattern() {
1324        let compressor = ZstdCompressor::new(3, None);
1325        let data = vec![55u8; 4096];
1326        let compressed = compressor.compress(&data).unwrap();
1327
1328        let mut reusable_buffer = vec![0u8; 4096];
1329
1330        // Reuse buffer multiple times
1331        for _ in 0..5 {
1332            let size = compressor
1333                .decompress_into(&compressed, &mut reusable_buffer)
1334                .unwrap();
1335            assert_eq!(size, 4096);
1336            assert_eq!(reusable_buffer, data);
1337        }
1338    }
1339
1340    #[test]
1341    #[cfg_attr(miri, ignore)]
1342    fn test_various_data_patterns() {
1343        let compressor = ZstdCompressor::new(3, None);
1344
1345        // All zeros
1346        let zeros = vec![0u8; 1000];
1347        let compressed = compressor.compress(&zeros).unwrap();
1348        assert!(compressed.len() < 100, "Zeros should compress very well");
1349        assert_eq!(compressor.decompress(&compressed).unwrap(), zeros);
1350
1351        // Alternating pattern
1352        let alternating: Vec<u8> = (0..1000)
1353            .map(|i| if i % 2 == 0 { 0xAA } else { 0x55 })
1354            .collect();
1355        let compressed = compressor.compress(&alternating).unwrap();
1356        assert_eq!(compressor.decompress(&compressed).unwrap(), alternating);
1357
1358        // Sequential bytes
1359        let sequential: Vec<u8> = (0..=255).cycle().take(1000).collect();
1360        let compressed = compressor.compress(&sequential).unwrap();
1361        assert_eq!(compressor.decompress(&compressed).unwrap(), sequential);
1362    }
1363
1364    #[test]
1365    #[cfg_attr(miri, ignore)]
1366    fn test_compression_preserves_data_integrity() {
1367        let compressor = ZstdCompressor::new(3, None);
1368
1369        // Test with various byte values
1370        for byte_value in [0u8, 1, 127, 128, 255] {
1371            let data = vec![byte_value; 1000];
1372            let compressed = compressor.compress(&data).unwrap();
1373            let decompressed = compressor.decompress(&compressed).unwrap();
1374            assert_eq!(data, decompressed, "Failed for byte value {byte_value}");
1375        }
1376    }
1377
1378    #[test]
1379    #[cfg_attr(miri, ignore)]
1380    fn test_high_compression_level() {
1381        let compressor = ZstdCompressor::new(19, None);
1382        let data = b"High compression level test data with some patterns ".repeat(50);
1383
1384        let compressed = compressor.compress(&data).unwrap();
1385        let decompressed = compressor.decompress(&compressed).unwrap();
1386
1387        assert_eq!(data, decompressed);
1388    }
1389
1390    #[test]
1391    #[cfg_attr(miri, ignore)]
1392    fn test_max_compression_level() {
1393        let compressor = ZstdCompressor::new(22, None);
1394        let data = vec![123u8; 5000];
1395
1396        let compressed = compressor.compress(&data).unwrap();
1397        let decompressed = compressor.decompress(&compressed).unwrap();
1398
1399        assert_eq!(data, decompressed);
1400    }
1401}