pub struct FileHeader {
pub app_version: String,
pub format_version: u16,
pub original_filename: String,
pub original_size: u64,
pub original_checksum: String,
pub output_checksum: String,
pub processing_steps: Vec<ProcessingStep>,
pub chunk_size: u32,
pub chunk_count: u32,
pub processed_at: DateTime<Utc>,
pub pipeline_id: String,
pub metadata: HashMap<String, String>,
}
File header for Adaptive Pipeline processed files (.adapipe format)
This header contains all information needed to:
- Recover the original document (filename, size, processing steps)
- Verify integrity of the processed output file we created
- Validate the restored input file matches the original exactly
§Adaptive Pipeline File Format (.adapipe)
[CHUNK_DATA][JSON_HEADER][HEADER_LENGTH][FORMAT_VERSION][MAGIC_BYTES]
Note: This is NOT a general binary file format like .png or .exe. It is specific to files processed by the Adaptive Pipeline system that have been compressed and/or encrypted with restoration metadata.
§Recovery Process
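The recovery flow can be sketched with the API documented below. This is an illustrative sketch, not the crate's actual code: `from_footer_bytes` and `reverse_step` are assumed names for the footer parser and the per-step reversal helper (the real identifiers may differ).

```rust
// Sketch of the recovery flow; method and helper names are assumptions.
let file_data = std::fs::read("document.adapipe")?;

// 1. Parse the header from the footer at the end of the file.
let (header, footer_size) = FileHeader::from_footer_bytes(&file_data)?;

// 2. Verify the processed file has not been corrupted.
assert!(header.verify_output_integrity(&file_data)?);

// 3. Reverse each processing step in descending order
//    (e.g., decrypt first, then decompress).
let mut data = file_data[..file_data.len() - footer_size].to_vec();
for step in header.get_restoration_steps() {
    data = reverse_step(step, &data)?; // hypothetical helper
}

// 4. Confirm the restored bytes match the original size and checksum.
assert!(header.validate_restored_file(&data)?);
```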
Fields§
app_version: String - Application version that created this file
format_version: u16 - File format version for backward compatibility
original_filename: String - Original input filename (for restoration)
original_size: u64 - Original file size in bytes (for validation)
original_checksum: String - SHA256 checksum of the original input file (for validation)
output_checksum: String - SHA256 checksum of this output file (for integrity verification)
processing_steps: Vec<ProcessingStep> - Processing pipeline information (for restoration)
chunk_size: u32 - Chunk size used for processing
chunk_count: u32 - Number of chunks in the processed file
processed_at: DateTime<Utc> - Processing timestamp (RFC3339)
pipeline_id: String - Pipeline ID that processed this file
metadata: HashMap<String, String> - Additional metadata for debugging/auditing
Implementations§
impl FileHeader

pub fn new(
    original_filename: String,
    original_size: u64,
    original_checksum: String,
) -> Self
Creates a new file header with default values
§Purpose
Creates a FileHeader for tracking processing metadata and enabling
file recovery. The header stores all information needed to validate
and restore processed files.
§Why
File headers provide:
- Recovery information to restore original files
- Integrity verification through checksums
- Processing history for debugging and auditing
- Version management for backward compatibility
§Arguments
- original_filename - Name of the original input file (for restoration)
- original_size - Size of the original file in bytes (for validation)
- original_checksum - SHA256 checksum of the original file (for validation)
§Returns
FileHeader with default values:
- app_version: Current package version from Cargo.toml
- format_version: Current format version (1)
- chunk_size: 1MB default
- processed_at: Current timestamp
- Empty processing steps, pipeline ID, and metadata
§Examples
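A minimal usage sketch. The import path and the literal checksum are placeholders, not taken from the crate:

```rust
use adaptive_pipeline::FileHeader; // hypothetical import path

let header = FileHeader::new(
    "report.pdf".to_string(),
    1_048_576,
    "placeholder-sha256-of-original-file".to_string(),
);
assert_eq!(header.format_version, 1); // current format version
assert!(header.processing_steps.is_empty());
```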
pub fn add_compression_step(self, algorithm: &str, level: u32) -> Self
Adds a compression step to the processing pipeline
§Purpose
Records a compression operation in the processing steps. This information is used during file recovery to decompress the data.
§Arguments
- algorithm - Name of the compression algorithm (e.g., "brotli", "gzip", "zstd", "lz4")
- level - Compression level (algorithm-specific, typically 1-9)
§Returns
Updated FileHeader with compression step added (builder pattern)
§Examples
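A builder-pattern sketch, assuming a `checksum` string computed elsewhere:

```rust
// Record brotli compression at level 6 on the processing pipeline.
let header = FileHeader::new("data.csv".to_string(), 2048, checksum)
    .add_compression_step("brotli", 6);
assert!(header.is_compressed());
assert_eq!(header.compression_algorithm(), Some("brotli"));
```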
pub fn add_encryption_step(
    self,
    algorithm: &str,
    key_derivation: &str,
    key_size: u32,
    nonce_size: u32,
) -> Self
Adds an encryption step
pub fn add_custom_step(
    self,
    step_name: &str,
    algorithm: &str,
    parameters: HashMap<String, String>,
) -> Self
Adds a custom processing step
pub fn add_processing_step(self, descriptor: ProcessingStepDescriptor) -> Self
Adds a processing step using a domain-driven ProcessingStepDescriptor. This is the preferred method: it respects the Dependency Inversion Principle (DIP) and uses Value Objects.
pub fn add_checksum_step(self, algorithm: &str) -> Self
Adds a checksum processing step
pub fn add_passthrough_step(self, algorithm: &str) -> Self
Adds a pass-through processing step
pub fn with_chunk_info(self, chunk_size: u32, chunk_count: u32) -> Self
Sets chunk processing information
pub fn with_pipeline_id(self, pipeline_id: String) -> Self
Sets pipeline ID
pub fn with_output_checksum(self, checksum: String) -> Self
Sets output file checksum (call after processing is complete)
pub fn with_metadata(self, key: String, value: String) -> Self
Adds metadata
Serializes the header to binary format for file footer
§Purpose
Converts the header to the binary footer format that is appended to processed files. The footer allows reading metadata from the end of files without scanning the entire file.
§Why
Storing metadata at the end provides:
- Efficient metadata access without reading full file
- Streaming-friendly format (header written after data)
- Simple format detection via magic bytes at end
§Binary Format
[JSON_HEADER][HEADER_LENGTH (4 bytes)][FORMAT_VERSION (2 bytes)][MAGIC_BYTES (8 bytes)]
§Returns
- Ok(Vec<u8>) - Serialized footer bytes
- Err(PipelineError::SerializationError) - JSON serialization failed
§Errors
Returns PipelineError::SerializationError if JSON serialization fails.
§Examples
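The serialization method's name is not shown in this page; the sketch below assumes it is called `to_footer_bytes` and that `output_file` is a `Vec<u8>` being assembled:

```rust
// Append the footer after all chunk data has been written.
let footer = header.to_footer_bytes()?; // hypothetical method name
// Layout: [JSON_HEADER][HEADER_LENGTH (4)][FORMAT_VERSION (2)][MAGIC_BYTES (8)]
assert!(footer.len() >= 14); // minimum footer size
output_file.extend_from_slice(&footer);
```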
Deserializes the header from file footer bytes
§Purpose
Extracts and parses the file header from the footer at the end of a processed file. This is the primary method for reading metadata from .adapipe files.
§Why
Reading from the footer enables:
- Quick metadata access without processing entire file
- Format validation before attempting recovery
- Backward compatibility checking
§Arguments
- file_data - Complete file data including footer
§Returns
- Ok((FileHeader, usize)) - Parsed header and total footer size in bytes
- Err(PipelineError) - Validation or parsing error
§Errors
Returns PipelineError when:
- File too short (< 14 bytes minimum footer size)
- Invalid magic bytes (not an .adapipe file)
- Unsupported format version
- Incomplete footer data
- Invalid UTF-8 in JSON header
- JSON deserialization fails
§Examples
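A reading sketch; the deserialization method's name is not shown on this page, so `from_footer_bytes` is an assumption:

```rust
let file_data = std::fs::read("document.adapipe")?;
let (header, footer_size) = FileHeader::from_footer_bytes(&file_data)?; // hypothetical name
// Everything before the footer is chunk data.
let chunk_data = &file_data[..file_data.len() - footer_size];
println!("{}", header.get_processing_summary());
```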
pub fn verify_output_integrity(&self, file_data: &[u8]) -> Result<bool, PipelineError>
Verifies the integrity of the processed output file
§Purpose
Validates that the processed file data has not been corrupted or tampered with by comparing its SHA256 checksum against the stored checksum.
§Why
Integrity verification provides:
- Detection of file corruption during storage or transmission
- Protection against data tampering
- Confidence in file recovery operations
§Arguments
- file_data - Complete processed file data (including footer)
§Returns
- Ok(true) - File integrity verified; checksum matches
- Ok(false) - File corrupted; checksum mismatch
- Err(PipelineError::ValidationError) - No checksum available
§Errors
Returns PipelineError::ValidationError if output_checksum is empty.
§Examples
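A verification sketch, assuming `header` was already parsed from the same file:

```rust
let file_data = std::fs::read("document.adapipe")?;
if header.verify_output_integrity(&file_data)? {
    // Checksum matches; safe to proceed with restoration.
} else {
    // File was corrupted in storage or transit.
}
```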
pub fn get_restoration_steps(&self) -> Vec<&ProcessingStep>
Gets the processing steps in reverse order for file restoration
§Purpose
Returns processing steps in the order they must be reversed to restore the original file. For example, if compression then encryption was applied, restoration must decrypt then decompress.
§Why
Processing operations must be reversed in opposite order:
- Apply: Compress → Encrypt
- Restore: Decrypt → Decompress
§Returns
Vector of processing steps sorted in descending order (highest order first)
§Examples
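An iteration sketch; whether ProcessingStep implements Debug is an assumption:

```rust
// If processing applied Compress then Encrypt, restoration visits the
// steps highest-order first: the Encrypt step, then the Compress step.
for step in header.get_restoration_steps() {
    println!("reversing step: {:?}", step); // assumes ProcessingStep: Debug
}
```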
pub fn validate_restored_file(&self, restored_data: &[u8]) -> Result<bool, PipelineError>
Validates a restored file against original specifications
§Purpose
Verifies that a restored file matches the original file exactly by checking both size and SHA256 checksum. This ensures complete recovery fidelity.
§Why
Restoration validation provides:
- Confidence that recovery was successful
- Detection of processing errors or data loss
- Verification of processing reversibility
§Arguments
restored_data- The restored/recovered file data
§Returns
- Ok(true) - Restored file matches the original (size and checksum)
- Ok(false) - Restored file does not match the original
§Examples
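A final-validation sketch, assuming `restored_data` was produced by reversing all processing steps:

```rust
// After reversing all processing steps, confirm exact recovery
// before writing the file back under its original name.
if header.validate_restored_file(&restored_data)? {
    std::fs::write(&header.original_filename, &restored_data)?;
} else {
    // Size or checksum mismatch: restoration failed.
}
```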
pub fn get_processing_summary(&self) -> String
Gets information about what processing was applied
pub fn is_compressed(&self) -> bool
Checks if the file uses compression
pub fn is_encrypted(&self) -> bool
Checks if the file uses encryption
pub fn compression_algorithm(&self) -> Option<&str>
Gets the compression algorithm if used
pub fn encryption_algorithm(&self) -> Option<&str>
Gets the encryption algorithm if used
pub fn validate(&self) -> Result<(), PipelineError>
Validates the header for consistency
Trait Implementations§
impl Clone for FileHeader
fn clone(&self) -> FileHeader
fn clone_from(&mut self, source: &Self)
impl Debug for FileHeader
impl Default for FileHeader
impl<'de> Deserialize<'de> for FileHeader
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where __D: Deserializer<'de>,
impl PartialEq for FileHeader
impl Serialize for FileHeader
impl StructuralPartialEq for FileHeader
Auto Trait Implementations§
impl Freeze for FileHeader
impl RefUnwindSafe for FileHeader
impl Send for FileHeader
impl Sync for FileHeader
impl Unpin for FileHeader
impl UnwindSafe for FileHeader
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; converts self into a Right variant otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; converts self into a Right variant otherwise.