pub trait CompressionService: StageService {
    // Required methods
    fn compress_chunk(
        &self,
        chunk: FileChunk,
        config: &CompressionConfig,
        context: &mut ProcessingContext,
    ) -> Result<FileChunk, PipelineError>;
    fn decompress_chunk(
        &self,
        chunk: FileChunk,
        config: &CompressionConfig,
        context: &mut ProcessingContext,
    ) -> Result<FileChunk, PipelineError>;
    fn estimate_compression_ratio(
        &self,
        data_sample: &[u8],
        algorithm: &CompressionAlgorithm,
    ) -> Result<f64, PipelineError>;
    fn get_optimal_config(
        &self,
        file_extension: &str,
        data_sample: &[u8],
        performance_priority: CompressionPriority,
    ) -> Result<CompressionConfig, PipelineError>;
    fn validate_config(
        &self,
        config: &CompressionConfig,
    ) -> Result<(), PipelineError>;
    fn supported_algorithms(&self) -> Vec<CompressionAlgorithm>;
    fn benchmark_algorithm(
        &self,
        algorithm: &CompressionAlgorithm,
        test_data: &[u8],
    ) -> Result<CompressionBenchmark, PipelineError>;
}
Domain service interface for compression operations in the adaptive pipeline system
This trait defines the contract for compression services that handle data compression and decompression operations. Implementations provide algorithm-specific compression logic while maintaining consistent interfaces across different compression algorithms.
§Design Principles
- Stateless Operations: All methods are stateless and thread-safe
- Chunk-Based Processing: Operates on file chunks for streaming support
- Configuration-Driven: Behavior controlled through configuration objects
- Error Handling: Comprehensive error reporting through PipelineError
- Context Integration: Integrates with processing context for state management
§Implementation Requirements
Implementations must:
- Be thread-safe (Send + Sync)
- Handle all supported compression algorithms
- Provide consistent error handling
- Support streaming operations through chunk processing
- Maintain compression metadata and statistics
§Usage Examples
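The sketch below shows a round trip through a CompressionService-style interface. Since the real crate types are not reproduced here, `FileChunk`, `CompressionConfig`, and the `RleCompressionService` implementation are simplified stand-ins for illustration; the actual types carry additional metadata and the actual services use real codecs.

```rust
// Hypothetical stand-in types; the real FileChunk and CompressionConfig
// carry more metadata (checksums, sequence numbers, algorithm selection).
#[derive(Debug, PartialEq)]
struct FileChunk {
    data: Vec<u8>,
}

struct CompressionConfig {
    level: u32, // assumed field for illustration
}

/// Hypothetical implementation using byte-level run-length encoding.
struct RleCompressionService;

impl RleCompressionService {
    /// Compress the chunk into (count, byte) pairs.
    fn compress_chunk(
        &self,
        chunk: FileChunk,
        _config: &CompressionConfig,
    ) -> Result<FileChunk, String> {
        let mut out = Vec::new();
        let mut iter = chunk.data.iter().peekable();
        while let Some(&byte) = iter.next() {
            let mut run: u8 = 1;
            while run < u8::MAX {
                match iter.peek() {
                    Some(&&b) if b == byte => {
                        iter.next();
                        run += 1;
                    }
                    _ => break,
                }
            }
            out.push(run);
            out.push(byte);
        }
        Ok(FileChunk { data: out })
    }

    /// Reverse the (count, byte) encoding produced above.
    fn decompress_chunk(
        &self,
        chunk: FileChunk,
        _config: &CompressionConfig,
    ) -> Result<FileChunk, String> {
        if chunk.data.len() % 2 != 0 {
            return Err("corrupted RLE stream".into());
        }
        let mut out = Vec::new();
        for pair in chunk.data.chunks(2) {
            out.extend(std::iter::repeat(pair[1]).take(pair[0] as usize));
        }
        Ok(FileChunk { data: out })
    }
}

fn main() {
    let service = RleCompressionService;
    let config = CompressionConfig { level: 1 };
    let original = FileChunk { data: b"aaaabbbbccccdddd".to_vec() };

    let compressed = service
        .compress_chunk(FileChunk { data: original.data.clone() }, &config)
        .unwrap();
    let restored = service.decompress_chunk(compressed, &config).unwrap();
    assert_eq!(restored, original);
    println!("round trip ok");
}
```

Note that compress and decompress both consume the input chunk and return a new one, mirroring the trait's `FileChunk -> Result<FileChunk, _>` shape.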
§Architecture Note
This trait is synchronous following DDD principles. The domain layer defines what operations exist, not how they execute. Async execution is an infrastructure concern. Infrastructure adapters can wrap this trait to provide async interfaces when needed.
§Unified Stage Interface
This trait extends StageService, providing the unified process_chunk()
method that all stages implement. The specialized compress_chunk() and
decompress_chunk() methods are maintained for backward compatibility and
internal use, but process_chunk() is the primary interface used by the
pipeline system.
§Required Methods
fn compress_chunk(
    &self,
    chunk: FileChunk,
    config: &CompressionConfig,
    context: &mut ProcessingContext,
) -> Result<FileChunk, PipelineError>
Compresses a file chunk using the specified configuration
This method compresses the data contained in a file chunk according to the provided compression configuration. The operation is stateless and can be called concurrently from multiple threads.
§Parameters
- chunk: The file chunk containing data to compress
- config: Compression configuration specifying algorithm and parameters
- context: Processing context for state management and metadata
§Returns
Returns a new FileChunk containing the compressed data, or a
PipelineError if compression fails.
§Errors
- CompressionError: Algorithm-specific compression failures
- ConfigurationError: Invalid compression configuration
- MemoryError: Insufficient memory for compression operation
- DataError: Invalid or corrupted input data
§Note on Async
This method is synchronous in the domain. For async contexts,
use AsyncCompressionAdapter from the infrastructure layer.
fn decompress_chunk(
    &self,
    chunk: FileChunk,
    config: &CompressionConfig,
    context: &mut ProcessingContext,
) -> Result<FileChunk, PipelineError>
Decompresses a file chunk using the specified configuration
This method decompresses the data contained in a file chunk that was previously compressed using a compatible compression algorithm. The decompression parameters must match those used during compression.
§Parameters
- chunk: The file chunk containing compressed data to decompress
- config: Compression configuration specifying algorithm and parameters
- context: Processing context for state management and metadata
§Returns
Returns a new FileChunk containing the decompressed data, or a
PipelineError if decompression fails.
§Errors
- DecompressionError: Algorithm-specific decompression failures
- ConfigurationError: Mismatched compression configuration
- MemoryError: Insufficient memory for decompression operation
- DataCorruptionError: Corrupted or invalid compressed data
§Note on Async
This method is synchronous in the domain. For async contexts,
use AsyncCompressionAdapter from the infrastructure layer.
fn estimate_compression_ratio(
    &self,
    data_sample: &[u8],
    algorithm: &CompressionAlgorithm,
) -> Result<f64, PipelineError>
Estimates the compression ratio for a given data sample
§Note
Parallel processing of chunks is an infrastructure concern. Use infrastructure adapters for batch/parallel operations.
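One plausible way an implementation might estimate a ratio without running the full codec is to use the Shannon entropy of the sample as a lower bound on compressed size. This is purely illustrative, not the crate's actual method; real implementations typically compress the sample with the target algorithm and measure the result.

```rust
// Hedged sketch: estimate compressed-size / original-size from byte entropy.
// The function name and error type are assumptions for illustration.

/// Returns an estimated ratio of compressed size to original size in (0.0, 1.0].
fn estimate_compression_ratio(data_sample: &[u8]) -> Result<f64, String> {
    if data_sample.is_empty() {
        return Err("empty sample".into());
    }
    // Count byte frequencies.
    let mut counts = [0u64; 256];
    for &b in data_sample {
        counts[b as usize] += 1;
    }
    let n = data_sample.len() as f64;
    // Shannon entropy in bits per byte: -sum(p * log2(p)).
    let entropy_bits: f64 = counts
        .iter()
        .filter(|&&c| c > 0)
        .map(|&c| {
            let p = c as f64 / n;
            -p * p.log2()
        })
        .sum();
    // The original uses 8 bits per byte; entropy approximates bits needed.
    Ok((entropy_bits / 8.0).max(f64::MIN_POSITIVE))
}

fn main() {
    let repetitive = vec![b'a'; 1024];
    let varied: Vec<u8> = (0u8..=255).cycle().take(1024).collect();
    let r1 = estimate_compression_ratio(&repetitive).unwrap();
    let r2 = estimate_compression_ratio(&varied).unwrap();
    assert!(r1 < r2); // repetitive data should compress better
    println!("repetitive: {:.3}, varied: {:.3}", r1, r2);
}
```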
fn get_optimal_config(
    &self,
    file_extension: &str,
    data_sample: &[u8],
    performance_priority: CompressionPriority,
) -> Result<CompressionConfig, PipelineError>
Gets the optimal compression configuration for a file type
Analyzes the file extension and data sample to recommend a configuration.
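A heuristic of this kind can be sketched as follows. The `Algorithm`/`Config` types, the extension list, and the redundancy threshold are all assumptions for illustration, not the crate's actual policy.

```rust
// Hedged sketch: pick a configuration from the file extension and a crude
// redundancy probe of the data sample.

#[derive(Debug, PartialEq)]
enum Algorithm {
    Zstd,
    Lz4,
    None,
}

#[derive(Debug)]
struct Config {
    algorithm: Algorithm,
    level: u32,
}

fn get_optimal_config(file_extension: &str, data_sample: &[u8]) -> Config {
    // Already-compressed container formats rarely benefit from recompression.
    if matches!(file_extension, "jpg" | "png" | "mp4" | "zip" | "gz") {
        return Config { algorithm: Algorithm::None, level: 0 };
    }
    // Crude redundancy probe: fraction of bytes equal to their predecessor.
    let repeats = data_sample.windows(2).filter(|w| w[0] == w[1]).count();
    let redundancy = repeats as f64 / data_sample.len().max(1) as f64;
    if redundancy > 0.3 {
        Config { algorithm: Algorithm::Zstd, level: 9 } // redundant: spend CPU
    } else {
        Config { algorithm: Algorithm::Lz4, level: 1 } // favor speed
    }
}

fn main() {
    let cfg = get_optimal_config("txt", &vec![b'x'; 64]);
    assert_eq!(cfg.algorithm, Algorithm::Zstd);
    let cfg = get_optimal_config("jpg", &[1, 2, 3]);
    assert_eq!(cfg.algorithm, Algorithm::None);
    println!("ok");
}
```

A real implementation would also weigh the `performance_priority` argument; it is omitted here to keep the sketch small.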
fn validate_config(
    &self,
    config: &CompressionConfig,
) -> Result<(), PipelineError>
Validates compression configuration
Checks if the configuration is valid and supported.
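Validation typically amounts to bounds checks per algorithm. The level ranges below mirror common library defaults (zstd 1..=22, brotli 0..=11) but are assumptions about what a concrete service might enforce, and the types are stand-ins.

```rust
// Hedged sketch: per-algorithm bounds checking of a compression level.

enum Algorithm {
    Zstd,
    Brotli,
}

struct Config {
    algorithm: Algorithm,
    level: u32,
}

fn validate_config(config: &Config) -> Result<(), String> {
    // Assumed ranges; real services would query their codec's limits.
    let (min, max) = match config.algorithm {
        Algorithm::Zstd => (1, 22),
        Algorithm::Brotli => (0, 11),
    };
    if config.level < min || config.level > max {
        return Err(format!(
            "level {} out of range {}..={}",
            config.level, min, max
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_config(&Config { algorithm: Algorithm::Zstd, level: 3 }).is_ok());
    assert!(validate_config(&Config { algorithm: Algorithm::Zstd, level: 40 }).is_err());
    println!("ok");
}
```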
fn supported_algorithms(&self) -> Vec<CompressionAlgorithm>
Gets supported algorithms
Returns list of compression algorithms supported by this implementation.
fn benchmark_algorithm(
    &self,
    algorithm: &CompressionAlgorithm,
    test_data: &[u8],
) -> Result<CompressionBenchmark, PipelineError>
Benchmarks compression performance
Tests compression performance with sample data.
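A benchmark like this generally times the codec over the sample and reports a ratio and throughput. The `Benchmark` struct and the toy run-length `compress` below are stand-ins; a real service would run its actual codec and likely report more fields (e.g., decompression speed, memory use).

```rust
// Hedged sketch: time a compression routine and derive ratio + throughput.
use std::time::Instant;

struct Benchmark {
    ratio: f64,           // compressed size / original size
    throughput_mb_s: f64, // input megabytes processed per second
}

/// Toy run-length encoding: (count, byte) pairs. Stand-in for a real codec.
fn compress(data: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut iter = data.iter().peekable();
    while let Some(&byte) = iter.next() {
        let mut run: u8 = 1;
        while run < u8::MAX {
            match iter.peek() {
                Some(&&b) if b == byte => {
                    iter.next();
                    run += 1;
                }
                _ => break,
            }
        }
        out.push(run);
        out.push(byte);
    }
    out
}

fn benchmark(test_data: &[u8]) -> Benchmark {
    let start = Instant::now();
    let compressed = compress(test_data);
    // Guard against a zero-duration measurement on tiny inputs.
    let elapsed = start.elapsed().as_secs_f64().max(f64::MIN_POSITIVE);
    Benchmark {
        ratio: compressed.len() as f64 / test_data.len().max(1) as f64,
        throughput_mb_s: (test_data.len() as f64 / 1_000_000.0) / elapsed,
    }
}

fn main() {
    let data = vec![b'a'; 1 << 20];
    let b = benchmark(&data);
    assert!(b.ratio < 0.1); // highly repetitive input compresses well
    assert!(b.throughput_mb_s > 0.0);
    println!("ratio {:.4}, {:.1} MB/s", b.ratio, b.throughput_mb_s);
}
```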