§Compression Core
Statistical compression for LLM prompts using intelligent filtering.
§Overview
This library reduces token usage in LLM prompts by applying statistical analysis to identify and filter out less important content while preserving semantic meaning.
§Architecture
The compression pipeline runs in four stages (a minimal sketch follows the list):

- Tokenize: convert the input to tokens using a pluggable tokenizer
- Analyze: apply statistical scoring to identify important content
- Filter: remove less important tokens/segments
- Validate: ensure compression preserves semantic quality
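
To make the stage boundaries concrete, here is a minimal sketch of how the four stages might compose. Every name in it is an illustrative placeholder, not this crate's API; the real pipeline lives in `compressor` and `statistical_filter`.

```rust
// Illustrative four-stage pipeline; all names here are hypothetical, not crate API.
fn compress_sketch(input: &str) -> String {
    // 1. Tokenize: whitespace splitting stands in for a pluggable tokenizer.
    let tokens: Vec<&str> = input.split_whitespace().collect();

    // 2. Analyze: assign each token an importance score (length as a toy proxy).
    let scores: Vec<f64> = tokens.iter().map(|t| t.len() as f64).collect();

    // 3. Filter: keep tokens scoring at or above the mean.
    let mean = scores.iter().sum::<f64>() / scores.len().max(1) as f64;
    let kept: Vec<&str> = tokens
        .iter()
        .zip(&scores)
        .filter(|(_, &s)| s >= mean)
        .map(|(t, _)| *t)
        .collect();

    // 4. Validate: fall back to the original if the filter was too aggressive.
    if kept.len() * 2 < tokens.len() {
        return input.to_string();
    }
    kept.join(" ")
}
```

A real implementation will differ in every stage; the point is only the dataflow: tokens in, scores over tokens, a keep/drop decision, and a safety check before returning the compressed text.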
 
§Example
```rust
use compression_prompt::{Compressor, CompressorConfig};

let config = CompressorConfig::default();
let compressor = Compressor::new(config);

// `input` is the prompt text to compress; `tokenizer` is a tokenizer
// instance accepted by `compress` (the crate supports pluggable tokenizers).
let result = compressor.compress(input, &tokenizer)?;

println!(
    "Saved {} tokens ({:.1}% compression)",
    result.original_tokens - result.compressed_tokens,
    (1.0 - result.compression_ratio) * 100.0
);
```

Re-exports§
pub use compressor::CompressionResult;
pub use compressor::Compressor;
pub use compressor::CompressorConfig;
pub use compressor::OutputFormat;
pub use statistical_filter::StatisticalFilter;
pub use statistical_filter::StatisticalFilterConfig;
Modules§
- compressor: Main compression pipeline and result structures.
- quality_metrics: Quality metrics for compression evaluation (model-free).
- statistical_filter: Statistical token importance filtering (LLMLingua-inspired, model-free); see the sketch after this list.
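
To make "statistical token importance" concrete, here is a minimal, self-contained sketch of the general idea behind model-free, LLMLingua-style filtering: score each token by its self-information estimated from frequencies in the input, then drop the most predictable tokens. This illustrates the concept only; the function below is hypothetical, and the scoring actually used by `statistical_filter` may differ.

```rust
use std::collections::HashMap;

/// Keep roughly the `keep_ratio` fraction of tokens with the highest
/// self-information. Illustrative only: probabilities are estimated
/// from frequencies in the input itself.
fn filter_by_self_information(text: &str, keep_ratio: f64) -> String {
    let tokens: Vec<&str> = text.split_whitespace().collect();
    if tokens.is_empty() {
        return String::new();
    }

    // Estimate token probabilities from frequencies in the input.
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for &t in &tokens {
        *counts.entry(t).or_insert(0) += 1;
    }
    let total = tokens.len() as f64;

    // Self-information: rarer tokens carry more information (-ln p).
    let scores: Vec<f64> = tokens
        .iter()
        .map(|t| -(counts[t] as f64 / total).ln())
        .collect();

    // Find the score threshold that keeps roughly `keep_ratio` of tokens.
    // Ties at the threshold may keep slightly more than requested.
    let mut sorted = scores.clone();
    sorted.sort_by(|a, b| b.partial_cmp(a).unwrap());
    let keep = ((tokens.len() as f64 * keep_ratio).ceil() as usize).min(tokens.len());
    let threshold = sorted[keep.saturating_sub(1)];

    // Keep tokens at or above the threshold, preserving original order.
    tokens
        .iter()
        .zip(&scores)
        .filter(|(_, &s)| s >= threshold)
        .map(|(t, _)| *t)
        .collect::<Vec<_>>()
        .join(" ")
}
```

With `keep_ratio = 0.6`, repeated boilerplate scores low (high probability, low self-information) and is dropped first, while rare, content-bearing tokens survive. The crate's production version of this idea is exposed through the re-exported `StatisticalFilter` and `StatisticalFilterConfig` types.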
 
Constants§
- VERSION: Library version.
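
Assuming VERSION is a string constant (the conventional pattern for a crate version), it can be surfaced in logs or CLI output:

```rust
// Embed the library version in diagnostics; assumes VERSION is a &str.
println!("compression_prompt v{}", compression_prompt::VERSION);
```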