§Zipora: High-Performance Data Structures and Compression
This crate provides a comprehensive Rust implementation of advanced data structures and compression algorithms, combining high performance with a modern, memory-safe API design.
§Key Features
- Fast Containers: Optimized vector and string types with zero-copy semantics
- Succinct Data Structures: Rank-select operations with SIMD optimizations
- Advanced Tries: LOUDS, Critical-Bit, and Patricia tries with full FSA support
- Blob Storage: Memory-mapped and compressed blob storage systems
- Entropy Coding: Huffman, rANS, and dictionary-based compression algorithms
- Memory Management: Advanced allocators including memory pools and bump allocators
- Specialized Algorithms: Suffix arrays, radix sort, and multi-way merge
- Fiber-based Concurrency: High-performance async/await with work-stealing execution
- Real-time Compression: Adaptive algorithms with strict latency guarantees
- C FFI Support: Complete C API compatibility layer for gradual migration
- Memory Safety: All the performance of C++ with Rust’s memory safety guarantees
§Quick Start
```rust
use zipora::{
    FastVec, FastStr, MemoryBlobStore, BlobStore,
    LoudsTrie, Trie, GoldHashMap, HuffmanEncoder,
    MemoryPool, PoolConfig, SuffixArray, FiberPool
};

// High-performance vector with realloc optimization
let mut vec = FastVec::new();
vec.push(42).unwrap();

// Zero-copy string operations
let s = FastStr::from_string("hello world");
println!("Hash: {:x}", s.hash_fast());

// Blob storage with compression
let mut store = MemoryBlobStore::new();
let id = store.put(b"Hello, World!").unwrap();
let data = store.get(id).unwrap();

// Advanced trie operations
let mut trie = LoudsTrie::new();
trie.insert(b"hello").unwrap();
assert!(trie.contains(b"hello"));

// High-performance hash map
let mut map = GoldHashMap::new();
map.insert("key", "value").unwrap();

// Entropy coding
let encoder = HuffmanEncoder::new(b"sample data").unwrap();
let compressed = encoder.encode(b"sample data").unwrap();

// Memory pool allocation
let pool = MemoryPool::new(PoolConfig::small()).unwrap();
let chunk = pool.allocate().unwrap();

// Suffix array construction
let sa = SuffixArray::new(b"banana").unwrap();
let (pos, count) = sa.search(b"banana", b"ana");
```
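The suffix-array search in the last two lines above can be pictured with a naive plain-Rust reference: sort all suffixes, then binary-search for the range of suffixes that start with the pattern. This is only a sketch of the semantics, not zipora's implementation (which uses far faster construction), and the exact return convention of `SuffixArray::search` is an assumption here; this sketch returns the match range's start index within the suffix array plus a count.

```rust
// Naive suffix-array construction: sort suffix start positions by the
// suffix they denote. O(n^2 log n); real builders are linear-time.
fn suffix_array(text: &[u8]) -> Vec<usize> {
    let mut sa: Vec<usize> = (0..text.len()).collect();
    sa.sort_by_key(|&i| &text[i..]);
    sa
}

/// Returns (start index in `sa` of the match range, number of matches).
fn search(text: &[u8], sa: &[usize], pat: &[u8]) -> (usize, usize) {
    // First suffix that is >= the pattern.
    let lo = sa.partition_point(|&i| &text[i..] < pat);
    // Past the last suffix that starts with the pattern; because the
    // suffixes are sorted, all matches form one contiguous block.
    let hi = sa.partition_point(|&i| {
        let s = &text[i..];
        s < pat || s.starts_with(pat)
    });
    (lo, hi - lo)
}
```

Both bounds are found with `slice::partition_point`, so a lookup costs O(|pat| · log n) comparisons even in this naive version.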
Re-exports§
pub use containers::FastVec;
pub use error::Result;
pub use error::ZiporaError;
pub use string::FastStr;
pub use succinct::BitVector;
pub use succinct::BitwiseOp;
pub use succinct::CpuFeatures;
pub use succinct::RankSelect256;
pub use succinct::RankSelectSe256;
pub use blob_store::BlobStore;
pub use blob_store::MemoryBlobStore;
pub use blob_store::PlainBlobStore;
pub use fsa::CritBitTrie;
pub use fsa::FiniteStateAutomaton;
pub use fsa::LoudsTrie;
pub use fsa::PatriciaTrie;
pub use fsa::Trie;
pub use io::DataInput;
pub use io::DataOutput;
pub use io::VarInt;
pub use io::MemoryMappedInput;
pub use io::MemoryMappedOutput;
pub use hash_map::GoldHashMap;
pub use blob_store::DictionaryBlobStore;
pub use blob_store::EntropyAlgorithm;
pub use blob_store::EntropyCompressionStats;
pub use blob_store::HuffmanBlobStore;
pub use blob_store::RansBlobStore;
pub use entropy::dictionary::Dictionary;
pub use entropy::rans::RansSymbol;
pub use entropy::DictionaryBuilder;
pub use entropy::DictionaryCompressor;
pub use entropy::OptimizedDictionaryCompressor;
pub use entropy::EntropyStats;
pub use entropy::HuffmanDecoder;
pub use entropy::HuffmanEncoder;
pub use entropy::HuffmanTree;
pub use entropy::RansDecoder;
pub use entropy::RansEncoder;
pub use entropy::RansState;
pub use memory::BumpAllocator;
pub use memory::BumpArena;
pub use memory::CacheAlignedVec;
pub use memory::MemoryConfig;
pub use memory::MemoryPool;
pub use memory::MemoryStats;
pub use memory::NumaStats;
pub use memory::NumaPoolStats;
pub use memory::PoolConfig;
pub use memory::PooledBuffer;
pub use memory::PooledVec;
pub use memory::CACHE_LINE_SIZE;
pub use memory::get_numa_stats;
pub use memory::set_current_numa_node;
pub use memory::numa_alloc_aligned;
pub use memory::numa_dealloc;
pub use memory::get_optimal_numa_node;
pub use memory::init_numa_pools;
pub use memory::clear_numa_pools;
pub use memory::SecureMemoryPool;
pub use memory::SecurePoolConfig;
pub use memory::SecurePoolStats;
pub use memory::SecurePooledPtr;
pub use memory::get_global_pool_for_size;
pub use memory::get_global_secure_pool_stats;
pub use memory::size_to_class;
pub use memory::HugePage;
pub use memory::HugePageAllocator;
pub use algorithms::AlgorithmConfig;
pub use algorithms::LcpArray;
pub use algorithms::MergeSource;
pub use algorithms::MultiWayMerge;
pub use algorithms::RadixSort;
pub use algorithms::RadixSortConfig;
pub use algorithms::SuffixArray;
pub use algorithms::SuffixArrayBuilder;
pub use concurrency::AsyncBlobStore;
pub use concurrency::AsyncFileStore;
pub use concurrency::AsyncMemoryBlobStore;
pub use concurrency::ConcurrencyConfig;
pub use concurrency::Fiber;
pub use concurrency::FiberHandle;
pub use concurrency::FiberId;
pub use concurrency::FiberPool;
pub use concurrency::FiberPoolConfig;
pub use concurrency::FiberStats;
pub use concurrency::ParallelLoudsTrie;
pub use concurrency::ParallelTrieBuilder;
pub use concurrency::Pipeline;
pub use concurrency::PipelineBuilder;
pub use concurrency::PipelineStage;
pub use concurrency::PipelineStats;
pub use concurrency::Task;
pub use concurrency::WorkStealingExecutor;
pub use concurrency::WorkStealingQueue;
pub use compression::AdaptiveCompressor;
pub use compression::AdaptiveConfig;
pub use compression::Algorithm;
pub use compression::CompressionMode;
pub use compression::CompressionProfile;
pub use compression::CompressionStats;
pub use compression::Compressor;
pub use compression::CompressorFactory;
pub use compression::PerformanceRequirements;
pub use compression::RealtimeCompressor;
pub use compression::RealtimeConfig;
pub use blob_store::ZstdBlobStore;
Modules§
- algorithms
- Specialized algorithms for high-performance data processing
- blob_store
- Blob storage systems
- compression
- Real-time compression with adaptive algorithms
- concurrency
- Fiber-based concurrency and pipeline processing
- containers
- High-performance container types
- entropy
- Entropy coding and compression algorithms
- error
- Error handling for the zipora library
- ffi
- C FFI compatibility layer
- fsa
- Finite State Automata and Trie structures
- hash_map
- High-performance hash map implementations
- io
- I/O operations and streaming
- memory
- Memory management utilities and allocators
- string
- Zero-copy string operations with SIMD optimization
- succinct
- Succinct data structures with constant-time rank and select operations
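The rank and select operations named above have simple linear-time definitions, shown here as a naive plain-Rust reference. This is not zipora's `RankSelect256` API (which answers both queries in constant time using precomputed block summaries); it only pins down what the queries mean.

```rust
// rank1(bits, pos): number of set bits strictly before position `pos`.
// Naive O(n) scan; succinct structures answer this in O(1).
fn rank1(bits: &[bool], pos: usize) -> usize {
    bits[..pos].iter().filter(|&&b| b).count()
}

// select1(bits, k): position of the k-th set bit (0-indexed), if any.
// The inverse of rank1: rank1(bits, select1(bits, k) + 1) == k + 1.
fn select1(bits: &[bool], k: usize) -> Option<usize> {
    bits.iter()
        .enumerate()
        .filter(|&(_, &b)| b)
        .map(|(i, _)| i)
        .nth(k)
}
```

These two queries are the primitives on which LOUDS tries and other succinct structures in this crate are built.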
Constants§
- VERSION
- Library version information
Functions§
- has_simd_support
- Check if SIMD optimizations are available
- init
- Initialize the library (currently no-op, for future use)
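As background for the `io::VarInt` re-export: variable-length integers are classically encoded in the LEB128 style, seven payload bits per byte with the high bit as a continuation flag. The sketch below shows that scheme in plain Rust; whether zipora's `VarInt` uses exactly this wire format is an assumption.

```rust
// Encode `v` as a little-endian base-128 varint: low 7 bits per byte,
// high bit set on every byte except the last.
fn varint_encode(mut v: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte); // high bit clear: final byte
            break;
        }
        out.push(byte | 0x80); // high bit set: more bytes follow
    }
}

/// Decodes one varint from `buf`, returning (value, bytes consumed),
/// or None on truncated or over-long input.
fn varint_decode(buf: &[u8]) -> Option<(u64, usize)> {
    let mut v = 0u64;
    for (i, &b) in buf.iter().enumerate() {
        if i >= 10 {
            return None; // a u64 never needs more than 10 bytes
        }
        v |= u64::from(b & 0x7f) << (7 * i);
        if b & 0x80 == 0 {
            return Some((v, i + 1));
        }
    }
    None // ran out of input with the continuation bit still set
}
```

Small values cost one byte and the format is self-delimiting, which is why varints are the standard choice for lengths and IDs in serialized blob and trie data.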