§LCPFS - LCP File System
A modern, ZFS-inspired copy-on-write filesystem implementation in pure Rust,
designed for no_std environments such as operating system kernels. Features
post-quantum cryptography, CXL memory tiering, and ML-based prefetching.
§Overview
LCPFS provides enterprise-grade storage features with cutting-edge optimizations:
§Core Features
- Copy-on-Write (COW): All writes create new blocks, preserving data integrity
- RAID-Z (1/2/3): Multi-disk redundancy with self-healing capabilities
- Adaptive Replacement Cache (ARC): 100 GiB self-tuning cache with ghost lists
- L2ARC: Persistent SSD-backed second-level cache
- Deduplication: Block-level dedup with fast RAM-only DDT for hot data
- Tiered Compression: LZ4/ZSTD/LZMA with automatic selection
- Checksums: BLAKE3/SHA-256 integrity verification for every block
- Snapshots: Instant, space-efficient point-in-time copies
- Post-Quantum Crypto: Kyber-1024 + Hybrid KEM for quantum resistance
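The copy-on-write and snapshot behavior above can be sketched in miniature: a write never mutates a physical block in place, it appends a new block and repoints the logical mapping, so a snapshot is just a copy of the pointer table. This is an illustrative toy, not the crate's actual API; `CowStore` and its methods are hypothetical names.

```rust
// Toy copy-on-write store: physical blocks are append-only, and the
// logical-to-physical table is the only thing a write updates.
struct Block {
    data: Vec<u8>,
}

struct CowStore {
    blocks: Vec<Block>,  // append-only physical block storage
    current: Vec<usize>, // logical block -> physical block index
}

impl CowStore {
    fn new(logical_blocks: usize, block_size: usize) -> Self {
        // One shared zero-filled block backs every unwritten logical block.
        let blocks = vec![Block { data: vec![0; block_size] }];
        CowStore { blocks, current: vec![0; logical_blocks] }
    }

    /// COW write: allocate a new physical block, never overwrite.
    fn write(&mut self, logical: usize, data: Vec<u8>) {
        self.blocks.push(Block { data });
        self.current[logical] = self.blocks.len() - 1;
    }

    fn read(&self, logical: usize) -> &[u8] {
        &self.blocks[self.current[logical]].data
    }

    /// A snapshot is just a copy of the pointer table: instant and
    /// space-efficient, since no block data is duplicated.
    fn snapshot(&self) -> Vec<usize> {
        self.current.clone()
    }

    fn read_snapshot(&self, snap: &[usize], logical: usize) -> &[u8] {
        &self.blocks[snap[logical]].data
    }
}
```

Because old physical blocks are retained until no snapshot references them, a crash mid-write can never corrupt previously committed data.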
§Advanced Features
- CXL Memory Tiering: Automatic data placement across DRAM/CXL/Storage based on temperature
- Computational Storage: Offload compression/checksums to smart storage devices (~80% CPU savings)
- ML-based Prefetching: Neural network predicts I/O patterns (4→8→5 architecture)
- dRAID: Distributed spare across pool for faster rebuilds
- Direct I/O: Cache bypass for large sequential operations
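The CXL tiering feature above boils down to a placement decision driven by an access-frequency "temperature" score. A minimal sketch, with hypothetical tier names and thresholds (the crate's real policy and cutoffs are not shown here):

```rust
// Temperature-based tier placement: hot data lives in DRAM, warm data
// in CXL-attached memory, cold data on storage. Thresholds are
// illustrative, not the crate's actual tuning.
#[derive(Debug, PartialEq)]
enum Tier {
    Dram,    // hottest data
    Cxl,     // warm data in CXL-attached memory
    Storage, // cold data on disk
}

/// Pick a tier from an access-frequency "temperature" score.
fn place(temperature: u32) -> Tier {
    match temperature {
        t if t >= 100 => Tier::Dram,
        t if t >= 10 => Tier::Cxl,
        _ => Tier::Storage,
    }
}
```

A real implementation would decay temperatures over time and migrate blocks asynchronously when their tier assignment changes.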
§Architecture
LCPFS follows ZFS’s layered architecture:
┌─────────────────────────────────────────────────────────┐
│ ZPL (POSIX Layer) │
│ Files, Directories, Attributes │
├─────────────────────────────────────────────────────────┤
│ DMU (Data Management) │
│ Objects, Transactions, Datasets │
├─────────────────────────────────────────────────────────┤
│ ARC (Adaptive Cache) │
│ Block Caching, Prefetch, L2ARC │
├─────────────────────────────────────────────────────────┤
│ SPA (Storage Pool) │
│ VDEVs, I/O Pipeline, Checksums │
└─────────────────────────────────────────────────────────┘
§Usage
LCPFS is designed for kernel integration. To use it:
use lcpfs::{BlockDevice, register_device, set_log_fn};

// 1. Implement BlockDevice for your storage hardware
struct NvmeDevice { /* ... */ }

impl BlockDevice for NvmeDevice {
    fn read_block(&mut self, block: usize, buf: &mut [u8]) -> Result<(), &'static str> {
        // Hardware-specific read
        Ok(())
    }
    // ... other methods
}

// 2. Register the device with LCPFS
let device = Box::new(NvmeDevice::new());
let dev_id = register_device(device);

// 3. Set up logging (optional but recommended)
set_log_fn(|args| kprintln!("{}", args));

// 4. Initialize LCPFS
lcpfs::init();
§Feature Flags
- pqc: Enable post-quantum cryptography (Kyber-1024) for future-proof encryption
- hw-accel: Use hardware-accelerated checksums when available
§Safety
LCPFS prioritizes data integrity:
- Every block is checksummed on write and verified on read
- Corrupted blocks are automatically repaired from RAID-Z parity or mirrors
- Atomic transactions ensure consistent on-disk state
- Copy-on-write prevents in-place corruption
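The checksum-on-write / verify-on-read discipline from the list above can be sketched standalone. LCPFS uses BLAKE3/SHA-256; this toy substitutes FNV-1a so it needs no external crates, and the RAID-Z repair step is stubbed as an error. All names here are illustrative.

```rust
// Every block stores a checksum computed at write time; reads recompute
// and compare before returning data. FNV-1a stands in for BLAKE3 here.
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325; // FNV offset basis
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3); // FNV prime
    }
    h
}

struct ChecksummedBlock {
    data: Vec<u8>,
    checksum: u64,
}

/// Write path: checksum is computed once and stored with the block.
fn write_block(data: Vec<u8>) -> ChecksummedBlock {
    let checksum = fnv1a(&data);
    ChecksummedBlock { data, checksum }
}

/// Read path: verify before returning. A real filesystem would attempt
/// repair from RAID-Z parity or a mirror copy instead of just erroring.
fn read_block(block: &ChecksummedBlock) -> Result<&[u8], &'static str> {
    if fnv1a(&block.data) == block.checksum {
        Ok(&block.data)
    } else {
        Err("checksum mismatch: block corrupted")
    }
}
```

The key property is that silent corruption (bit rot, misdirected writes) is detected on every read, not just during periodic scrubs.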
§Minimum Supported Rust Version
LCPFS requires Rust 1.85 or later (2024 edition).
§Re-exports
pub use storage::zpl::O_APPEND;
pub use storage::zpl::O_CREAT;
pub use storage::zpl::O_DIRECTORY;
pub use storage::zpl::O_EXCL;
pub use storage::zpl::O_RDONLY;
pub use storage::zpl::O_RDWR;
pub use storage::zpl::O_TRUNC;
pub use storage::zpl::O_WRONLY;
pub use storage::zpl::S_IFBLK;
pub use storage::zpl::S_IFIFO;
pub use storage::zpl::S_IFLNK;
pub use storage::zpl::S_IFMT;
pub use storage::zpl::S_IFSOCK;
pub use storage::zpl::SEEK_CUR;
pub use storage::zpl::SEEK_END;
pub use storage::zpl::SEEK_SET;
§Modules
- analytics - Storage Analytics: Detailed storage usage and performance metrics.
- arch - Platform-specific implementations (entropy, timestamps, syscalls). Isolates inline assembly for portability across x86_64, AArch64, etc.
- archive - LunAr Archives: Native archive support (ZIP, TAR, 7z) with transparent access.
- branch - Git-style branching: zero-copy branches, merge, cherry-pick, and commit tracking.
- cache - Caching: ARC, L2ARC, spacemap.
- cloud - Cloud: S3 storage, cloud tiering.
- compress - Compression: LZ4, ZSTD, computational storage, GPU compression, QLoRA.
- crypto - Cryptography: AES-NI, PQC (Kyber), CSPRNG, core crypto, secure erase.
- dedup - Deduplication: DDT, fast dedup.
- defrag - Online Defragmentation: Compact fragmented files without unmounting.
- delta - Delta Sync: Efficient rsync-style synchronization.
- dictcomp - Dictionary Compression: Shared compression dictionaries.
- distributed - Distributed: cluster, Ceph-like OSD/MDS/CRUSH.
- fscore - Core filesystem structures, constants, and main implementation.
- fts - Full-Text Search: Index file contents for instant search with BM25 ranking.
- hw - Hardware acceleration: CXL, GPU/CUDA, DPU, Intel QAT, PMem, NVMe-oF, SMART.
- integrity - Integrity: checksums, scrubbing, anomaly detection.
- io - I/O pipeline and quality of service (QoS).
- lineage - Data Lineage: Track data provenance and transformations.
- lunaos - LunaOS-specific integration.
- mgmt - Pool, dataset, and volume management.
- ml - Machine learning: prefetch, classification, GF solver.
- net - Networking: NFS, SMB, replication, send/receive.
- nfs - NFS Server: Native NFSv4/v4.1 server for exporting datasets.
- notify - Filesystem Events: Real-time notification for filesystem changes (inotify-like).
- quota - User/Group Quotas: Per-user and per-group storage limits.
- raid - RAID: mirrors, RAID-Z, dRAID, erasure coding.
- s3 - S3 Gateway: Native S3-compatible API for exposing datasets as S3 buckets.
- sparse - Sparse Files: Efficient storage of files with holes.
- storage - Storage layer: DMU, ZPL, ZAP, ZIL, VDEV, ZVOL.
- streams - Alternate Data Streams: Multiple data streams per file (NTFS-style).
- telemetry - Telemetry: Prometheus/Grafana metrics export.
- thin - Thin Provisioning: Overcommit storage with on-demand block allocation.
- tier - Data tiering and maintenance.
- time - Time: Unified timestamp provider.
- timetravel - Time-travel queries: SQL-like access to historical filesystem state.
- trash - Trash / Recycle Bin: Move deleted files to trash instead of permanent delete.
- txn - Multi-File Transactions: Atomic operations across multiple files with WAL.
- util - Utilities and benchmarks.
- vault - LunaVault: Encrypted container support (VeraCrypt-compatible).
- vector - Vector search: HNSW indexing, embeddings, semantic similarity.
- wasm - WASM Storage Plugins: Sandboxed WebAssembly for custom storage policies.
§Macros
- lcpfs_println - Print a log message from LCPFS.
§Structs
- BLOCK_DEVICES - Global registry of block devices available to LCPFS.
- FileStat - POSIX-compatible file status structure.
- LcpfsCrypto - Cryptographic operations for LCPFS.
- Pool - LCPFS Storage Pool: unified API for filesystem operations.
- Properties - Dataset properties.
- RamDisk - In-memory RAM disk for testing and fallback.
§Enums
- FsError - Comprehensive error type for LCPFS operations.
- PropertySource - Property source (where the value came from).
- PropertyValue - Property value types.
§Constants
- ENCRYPTION_ACTIVE - Indicates whether encryption is currently active.
- MAX_NAME_LEN - Maximum length of a single path component (filename/dirname).
- MAX_PATH_DEPTH - Maximum path depth (number of directory components).
- MAX_PATH_LEN - Maximum total path length in bytes.
- S_IFCHR - Character device file type.
- S_IFDIR - Directory file type.
- S_IFREG - Regular file type.
- S_IRUSR - Owner read permission.
- S_IWUSR - Owner write permission.
- S_IXUSR - Owner execute permission.
§Traits
- BlockDevice - Trait for block storage devices.
§Functions
- cooperative_yield - Cooperatively yield the CPU for the specified duration.
- get_block_device - Get a block device by ID.
- get_time - Get the current time in nanoseconds.
- init - Initialize the LCPFS subsystem.
- register_device - Register a block device with LCPFS.
- scheduler_available - Check if the kernel scheduler is available.
- set_log_fn - Set the logging callback function.
- set_spawn_fn - Set the task spawn function.
- set_time_fn - Set the time provider function.
- set_yield_fn - Set the cooperative yield function.
- spawn_on_core - Spawn a background task.