§IPFRS Core
Core types and traits for the IPFRS (InterPlanetary File Replication System).
This crate provides fundamental building blocks for content-addressed storage:
- Block - Content-addressed data blocks with CID verification
- Cid - Content Identifiers for unique data addressing
- Ipld - InterPlanetary Linked Data for structured content
- Chunking - Split large files into Merkle DAG structures
- Streaming - Async readers for DAG traversal
§Quick Start
```rust
use ipfrs_core::{Block, CidBuilder};
use bytes::Bytes;

// Create a block from data
let block = Block::new(Bytes::from_static(b"Hello, IPFS!")).unwrap();
println!("CID: {}", block.cid());

// Generate a CID directly
let cid = CidBuilder::new().build(b"some data").unwrap();
println!("Generated CID: {}", cid);
```

§Chunking Large Files
```rust
use ipfrs_core::{Chunker, ChunkingConfig};

let data = vec![0u8; 1_000_000]; // 1 MB of data
let chunker = Chunker::new();
let chunked = chunker.chunk(&data).unwrap();
println!("Root CID: {}", chunked.root_cid);
println!("Chunks: {}", chunked.chunk_count);
```

§IPLD Encoding
```rust
use ipfrs_core::Ipld;
use std::collections::BTreeMap;

// Create structured data
let mut map = BTreeMap::new();
map.insert("name".to_string(), Ipld::String("example".to_string()));
map.insert("version".to_string(), Ipld::Integer(1));
let ipld = Ipld::Map(map);

// Encode to DAG-CBOR
let cbor = ipld.to_dag_cbor().unwrap();

// Decode back
let decoded = Ipld::from_dag_cbor(&cbor).unwrap();
```

§Features
- SHA2-256, SHA2-512, SHA3-256, SHA3-512, BLAKE2b, BLAKE2s, and BLAKE3 hash algorithms with SIMD acceleration
- CIDv0 and CIDv1 support with conversion
- Multibase encoding (Base32, Base58btc, Base64)
- DAG-CBOR, DAG-JSON, and DAG-JOSE codecs
- Pluggable codec registry for custom encoding/decoding
- DAG traversal and analysis utilities for Merkle DAGs
- CAR (Content Addressable aRchive) format support for data portability
- Compression support with Zstd and LZ4 algorithms for storage efficiency
- Streaming compression for efficient compression/decompression of large files
- Async streaming for large files
- LRU block cache for fast repeated access to frequently used blocks
- Apache Arrow integration for zero-copy tensor access
- Parallel batch processing with Rayon for high performance
- Parallel chunking for multi-core large file processing
- Content-defined chunking with deduplication
- Production metrics and observability with percentile tracking
Re-exports§
pub use self::arrow::{arrow_dtype_to_tensor, arrow_to_tensor_block, tensor_dtype_to_arrow, TensorBlockArrowExt};
pub use self::batch::{BatchProcessor, BatchStats};
pub use self::block::{Block, BlockBuilder, BlockMetadata, MAX_BLOCK_SIZE, MIN_BLOCK_SIZE};
pub use self::block_cache::{BlockCache, CacheStats};
pub use self::car::{CarCompressionStats, CarHeader, CarReader, CarWriter, CarWriterBuilder};
pub use self::chunking::{ChunkedFile, Chunker, ChunkingConfig, ChunkingConfigBuilder, ChunkingStrategy, DagBuilder, DagLink, DagNode, DeduplicationStats};
pub use self::cid::{codec, parse_cid, parse_cid_with_base, CidBuilder, CidExt, HashAlgorithm, MultibaseEncoding};
pub use self::codec_registry::{global_codec_registry, Codec, CodecRegistry, DagCborCodec, DagJsonCodec, RawCodec};
pub use self::compression::{compress, compression_ratio, decompress, CompressionAlgorithm};
pub use self::config::{global_config, set_global_config, Config, ConfigBuilder};
pub use self::dag::{collect_all_links, collect_unique_links, count_links_by_depth, dag_fanout_by_level, extract_links, filter_dag, find_paths_to_cid, is_dag, map_dag, subgraph_size, topological_sort, traverse_bfs, traverse_dfs, DagMetrics, DagStats};
pub use self::error::{Error, Result};
pub use self::hash::{global_hash_registry, Blake2b256Engine, Blake2b512Engine, Blake2s256Engine, Blake3Engine, CpuFeatures, HashEngine, HashRegistry, Sha256Engine, Sha3_256Engine, Sha3_512Engine, Sha512Engine};
pub use self::integration::{DeduplicationStats as TensorDeduplicationStats, TensorBatchProcessor, TensorDeduplicator, TensorStore};
pub use self::ipld::Ipld;
pub use self::jose::{JoseBuilder, JoseSignature};
pub use self::metrics::{global_metrics, Metrics, MetricsSnapshot, PercentileStats, Timer};
pub use self::parallel_chunking::{ParallelChunker, ParallelChunkingConfig, ParallelChunkingResult, ParallelDeduplicator};
pub use self::pool::{freeze_bytes, global_bytes_pool, global_cid_string_pool, BytesPool, CidStringPool, PoolStats};
pub use self::safetensors::{SafetensorInfo, SafetensorsFile};
pub use self::streaming::{read_chunked_file, AsyncBlockReader, BlockFetcher, BlockReader, DagChunkStream, MemoryBlockFetcher};
pub use self::streaming_compression::{CompressingStream, DecompressingStream, StreamingStats};
pub use self::tensor::{TensorBlock, TensorDtype, TensorMetadata, TensorShape};
pub use self::types::{BlockSize, PeerId, Priority};
Modules§
- arrow
- Apache Arrow memory layout integration for zero-copy tensor access.
- batch
- Batch processing utilities with parallel execution
- block
- Content-addressed data blocks.
- block_cache
- LRU cache for blocks
- car
- CAR (Content Addressable aRchive) format support.
- chunking
- Chunking and DAG (Directed Acyclic Graph) support for large file handling
- cid
- Content Identifier (CID) wrapper and utilities
- codec_registry
- Codec registry system for pluggable encoding/decoding.
- compression
- Compression support for block data
- config
- Centralized configuration management for IPFRS
- dag
- DAG (Directed Acyclic Graph) traversal and analysis utilities.
- error
- Error types for IPFRS operations.
- hash
- Hardware-accelerated hashing with SIMD support
- integration
- Integration utilities combining multiple ipfrs-core features.
- ipld
- IPLD (InterPlanetary Linked Data) support
- jose
- DAG-JOSE codec for encrypted and signed IPLD data
- metrics
- Metrics and observability for production monitoring
- parallel_chunking
- Parallel chunking for high-performance large file processing
- pool
- Memory pooling for frequent allocations
- safetensors
- Safetensors Format Support
- streaming
- Streaming support for reading and writing blocks
- streaming_compression
- Streaming compression and decompression support
- tensor
- Tensor-aware block types for neural network data.
- types
- Common types used across IPFRS
- utils
- Utility functions for common IPFRS operations.
Type Aliases§
- Cid
- A `Cid` whose multihash has an allocated size of 512 bits (64 bytes).