# Burn Store

Advanced model storage and serialization for the Burn deep learning framework.

A comprehensive storage library for Burn that enables efficient model serialization, cross-framework interoperability, and advanced tensor management.

## Features

### Core Capabilities
- **Burnpack Format** - Native Burn format with CBOR metadata, memory-mapped loading, `ParamId` persistence for stateful training, and no-std support
- **SafeTensors Format** - Industry-standard format for secure and efficient tensor serialization
- **PyTorch Support** - Direct loading of PyTorch `.pth`/`.pt` files with automatic weight transformation
- **Zero-Copy Loading** - Memory-mapped files and lazy tensor materialization for optimal performance
- **Cross-Framework Support** - Seamless PyTorch ↔ Burn model conversion with automatic adaptations
- **Flexible Filtering** - Load/save specific model subsets with regex, exact paths, or custom predicates
- **Tensor Remapping** - Rename tensors during load/save for framework compatibility
- **No-std Support** - Burnpack and SafeTensors formats available in embedded and WASM environments
Note: no-std support for the SafeTensors format is temporarily disabled because the fix for https://github.com/huggingface/safetensors/issues/650 has not been released yet.
### Advanced Features

- **Framework Adapters** - Automatic weight transposition and parameter renaming for PyTorch compatibility
- **Lazy Transformations** - Chain tensor transformations without materializing intermediate data
- **Partial Loading** - Continue loading even when some tensors are missing
- **Custom Metadata** - Attach version info, training details, or other metadata to saved models
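The lazy-transformation idea above can be sketched in plain Rust: each step wraps the previous one as a closure, and nothing is computed until the final value is requested. This is a standalone illustration of the concept only, not burn-store's implementation; the function name `lazy_pipeline` and the reversal/scaling steps are invented for the example.

```rust
// Standalone sketch of lazy, chained transformations: each step wraps the
// previous closure, so no intermediate buffer exists until the final call.
fn lazy_pipeline(data: Vec<f32>) -> Vec<f32> {
    // Lazy source: yields the raw data only on demand.
    let source: Box<dyn Fn() -> Vec<f32>> = Box::new(move || data.clone());
    // Chain a reversal step lazily.
    let reversed: Box<dyn Fn() -> Vec<f32>> = Box::new(move || {
        let mut v = source();
        v.reverse();
        v
    });
    // Chain a scaling step lazily on top of the reversal.
    let scaled: Box<dyn Fn() -> Vec<f32>> =
        Box::new(move || reversed().into_iter().map(|x| x * 2.0).collect());
    // Only this call materializes the data.
    scaled()
}

fn main() {
    println!("{:?}", lazy_pipeline(vec![1.0, 2.0, 3.0])); // [6.0, 4.0, 2.0]
}
```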
## Quick Start

### Basic Save and Load

#### Burnpack (Native Format)

```rust
use burn_store::{BurnpackStore, ModuleSnapshot};

// Save a model with metadata
let mut store = BurnpackStore::from_file("model.bpk")
    .metadata("version", "1.0")
    .metadata("epoch", "100");
model.save_into(&mut store)?;

// Load a model (automatically memory-mapped when available)
let mut store = BurnpackStore::from_file("model.bpk");
model.load_from(&mut store)?;
```
**Performance:** Burnpack provides faster loading times and reduced memory overhead compared to other formats.

**Training State Persistence:** Burnpack automatically preserves parameter identifiers (`ParamId`) for stateful training continuation.
#### SafeTensors

```rust
use burn_store::{ModuleSnapshot, SafetensorsStore};

// Save a model
let mut store = SafetensorsStore::from_file("model.safetensors");
model.save_into(&mut store)?;

// Load a model
let mut store = SafetensorsStore::from_file("model.safetensors");
model.load_from(&mut store)?;
```
### Filtering Tensors

```rust
// Save only encoder layers
let mut store = SafetensorsStore::from_file("encoder.safetensors")
    .with_regex(r"^encoder\..*")
    .metadata("subset", "encoder");
model.save_into(&mut store)?;

// Load with multiple filter patterns (OR logic)
let mut store = SafetensorsStore::from_file("model.safetensors")
    .with_regex(r"^encoder\..*")       // Include encoder tensors
    .with_regex(r".*\.bias$")          // OR include any bias tensors
    .with_full_path("decoder.scale");  // OR include this specific tensor
model.load_from(&mut store)?;
```
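The OR logic of stacked filters can be sketched in plain Rust: a tensor path is kept if any registered predicate matches. This is an illustrative standalone sketch of the semantics only, not burn-store's implementation; `keep` and the hard-coded predicates are invented for the example.

```rust
// Standalone sketch of stacked-filter OR semantics: a tensor path is kept
// if ANY registered predicate matches it.
fn keep(path: &str) -> bool {
    let predicates: Vec<Box<dyn Fn(&str) -> bool>> = vec![
        Box::new(|p: &str| p.starts_with("encoder.")), // like .with_regex(r"^encoder\..*")
        Box::new(|p: &str| p.ends_with(".bias")),      // like .with_regex(r".*\.bias$")
        Box::new(|p: &str| p == "decoder.scale"),      // like .with_full_path("decoder.scale")
    ];
    predicates.iter().any(|f| f(path))
}

fn main() {
    assert!(keep("encoder.layer1.weight"));  // matches the first predicate
    assert!(keep("decoder.layer3.bias"));    // matches the second predicate
    assert!(keep("decoder.scale"));          // exact path match
    assert!(!keep("decoder.layer3.weight")); // matches nothing -> filtered out
    println!("OR-filter semantics hold");
}
```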
### PyTorch Interoperability

```rust
use burn_store::{BurnToPyTorchAdapter, ModuleSnapshot, PyTorchToBurnAdapter, PytorchStore, SafetensorsStore};

// Load PyTorch .pth file directly (PyTorchToBurnAdapter is applied automatically)
let mut store = PytorchStore::from_file("model.pth")
    .with_top_level_key("state_dict") // Access nested state dict
    .allow_partial(true);             // Skip unknown tensors
burn_model.load_from(&mut store)?;

// Load PyTorch model from SafeTensors
let mut store = SafetensorsStore::from_file("pytorch_model.safetensors")
    .with_from_adapter(PyTorchToBurnAdapter) // Auto-transpose linear weights
    .allow_partial(true);                    // Skip unknown PyTorch tensors
burn_model.load_from(&mut store)?;

// Save Burn model for PyTorch
let mut store = SafetensorsStore::from_file("for_pytorch.safetensors")
    .with_to_adapter(BurnToPyTorchAdapter); // Convert back to PyTorch format
burn_model.save_into(&mut store)?;
```
### Tensor Name Remapping

```rust
// Simple pattern-based remapping
let mut store = SafetensorsStore::from_file("model.safetensors")
    .with_key_remapping(r"^old_model\.", "new_model.") // old_model.X -> new_model.X
    .with_key_remapping(r"\.gamma$", ".weight")        // X.gamma -> X.weight
    .with_key_remapping(r"\.beta$", ".bias");          // X.beta -> X.bias

// Complex remapping with KeyRemapper
use burn_store::KeyRemapper;

let remapper = KeyRemapper::new()
    .add_pattern(r"^h\.(\d+)\.", "layer$1.")? // h.0 -> layer0
    .add_pattern(r"attn", "attention")?;      // attn -> attention
let mut store = SafetensorsStore::from_file("model.safetensors")
    .remap(remapper);

// Combining with PyTorch loading
let mut store = PytorchStore::from_file("model.pth")
    .with_key_remapping(r"^model\.", "")     // Remove model. prefix
    .with_key_remapping(r"norm1", "norm_1"); // norm1 -> norm_1
```
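The rename rules above apply sequentially to every tensor name. Their effect can be sketched in plain Rust; this standalone sketch uses simple prefix/suffix string rules in place of the regex patterns the store accepts, and the `remap` helper is invented for the example.

```rust
// Standalone sketch of sequential key remapping: each rule is applied in
// order to every tensor name (plain string rules stand in for regexes).
fn remap(name: &str) -> String {
    let mut key = name.to_string();
    // Rule 1: old_model.X -> new_model.X
    if let Some(rest) = key.strip_prefix("old_model.") {
        key = format!("new_model.{rest}");
    }
    // Rule 2: X.gamma -> X.weight (a common PyTorch norm-layer rename)
    if let Some(head) = key.strip_suffix(".gamma") {
        key = format!("{head}.weight");
    }
    // Rule 3: X.beta -> X.bias
    if let Some(head) = key.strip_suffix(".beta") {
        key = format!("{head}.bias");
    }
    key
}

fn main() {
    assert_eq!(remap("old_model.norm.gamma"), "new_model.norm.weight");
    assert_eq!(remap("old_model.norm.beta"), "new_model.norm.bias");
    assert_eq!(remap("head.weight"), "head.weight"); // untouched
    println!("remapping rules hold");
}
```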
### Memory Operations

```rust
// Burnpack: Save to memory buffer
let mut store = BurnpackStore::from_bytes(None)
    .with_regex(r"^encoder\..*")
    .metadata("subset", "encoder");
model.save_into(&mut store)?;
let bytes = store.get_bytes()?;

// Burnpack: Load from memory buffer (no-std compatible)
let mut store = BurnpackStore::from_bytes(Some(bytes))
    .allow_partial(true);
let result = model.load_from(&mut store)?;
println!("Applied {} tensors", result.applied.len());
if !result.missing.is_empty() {
    println!("Missing tensors: {:?}", result.missing);
}

// SafeTensors: Memory operations
let mut store = SafetensorsStore::from_bytes(None)
    .with_regex(r"^encoder\..*");
model.save_into(&mut store)?;
let bytes = store.get_bytes()?;
```

Both `BurnpackStore` and `SafetensorsStore` support no-std environments when using byte operations.
## Model Surgery and Partial Operations

Burn Store enables sophisticated model surgery operations for selectively loading, saving, and transferring parts of models.
### Direct Model-to-Model Transfer

```rust
use burn_store::{ModuleSnapshot, PathFilter};

// Direct transfer - all compatible tensors
let snapshots = model1.collect(None, None);
let result = model2.apply(snapshots, None, None);

// Selective transfer with filtering
let filter = PathFilter::new().with_regex(r"^encoder\..*");
let snapshots = model1.collect(Some(filter), None);
let result = model2.apply(snapshots, None, None);

// Transfer with path transformation
let mut snapshots = model1.collect(None, None);
for snapshot in &mut snapshots {
    // Rewrite each snapshot's path here (e.g. map "encoder.*" to
    // "backbone.*") before applying it to the target model.
}
model2.apply(snapshots, None, None);
```
### Partial Loading and Exports

```rust
// Export only specific layers
let mut store = SafetensorsStore::from_file("encoder_only.safetensors")
    .with_regex(r"^encoder\..*");
model.save_into(&mut store)?;

// Load with missing tensors allowed
let mut store = SafetensorsStore::from_file("partial.safetensors")
    .allow_partial(true);
let result = model.load_from(&mut store)?;
println!("Loaded {} tensors, {} missing", result.applied.len(), result.missing.len());
```
### Merging Multiple Models

```rust
// Merge weights from different sources
let mut merged = Vec::new();
merged.extend(base_model.collect(None, None));

// Add encoder from specialized model
let encoder_filter = PathFilter::new().with_regex(r"^encoder\..*");
merged.extend(specialized_model.collect(Some(encoder_filter), None));

// Apply merged weights
target_model.apply(merged, None, None);

// Alternative: Sequential loading from files
let mut base_store = SafetensorsStore::from_file("base_model.safetensors");
model.load_from(&mut base_store)?;

let mut encoder_store = SafetensorsStore::from_file("encoder_weights.safetensors")
    .with_regex(r"^encoder\..*")
    .allow_partial(true);
model.load_from(&mut encoder_store)?; // Overlays encoder weights
```
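The overlay behavior of sequential loading can be sketched in plain Rust: later sources override earlier ones for the keys they provide, and everything else is kept. This standalone sketch uses strings in place of tensor data, and the `overlay` helper is invented for the example.

```rust
use std::collections::HashMap;

// Standalone sketch of merge/overlay semantics: later sources override
// earlier ones for the keys they provide; everything else is kept.
fn overlay() -> HashMap<String, String> {
    let mut merged = HashMap::new();

    // The base model provides every tensor.
    merged.insert("encoder.weight".to_string(), "base".to_string());
    merged.insert("decoder.weight".to_string(), "base".to_string());

    // The specialized model overlays only its encoder subset.
    for (k, v) in [("encoder.weight", "specialized")] {
        merged.insert(k.to_string(), v.to_string()); // last write wins
    }
    merged
}

fn main() {
    let merged = overlay();
    assert_eq!(merged["encoder.weight"], "specialized"); // overlaid
    assert_eq!(merged["decoder.weight"], "base");        // untouched
    println!("merge semantics hold");
}
```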
## Complete Example: Migrating PyTorch Models

```rust
use burn_store::{ModuleSnapshot, PytorchStore, SafetensorsStore};

// Load directly from a PyTorch .pth file (automatic PyTorchToBurnAdapter)
let mut store = PytorchStore::from_file("pytorch_model.pth")
    // Access the state dict
    .with_top_level_key("state_dict")
    // Only load transformer layers
    .with_regex(r"^transformer\..*")
    // Rename layer structure to match the Burn model (h.0 -> layers.0)
    .with_key_remapping(r"\.h\.(\d+)\.", ".layers.$1.")
    // Rename attention layers (attn -> attention)
    .with_key_remapping(r"\.attn\.", ".attention.")
    // Handle missing tensors gracefully
    .allow_partial(true);

// `MyModel` stands in for your own Burn module type.
let mut model = MyModel::new(&device);
let result = model.load_from(&mut store)?;

println!("Migrated {} tensors", result.applied.len());
if !result.errors.is_empty() {
    println!("Failed to load: {:?}", result.errors);
}

// Save the migrated model in SafeTensors format
let mut save_store = SafetensorsStore::from_file("migrated_model.safetensors")
    .metadata("source", "pytorch_migration")
    .metadata("version", "1.0");
model.save_into(&mut save_store)?;
```
## Advanced Usage

### Custom Filtering with Predicates

```rust
// Custom filter function: the predicate receives the tensor path
// (exact predicate signature abbreviated here - see the crate docs)
let mut store = SafetensorsStore::from_file("model.safetensors")
    .with_predicate(|path, _| path.starts_with("encoder.") && !path.ends_with(".bias"));
```
### Working with Containers

```rust
// Filter based on container types (Linear, Conv2d, etc.) reported in the
// container path (predicate shape abbreviated here - see the crate docs)
let mut store = SafetensorsStore::from_file("model.safetensors")
    .with_predicate(|_, container_path| container_path.contains("Linear"));
```
### Handling Load Results

```rust
let result = model.load_from(&mut store)?;

// Detailed result information
println!("Applied: {} tensors", result.applied.len());
println!("Missing: {:?}", result.missing);
println!("Skipped: {:?}", result.skipped);
println!("Errors: {:?}", result.errors);
if !result.errors.is_empty() {
    // Inspect or report the individual tensor errors here.
}
```
## Benchmarks

### Loading Benchmarks

```shell
# Generate model files first (one-time setup)
# Run unified loading benchmark with default backend (NdArray CPU)
# Run with specific backend
# Run with multiple backends
```

### Saving Benchmarks

Compares three saving methods: `BurnpackStore`, `NamedMpkFileRecorder`, and `SafetensorsStore`.

```shell
# Run unified saving benchmark with default backend (NdArray CPU)
# Run with specific backend
# Run with multiple backends
```
## API Overview

### Builder Methods

The stores provide a fluent API for configuration:

#### Filtering
- `with_regex(pattern)` - Filter by regex pattern
- `with_full_path(path)` - Include specific tensor
- `with_full_paths(paths)` - Include multiple specific tensors
- `with_predicate(fn)` - Custom filter logic
- `match_all()` - Include all tensors (no filtering)
#### Remapping

- `with_key_remapping(from, to)` - Regex-based tensor renaming
- `remap(KeyRemapper)` - Complex remapping rules
#### Adapters

- `with_from_adapter(adapter)` - Loading transformations
- `with_to_adapter(adapter)` - Saving transformations
#### Configuration

- `metadata(key, value)` - Add custom metadata (Burnpack and SafeTensors)
- `allow_partial(bool)` - Continue on missing tensors
- `validate(bool)` - Toggle validation
- `with_top_level_key(key)` - Access nested dict in PyTorch files
- `overwrite(bool)` - Allow overwriting existing files (Burnpack)
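The fluent style these builders share can be sketched in plain Rust: each method consumes `self`, sets an option, and returns `self`, so calls chain. This is a standalone sketch of the pattern only; `StoreConfig` and its fields are invented for the example and are not burn-store types.

```rust
// Standalone sketch of a fluent configuration builder: each method consumes
// self, records an option, and returns self so calls chain.
#[derive(Debug, Default)]
struct StoreConfig {
    allow_partial: bool,
    validate: bool,
    metadata: Vec<(String, String)>,
}

impl StoreConfig {
    fn new() -> Self {
        // Validation defaults to on; everything else to off/empty.
        Self { validate: true, ..Default::default() }
    }
    fn allow_partial(mut self, yes: bool) -> Self {
        self.allow_partial = yes;
        self
    }
    fn validate(mut self, yes: bool) -> Self {
        self.validate = yes;
        self
    }
    fn metadata(mut self, key: &str, value: &str) -> Self {
        self.metadata.push((key.into(), value.into()));
        self
    }
}

fn main() {
    let cfg = StoreConfig::new()
        .allow_partial(true)
        .validate(false)
        .metadata("version", "1.0");
    assert!(cfg.allow_partial);
    assert!(!cfg.validate);
    assert_eq!(cfg.metadata.len(), 1);
    println!("builder configured: {cfg:?}");
}
```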
## Inspecting Burnpack Files

Generate and examine a sample file:

The example creates a sample model and outputs inspection commands for examining the binary format.
## License

This project is dual-licensed under MIT and Apache-2.0.