# Burn Store

Advanced model storage and serialization for the Burn deep learning framework.

A comprehensive storage library for Burn that enables efficient model serialization, cross-framework interoperability, and advanced tensor management.
Migrating from burn-import? See the Migration Guide for help moving from
`PyTorchFileRecorder`/`SafetensorsFileRecorder` to the new Store API.
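To get a feel for the size of the change, the sketch below contrasts the two call sites. It is only an orientation, not a substitute for the Migration Guide: `ModelRecord`, `model`, and `device` come from your own model code, exact import paths can differ between Burn versions, and the two halves are alternatives rather than one runnable program.

```rust
use burn::record::FullPrecisionSettings;
use burn_import::pytorch::{LoadArgs, PyTorchFileRecorder};
use burn_store::{ModuleSnapshot, PytorchStore};

// Before (burn-import): build a record, then apply it with `load_record`.
let record: ModelRecord<B> = PyTorchFileRecorder::<FullPrecisionSettings>::default()
    .load(LoadArgs::new("model.pt".into()), &device)?;
let model = model.load_record(record);

// After (burn-store): the store applies the weights directly into the module.
let mut store = PytorchStore::from_file("model.pt");
model.load_from(&mut store)?;
```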
## Features
- **Burnpack Format** - Native Burn format with CBOR metadata, memory-mapped loading, ParamId persistence for stateful training, and no-std support
- **SafeTensors Format** - Industry-standard format for secure and efficient tensor serialization
- **PyTorch Support** - Direct loading of PyTorch `.pth`/`.pt` files with automatic weight transformation
- **Zero-Copy Loading** - Memory-mapped files and lazy tensor materialization for optimal performance
- **Flexible Filtering** - Load/save specific model subsets with regex, exact paths, or custom predicates (see the sketch after the Quick Start example)
- **Tensor Remapping** - Rename tensors during load/save for framework compatibility (also covered in the sketch below)
- **Half-Precision Storage** - Automatic F32/F16 conversion with smart defaults for reduced model file size
- **No-std Support** - Burnpack and SafeTensors formats available in embedded and WASM environments
## Quick Start
```rust
use burn_store::{
    BurnpackStore, ModuleSnapshot, PyTorchToBurnAdapter, PytorchStore, SafetensorsStore,
};

// Load from PyTorch
let mut store = PytorchStore::from_file("model.pt");
model.load_from(&mut store)?;

// Load from SafeTensors (with PyTorch adapter)
let mut store = SafetensorsStore::from_file("model.safetensors")
    .with_from_adapter(PyTorchToBurnAdapter);
model.load_from(&mut store)?;

// Save to Burnpack
let mut store = BurnpackStore::from_file("model.bpk");
model.save_into(&mut store)?;

// Save with half-precision (F32 -> F16, ~50% smaller files)
// `HalfPrecisionAdapter` is a placeholder name; use the crate's precision adapter type.
let adapter = HalfPrecisionAdapter::new();
let mut store = BurnpackStore::from_file("model_f16.bpk").with_to_adapter(adapter);
model.save_into(&mut store)?;

// Load half-precision back (F16 -> F32, same adapter)
let mut store = BurnpackStore::from_file("model_f16.bpk")
    .with_from_adapter(HalfPrecisionAdapter::new());
model.load_from(&mut store)?;
```
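The filtering and remapping features listed above compose with the same store builders. The sketch below is illustrative only: `with_regex` and `remap` stand in for whatever the crate's actual filter and remap builder methods are called, and `checkpoint.safetensors` is a placeholder path; see the documentation linked below for the real API.

```rust
use burn_store::{ModuleSnapshot, SafetensorsStore};

// Load only the encoder weights, and strip the checkpoint's "model." prefix
// so the tensor names line up with the Burn module structure.
// NOTE: `with_regex` and `remap` are illustrative names, not confirmed API.
let mut store = SafetensorsStore::from_file("checkpoint.safetensors")
    .with_regex(r"^encoder\..*")
    .remap(r"^model\.", "");
model.load_from(&mut store)?;
```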
## Documentation
For comprehensive documentation including:
- Exporting weights from PyTorch
- Loading weights into Burn models
- Saving models to various formats
- Advanced features (filtering, remapping, partial loading, zero-copy)
- API reference and troubleshooting
See the Burn Book - Saving and Loading chapter.
## Running Benchmarks
```bash
# Generate model files (one-time setup)

# Run loading benchmarks

# Run saving benchmarks

# With specific backend
```
## License
This project is dual-licensed under MIT and Apache-2.0.