Parcode
High-performance, zero-copy, lazy-loading object storage for Rust.
Parcode is a Rust persistence library designed for true lazy access.
It lets you open massive object graphs and access a single field, record, or asset without deserializing the rest of the file.
This enables capabilities previously reserved for complex databases:
- Lazy Mirrors: Navigate deep struct hierarchies without loading data from disk.
- Surgical Access: Load only the specific field, vector chunk, or map entry you need.
- $O(1)$ Map Lookups: Retrieve items from huge HashMaps instantly without full deserialization.
- Parallel Speed: Writes are fully parallelized using a Zero-Copy graph architecture.
The Innovation: Pure Rust Lazy Loading
Most libraries that offer "Lazy Loading" or "Zero-Copy" access (like FlatBuffers or Cap'n Proto) come with a heavy price: Interface Definition Languages (IDLs). You are forced to write separate schema files (.proto, .fbs), run external compilers, and deal with generated code that doesn't feel like Rust.
Parcode changes the game.
We invented a technique we call "Compile-Time Structural Mirroring (CTSM)". By simply adding #[derive(ParcodeObject)], Parcode analyzes your Rust structs at compile time and invisibly generates a Lazy Mirror API.
| Feature | FlatBuffers / Cap'n Proto | Parcode |
|---|---|---|
| Schema Definition | External IDL files (.fbs) | Standard Rust Structs |
| Build Process | Requires external CLI (flatc) | Standard cargo build |
| Refactoring | Manual sync across files | IDE Rename / Refactor |
| Developer Experience | Foreign | Native |
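As a minimal sketch of the idea (the type, fields, and import path are illustrative), the annotated struct itself is the schema; there is no separate IDL file or code-generation step:

```rust
use parcode::ParcodeObject; // assumed re-export of the derive macro
use std::collections::HashMap;

// The struct below is both the runtime type and the on-disk schema;
// deriving ParcodeObject generates its Lazy Mirror counterpart.
#[derive(ParcodeObject)]
struct Catalog {
    name: String,
    #[parcode(map)]
    entries: HashMap<u64, String>, // sharded so single entries can be fetched lazily
}
```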
More info in the whitepaper.
Installation
Add this to your Cargo.toml:
```toml
[dependencies]
parcode = "0.4"
```
To enable LZ4 compression:
```toml
[dependencies]
parcode = { version = "0.4", features = ["lz4_flex"] }
```
Usage Guide
1. Define your Data
Use #[derive(ParcodeObject)] and the #[parcode(...)] attributes to tell the engine how to shard your data.
```rust
use parcode::ParcodeObject; // derive macro that generates the Lazy Mirror API
use std::collections::HashMap;
```
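A minimal sketch of a data model, assuming hypothetical field names (settings, terrain, players, all_players) that mirror the examples below; none of them are required by the library:

```rust
#[derive(ParcodeObject)]
struct GameWorld {
    name: String,                        // no attribute: inlined into the parent's payload
    #[parcode(chunkable)]
    settings: Settings,                  // own chunk, navigable lazily
    #[parcode(chunkable)]
    terrain: Vec<u8>,                    // heavy data, untouched unless explicitly loaded
    #[parcode(map)]
    players: HashMap<u64, Player>,       // sharded by hash for O(1) .get()
    #[parcode(chunkable)]
    all_players: Vec<Player>,            // iterable lazily, element by element
}

#[derive(ParcodeObject)]
struct Settings {
    difficulty: u8,                      // inline: readable straight from the mirror
    #[parcode(chunkable)]
    history: Vec<String>,                // loaded on demand with .load()
}

#[derive(ParcodeObject)]
struct Player {
    name: String,
    level: u32,
}
```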
2. Save Data
You have two ways to save data: Simple and Configured.
A. Simple Save (Default Settings). Perfect for quick prototyping.
```rust
use parcode::Parcode;

// Construct the world (contents elided)
let world = GameWorld { /* ... */ };

// Saves with parallelism enabled (call form and path are illustrative)
Parcode::save("world.parc", &world)?;
```
B. Configured Save. Use the builder mode for finer control, such as over compression.
```rust
// Saves with LZ4 compression applied to all metadata.
// Builder argument forms are illustrative; the string mirrors the
// #[parcode(compression = "lz4")] attribute syntax.
Parcode::builder(&world)
    .compression("lz4")
    .write("world.parc")?;
```
3. Read Data (Lazy)
Here is where the magic happens. We don't load the object; we load a Mirror.
```rust
use parcode::Parcode;

// (Field names and exact call signatures below are illustrative.)

// 1. Open the file (instant, uses mmap)
let reader = Parcode::open("world.parc")?;

// 2. Get the Lazy Mirror (instant, reads only the header)
// Note: we get a 'GameWorldLazy', a generated shadow struct.
let world_mirror: GameWorldLazy = reader.load_lazy()?;

// 3. Access local fields directly (already in memory)
println!("World: {}", world_mirror.name);

// 4. Navigate the hierarchy without I/O
// 'settings' is a mirror: accessing it costs nothing.
// 'difficulty' is inline: accessing it costs nothing.
println!("Difficulty: {}", world_mirror.settings.difficulty);

// 5. Surgical load
// Only NOW do we touch disk, to load the history vector.
// The massive 'terrain' vector is NEVER loaded.
let history = world_mirror.settings.history.load()?;
```
4. Advanced Access Patterns
O(1) Map Lookup
Retrieve a single user from a million-user database without loading the database.
```rust
// .get() returns a full object.
// .get_lazy() returns a Mirror of the object! (key value shown is illustrative)
if let Some(player) = world_mirror.players.get_lazy(&42)? {
    println!("Found player at level {}", player.level);
}
```
Lazy Vector Iteration
Scan a list of heavy objects without loading their heavy payloads.
```rust
// Assume 'all_players' is a Vec<Player>; each element comes back as a lazy proxy.
for player_proxy in world_mirror.all_players.iter()? {
    println!("{}", player_proxy.name); // heavy payloads stay on disk until loaded
}
```
Advanced Features
Generic I/O: Write to Memory/Network
Parcode isn't limited to files. You can serialize directly to any std::io::Write destination.
```rust
let mut buffer: Vec<u8> = Vec::new();

// Serialize directly to RAM (builder argument forms are illustrative)
Parcode::builder(&world)
    .compression("lz4")
    .write_to_writer(&mut buffer)?;

// 'buffer' now contains the full Parcode file structure
```
Synchronous Mode
For environments where threading is not available (WASM, embedded) or to reduce memory overhead.
```rust
// Single-threaded write path (argument forms are illustrative)
Parcode::builder(&world)
    .compression("lz4")
    .write_sync("world.parc")?;
```
Inspector
Parcode includes tools to analyze the structure of your files without deserializing them.
```rust
use parcode::Parcode;

// Produce a structural report without deserializing payloads
// (entry point and path are illustrative)
let report = Parcode::inspect("world.parc")?;
println!("{}", report);
```
Output:
```text
=== PARCODE INSPECTOR REPORT ===
Root Offset: 550368
[GRAPH LAYOUT]
└── [Generic Container] Size: 1b | Algo: None | Children: 2
    ├── [Vec Container] Size: 13b | Algo: LZ4 | Children: 32 [Vec<50000> items]
    └── [Map Container] Size: 4b | Algo: None | Children: 4 [Hashtable with 4 buckets]
```
Macro Attributes Reference
Control exactly how your data structure maps to disk using #[parcode(...)].
| Attribute | Effect | Best For |
|---|---|---|
| (none) | Field is serialized into the parent's payload. | Small primitives (u32, bool), short Strings, flags. |
| #[parcode(chunkable)] | Field is stored in its own independent Chunk. | Structs, Vectors, or fields you want to load lazily (.load()). |
| #[parcode(map)] | Field (HashMap) is sharded by hash. | Large Dictionaries/Indices where you need random access (.get()). |
| #[parcode(compression="lz4")] | Overrides compression for this chunk. | Highly compressible data (text, save states). |
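As a small illustration (the struct, field names, and whether the attributes stack this way are assumptions, not documented syntax), a single chunk can opt into LZ4 independently of the file-wide setting:

```rust
#[derive(ParcodeObject)]
struct SaveSlot {
    slot_id: u32,                      // no attribute: inlined into the parent's payload
    #[parcode(chunkable)]
    #[parcode(compression = "lz4")]    // overrides compression for this chunk only
    save_state: String,                // highly compressible text
}
```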
Benchmarks
Scenario: Opening a persisted world state (~10 MB) and accessing a single piece of data.
| Serializer | Cold Start | Deep Field Access | Map Lookup | Total (Cold + Targeted Access) |
|---|---|---|---|---|
| Parcode | ~1.38 ms | ~0.000017 ms | ~0.00016 ms | ~1.38 ms + ~0.0001 ms / target |
| Cap’n Proto | ~60 ms | ~0.000046 ms | ~0.00437 ms | ~60 ms + ~0.004 ms / target |
| Postcard | ~80 ms | ~0.000017 ms | ~0.000017 ms | ~80 ms + ~0.00002 ms / target |
| Bincode | ~299 ms | ~0.000015 ms | ~0.0000025 ms | ~299 ms + ~0.00001 ms / target |
This table shows:
- Cold Start dominates total latency for traditional serializers
- Targeted access cost is negligible once data is loaded
- Therefore, real-world point access latency ≈ cold-start time
Parcode is the only system where:
- Cold start is constant
- Access cost scales only with the data actually requested
- Unused data is never deserialized
Key takeaway
True laziness is not about fast reads — it is about avoiding unnecessary work.
Parcode minimizes observable latency by paying only for:
Cold start (structural metadata) + exactly the data you touch (and its shard)
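As a rough worked example using the table above: a cold open plus 1,000 map lookups costs about 1.38 ms + 1,000 × 0.00016 ms ≈ 1.54 ms with Parcode, whereas Bincode pays its ~299 ms cold start before the first lookup is even possible.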
Benchmarks run on NVMe SSD. Parallel throughput scales with cores.
License
This project is licensed under the MIT license.
Built for the Rust community by RetypeOS.