# ember-persistence
durability layer for ember. handles append-only file logging, point-in-time snapshots, and crash recovery.
each shard gets its own persistence files (shard-{id}.aof and shard-{id}.snap), keeping the shared-nothing architecture all the way down to disk.
## what's in here
- aof — append-only file writer/reader with CRC32 integrity checks. binary TLV format, configurable fsync (always, every-second, OS-managed). gracefully handles truncated records from mid-write crashes
- snapshot — point-in-time serialization of an entire shard's keyspace. writes to a .tmp file first, then atomically renames it so a partial snapshot can never corrupt existing data
- recovery — startup sequence: load snapshot, replay AOF tail, skip expired entries. handles corrupt files gracefully (logs a warning, starts empty)
- format — low-level binary serialization helpers: length-prefixed bytes, integers, floats, checksums, header validation
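the record framing and CRC checks described above can be sketched as follows. this is a minimal std-only illustration assuming a `[tag][u32 length][payload][crc32]` layout; the crate's actual byte layout and helper names may differ:

```rust
/// bitwise CRC32 (IEEE polynomial, reflected) -- std-only stand-in for a crc crate
fn crc32(data: &[u8]) -> u32 {
    let mut crc = 0xFFFF_FFFFu32;
    for &b in data {
        crc ^= b as u32;
        for _ in 0..8 {
            let mask = (crc & 1).wrapping_neg();
            crc = (crc >> 1) ^ (0xEDB8_8320 & mask);
        }
    }
    !crc
}

/// frame one record: tag, little-endian length, payload, then a checksum
/// covering everything before it
fn encode_record(tag: u8, payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(1 + 4 + payload.len() + 4);
    buf.push(tag);
    buf.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    buf.extend_from_slice(payload);
    let crc = crc32(&buf);
    buf.extend_from_slice(&crc.to_le_bytes());
    buf
}

/// returns None for truncated or corrupt records, so replay can stop at the
/// last valid record instead of crashing
fn decode_record(buf: &[u8]) -> Option<(u8, Vec<u8>)> {
    if buf.len() < 9 {
        return None; // shorter than tag + length + crc
    }
    let len = u32::from_le_bytes(buf[1..5].try_into().ok()?) as usize;
    let end = 5 + len;
    if buf.len() < end + 4 {
        return None; // payload or checksum cut off by a mid-write crash
    }
    let stored = u32::from_le_bytes(buf[end..end + 4].try_into().ok()?);
    if crc32(&buf[..end]) != stored {
        return None; // bit rot or a partial write inside the payload
    }
    Some((buf[0], buf[5..end].to_vec()))
}

fn main() {
    let rec = encode_record(1, b"SET key value");
    assert_eq!(decode_record(&rec), Some((1, b"SET key value".to_vec())));
    // chopping bytes off the end models a crash mid-append
    assert!(decode_record(&rec[..rec.len() - 3]).is_none());
    println!("roundtrip ok");
}
```

because the checksum trails the payload, a record interrupted mid-write simply fails validation on the next startup, which is what makes "gracefully handles truncated records" possible.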
## file formats
- AOF — `[EAOF magic][version][record...]`, where each record is `[tag][payload][crc32]`. supported record types: SET, DEL, EXPIRE, LPUSH, RPUSH, LPOP, RPOP, ZADD, ZREM, HSET, HDEL, HINCRBY, SADD, SREM
- snapshot (v2) — `[ESNP magic][version][shard_id][entry_count][entries...][footer_crc32]`, where entries are type-tagged (string=0, list=1, sorted set=2, hash=3, set=4). v1 snapshots (no type tags) are still readable.
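the write-then-rename pattern that keeps snapshots atomic can be sketched like this (the function name and signature are illustrative, not the crate's API): data lands in a sibling .tmp file and is fsynced before the rename, so a crash at any point leaves either the old snapshot or the new one on disk, never a partial file.

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

fn write_snapshot_atomic(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = File::create(&tmp)?;
    f.write_all(bytes)?;
    f.sync_all()?; // make the tmp file durable before it becomes visible
    fs::rename(&tmp, path)?; // atomic replace on POSIX filesystems
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("shard-0.snap");
    write_snapshot_atomic(&path, b"ESNP demo bytes")?;
    assert_eq!(fs::read(&path)?, b"ESNP demo bytes");
    fs::remove_file(&path)?;
    Ok(())
}
```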
## usage
```rust
// recovery is typically called by the shard on startup.
// the import path and argument list below are illustrative; check the
// crate's actual re-exports and the signature of `recover_shard`.
use ember_persistence::recover_shard;

let result = recover_shard(/* shard id + paths to shard-{id}.aof / .snap */);
for entry in result.entries {
    // rebuild the shard's in-memory keyspace from each recovered entry
}
```
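the recovery policy itself (snapshot first, AOF tail second, expired entries dropped, corruption downgraded to a warning) can be sketched as below. the `Entry` type, `recover` function, and error representation are assumptions for illustration, not the crate's API:

```rust
use std::collections::HashMap;
use std::time::{SystemTime, UNIX_EPOCH};

#[derive(Debug, PartialEq)]
struct Entry {
    value: Vec<u8>,
    expires_at_ms: Option<u64>, // absolute expiry, if the key has a TTL
}

fn now_ms() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis() as u64
}

fn recover(
    snapshot: Result<HashMap<String, Entry>, String>,
    aof_tail: Vec<(String, Entry)>,
) -> HashMap<String, Entry> {
    // corrupt or missing snapshot: log a warning and start empty, never refuse to boot
    let mut keyspace = snapshot.unwrap_or_else(|e| {
        eprintln!("warning: snapshot unreadable ({e}), starting empty");
        HashMap::new()
    });
    // replay AOF records written after the snapshot was taken
    for (key, entry) in aof_tail {
        keyspace.insert(key, entry);
    }
    // drop entries whose TTL elapsed while the shard was down
    let now = now_ms();
    keyspace.retain(|_, e| e.expires_at_ms.map_or(true, |t| t > now));
    keyspace
}

fn main() {
    let tail = vec![(
        "k".to_string(),
        Entry { value: b"v".to_vec(), expires_at_ms: None },
    )];
    // even with a corrupt snapshot, the AOF tail still gets replayed
    let keyspace = recover(Err("bad magic".to_string()), tail);
    assert_eq!(keyspace.len(), 1);
}
```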
## related crates
| crate | what it does |
|---|---|
| emberkv-core | storage engine, keyspace, sharding |
| ember-protocol | RESP3 parsing and command dispatch |
| ember-server | TCP server and connection handling |
| ember-cluster | distributed coordination |
| ember-cli | interactive command-line client (planned) |