Crate atomr_distributed_data

atomr-distributed-data.

Provides CRDTs (GCounter, PNCounter, GSet, OrSet, LwwRegister, Flag, ORMap, ORMultiMap, LWWMap, PNCounterMap) and a Replicator that stores them in memory and merges replicas on request.
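
As a concrete illustration of the convergence property these types share, here is a minimal, self-contained sketch of a grow-only counter in the style of GCounter. It deliberately does not use this crate's API; the constructor and method names below are illustrative assumptions, not the crate's actual signatures.

    use std::collections::HashMap;

    // Hypothetical grow-only counter, sketched independently of the crate.
    #[derive(Clone, Default)]
    struct GCounter {
        counts: HashMap<String, u64>, // node id -> increments seen from that node
    }

    impl GCounter {
        fn increment(&mut self, node: &str) {
            *self.counts.entry(node.to_string()).or_insert(0) += 1;
        }

        fn value(&self) -> u64 {
            self.counts.values().sum()
        }

        // Pointwise max is commutative, associative, and idempotent,
        // so replicas converge no matter how merges are ordered.
        fn merge(&mut self, other: &GCounter) {
            for (node, &n) in &other.counts {
                let e = self.counts.entry(node.clone()).or_insert(0);
                *e = (*e).max(n);
            }
        }
    }

    fn main() {
        let mut a = GCounter::default();
        let mut b = GCounter::default();
        a.increment("node-a");
        a.increment("node-a");
        b.increment("node-b");
        a.merge(&b);
        b.merge(&a);
        assert_eq!(a.value(), 3);
        assert_eq!(b.value(), 3); // both replicas converge
    }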

Structs§

FileDurableStore
Append-only file-backed store. Keys live as <dir>/<sanitized>.bin.
Flag
Replicated boolean flag.
GCounter
Grow-only counter.
GSet
Grow-only set.
LWWMap
Last-write-wins map of K → V.
LwwRegister
Last-write-wins register.
NoopDurableStore
In-memory no-op implementation. Used when durability is disabled.
ORMap
Observed-remove map of K → V (V itself a CRDT).
ORMultiMap
Map of K → set of V, where each value set is itself an OrSet<V>. Added in Phase 8.B.
OrSet
Observed-remove set. Each addition gets a unique tag; a removal tombstones every tag observed at that moment. Merge takes the union of (item, tag) pairs minus the tombstoned pairs; see the sketch after this list.
PNCounter
Counter supporting both increments and decrements.
PNCounterMap
Map of K → PNCounter.
PruningState
State carried alongside a CRDT entry, mapping each removed node to its pruning phase. The map’s keys are the addresses that have left the cluster.
ReadAggregator
Counts replies against a target derived from a crate::ReadConsistency and cluster_size. Identical in shape to WriteAggregator, but a distinct type so call sites cannot mix them up.
Replicator
In-memory store of CRDT entries, merged on request.
ReplicatorActor
Actor-style replicator handle.
SubscriptionToken
RAII handle returned by Replicator::subscribe.
WriteAggregator
Counts acks against a target derived from a crate::WriteConsistency and cluster_size.
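
The OrSet entry above describes its mechanics in full; the sketch below implements exactly that scheme (a unique tag per addition, tombstones recorded on removal, merge as a union of tagged entries and tombstones). It is independent of the crate's real OrSet API, and the (node, counter) tag representation is an assumption about how tag uniqueness might be achieved.

    use std::collections::HashSet;

    type Tag = (String, u64); // (node id, per-node counter): assumed tag scheme

    #[derive(Clone, Default)]
    struct OrSet {
        node: String,
        counter: u64,
        entries: HashSet<(String, Tag)>, // (item, unique add tag)
        tombstones: HashSet<Tag>,        // tags whose additions were removed
    }

    impl OrSet {
        fn new(node: &str) -> Self {
            OrSet { node: node.to_string(), ..Default::default() }
        }

        fn add(&mut self, item: &str) {
            self.counter += 1;
            let tag = (self.node.clone(), self.counter);
            self.entries.insert((item.to_string(), tag));
        }

        // A removal tombstones every tag currently observed for the item;
        // a concurrent add carries a fresh, unseen tag and therefore wins.
        fn remove(&mut self, item: &str) {
            for (i, tag) in &self.entries {
                if i == item {
                    self.tombstones.insert(tag.clone());
                }
            }
        }

        fn contains(&self, item: &str) -> bool {
            self.entries
                .iter()
                .any(|(i, tag)| i == item && !self.tombstones.contains(tag))
        }

        // Merge is a union of tagged entries and a union of tombstones.
        fn merge(&mut self, other: &OrSet) {
            self.entries.extend(other.entries.iter().cloned());
            self.tombstones.extend(other.tombstones.iter().cloned());
        }
    }

    fn main() {
        let mut a = OrSet::new("a");
        let mut b = OrSet::new("b");
        a.add("x");
        b.merge(&a);
        b.remove("x"); // tombstones the tag b has observed
        a.add("x");    // concurrent re-add with a fresh tag
        a.merge(&b);
        assert!(a.contains("x")); // the add wins over the concurrent remove
    }

The payoff of this scheme is the add-wins bias visible in main: a removal only affects tags it has observed, so a concurrent re-add with a fresh tag survives the merge.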

Enums§

PruningPhase
Per-(removed-node, owner) pruning state.
ReadConsistency
Typed read consistency levels; the read-side counterpart of WriteConsistency.
ReplicatorAck
Tagged response payloads for ReplicatorActor commands.
ReplicatorError
Error type returned by replicator operations.
WriteConsistency
Phase 8.D: typed consistency levels with timeouts. The current in-process Replicator runs every operation as Local (single-node store); cross-node All/Majority/From(n) semantics activate once Phase 6 gossip lands.
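
The aggregators above count replies or acks against a target derived from a consistency level and cluster_size. The sketch below shows one plausible derivation for the Local/All/Majority/From(n) variants this page names; the enum shape (including where the timeouts live) and the exact formulas are assumptions rather than the crate's confirmed API.

    use std::time::Duration;

    // Assumed enum shape; the crate's real WriteConsistency may differ.
    enum WriteConsistency {
        Local,
        All(Duration),
        Majority(Duration),
        From(usize, Duration),
    }

    // One plausible ack-target derivation for a cluster of the given size.
    fn ack_target(consistency: &WriteConsistency, cluster_size: usize) -> usize {
        match consistency {
            WriteConsistency::Local => 1, // single-node store: local ack only
            WriteConsistency::All(_) => cluster_size,
            WriteConsistency::Majority(_) => cluster_size / 2 + 1,
            WriteConsistency::From(n, _) => (*n).min(cluster_size),
        }
    }

    fn main() {
        let majority = WriteConsistency::Majority(Duration::from_secs(1));
        assert_eq!(ack_target(&majority, 5), 3); // 3 of 5 acks form a majority
    }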

Traits§

CrdtMerge
Convergent replicated data type merge semantics.
DeltaCrdt
Optional delta-CRDT layer: emit a small “delta” describing the last local change and merge incoming deltas into the full state. Both traits are sketched after this list.
DurableStore
Abstraction over a durable backing store. Methods are sync so they can be called from anywhere (including the replicator actor task); implementations should keep work small or punt to a worker thread. A minimal in-memory sketch follows at the end of this list.
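
To make the two merge traits concrete, here is a sketch of the trait shapes the descriptions above suggest, applied to a max-register as the smallest useful carrier. Only the trait names come from this page; the method names (merge, delta, merge_delta) and the associated Delta type are assumptions.

    // Assumed trait shapes; only the trait names come from this page.
    trait CrdtMerge {
        // Full-state merge; must be commutative, associative, and
        // idempotent for replicas to converge.
        fn merge(&mut self, other: &Self);
    }

    trait DeltaCrdt: CrdtMerge {
        type Delta;
        // Emit a small delta describing the last local change, if any.
        fn delta(&self) -> Option<Self::Delta>;
        // Fold an incoming delta into the full state.
        fn merge_delta(&mut self, delta: Self::Delta);
    }

    // Smallest useful carrier: a max-register whose merge keeps the maximum.
    struct MaxReg {
        value: u64,
        dirty: bool, // set when the value changed locally since the last delta
    }

    impl CrdtMerge for MaxReg {
        fn merge(&mut self, other: &Self) {
            self.value = self.value.max(other.value);
        }
    }

    impl DeltaCrdt for MaxReg {
        type Delta = u64;

        fn delta(&self) -> Option<u64> {
            self.dirty.then_some(self.value)
        }

        fn merge_delta(&mut self, delta: u64) {
            self.value = self.value.max(delta);
        }
    }

    fn main() {
        let a = MaxReg { value: 7, dirty: true };
        let mut b = MaxReg { value: 3, dirty: false };
        if let Some(d) = a.delta() {
            b.merge_delta(d); // ship the small delta, not the full state
        }
        b.merge(&MaxReg { value: 5, dirty: false }); // full-state merge still works
        assert_eq!(b.value, 7);
    }

Shipping the delta instead of the full state is the point of the optional layer: for large states the delta is typically far smaller than what a full-state merge would transfer.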
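
And a sketch of the DurableStore abstraction, with an in-memory example implementation (the crate's NoopDurableStore instead discards writes when durability is disabled, and FileDurableStore appends each key's bytes to <dir>/<sanitized>.bin). The put/get names and byte-slice signatures are assumptions; the page only commits to the methods being sync.

    use std::collections::HashMap;

    // Assumed method names and signatures; the page only says the
    // methods are sync.
    trait DurableStore {
        fn put(&mut self, key: &str, bytes: &[u8]);
        fn get(&self, key: &str) -> Option<Vec<u8>>;
    }

    // In-memory example implementation for illustration only.
    #[derive(Default)]
    struct MemStore {
        map: HashMap<String, Vec<u8>>,
    }

    impl DurableStore for MemStore {
        fn put(&mut self, key: &str, bytes: &[u8]) {
            self.map.insert(key.to_string(), bytes.to_vec());
        }

        fn get(&self, key: &str) -> Option<Vec<u8>> {
            self.map.get(key).cloned()
        }
    }

    fn main() {
        let mut store = MemStore::default();
        store.put("crdt/counter", b"serialized state");
        assert_eq!(store.get("crdt/counter").as_deref(), Some(&b"serialized state"[..]));
    }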