# aegis-replication
Distributed replication and consensus for the Aegis Database Platform.
## Overview
aegis-replication provides the distributed systems layer including Raft consensus, consistent hashing, sharding, distributed transactions, and conflict-free replicated data types (CRDTs).
## Features

- **Raft Consensus** - Leader election and log replication
- **Consistent Hashing** - HashRing, JumpHash, Rendezvous hashing
- **Sharding** - Automatic data partitioning
- **Distributed Transactions** - Two-phase commit (2PC)
- **CRDTs** - Conflict-free replicated data types for eventual consistency
- **Vector Clocks** - Causality tracking
## Architecture

```
┌────────────────────────────────────────────────────────┐
│                    Cluster Manager                     │
├────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐     │
│  │    Raft     │  │    Shard    │  │ Transaction │     │
│  │  Consensus  │  │   Router    │  │ Coordinator │     │
│  └─────────────┘  └─────────────┘  └─────────────┘     │
├────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐     │
│  │    Hash     │  │   Vector    │  │    CRDT     │     │
│  │    Ring     │  │   Clocks    │  │   Engine    │     │
│  └─────────────┘  └─────────────┘  └─────────────┘     │
├────────────────────────────────────────────────────────┤
│                    Transport Layer                     │
│                (gRPC / TCP / In-Memory)                │
└────────────────────────────────────────────────────────┘
```
## Modules

| Module | Description |
|---|---|
| `raft` | Raft consensus implementation |
| `cluster` | Cluster membership management |
| `shard` | Shard assignment and routing |
| `partition` | Data partitioning strategies |
| `hash` | Consistent hashing algorithms |
| `transaction` | Distributed transaction coordinator |
| `crdt` | CRDT implementations |
| `vector_clock` | Vector clocks for causality tracking |
| `transport` | Network communication |
| `log` | Replicated log |
## Usage

```toml
[dependencies]
aegis-replication = { path = "../aegis-replication" }
```
### Raft Consensus

```rust
use aegis_replication::raft::{RaftConfig, RaftNode};

let config = RaftConfig::default();
let node = RaftNode::new(config)?;

// Start the node
node.start().await?;

// Propose a value (only the leader can propose)
if node.is_leader() {
    node.propose(b"set x = 1".to_vec()).await?;
}
```
### Consistent Hashing

```rust
use aegis_replication::hash::{HashRing, JumpHash};

// HashRing with virtual nodes
let mut ring = HashRing::new(150); // 150 virtual nodes per physical node
ring.add_node("node-1");
ring.add_node("node-2");
ring.add_node("node-3");

// Get nodes for a key (returns primary + replicas)
let nodes = ring.get_nodes("user:42");

// JumpHash for a fixed node count (key hash, number of buckets)
let node_index = JumpHash::hash(42, 16);
```
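The `JumpHash` variant above follows the jump consistent hash algorithm of Lamping and Veach: a key maps to a bucket in `[0, num_buckets)`, and growing the bucket count only moves keys *into* the new bucket, never between existing ones. A self-contained sketch of that algorithm (the function name and signature here are illustrative, not the crate's API):

```rust
// Jump consistent hash (Lamping & Veach): maps a 64-bit key hash to a
// bucket index in [0, num_buckets). Illustrative sketch, not crate code.
fn jump_hash(mut key: u64, num_buckets: u32) -> u32 {
    assert!(num_buckets > 0);
    let mut b: i64 = -1;
    let mut j: i64 = 0;
    while j < num_buckets as i64 {
        b = j;
        // Linear congruential step: derives a pseudo-random sequence from the key.
        key = key.wrapping_mul(2862933555777941757).wrapping_add(1);
        // Jump forward; the probability of jumping past bucket b shrinks as b grows.
        j = (((b + 1) as f64) * (f64::from(1u32 << 31) / (((key >> 33) + 1) as f64))) as i64;
    }
    b as u32
}
```

The useful property: when the cluster grows from `n` to `n + 1` buckets, each key either keeps its bucket or moves to the new bucket `n`, so only `1/(n + 1)` of keys move on average.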
### Distributed Transactions

```rust
use aegis_replication::transaction::TransactionCoordinator;

let coordinator = TransactionCoordinator::new(cluster);

// Begin a distributed transaction
let tx = coordinator.begin().await?;

// Execute on multiple shards (shard handles and operations elided)
tx.execute_on_shard(shard_a, op_a).await?;
tx.execute_on_shard(shard_b, op_b).await?;

// Two-phase commit
coordinator.commit(tx).await?;
```
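The commit step runs the classic 2PC decision rule: in phase one the coordinator asks every participating shard to prepare and collects votes; in phase two it commits only if every vote was "yes", otherwise it aborts everywhere. A toy sketch of that decision rule, assuming in-memory votes (the real coordinator is networked and must persist its decision):

```rust
// Two-phase commit decision rule, illustrated with in-memory votes.
// Type and function names are illustrative, not the crate's API.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Vote {
    Commit,
    Abort,
}

#[derive(Debug, PartialEq)]
enum Outcome {
    Committed,
    Aborted,
}

// Phase 1 (prepare) has already collected one vote per participant.
// Phase 2: commit only if *every* participant voted Commit; a single
// Abort (or an empty participant set) aborts the whole transaction.
fn two_phase_commit(votes: &[Vote]) -> Outcome {
    if !votes.is_empty() && votes.iter().all(|v| *v == Vote::Commit) {
        Outcome::Committed
    } else {
        Outcome::Aborted
    }
}
```

This is why 2PC gives atomicity across shards: no shard applies its writes until the coordinator has seen unanimous Commit votes.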
### CRDTs

```rust
use aegis_replication::crdt::{GCounter, LwwRegister, OrSet};

// G-Counter (grow-only counter)
let mut counter = GCounter::new(node_id);
counter.increment(1);
counter.merge(&remote_counter);
println!("count = {}", counter.value());

// LWW-Register (last-writer-wins)
let mut register = LwwRegister::new(node_id);
register.set("value");

// OR-Set (observed-remove set)
let mut set = OrSet::new(node_id);
set.add("item");
set.remove("item");
```
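What makes a G-Counter converge is that each node increments only its own slot and `merge` takes the element-wise maximum, which is commutative, associative, and idempotent. A minimal standalone sketch of that structure (the crate's actual `GCounter` API may differ):

```rust
use std::collections::HashMap;

// Minimal G-Counter sketch: one slot per node, merge = element-wise max.
// Illustrative only; not the crate's implementation.
#[derive(Clone, Default)]
struct GCounter {
    counts: HashMap<String, u64>,
}

impl GCounter {
    // A node may only increment its own slot.
    fn increment(&mut self, node: &str, by: u64) {
        *self.counts.entry(node.to_string()).or_insert(0) += by;
    }

    // Merge is commutative, associative, and idempotent, so replicas can
    // exchange state in any order, any number of times, and still converge.
    fn merge(&mut self, other: &GCounter) {
        for (node, &n) in &other.counts {
            let slot = self.counts.entry(node.clone()).or_insert(0);
            *slot = (*slot).max(n);
        }
    }

    // The counter's value is the sum over all node slots.
    fn value(&self) -> u64 {
        self.counts.values().sum()
    }
}
```

Merging the same remote state twice changes nothing, which is exactly what lets anti-entropy gossip redeliver updates safely.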
### Vector Clocks

```rust
use aegis_replication::vector_clock::VectorClock;

let mut clock = VectorClock::new(node_id);

// Increment local time
clock.increment();

// Merge with a remote clock
clock.merge(&remote_clock);

// Compare causality (variant names illustrative)
match clock.compare(&remote_clock) {
    Causality::Before => { /* local happened before remote */ }
    Causality::After => { /* local happened after remote */ }
    Causality::Equal => { /* identical histories */ }
    Causality::Concurrent => { /* conflicting concurrent updates */ }
}
```
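The comparison rule is the standard partial order on vector clocks: clock `a` happened before `b` iff every component of `a` is ≤ the matching component of `b` and they are not equal; if neither dominates the other, the updates are concurrent. A minimal standalone sketch (names here are illustrative, not the crate's):

```rust
use std::collections::HashMap;

// Minimal vector-clock sketch of the causality comparison rule.
// Illustrative only; not the crate's implementation.
#[derive(Debug, PartialEq)]
enum Causality {
    Before,
    After,
    Equal,
    Concurrent,
}

#[derive(Clone, Default)]
struct Clock {
    times: HashMap<String, u64>,
}

impl Clock {
    // Advance this node's own component.
    fn increment(&mut self, node: &str) {
        *self.times.entry(node.to_string()).or_insert(0) += 1;
    }

    // a ≤ b iff every component of a is ≤ the matching component of b
    // (missing components count as 0).
    fn le(&self, other: &Clock) -> bool {
        self.times
            .iter()
            .all(|(node, &t)| t <= other.times.get(node).copied().unwrap_or(0))
    }

    fn compare(&self, other: &Clock) -> Causality {
        match (self.le(other), other.le(self)) {
            (true, true) => Causality::Equal,
            (true, false) => Causality::Before,
            (false, true) => Causality::After,
            (false, false) => Causality::Concurrent,
        }
    }
}
```

The `Concurrent` case is the one replication logic cares about: neither update saw the other, so the system must resolve the conflict (e.g. via a CRDT merge or last-writer-wins).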
## Configuration

```toml
# Key names reconstructed for illustration; the shipped defaults may differ.
[replication]
replication_factor = 3
consistency = "quorum" # one, quorum, all

[raft]
election_timeout_min_ms = 150
election_timeout_max_ms = 300
heartbeat_interval_ms = 50

[sharding]
strategy = "hash" # hash, range
shard_count = 16
auto_rebalance = true
```
## Tests

```bash
cargo test
```

Test count: 634 tests (workspace total).
## License

Apache-2.0