pub trait ReplicationCore<T>: Send + Sync + 'static
where
    T: TypeConfig,
{
    // Required methods
    fn handle_raft_request_in_batch<'life0, 'life1, 'life2, 'async_trait>(
        &'life0 self,
        entry_payloads: Vec<EntryPayload>,
        state_snapshot: StateSnapshot,
        leader_state_snapshot: LeaderStateSnapshot,
        cluster_metadata: &'life1 ClusterMetadata,
        ctx: &'life2 RaftContext<T>,
    ) -> Pin<Box<dyn Future<Output = Result<AppendResults>> + Send + 'async_trait>>
    where
        Self: 'async_trait,
        'life0: 'async_trait,
        'life1: 'async_trait,
        'life2: 'async_trait;
    fn handle_success_response(
        &self,
        peer_id: u32,
        peer_term: u64,
        success_result: SuccessResult,
        leader_term: u64,
    ) -> Result<PeerUpdate>;
    fn handle_conflict_response(
        &self,
        peer_id: u32,
        conflict_result: ConflictResult,
        raft_log: &Arc<ROF<T>>,
        current_next_index: u64,
    ) -> Result<PeerUpdate>;
    fn if_update_commit_index_as_follower(
        my_commit_index: u64,
        last_raft_log_id: u64,
        leader_commit_index: u64,
    ) -> Option<u64>;
    fn retrieve_to_be_synced_logs_for_peers(
        &self,
        new_entries: Vec<Entry>,
        leader_last_index_before_inserting_new_entries: u64,
        max_legacy_entries_per_peer: u64,
        peer_next_indices: &HashMap<u32, u64>,
        raft_log: &Arc<ROF<T>>,
    ) -> DashMap<u32, Vec<Entry>>;
    fn handle_append_entries<'life0, 'life1, 'life2, 'async_trait>(
        &'life0 self,
        request: AppendEntriesRequest,
        state_snapshot: &'life1 StateSnapshot,
        raft_log: &'life2 Arc<ROF<T>>,
    ) -> Pin<Box<dyn Future<Output = Result<AppendResponseWithUpdates>> + Send + 'async_trait>>
    where
        Self: 'async_trait,
        'life0: 'async_trait,
        'life1: 'async_trait,
        'life2: 'async_trait;
    fn check_append_entries_request_is_legal(
        &self,
        my_term: u64,
        request: &AppendEntriesRequest,
        raft_log: &Arc<ROF<T>>,
    ) -> AppendEntriesResponse;
}
Core replication protocol operations
Required Methods§
fn handle_raft_request_in_batch<'life0, 'life1, 'life2, 'async_trait>(
    &'life0 self,
    entry_payloads: Vec<EntryPayload>,
    state_snapshot: StateSnapshot,
    leader_state_snapshot: LeaderStateSnapshot,
    cluster_metadata: &'life1 ClusterMetadata,
    ctx: &'life2 RaftContext<T>,
) -> Pin<Box<dyn Future<Output = Result<AppendResults>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
    'life1: 'async_trait,
    'life2: 'async_trait
As Leader, sends replication requests to peers (combining regular heartbeats and client proposals).
Performs peer synchronization checks:
- Verifies whether any peer's next_id <= leader's commit_index
- For non-synced peers: retrieves unsynced logs and buffers them
- Prepends unsynced entries to the entries queue
§Returns
- Ok(AppendResults) with aggregated replication outcomes
- Err for unrecoverable errors
§Return Result Semantics
- Insufficient Quorum: Returns Ok(AppendResults) with commit_quorum_achieved = false when:
  - Responses received from all nodes but majority acceptance not achieved
  - Partial timeouts reduce successful responses below majority
- Timeout Handling:
  - Partial timeouts: Returns Ok with commit_quorum_achieved = false
  - Complete timeout: Returns Ok with commit_quorum_achieved = false
  - Timed-out peers are EXCLUDED from peer_updates
- Error Conditions: Returns Err ONLY for:
  - Empty voting members (ReplicationError::NoPeerFound)
  - Log generation failures (generate_new_entries errors)
  - Higher term detected in a peer response (ReplicationError::HigherTerm)
  - Critical response handling errors
§Guarantees
- Only peers with successful responses appear in peer_updates
- Timeouts never cause a top-level Err (they are handled as failed responses)
- The Leader's self-vote is always counted in the quorum calculation
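A minimal caller-side sketch of how these guarantees might be consumed. The AppendResultsSketch type below is a simplified stand-in modeling only the fields named in this documentation (commit_quorum_achieved, peer_updates); it is not the crate's actual AppendResults definition.

use std::collections::HashMap;

// Simplified stand-in for illustration only.
struct AppendResultsSketch {
    commit_quorum_achieved: bool,
    // peer_id -> (match_index, next_index); only successfully responding peers appear here.
    peer_updates: HashMap<u32, (u64, u64)>,
}

fn on_replication_outcome(results: &AppendResultsSketch, commit_index: &mut u64, last_appended_index: u64) {
    // Timed-out peers are simply absent from peer_updates; they never surface as Err.
    for (peer_id, (match_index, next_index)) in &results.peer_updates {
        println!("peer {peer_id}: match_index={match_index}, next_index={next_index}");
    }
    // Advance the leader's commit index only when the quorum flag is set.
    if results.commit_quorum_achieved {
        *commit_index = (*commit_index).max(last_appended_index);
    }
}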
§Note
- Leader state should be updated by LeaderState only (follows SRP).
§Quorum
- If there are no voters (not even the leader), quorum is not possible.
- If the leader is the only voter, quorum is always achieved.
- If all nodes are learners, quorum is not achieved.
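The quorum rules above amount to a simple majority count over voting members, with the leader's self-vote included. The helper below is an illustrative sketch only; its name and parameters are hypothetical, and it assumes the leader is counted among the voting members whenever there are any.

// Illustrative sketch; not part of the trait.
fn commit_quorum_achieved(voting_members: usize, successful_peer_acks: usize) -> bool {
    if voting_members == 0 {
        // No voters at all (e.g. every node is a learner): quorum is impossible.
        return false;
    }
    // The leader's self-vote is always counted alongside successful peer acks.
    let votes = successful_peer_acks + 1;
    votes * 2 > voting_members
}

fn main() {
    assert!(commit_quorum_achieved(1, 0));  // leader is the only voter: always achieved
    assert!(commit_quorum_achieved(3, 1));  // 2 of 3 votes
    assert!(!commit_quorum_achieved(0, 0)); // all learners: never achieved
}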
fn handle_success_response(
    &self,
    peer_id: u32,
    peer_term: u64,
    success_result: SuccessResult,
    leader_term: u64,
) -> Result<PeerUpdate>
Handles successful AppendEntries responses
Updates peer match/next indices according to:
- Last matched log index
- Current leader term
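A sketch of the implied bookkeeping. The PeerUpdateSketch struct and the helper function are illustrative placeholders, not the crate's PeerUpdate type or the trait's actual implementation.

// Simplified placeholder for the crate's PeerUpdate type.
struct PeerUpdateSketch {
    match_index: u64,
    next_index: u64,
}

// Hypothetical helper: derive the peer update from the last index the follower
// confirmed as matched. A response carrying a term higher than the leader's is
// not a valid success and yields no update here.
fn success_to_peer_update(
    last_matched_index: u64,
    peer_term: u64,
    leader_term: u64,
) -> Option<PeerUpdateSketch> {
    if peer_term > leader_term {
        return None;
    }
    Some(PeerUpdateSketch {
        match_index: last_matched_index,
        next_index: last_matched_index + 1,
    })
}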
fn handle_conflict_response(
    &self,
    peer_id: u32,
    conflict_result: ConflictResult,
    raft_log: &Arc<ROF<T>>,
    current_next_index: u64,
) -> Result<PeerUpdate>
Resolves log conflicts from follower responses
Implements the conflict backtracking optimization (Raft paper Section 5.3)
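The optimization replaces one-step decrements of next_index with a jump back to a follower-supplied conflict hint. The ConflictHint type and its conflict_index field below are assumptions for illustration; the crate's ConflictResult may carry different fields.

// Hypothetical conflict hint reported by a follower.
struct ConflictHint {
    // First index the follower knows to be inconsistent, if it provided one.
    conflict_index: Option<u64>,
}

// Compute the new next_index for the peer. Falls back to a single-step
// decrement when the follower gave no hint, never going below 1.
fn backtrack_next_index(current_next_index: u64, hint: &ConflictHint) -> u64 {
    match hint.conflict_index {
        Some(idx) => idx.max(1),
        None => current_next_index.saturating_sub(1).max(1),
    }
}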
fn if_update_commit_index_as_follower(
    my_commit_index: u64,
    last_raft_log_id: u64,
    leader_commit_index: u64,
) -> Option<u64>
Determines follower commit index advancement
Applies Leader’s commit index according to:
- min(leader_commit, last_local_log_index)
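A minimal sketch of that rule, mirroring the signature above; the body is derived from the documented min(leader_commit, last_local_log_index) behavior and the Option return, not copied from the crate.

fn if_update_commit_index_as_follower(
    my_commit_index: u64,
    last_raft_log_id: u64,
    leader_commit_index: u64,
) -> Option<u64> {
    // A follower can never commit past its own last log entry.
    let candidate = leader_commit_index.min(last_raft_log_id);
    // Only report an update when it actually advances the commit index.
    (candidate > my_commit_index).then_some(candidate)
}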
fn retrieve_to_be_synced_logs_for_peers(
    &self,
    new_entries: Vec<Entry>,
    leader_last_index_before_inserting_new_entries: u64,
    max_legacy_entries_per_peer: u64,
    peer_next_indices: &HashMap<u32, u64>,
    raft_log: &Arc<ROF<T>>,
) -> DashMap<u32, Vec<Entry>>
Gathers legacy logs for lagging peers
Performs log segmentation based on:
- Peer’s next_index
- Max allowed historical entries
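A rough sketch of the segmentation idea using plain std types (a slice and HashMap stand in for the crate's log store and DashMap); the exact selection and ordering logic here is an assumption based on the description above.

use std::collections::HashMap;

// Simplified stand-in for a log entry: (index, payload).
type EntrySketch = (u64, String);

// For each peer, collect at most max_legacy_entries_per_peer legacy entries
// starting from the peer's next_index, then append the freshly proposed entries.
fn logs_to_sync(
    log: &[EntrySketch],
    new_entries: &[EntrySketch],
    leader_last_index_before_insert: u64,
    max_legacy_entries_per_peer: u64,
    peer_next_indices: &HashMap<u32, u64>,
) -> HashMap<u32, Vec<EntrySketch>> {
    let mut out = HashMap::new();
    for (&peer, &next) in peer_next_indices {
        let mut batch: Vec<EntrySketch> = log
            .iter()
            .filter(|(idx, _)| *idx >= next && *idx <= leader_last_index_before_insert)
            .take(max_legacy_entries_per_peer as usize)
            .cloned()
            .collect();
        // Legacy entries go first, followed by the new proposals.
        batch.extend_from_slice(new_entries);
        out.insert(peer, batch);
    }
    out
}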
fn handle_append_entries<'life0, 'life1, 'life2, 'async_trait>(
    &'life0 self,
    request: AppendEntriesRequest,
    state_snapshot: &'life1 StateSnapshot,
    raft_log: &'life2 Arc<ROF<T>>,
) -> Pin<Box<dyn Future<Output = Result<AppendResponseWithUpdates>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
    'life1: 'async_trait,
    'life2: 'async_trait
Handles an incoming AppendEntries RPC request (called by ALL ROLES)
Core responsibilities:
- Term validation and comparison (Raft paper §5.1)
- Log consistency checking (prev_log_index/term)
- Entry appending with conflict resolution (Raft paper §5.3)
- Commit index advancement (Raft paper §5.3)
§Critical Architecture Constraints
- ROLE AGNOSTIC - This method contains no role-specific logic. It simply:
  - Validates the RPC against local state
  - Returns the required state updates
- STATE CHANGE ISOLATION - Never directly modifies:
  - Current role (Leader/Follower/etc.)
  - Persistent state (term/votedFor)
  - Volatile leader state (nextIndex/matchIndex)
- CALLER MUST (see the sketch after this list):
  - Check response.term_update for term conflicts
  - If a higher term exists, transition to Follower
  - Apply other state updates via role_tx
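A caller-side sketch of the constraints above. The response fields (term_update, commit_index_update) and the RoleEvent variants are illustrative assumptions, not the crate's actual API.

// Illustrative stand-ins; the real response and event types belong to the crate.
struct AppendResponseSketch {
    term_update: Option<u64>,
    commit_index_update: Option<u64>,
}

enum RoleEvent {
    BecomeFollower { term: u64 },
    AdvanceCommit { index: u64 },
}

fn apply_append_entries_outcome(
    resp: AppendResponseSketch,
    role_tx: &std::sync::mpsc::Sender<RoleEvent>,
) {
    // The handler never mutates role or persistent state itself; the caller does.
    if let Some(term) = resp.term_update {
        let _ = role_tx.send(RoleEvent::BecomeFollower { term });
    }
    if let Some(index) = resp.commit_index_update {
        let _ = role_tx.send(RoleEvent::AdvanceCommit { index });
    }
}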
fn check_append_entries_request_is_legal(
    &self,
    my_term: u64,
    request: &AppendEntriesRequest,
    raft_log: &Arc<ROF<T>>,
) -> AppendEntriesResponse
Validates an incoming AppendEntries RPC from a Leader against Raft protocol rules.
This function implements the log consistency checks defined in Raft paper Section 5.3. It determines whether to accept the Leader’s log entries by verifying:
- Term freshness
- Virtual log (prev_log_index=0) handling
- Previous log entry consistency
§Raft Protocol Rules Enforced
- Term Check (Raft paper §5.1):
  - Reject requests with stale terms to prevent partitioned Leaders
- Virtual Log Handling (implementation-specific):
  - Special case for empty logs (prev_log_index=0 && prev_log_term=0)
- Log Matching (Raft paper §5.3):
  - Ensure the Leader's prev_log_index/term matches the Follower's log
§Parameters
- my_term: Current node's term
- request: Leader's AppendEntries RPC
- raft_log: Reference to the node's log store
§Return
- AppendEntriesResponse::success if validation passes
- AppendEntriesResponse::higher_term if the Leader's term is stale
- AppendEntriesResponse::conflict with debugging info for log inconsistencies
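The three checks can be sketched over simplified types as follows; the CheckOutcome enum, RequestSketch struct, and term_at lookup are illustrative stand-ins for the crate's request, response, and log-store types.

// Simplified outcome and request shapes for illustration only.
enum CheckOutcome { Success, HigherTerm, Conflict }

struct RequestSketch {
    term: u64,
    prev_log_index: u64,
    prev_log_term: u64,
}

fn check_request(
    my_term: u64,
    req: &RequestSketch,
    term_at: impl Fn(u64) -> Option<u64>, // term of the local entry at a given index
) -> CheckOutcome {
    // 1. Term check (Raft paper §5.1): reject stale leaders.
    if req.term < my_term {
        return CheckOutcome::HigherTerm;
    }
    // 2. Virtual log handling: prev_log_index = 0 means "append from the start".
    if req.prev_log_index == 0 && req.prev_log_term == 0 {
        return CheckOutcome::Success;
    }
    // 3. Log matching (Raft paper §5.3): the previous entry must exist locally with the same term.
    match term_at(req.prev_log_index) {
        Some(term) if term == req.prev_log_term => CheckOutcome::Success,
        _ => CheckOutcome::Conflict,
    }
}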
Dyn Compatibility§
This trait is not dyn compatible.
In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.