Trait LocalPrimaryReplicator 

pub trait LocalPrimaryReplicator: Replicator {
    // Required methods
    async fn on_data_loss(
        &self,
        cancellation_token: BoxedCancelToken,
    ) -> Result<u8>;
    fn update_catch_up_replica_set_configuration(
        &self,
        currentconfiguration: &ReplicaSetConfig,
        previousconfiguration: &ReplicaSetConfig,
    ) -> Result<()>;
    fn update_current_replica_set_configuration(
        &self,
        currentconfiguration: &ReplicaSetConfig,
    ) -> Result<()>;
    async fn wait_for_catch_up_quorum(
        &self,
        catchupmode: ReplicaSetQuorumMode,
        cancellation_token: BoxedCancelToken,
    ) -> Result<()>;
    async fn build_replica(
        &self,
        replica: &ReplicaInformation,
        cancellation_token: BoxedCancelToken,
    ) -> Result<()>;
    fn remove_replica(&self, replicaid: i64) -> Result<()>;
}

TODO: the primary replicator has no public documentation; this is gathered unofficially and is subject to change/correction. Wrapper for the IFabricPrimaryReplicator COM interface.

Required Methods

async fn on_data_loss(&self, cancellation_token: BoxedCancelToken) -> Result<u8>

fn update_catch_up_replica_set_configuration( &self, currentconfiguration: &ReplicaSetConfig, previousconfiguration: &ReplicaSetConfig, ) -> Result<()>

Informs the replicator that there is a current configuration and a previous configuration. Called on the primary to convey the set of active secondary replicas that may begin to catch up. Idle secondary replicas are not included here.

The total number of replicas marked with must_catchup will not exceed the write quorum. The secondary to be promoted to the new primary is guaranteed to have must_catchup set, i.e. it must catch up (have all the data) before it can be promoted to the new primary.

ReplicaInformation fields:

- current_progress: the LSN of the replica. -1 if the replicator is already aware of the replica (it is in the configuration or has been built); otherwise it will be the progress of the remote replica.
- catch_up_capability: the first LSN of the replica. Similar to current_progress.
- must_catchup: set to true for only one replica in the current configuration.
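As a rough illustration of the invariants above, here is a hypothetical Rust sketch. `ReplicaInfo`, `write_quorum`, and `must_catchup_within_quorum` are invented names that mirror the fields described; they are not mssf-core API, and the quorum formula is the usual majority assumption:

```rust
// Hypothetical, simplified model of the per-replica info the replicator
// receives; field names mirror the ReplicaInformation fields described above.
#[derive(Debug, Clone)]
struct ReplicaInfo {
    id: i64,
    current_progress: i64,    // LSN; -1 if the replicator already knows the replica
    catch_up_capability: i64, // first LSN; -1 under the same condition
    must_catchup: bool,
}

/// Write quorum for n replicas, assuming simple majority: floor(n/2) + 1.
fn write_quorum(n: usize) -> usize {
    n / 2 + 1
}

/// Check the invariant stated above: the number of replicas marked
/// must_catchup never exceeds the write quorum.
fn must_catchup_within_quorum(replicas: &[ReplicaInfo]) -> bool {
    let marked = replicas.iter().filter(|r| r.must_catchup).count();
    marked <= write_quorum(replicas.len())
}

fn main() {
    let current_config = vec![
        ReplicaInfo { id: 1, current_progress: -1, catch_up_capability: -1, must_catchup: true },
        ReplicaInfo { id: 2, current_progress: 100, catch_up_capability: 5, must_catchup: false },
        ReplicaInfo { id: 3, current_progress: 90, catch_up_capability: 5, must_catchup: false },
    ];
    assert!(must_catchup_within_quorum(&current_config));
}
```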

fn update_current_replica_set_configuration( &self, currentconfiguration: &ReplicaSetConfig, ) -> Result<()>

Informs the replicator about the current replica set configuration when there is no longer a previous configuration. Remarks: replicas here are not marked as must_catchup.

async fn wait_for_catch_up_quorum( &self, catchupmode: ReplicaSetQuorumMode, cancellation_token: BoxedCancelToken, ) -> Result<()>

Called on the primary to wait for replicas to catch up before accepting writes.

mssf-core enables IFabricReplicatorCatchupSpecificQuorum for replicators, so ReplicaSetQuorumMode::Write can be used.

catchupmode:

- All: full quorum. All replicas need to catch up.
- Write: write quorum. Among the replicas specified in update_catch_up_replica_set_configuration(currentconfiguration…), a subset that can form a write quorum must catch up, and that subset must include the replica with must_catchup set to true (the primary candidate). This is used only in the primary swap case in SF, to avoid a slow replica preventing or slowing down the swap.

Remarks: catch-up (or quorum catch-up) in SF means that the lowest LSN among all replicas (or a quorum of replicas including the must-catchup replica) in the current configuration is equal to or greater than the current committed LSN.

For the swap primary case, the double catch-up feature is enabled by default. SF can first call this API before initiating write status revocation; it then revokes write status and calls this API again. This allows the replicator to catch up while write status is still granted, so it can make the writes necessary for catch-up. There is a chance the replicator takes forever to complete this API in mode ReplicaSetQuorumMode::All, since the client/user can keep writing and advancing the committed LSN, but it most likely will not stall in mode ReplicaSetQuorumMode::Write. In other cases where client writes are not impacted (like a secondary restart), SF may call this API only once with write status granted.

Implementors should not assume when this is called in relation to other API calls, but instead follow the semantics of what catch-up should do.
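The catch-up condition from the remarks above can be sketched as follows. This is a minimal illustration, not mssf-core code: `Mode` stands in for ReplicaSetQuorumMode, and each replica is modeled as an `(applied LSN, must_catchup)` tuple:

```rust
// Stand-in for ReplicaSetQuorumMode; names are illustrative.
#[derive(Clone, Copy)]
enum Mode {
    All,
    Write,
}

/// `lsns` holds (applied LSN, must_catchup) per replica in the current
/// configuration; `committed_lsn` is the primary's current committed LSN.
fn catch_up_done(mode: Mode, lsns: &[(i64, bool)], committed_lsn: i64) -> bool {
    match mode {
        // Full quorum: every replica must reach the committed LSN.
        Mode::All => lsns.iter().all(|&(lsn, _)| lsn >= committed_lsn),
        // Write quorum: a write-quorum-sized subset must reach the committed
        // LSN, and it must include the must_catchup replica (primary candidate).
        Mode::Write => {
            let quorum = lsns.len() / 2 + 1; // majority assumption
            let caught_up = lsns.iter().filter(|&&(lsn, _)| lsn >= committed_lsn).count();
            let candidate_ok = lsns.iter().any(|&(lsn, must)| must && lsn >= committed_lsn);
            caught_up >= quorum && candidate_ok
        }
    }
}

fn main() {
    // Two of three replicas (including the candidate) reached LSN 100:
    // write-quorum catch-up is done, but full catch-up is not.
    let lsns = [(100, true), (100, false), (80, false)];
    assert!(catch_up_done(Mode::Write, &lsns, 100));
    assert!(!catch_up_done(Mode::All, &lsns, 100));
}
```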

async fn build_replica( &self, replica: &ReplicaInformation, cancellation_token: BoxedCancelToken, ) -> Result<()>

Transfers state up to the current quorum LSN to a new or existing replica that is outside the current configuration (i.e. not included in update_catch_up_replica_set_configuration).

replica:

- role: IdleSecondary
- status: up or down
- current_progress: -1
- catch_up_capability: -1
- must_catchup: false

Remarks: SF can cancel the replica build operation via the cancellation token. A replica being built, or whose build has completed, does not count towards quorum and is not part of the current configuration. A replica cannot be in build and in the configuration at the same time. An idle replica may later be added by SF to the configuration by calling update_x_configuration().
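The "in build or in configuration, never both" rule above could be tracked with bookkeeping like the following hypothetical sketch; `PrimaryState` and its methods are invented names, not mssf-core API:

```rust
use std::collections::HashSet;

// Hypothetical bookkeeping a primary-side replicator might keep in order to
// honor the rule above: a replica is either being built (idle) or in the
// current configuration, never both.
#[derive(Default)]
struct PrimaryState {
    configured: HashSet<i64>, // replicas in the current configuration
    building: HashSet<i64>,   // idle replicas being (or already) built
}

impl PrimaryState {
    /// Start building a replica; reject ids already in the configuration.
    fn start_build(&mut self, replica_id: i64) -> Result<(), String> {
        if self.configured.contains(&replica_id) {
            return Err(format!("replica {replica_id} is already configured"));
        }
        self.building.insert(replica_id);
        Ok(())
    }

    /// Model SF later adding a built idle replica to the configuration via
    /// update_x_configuration(): it leaves the build set and joins the config.
    fn add_to_configuration(&mut self, replica_id: i64) {
        self.building.remove(&replica_id);
        self.configured.insert(replica_id);
    }
}

fn main() {
    let mut s = PrimaryState::default();
    assert!(s.start_build(7).is_ok()); // idle replica, outside the configuration
    s.add_to_configuration(7);
    assert!(s.start_build(7).is_err()); // cannot build a configured replica
}
```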

fn remove_replica(&self, replicaid: i64) -> Result<()>

Notifies the primary that an idle replica built by a build_replica() API call has gone down; the replicator should not send more operations to that replica and should release all resources held for it. Remarks: to remove replicas already in the partition, update_catch_up_replica_set_configuration is called instead with a ReplicaSetConfig that does not contain the replica to be removed. SF does not call remove_replica on a replica whose build_replica is still running.
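The "stop sending operations and release resources" behavior can be sketched with a hypothetical registry of idle replicas; the map value stands in for per-replica resources (here, queued operation sequence numbers), and all names are illustrative rather than mssf-core API:

```rust
use std::collections::HashMap;

// Hypothetical registry of idle replicas built via build_replica().
#[derive(Default)]
struct IdleReplicaRegistry {
    idle: HashMap<i64, Vec<u64>>, // replica id -> queued operation sequence numbers
}

impl IdleReplicaRegistry {
    /// Start tracking an idle replica (e.g. when its build begins).
    fn track(&mut self, replica_id: i64) {
        self.idle.insert(replica_id, Vec::new());
    }

    /// Returns false when the replica is no longer tracked, modeling
    /// "should not send more operations to that replica".
    fn queue_op(&mut self, replica_id: i64, sequence_number: u64) -> bool {
        match self.idle.get_mut(&replica_id) {
            Some(ops) => {
                ops.push(sequence_number);
                true
            }
            None => false,
        }
    }

    /// Model of remove_replica: drop everything held for the replica.
    fn remove_replica(&mut self, replica_id: i64) {
        self.idle.remove(&replica_id); // resources are released here
    }
}

fn main() {
    let mut reg = IdleReplicaRegistry::default();
    reg.track(42);
    assert!(reg.queue_op(42, 1));
    reg.remove_replica(42);
    assert!(!reg.queue_op(42, 2)); // no more operations after removal
}
```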

Dyn Compatibility

This trait is not dyn compatible.

In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.

Implementors

impl<TraitVariantBlanketType: PrimaryReplicator> LocalPrimaryReplicator for TraitVariantBlanketType