pub struct BackgroundProcessor { /* private fields */ }
Available on crate feature std only.
BackgroundProcessor takes care of tasks that (1) need to happen periodically to keep
Rust-Lightning running properly, and (2) either can or should be run in the background. Its
responsibilities are:
- Processing Events with a user-provided EventHandler.
- Monitoring whether the ChannelManager needs to be re-persisted to disk, and if so, writing it to disk/backups by invoking the callback given to it at startup. ChannelManager persistence should be done in the background.
- Calling ChannelManager::timer_tick_occurred, ChainMonitor::rebroadcast_pending_claims and PeerManager::timer_tick_occurred at the appropriate intervals.
- Calling NetworkGraph::remove_stale_channels_and_tracking (if a GossipSync with a NetworkGraph is provided to BackgroundProcessor::start).
It will also call PeerManager::process_events periodically, though this shouldn’t be relied
upon, as doing so may result in high latency.
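The periodic-tick scheduling described above can be sketched as a stop-flag thread loop. This is an illustrative pattern only, not LDK's actual implementation: TickProcessor, its interval values, and the callback are all hypothetical stand-ins.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread::{self, JoinHandle};
use std::time::{Duration, Instant};

/// Hypothetical stand-in for BackgroundProcessor: a thread that fires a
/// periodic callback until told to stop. Interval values are illustrative,
/// not LDK's actual timings.
pub struct TickProcessor {
    stop_flag: Arc<AtomicBool>,
    handle: Option<JoinHandle<()>>,
}

impl TickProcessor {
    pub fn start(mut on_timer_tick: impl FnMut() + Send + 'static) -> Self {
        let stop_flag = Arc::new(AtomicBool::new(false));
        let thread_flag = Arc::clone(&stop_flag);
        let handle = thread::spawn(move || {
            let tick_interval = Duration::from_millis(10); // illustrative
            let mut last_tick = Instant::now();
            while !thread_flag.load(Ordering::Acquire) {
                // Sleep briefly so a `stop` request is observed promptly.
                thread::sleep(Duration::from_millis(1));
                if last_tick.elapsed() >= tick_interval {
                    // In the real processor this is where e.g.
                    // ChannelManager::timer_tick_occurred would be called.
                    on_timer_tick();
                    last_tick = Instant::now();
                }
            }
        });
        TickProcessor { stop_flag, handle: Some(handle) }
    }

    /// Signal the thread to exit and wait for it to finish.
    pub fn stop(mut self) {
        self.stop_flag.store(true, Ordering::Release);
        if let Some(handle) = self.handle.take() {
            let _ = handle.join();
        }
    }
}
```

The real processor runs several such timers at different intervals in the same loop; the single-callback version here only shows the shape of the mechanism.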
Note
If ChannelManager persistence fails and the persisted manager becomes out-of-date, then
there is a risk of channels force-closing on startup when the manager realizes it’s outdated.
However, as long as ChannelMonitor backups are sound, no funds besides those used for
unilateral chain closure fees are at risk.
Implementations
impl BackgroundProcessor
pub fn start<'a, UL: 'static + Deref + Send + Sync, CF: 'static + Deref + Send + Sync, CW: 'static + Deref + Send + Sync, T: 'static + Deref + Send + Sync, ES: 'static + Deref + Send + Sync, NS: 'static + Deref + Send + Sync, SP: 'static + Deref + Send + Sync, F: 'static + Deref + Send + Sync, R: 'static + Deref + Send + Sync, G: 'static + Deref<Target = NetworkGraph<L>> + Send + Sync, L: 'static + Deref + Send + Sync, P: 'static + Deref + Send + Sync, EH: 'static + EventHandler + Send, PS: 'static + Deref + Send, M: 'static + Deref<Target = ChainMonitor<<SP::Target as SignerProvider>::Signer, CF, T, F, L, P>> + Send + Sync, CM: 'static + Deref<Target = ChannelManager<CW, T, ES, NS, SP, F, R, L>> + Send + Sync, PGS: 'static + Deref<Target = P2PGossipSync<G, UL, L>> + Send + Sync, RGS: 'static + Deref<Target = RapidGossipSync<G, L>> + Send, APM: APeerManager + Send + Sync, PM: 'static + Deref<Target = APM> + Send + Sync, S: 'static + Deref<Target = SC> + Send + Sync, SC: for<'b> WriteableScore<'b>>(
persister: PS,
event_handler: EH,
chain_monitor: M,
channel_manager: CM,
gossip_sync: GossipSync<PGS, RGS, G, UL, L>,
peer_manager: PM,
logger: L,
scorer: Option<S>
) -> Self
where
UL::Target: 'static + UtxoLookup,
CF::Target: 'static + Filter,
CW::Target: 'static + Watch<<SP::Target as SignerProvider>::Signer>,
T::Target: 'static + BroadcasterInterface,
ES::Target: 'static + EntropySource,
NS::Target: 'static + NodeSigner,
SP::Target: 'static + SignerProvider,
F::Target: 'static + FeeEstimator,
R::Target: 'static + Router,
L::Target: 'static + Logger,
P::Target: 'static + Persist<<SP::Target as SignerProvider>::Signer>,
PS::Target: 'static + Persister<'a, CW, T, ES, NS, SP, F, R, L, SC>,
Start a background thread that takes care of responsibilities enumerated in the top-level documentation.
The thread runs indefinitely unless the object is dropped, stop is called, or
Persister::persist_manager returns an error. In case of an error, the error is retrieved by calling
either join or stop.
Data Persistence
Persister::persist_manager is responsible for writing out the ChannelManager to disk, and/or
uploading to one or more backup services. See ChannelManager::write for writing out a
ChannelManager. See the lightning-persister crate for LDK’s
provided implementation.
Persister::persist_graph is responsible for writing out the NetworkGraph to disk, if
GossipSync is supplied. See NetworkGraph::write for writing out a NetworkGraph.
See the lightning-persister crate for LDK’s provided implementation.
Typically, users should either implement Persister::persist_manager to never return an
error or call join and handle any error that may arise. For the latter case,
BackgroundProcessor must be restarted by calling start again after handling the error.
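The "call join, handle any error, then restart" guidance can be sketched as follows. SketchProcessor and its persist_manager closure are hypothetical stand-ins for BackgroundProcessor and Persister::persist_manager, not LDK's types.

```rust
use std::io::{Error, ErrorKind};
use std::thread::{self, JoinHandle};

/// Hypothetical stand-in for BackgroundProcessor whose thread exits with
/// an error as soon as a (mock) persistence callback fails.
pub struct SketchProcessor {
    handle: JoinHandle<Result<(), Error>>,
}

impl SketchProcessor {
    pub fn start(
        mut persist_manager: impl FnMut() -> Result<(), Error> + Send + 'static,
    ) -> Self {
        let handle = thread::spawn(move || -> Result<(), Error> {
            // A real processor loops indefinitely, interleaving event
            // handling and timer ticks; three iterations suffice here.
            for _ in 0..3 {
                persist_manager()?;
            }
            Ok(())
        });
        SketchProcessor { handle }
    }

    /// Like BackgroundProcessor::join: blocks until the background thread
    /// exits and surfaces any persistence error so the caller can handle it.
    pub fn join(self) -> Result<(), Error> {
        match self.handle.join() {
            Ok(result) => result,
            Err(_) => Err(Error::new(ErrorKind::Other, "background thread panicked")),
        }
    }
}
```

A caller following the restart pattern would loop: on `Err(e)` from `join`, handle the error and call `start` again; on `Ok(())`, shut down.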
Event Handling
event_handler is responsible for handling events that users should be notified of (e.g.,
payment failed). BackgroundProcessor may decorate the given EventHandler with common
functionality implemented by other handlers.
P2PGossipSync, if given, will update the NetworkGraph based on payment failures.
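The decoration mentioned above can be illustrated with a hypothetical wrapper handler. The trait, the string events, and the failure counter below are all illustrative stand-ins, not LDK's EventHandler or its actual decoration logic.

```rust
/// Hypothetical, simplified event-handler trait standing in for LDK's
/// EventHandler; events are plain strings here for illustration.
trait SketchEventHandler {
    fn handle_event(&mut self, event: &str);
}

/// The user-provided handler, which just records what it sees.
struct UserHandler {
    seen: Vec<String>,
}

impl SketchEventHandler for UserHandler {
    fn handle_event(&mut self, event: &str) {
        self.seen.push(event.to_string());
    }
}

/// Decorator that runs common bookkeeping (here, counting payment
/// failures, standing in for e.g. gossip-sync graph updates) before
/// delegating every event to the wrapped user handler.
struct DecoratedHandler<H: SketchEventHandler> {
    inner: H,
    failures_recorded: usize,
}

impl<H: SketchEventHandler> SketchEventHandler for DecoratedHandler<H> {
    fn handle_event(&mut self, event: &str) {
        if event == "payment_failed" {
            self.failures_recorded += 1; // common functionality first...
        }
        self.inner.handle_event(event); // ...then delegate to the user
    }
}
```

The point of the decorator shape is that the user's handler still sees every event; the processor-supplied functionality runs transparently around it.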
Rapid Gossip Sync
If rapid gossip sync is meant to run at startup, pass RapidGossipSync via gossip_sync
to indicate that the BackgroundProcessor should not prune the NetworkGraph instance
until the RapidGossipSync instance completes its first sync.
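The prune gating described above amounts to a completion flag that the background loop consults before pruning. The type and method names below are hypothetical illustrations of that idea, not RapidGossipSync's actual API.

```rust
/// Hypothetical stand-in for a rapid-gossip-sync handle: the background
/// loop checks `should_prune` before pruning the NetworkGraph, so no
/// pruning happens until the first rapid sync has completed.
struct SketchRapidGossipSync {
    initial_sync_complete: std::sync::atomic::AtomicBool,
}

impl SketchRapidGossipSync {
    fn new() -> Self {
        SketchRapidGossipSync {
            initial_sync_complete: std::sync::atomic::AtomicBool::new(false),
        }
    }

    /// Called once the first rapid gossip sync finishes applying updates.
    fn mark_initial_sync_complete(&self) {
        self.initial_sync_complete
            .store(true, std::sync::atomic::Ordering::Release);
    }

    /// Consulted by the background loop before pruning stale channels:
    /// pruning a graph that has not yet received its initial snapshot
    /// would discard channels that are merely not-yet-synced, not stale.
    fn should_prune(&self) -> bool {
        self.initial_sync_complete
            .load(std::sync::atomic::Ordering::Acquire)
    }
}
```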