Struct lightning::routing::network_graph::NetGraphMsgHandler[src]

pub struct NetGraphMsgHandler<C: Deref, L: Deref> where
    C::Target: Access,
    L::Target: Logger
{
    pub network_graph: RwLock<NetworkGraph>,
    // some fields omitted
}

Receives and validates network updates from peers, storing authentic and relevant data as a network graph. This network graph is then used for routing payments. It also provides an interface to help with initial routing sync by serving historical announcements.

Fields

network_graph: RwLock<NetworkGraph>

Representation of the payment channel network

Implementations

impl<C: Deref, L: Deref> NetGraphMsgHandler<C, L> where
    C::Target: Access,
    L::Target: Logger
[src]

pub fn new(genesis_hash: BlockHash, chain_access: Option<C>, logger: L) -> Self[src]

Creates a new tracker of the actual state of the network of channels and nodes, assuming a fresh network graph. The chain access provider, if given, is used to verify that announced channels exist on-chain, that channel data is correct, and that announcements are signed with the channel owners' keys.

pub fn from_net_graph(
    chain_access: Option<C>,
    logger: L,
    network_graph: NetworkGraph
) -> Self
[src]

Creates a new tracker of the actual state of the network of channels and nodes, assuming an existing Network Graph.

pub fn add_chain_access(&mut self, chain_access: Option<C>)[src]

Adds a provider used to check new announcements. Does not affect existing announcements unless they are updated. Adding, updating, or removing the provider replaces the current one.
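The replacement semantics can be sketched with a simplified stand-in (the real handler stores `Option<C>` where `C::Target: Access`; the `Handler` type here is illustrative only):

```rust
// Stand-in illustrating add_chain_access's replacement semantics: the
// handler keeps at most one chain-access provider, and each call
// overwrites whatever was there before.
pub struct Handler<C> {
    pub chain_access: Option<C>,
}

impl<C> Handler<C> {
    pub fn add_chain_access(&mut self, chain_access: Option<C>) {
        // Passing Some(_) installs or replaces the provider; passing
        // None removes it. Announcements already accepted into the
        // graph are unaffected either way.
        self.chain_access = chain_access;
    }
}
```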

pub fn read_locked_graph<'a>(&'a self) -> LockedNetworkGraph<'a>[src]

Takes a read lock on the network_graph and returns it in the C-bindings newtype helper. This is likely only useful when called via the C bindings, as in Rust you can simply call self.network_graph.read().unwrap() yourself.
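A minimal sketch of that Rust-side pattern, using stand-in types rather than the real NetworkGraph and handler (both are simplified for illustration):

```rust
use std::sync::RwLock;

// Stand-in for the real NetworkGraph, which holds channels and nodes.
pub struct NetworkGraph {
    pub channel_count: usize,
}

// Stand-in for NetGraphMsgHandler: only the public field matters here.
pub struct Handler {
    pub network_graph: RwLock<NetworkGraph>,
}

impl Handler {
    // From Rust, no newtype helper is needed: take the read lock on
    // the public field directly. The guard releases the lock on drop.
    pub fn channel_count(&self) -> usize {
        self.network_graph.read().unwrap().channel_count
    }
}
```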

Trait Implementations

impl<C: Deref, L: Deref> MessageSendEventsProvider for NetGraphMsgHandler<C, L> where
    C::Target: Access,
    L::Target: Logger
[src]

impl<C: Deref, L: Deref> RoutingMessageHandler for NetGraphMsgHandler<C, L> where
    C::Target: Access,
    L::Target: Logger
[src]

fn sync_routing_table(&self, their_node_id: &PublicKey, init_msg: &Init)[src]

Initiates a stateless sync of routing gossip information with a peer using gossip_queries. The default strategy used by this implementation is to sync the full block range with several peers.

We should expect one or more reply_channel_range messages in response to our query_channel_range. Each reply will enqueue a query_scid message to request gossip messages for each channel. The sync is considered complete when the final reply_scids_end message is received, though we are not tracking this directly.
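The message flow above can be sketched with simplified stand-in types. The variant names mirror the BOLT 7 gossip_queries messages, not the lightning crate's actual structs, and the handler shown is a toy:

```rust
// Simplified model of the stateless sync reaction described above.
#[derive(Debug, PartialEq)]
pub enum GossipMsg {
    QueryChannelRange { first_block: u32, num_blocks: u32 },
    ReplyChannelRange { scids: Vec<u64> },
    QueryShortChannelIds { scids: Vec<u64> },
    ReplyShortChannelIdsEnd { complete: bool },
}

// Each reply_channel_range immediately triggers an SCID query for the
// channels it lists; the final reply_short_channel_ids_end marks the
// sync complete, but the stateless handler does not track that itself.
pub fn handle(msg: &GossipMsg) -> Option<GossipMsg> {
    match msg {
        GossipMsg::ReplyChannelRange { scids } => {
            Some(GossipMsg::QueryShortChannelIds { scids: scids.clone() })
        }
        _ => None,
    }
}
```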

fn handle_reply_channel_range(
    &self,
    their_node_id: &PublicKey,
    msg: ReplyChannelRange
) -> Result<(), LightningError>
[src]

Statelessly processes a reply to a channel range query by immediately sending an SCID query containing the SCIDs in the reply. To keep this handler stateless, it does not validate the sequencing of replies for multi-reply ranges, does not validate whether the replies cover the queried range, and does not filter SCIDs to only those in the original query range. It also does not validate that the chain_hash matches the chain_hash of the NetworkGraph; any chan_ann message that does not match our chain_hash will be rejected when the announcement is processed.

fn handle_reply_short_channel_ids_end(
    &self,
    their_node_id: &PublicKey,
    msg: ReplyShortChannelIdsEnd
) -> Result<(), LightningError>
[src]

When an SCID query is initiated, the remote peer will begin streaming gossip messages. In the event of a failure, we may have received some channel information. Before retrying with another peer, the caller should update its set of SCIDs that still need to be queried.
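A caller-side sketch of that bookkeeping, using a hypothetical remaining_scids helper (not part of the crate): drop the SCIDs whose gossip already arrived before re-querying the rest.

```rust
use std::collections::HashSet;

// Given the set of SCIDs we still want gossip for and the set whose
// announcements we already received before the failure, compute what
// should be queried when retrying with another peer.
pub fn remaining_scids(pending: &HashSet<u64>, received: &HashSet<u64>) -> HashSet<u64> {
    pending.difference(received).copied().collect()
}
```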

fn handle_query_channel_range(
    &self,
    their_node_id: &PublicKey,
    msg: QueryChannelRange
) -> Result<(), LightningError>
[src]

Processes a query from a peer by finding announced/public channels whose funding UTXOs are in the specified block range. Due to message size limits, large range queries may result in several reply messages. This implementation enqueues all reply messages into pending events. Each message will allocate just under 65KiB; a full sync of the public routing table with 128k channels will generate 16 messages and allocate ~1MB. This logic could be changed to reduce allocation if/when a full sync of the routing table impacts memory-constrained systems.
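As a rough sanity check of those numbers: SCIDs are 8 bytes each, so a reply just under 65KiB carries on the order of 8,000 SCIDs. The exact batch size below is an assumption for illustration, not the crate's constant:

```rust
// Assumed SCIDs per reply_channel_range message: ~65KiB / 8 bytes.
const SCIDS_PER_REPLY: usize = 8_000;

// Number of reply messages needed to cover a given channel count.
// Ceiling division: a partial final batch still needs its own message.
pub fn replies_needed(channel_count: usize) -> usize {
    (channel_count + SCIDS_PER_REPLY - 1) / SCIDS_PER_REPLY
}
```

With these assumptions, 128,000 channels yield 16 replies, and 16 × ~65KiB is roughly the ~1MB figure quoted above.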

Auto Trait Implementations

impl<C, L> RefUnwindSafe for NetGraphMsgHandler<C, L> where
    C: RefUnwindSafe,
    L: RefUnwindSafe

impl<C, L> Send for NetGraphMsgHandler<C, L> where
    C: Send,
    L: Send

impl<C, L> Sync for NetGraphMsgHandler<C, L> where
    C: Sync,
    L: Sync

impl<C, L> Unpin for NetGraphMsgHandler<C, L> where
    C: Unpin,
    L: Unpin

impl<C, L> UnwindSafe for NetGraphMsgHandler<C, L> where
    C: UnwindSafe,
    L: UnwindSafe

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.