pub struct PeerManager<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CMH: Deref, NS: Deref>{ /* private fields */ }

A PeerManager manages a set of peers, described by their SocketDescriptor, and marshals socket events into messages which it passes on to its MessageHandler.

Locks are taken internally, so you must never assume that reentrancy from a SocketDescriptor call back into PeerManager methods will not deadlock.

Calls to read_event will decode relevant messages and pass them to the ChannelMessageHandler, likely doing message processing in-line. Thus, the primary form of parallelism in Rust-Lightning is in calls to read_event. Note, however, that calls to any PeerManager functions related to the same connection must occur only in serial, making new calls only after previous ones have returned.
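For example, a minimal sketch of this model, assuming one thread per connection (which is effectively what lightning-net-tokio provides); `PM` is assumed to be a type alias for a concrete PeerManager, and `MyDescriptor` and `read_from_socket` are hypothetical:

```rust
use std::sync::Arc;

// Sketch only: one thread per connection keeps PeerManager calls for that
// connection serial, while different connections run read_event in parallel.
fn spawn_read_loop(peer_manager: Arc<PM>, mut descriptor: MyDescriptor) {
    std::thread::spawn(move || loop {
        // Hypothetical blocking read, returning ~4KiB per call.
        let data: Vec<u8> = read_from_socket(&descriptor);
        if peer_manager.read_event(&mut descriptor, &data).is_err() {
            break; // an Err means the connection should be closed
        }
        // Responses are only written out once process_events runs.
        peer_manager.process_events();
    });
}
```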

Rather than using a plain PeerManager, it is preferable to use either a SimpleArcPeerManager or a SimpleRefPeerManager, for conciseness. See their documentation for more details, but essentially you should default to using a SimpleRefPeerManager, and use a SimpleArcPeerManager when you require a PeerManager with a static lifetime, such as when you’re using lightning-net-tokio.

Implementations

impl<Descriptor: SocketDescriptor, CM: Deref, OM: Deref, L: Deref, NS: Deref> PeerManager<Descriptor, CM, IgnoringMessageHandler, OM, L, IgnoringMessageHandler, NS>

pub fn new_channel_only( channel_message_handler: CM, onion_message_handler: OM, current_time: u32, ephemeral_random_data: &[u8; 32], logger: L, node_signer: NS ) -> Self

Constructs a new PeerManager with the given ChannelMessageHandler and OnionMessageHandler. No routing message handler is used and network graph messages are ignored.

ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.

current_time is used as an always-increasing counter that survives across restarts and is incremented irregularly internally. In general it is best to simply use the current UNIX timestamp, however if it is not available a persistent counter that increases once per minute should suffice.

This is not exported to bindings users as we can’t export a PeerManager with a dummy route handler.
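For illustration, a hedged sketch of calling this constructor; `channel_manager`, `onion_messenger`, `logger`, `keys_manager`, and `ephemeral_bytes` are assumed to exist elsewhere and to satisfy the respective trait bounds:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Sketch only; all handler values are assumptions, not part of this API.
let current_time = SystemTime::now()
    .duration_since(UNIX_EPOCH)
    .expect("system time before UNIX epoch")
    .as_secs() as u32;

let peer_manager = PeerManager::new_channel_only(
    channel_manager.clone(), // CM: Deref to a ChannelMessageHandler
    onion_messenger.clone(), // OM: Deref to an OnionMessageHandler
    current_time,
    &ephemeral_bytes,        // [u8; 32] from a cryptographically secure RNG
    logger.clone(),
    keys_manager.clone(),    // NS: Deref to a NodeSigner
);
```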

impl<Descriptor: SocketDescriptor, RM: Deref, L: Deref, NS: Deref> PeerManager<Descriptor, ErroringMessageHandler, RM, IgnoringMessageHandler, L, IgnoringMessageHandler, NS>

pub fn new_routing_only( routing_message_handler: RM, current_time: u32, ephemeral_random_data: &[u8; 32], logger: L, node_signer: NS ) -> Self

Constructs a new PeerManager with the given RoutingMessageHandler. No channel message handler or onion message handler is used and onion and channel messages will be ignored (or generate error messages). Note that some other lightning implementations time out connections after some time if no channel is built with the peer.

current_time is used as an always-increasing counter that survives across restarts and is incremented irregularly internally. In general it is best to simply use the current UNIX timestamp, however if it is not available a persistent counter that increases once per minute should suffice.

ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.

This is not exported to bindings users as we can’t export a PeerManager with a dummy channel handler.
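A corresponding sketch, e.g. for a node that only crawls gossip; `gossip_sync`, `logger`, and `keys_manager` are assumed, and `current_time` and `ephemeral_bytes` are derived as in the new_channel_only example above:

```rust
// Sketch only: a gossip-only PeerManager with no channel or onion handling.
let peer_manager = PeerManager::new_routing_only(
    gossip_sync.clone(), // RM: Deref to a RoutingMessageHandler
    current_time,
    &ephemeral_bytes,
    logger.clone(),
    keys_manager.clone(),
);
```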

impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CMH: Deref, NS: Deref> PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>

pub fn new( message_handler: MessageHandler<CM, RM, OM, CMH>, current_time: u32, ephemeral_random_data: &[u8; 32], logger: L, node_signer: NS ) -> Self

Constructs a new PeerManager with the given message handlers.

ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.

current_time is used as an always-increasing counter that survives across restarts and is incremented irregularly internally. In general it is best to simply use the current UNIX timestamp, however if it is not available a persistent counter that increases once per minute should suffice.
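A sketch of wiring all four handlers explicitly; the handler values are assumptions, and IgnoringMessageHandler (which Derefs to itself) can stand in where no custom messages are needed:

```rust
// Sketch only; the field names follow this crate's MessageHandler struct.
let message_handler = MessageHandler {
    chan_handler: channel_manager.clone(),
    route_handler: gossip_sync.clone(),
    onion_message_handler: onion_messenger.clone(),
    custom_message_handler: IgnoringMessageHandler {},
};
let peer_manager = PeerManager::new(
    message_handler,
    current_time,     // see new_channel_only above for a derivation
    &ephemeral_bytes, // [u8; 32] from a cryptographically secure RNG
    logger.clone(),
    keys_manager.clone(),
);
```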

pub fn get_peer_node_ids(&self) -> Vec<(PublicKey, Option<SocketAddress>)>

Get a list of tuples mapping from node id to network addresses for peers which have completed the initial handshake.

For outbound connections, the PublicKey will be the same as the their_node_id parameter passed in to Self::new_outbound_connection, however entries will only appear once the initial handshake has completed and we are sure the remote peer has the private key for the given PublicKey.

The returned Options will only be Some if an address had been previously given via Self::new_outbound_connection or Self::new_inbound_connection.
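For example, a small sketch listing handshaked peers:

```rust
// Sketch: log each fully-connected peer and any address recorded for it.
for (node_id, addr) in peer_manager.get_peer_node_ids() {
    match addr {
        Some(addr) => println!("peer {} via {:?}", node_id, addr),
        None => println!("peer {} (no address recorded)", node_id),
    }
}
```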

pub fn new_outbound_connection( &self, their_node_id: PublicKey, descriptor: Descriptor, remote_network_address: Option<SocketAddress> ) -> Result<Vec<u8>, PeerHandleError>

Indicates a new outbound connection has been established to a node with the given node_id and an optional remote network address.

The remote network address adds the option to report a remote IP address back to a connecting peer using the init message. The user should pass the remote network address of the host they are connected to.

If an Err is returned here you must disconnect the connection immediately.

Returns a small number of bytes to send to the remote node (currently always 50).

Panics if descriptor is duplicative with some other descriptor which has not yet been socket_disconnected.
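A hedged sketch of the outbound flow; `descriptor` is a hypothetical mutable SocketDescriptor wrapping a just-connected socket, and `their_node_id` and `remote_addr` are assumed:

```rust
// Sketch only. send_data and disconnect_socket are SocketDescriptor methods.
match peer_manager.new_outbound_connection(their_node_id, descriptor.clone(), remote_addr) {
    Ok(initial_bytes) => {
        // Queue the returned bytes (currently 50) to start the handshake.
        descriptor.send_data(&initial_bytes, true);
    }
    Err(_) => {
        // Per the docs above, disconnect immediately.
        descriptor.disconnect_socket();
    }
}
```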

pub fn new_inbound_connection( &self, descriptor: Descriptor, remote_network_address: Option<SocketAddress> ) -> Result<(), PeerHandleError>

Indicates a new inbound connection has been established to a node with an optional remote network address.

The remote network address adds the option to report a remote IP address back to a connecting peer using the init message. The user should pass the remote network address of the host they are connected to.

May refuse the connection by returning an Err, but will never write bytes to the remote end (outbound connector always speaks first). If an Err is returned here you must disconnect the connection immediately.

Panics if descriptor is duplicative with some other descriptor which has not yet been socket_disconnected.
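And the inbound counterpart, again with a hypothetical `descriptor`:

```rust
// Sketch only: register an accepted inbound socket. No bytes are written
// here, since the outbound side speaks first.
if peer_manager.new_inbound_connection(descriptor.clone(), remote_addr).is_err() {
    // Connection refused; close the socket immediately.
    descriptor.disconnect_socket();
}
```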

pub fn write_buffer_space_avail( &self, descriptor: &mut Descriptor ) -> Result<(), PeerHandleError>

Indicates that there is room to write data to the given socket descriptor.

May return an Err to indicate that the connection should be closed.

May call send_data on the descriptor passed in (or an equal descriptor) before returning. Thus, be very careful with reentrancy issues! The invariants around calling write_buffer_space_avail in case a write did not fully complete must still hold - be ready to call write_buffer_space_avail again if a write call generated here isn’t sufficient!
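A sketch of the typical call site, driven by the I/O layer's writable notification; `descriptor` is assumed mutable:

```rust
// Sketch only: the socket just became writable again after a partial write.
if peer_manager.write_buffer_space_avail(&mut descriptor).is_err() {
    // An Err means the connection should be closed.
    descriptor.disconnect_socket();
}
```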

pub fn read_event( &self, peer_descriptor: &mut Descriptor, data: &[u8] ) -> Result<bool, PeerHandleError>

Indicates that data was read from the given socket descriptor.

May return an Err to indicate that the connection should be closed.

Will not call back into send_data on any descriptors, to avoid reentrancy complexity. As a result, you should call process_events after any read_event to generate the send_data calls needed to handle responses.

If Ok(true) is returned, further read_events should not be triggered until a send_data call on this descriptor has resume_read set (preventing DoS issues in the send buffer).

In order to avoid processing too many messages at once per peer, data should be on the order of 4KiB.
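A sketch handling both return paths; `buf` holds the bytes just read from the socket, and `read_paused` is a hypothetical per-connection flag honored by the read loop:

```rust
// Sketch only.
match peer_manager.read_event(&mut descriptor, &buf) {
    Ok(pause_read) => {
        if pause_read {
            // Back-pressure: stop reading until a send_data call on this
            // descriptor has resume_read set.
            read_paused = true;
        }
        // read_event never writes; flush responses via process_events.
        peer_manager.process_events();
    }
    Err(_) => {
        // The connection should be closed.
        descriptor.disconnect_socket();
    }
}
```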

pub fn process_events(&self)

Checks for any events generated by our handlers and processes them. Includes sending most response messages as well as messages generated by calls to handler functions directly (e.g. functions like ChannelManager::process_pending_htlc_forwards or send_payment).

May call send_data on SocketDescriptors. Thus, be very careful with reentrancy issues!

You don’t have to call this function explicitly if you are using lightning-net-tokio or one of the other clients provided in our language bindings.

Note that if there are any other calls to this function waiting on lock(s) this may return without doing any work. All available events that need handling will be handled before the other calls return.

pub fn socket_disconnected(&self, descriptor: &Descriptor)

Indicates that the given socket descriptor’s connection is now closed.

pub fn disconnect_by_node_id(&self, node_id: PublicKey)

Disconnect a peer given its node id.

If a peer is connected, this will call disconnect_socket on the descriptor for the peer. Thus, be very careful about reentrancy issues.

pub fn disconnect_all_peers(&self)

Disconnects all currently-connected peers. This is useful on platforms where there may be an indication that TCP sockets have stalled even if we weren’t around to time them out using regular ping/pongs.

pub fn timer_tick_occurred(&self)

Send pings to each peer and disconnect those which did not respond to the last round of pings.

This may be called on any timescale you want, however, roughly once every ten seconds is preferred. The call rate determines both how often we send a ping to our peers and how much time they have to respond before we disconnect them.

May call send_data on all SocketDescriptors. Thus, be very careful with reentrancy issues!
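For instance, a dedicated timer thread, assuming an Arc’d PeerManager (such as a SimpleArcPeerManager) that can move into the thread:

```rust
use std::sync::Arc;
use std::time::Duration;

// Sketch only: a ~10s interval sets both the ping frequency and the
// deadline peers have to respond before being disconnected.
let pm = Arc::clone(&peer_manager);
std::thread::spawn(move || loop {
    std::thread::sleep(Duration::from_secs(10));
    pm.timer_tick_occurred();
});
```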

pub fn broadcast_node_announcement( &self, rgb: [u8; 3], alias: [u8; 32], addresses: Vec<SocketAddress> )

Generates a signed node_announcement from the given arguments, sending it to all connected peers. Note that peers will likely ignore this message unless we have at least one public channel which has at least six confirmations on-chain.

rgb is a node “color” and alias is a printable human-readable string to describe this node to humans. They carry no in-protocol meaning.

addresses represent the set (possibly empty) of socket addresses on which this node accepts incoming connections. These will be included in the node_announcement, publicly tying these addresses together and to this node. If you wish to preserve user privacy, addresses should likely contain only Tor Onion addresses.

Panics if addresses is absurdly large (more than 100).
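For example, a sketch announcing a node with an illustrative color and alias and no public addresses:

```rust
// Sketch only; the color and alias bytes are arbitrary illustrations.
let mut alias = [0u8; 32];
alias[..7].copy_from_slice(b"my-node"); // zero-padded, human-readable
peer_manager.broadcast_node_announcement(
    [0x00, 0x6b, 0xff], // rgb node "color"; no in-protocol meaning
    alias,
    Vec::new(),         // announce no listening addresses
);
```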

Trait Implementations

impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CMH: Deref, NS: Deref> APeerManager for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>

type Descriptor = Descriptor
type CMT = <CM as Deref>::Target
type CM = CM
type RMT = <RM as Deref>::Target
type RM = RM
type OMT = <OM as Deref>::Target
type OM = OM
type LT = <L as Deref>::Target
type L = L
type CMHT = <CMH as Deref>::Target
type CMH = CMH
type NST = <NS as Deref>::Target
type NS = NS

fn as_ref(&self) -> &PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>

Gets a reference to the underlying PeerManager.

fn onion_message_handler(&self) -> &Self::OMT

Returns the peer manager’s OnionMessageHandler.

Auto Trait Implementations

impl<Descriptor, CM, RM, OM, L, CMH, NS> !Freeze for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>

impl<Descriptor, CM, RM, OM, L, CMH, NS> RefUnwindSafe for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>

impl<Descriptor, CM, RM, OM, L, CMH, NS> Send for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>
where
    <CMH as Deref>::Target: CustomMessageReader,
    <OM as Deref>::Target: EventsProvider,
    <RM as Deref>::Target: MessageSendEventsProvider,
    <CM as Deref>::Target: MessageSendEventsProvider,
    Descriptor: Clone + Hash + Eq + PartialEq + Send,
    NS: Send, L: Send, CM: Send, RM: Send, OM: Send, CMH: Send,

impl<Descriptor, CM, RM, OM, L, CMH, NS> Sync for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>
where
    <CMH as Deref>::Target: CustomMessageReader,
    <OM as Deref>::Target: EventsProvider,
    <RM as Deref>::Target: MessageSendEventsProvider,
    <CM as Deref>::Target: MessageSendEventsProvider,
    Descriptor: Clone + Hash + Eq + PartialEq + Send + Sync,
    NS: Sync, L: Sync, CM: Sync, RM: Sync, OM: Sync, CMH: Sync,

impl<Descriptor, CM, RM, OM, L, CMH, NS> Unpin for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>
where
    <CMH as Deref>::Target: CustomMessageReader,
    <OM as Deref>::Target: EventsProvider,
    <RM as Deref>::Target: MessageSendEventsProvider,
    <CM as Deref>::Target: MessageSendEventsProvider,
    Descriptor: Clone + Hash + Eq + PartialEq + Unpin,
    NS: Unpin, L: Unpin, CM: Unpin, RM: Unpin, OM: Unpin, CMH: Unpin,

impl<Descriptor, CM, RM, OM, L, CMH, NS> UnwindSafe for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.