Struct lightning::ln::peer_handler::PeerManager
pub struct PeerManager<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, L: Deref, CMH: Deref> where
CM::Target: ChannelMessageHandler,
RM::Target: RoutingMessageHandler,
L::Target: Logger,
CMH::Target: CustomMessageHandler, { /* fields omitted */ }
A PeerManager manages a set of peers, described by their SocketDescriptor, and marshals socket events into messages which it passes on to its MessageHandler.
Locks are taken internally, so you must never assume that reentrancy from a SocketDescriptor call back into PeerManager methods will not deadlock.
Calls to read_event will decode relevant messages and pass them to the ChannelMessageHandler, likely doing message processing in-line. Thus, the primary form of parallelism in Rust-Lightning is in calls to read_event. Note, however, that calls to any PeerManager functions related to the same connection must occur only in serial, making new calls only after previous ones have returned.
Rather than using a plain PeerManager, it is preferable to use either a SimpleArcPeerManager or a SimpleRefPeerManager, for conciseness. See their documentation for more details, but essentially you should default to using a SimpleRefPeerManager, and use a SimpleArcPeerManager when you require a PeerManager with a static lifetime, such as when you're using lightning-net-tokio.
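The per-connection serialization requirement above can be sketched with a per-connection lock. This is a minimal, hypothetical stand-in (`Connection` and `on_data` are illustrative names, not part of the lightning API), showing one way a driver might guarantee that calls for the same connection never overlap while different connections proceed in parallel:

```rust
use std::sync::Mutex;

// Hypothetical stand-in for a per-connection driver. All PeerManager calls
// for one connection must occur in serial; a per-connection Mutex is one
// simple way a socket-handling layer can enforce that.
struct Connection {
    // Guards the ordering of read_event / write_buffer_space_avail calls
    // for this one socket; other connections have their own lock.
    io_lock: Mutex<()>,
}

impl Connection {
    fn new() -> Self {
        Connection { io_lock: Mutex::new(()) }
    }

    // Stand-in for "forward incoming bytes to PeerManager::read_event":
    // taking the lock means concurrent callers for this connection run
    // strictly one after another.
    fn on_data(&self, data: &[u8]) -> usize {
        let _guard = self.io_lock.lock().unwrap();
        data.len() // placeholder for the real read_event(...) call
    }
}
```

Real socket drivers (such as lightning-net-tokio) implement this ordering for you.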
Implementations
impl<Descriptor: SocketDescriptor, CM: Deref, L: Deref> PeerManager<Descriptor, CM, IgnoringMessageHandler, L, IgnoringMessageHandler> where
CM::Target: ChannelMessageHandler,
L::Target: Logger,
Constructs a new PeerManager with the given ChannelMessageHandler. No routing message handler is used and network graph messages are ignored.
ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.
(C-not exported) as we can’t export a PeerManager with a dummy route handler
impl<Descriptor: SocketDescriptor, RM: Deref, L: Deref> PeerManager<Descriptor, ErroringMessageHandler, RM, L, IgnoringMessageHandler> where
RM::Target: RoutingMessageHandler,
L::Target: Logger,
Constructs a new PeerManager with the given RoutingMessageHandler. No channel message handler is used and messages related to channels will be ignored (or generate error messages). Note that some other lightning implementations time-out connections after some time if no channel is built with the peer.
ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.
(C-not exported) as we can’t export a PeerManager with a dummy channel handler
impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, L: Deref, CMH: Deref> PeerManager<Descriptor, CM, RM, L, CMH> where
CM::Target: ChannelMessageHandler,
RM::Target: RoutingMessageHandler,
L::Target: Logger,
CMH::Target: CustomMessageHandler,
Constructs a new PeerManager with the given message handlers and node_id secret key. ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.
Get the list of node ids for peers which have completed the initial handshake.
For outbound connections, this will be the same as the their_node_id parameter passed in to new_outbound_connection; however, entries will only appear once the initial handshake has completed and we are sure the remote peer has the private key for the given node_id.
pub fn new_outbound_connection(
&self,
their_node_id: PublicKey,
descriptor: Descriptor
) -> Result<Vec<u8>, PeerHandleError>
Indicates a new outbound connection has been established to a node with the given node_id. Note that if an Err is returned here you MUST NOT call socket_disconnected for the new descriptor but must disconnect the connection immediately.
Returns a small number of bytes to send to the remote node (currently always 50).
Panics if descriptor is duplicative with some other descriptor which has not yet been socket_disconnected().
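The outbound flow above can be sketched as follows. `PeerManagerStub` and `connect` are hypothetical stand-ins for illustration only; the real method also performs the Noise handshake and key derivation:

```rust
// Hypothetical stand-in for lightning's PeerManager, modeling only the
// contract described above: new_outbound_connection returns the initial
// bytes to write to the socket (currently always 50 bytes).
struct PeerManagerStub;

impl PeerManagerStub {
    fn new_outbound_connection(&self, _their_node_id: [u8; 33]) -> Result<Vec<u8>, ()> {
        Ok(vec![0u8; 50]) // placeholder for the real act-one handshake bytes
    }
}

// Sketch of a caller: on Err we must NOT call socket_disconnected for the
// new descriptor; we simply drop the socket. On Ok, the caller is
// responsible for writing the returned bytes to the remote node.
fn connect(pm: &PeerManagerStub, node_id: [u8; 33]) -> Result<Vec<u8>, ()> {
    let initial_bytes = pm.new_outbound_connection(node_id)?;
    Ok(initial_bytes)
}
```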
Indicates a new inbound connection has been established.
May refuse the connection by returning an Err, but will never write bytes to the remote end (outbound connector always speaks first). Note that if an Err is returned here you MUST NOT call socket_disconnected for the new descriptor but must disconnect the connection immediately.
Panics if descriptor is duplicative with some other descriptor which has not yet been socket_disconnected().
pub fn write_buffer_space_avail(
&self,
descriptor: &mut Descriptor
) -> Result<(), PeerHandleError>
Indicates that there is room to write data to the given socket descriptor.
May return an Err to indicate that the connection should be closed.
May call send_data on the descriptor passed in (or an equal descriptor) before returning. Thus, be very careful with reentrancy issues! The invariants around calling write_buffer_space_avail in case a write did not fully complete must still hold - be ready to call write_buffer_space_avail again if a write call generated here isn't sufficient!
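The partial-write invariant can be sketched with a mock socket. `Sock` and its methods are hypothetical stand-ins (the real send_data lives on your SocketDescriptor implementation), showing the shape of "retry until the queued bytes are fully flushed":

```rust
// Hypothetical mock of a socket with limited per-call write capacity.
struct Sock {
    capacity: usize,
    pending: Vec<u8>,
}

impl Sock {
    // Stand-in for send_data: writes as much as fits this call and queues
    // the remainder, mimicking a partial write on a full kernel buffer.
    fn send_data(&mut self, data: &[u8]) -> usize {
        let n = data.len().min(self.capacity);
        self.pending.extend_from_slice(&data[n..]);
        n
    }

    // Stand-in for reacting to write_buffer_space_avail: retry the queued
    // bytes; returns true once everything has been flushed. If it returns
    // false, be ready to call it again when the socket drains further.
    fn write_buffer_space_avail(&mut self) -> bool {
        let queued = std::mem::take(&mut self.pending);
        let n = self.send_data(&queued);
        n == queued.len()
    }
}
```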
pub fn read_event(
&self,
peer_descriptor: &mut Descriptor,
data: &[u8]
) -> Result<bool, PeerHandleError>
Indicates that data was read from the given socket descriptor.
May return an Err to indicate that the connection should be closed.
Will not call back into send_data on any descriptors to avoid reentrancy complexity. You should therefore call process_events after any read_event to generate the send_data calls that handle responses.
If Ok(true) is returned, further read_events should not be triggered until a send_data call on this descriptor has resume_read set (preventing DoS issues in the send buffer).
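The read-side contract above can be sketched as a small driving loop. `PeerIo` and `drive_read` are hypothetical stand-ins; the backpressure condition is a placeholder for the real "outbound buffer is full" signal:

```rust
// Hypothetical stand-in modeling the read_event contract: after each
// read_event, call process_events; if read_event returned Ok(true),
// pause further reads until the socket is resumed.
struct PeerIo {
    paused: bool,
}

impl PeerIo {
    // Stand-in for PeerManager::read_event. Returning Ok(true) signals
    // "stop reading for now" (here, a placeholder size threshold).
    fn read_event(&mut self, data: &[u8]) -> Result<bool, ()> {
        Ok(data.len() > 1024)
    }

    // Stand-in for PeerManager::process_events, which would generate the
    // send_data calls carrying any responses.
    fn process_events(&mut self) {}

    fn drive_read(&mut self, data: &[u8]) -> Result<(), ()> {
        if self.paused {
            return Ok(());
        }
        let pause = self.read_event(data)?;
        // read_event never calls send_data itself, so flush responses now:
        self.process_events();
        if pause {
            self.paused = true; // resume when send_data sets resume_read
        }
        Ok(())
    }
}
```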
Checks for any events generated by our handlers and processes them. Includes sending most response messages as well as messages generated by calls to handler functions directly (e.g. functions like ChannelManager::process_pending_htlc_forwards or send_payment).
May call send_data on SocketDescriptors. Thus, be very careful with reentrancy issues!
You don't have to call this function explicitly if you are using lightning-net-tokio or one of the other clients provided in our language bindings.
Indicates that the given socket descriptor’s connection is now closed.
Disconnect a peer given its node id.
Set no_connection_possible to true to prevent any further connection with this peer, force-closing any channels we have with it.
If a peer is connected, this will call disconnect_socket on the descriptor for the peer. Thus, be very careful about reentrancy issues.
Disconnects all currently-connected peers. This is useful on platforms where there may be an indication that TCP sockets have stalled even if we weren’t around to time them out using regular ping/pongs.
Send pings to each peer and disconnect those which did not respond to the last round of pings.
This may be called on any timescale you want, however, roughly once every five to ten seconds is preferred. The call rate determines both how often we send a ping to our peers and how much time they have to respond before we disconnect them.
May call send_data on all SocketDescriptors. Thus, be very careful with reentrancy issues!
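The ping cadence described above can be sketched as a simple timer loop. `run_timer` is a hypothetical driver (the closure stands in for PeerManager::timer_tick_occurred); note that the interval you pick doubles as the deadline peers have to answer the previous round of pings:

```rust
use std::time::Duration;

// Hypothetical timer driver: invokes `tick` (standing in for
// timer_tick_occurred) once per interval, `ticks` times. A real driver
// would sleep between iterations, e.g. std::thread::sleep(interval) or an
// async timer; here we only model the call cadence.
fn run_timer<F: FnMut()>(mut tick: F, interval: Duration, ticks: u32) -> Duration {
    for _ in 0..ticks {
        // Each tick sends pings and disconnects peers that did not respond
        // to the previous round.
        tick();
    }
    interval * ticks // total elapsed time this schedule would cover
}
```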