Struct lightning::ln::peer_handler::PeerManager
pub struct PeerManager<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, L: Deref, CMH: Deref> where
    CM::Target: ChannelMessageHandler,
    RM::Target: RoutingMessageHandler,
    L::Target: Logger,
    CMH::Target: CustomMessageHandler,
{ /* private fields */ }
A PeerManager manages a set of peers, described by their SocketDescriptor, and marshalls socket events into messages which it passes on to its MessageHandler.

Locks are taken internally, so you must never assume that reentrancy from a SocketDescriptor call back into PeerManager methods will not deadlock.
Calls to read_event will decode relevant messages and pass them to the ChannelMessageHandler, likely doing message processing in-line. Thus, the primary form of parallelism in Rust-Lightning is in calls to read_event. Note, however, that calls to any PeerManager functions related to the same connection must occur only in serial, making new calls only after previous ones have returned.
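The serial-call requirement can be sketched with a per-connection lock. MockPeerManager below is a hypothetical stand-in for the real PeerManager (which lives in the lightning crate); only the calling pattern is the point:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for PeerManager; the real read_event decodes
// lightning wire messages. Here we only model the calling contract.
struct MockPeerManager;
impl MockPeerManager {
    fn read_event(&self, _data: &[u8]) -> Result<bool, ()> { Ok(false) }
}

fn main() {
    let pm = Arc::new(MockPeerManager);
    // One lock per connection guarantees read_event calls for that
    // connection never overlap; different connections may still run
    // their read_events in parallel.
    let conn_lock = Arc::new(Mutex::new(()));
    let mut handles = Vec::new();
    for chunk in [b"abc".to_vec(), b"def".to_vec()] {
        let pm = Arc::clone(&pm);
        let lock = Arc::clone(&conn_lock);
        handles.push(thread::spawn(move || {
            let _guard = lock.lock().unwrap();
            pm.read_event(&chunk).unwrap();
        }));
    }
    for h in handles { h.join().unwrap(); }
    println!("all reads completed serially");
}
```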
Rather than using a plain PeerManager, it is preferable to use either a SimpleArcPeerManager or a SimpleRefPeerManager, for conciseness. See their documentation for more details, but essentially you should default to using a SimpleRefPeerManager, and use a SimpleArcPeerManager when you require a PeerManager with a static lifetime, such as when you’re using lightning-net-tokio.
Implementations
impl<Descriptor: SocketDescriptor, CM: Deref, L: Deref> PeerManager<Descriptor, CM, IgnoringMessageHandler, L, IgnoringMessageHandler> where
    CM::Target: ChannelMessageHandler,
    L::Target: Logger,
pub fn new_channel_only(
    channel_message_handler: CM,
    our_node_secret: SecretKey,
    ephemeral_random_data: &[u8; 32],
    logger: L
) -> Self
Constructs a new PeerManager with the given ChannelMessageHandler. No routing message handler is used and network graph messages are ignored.
ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.
(C-not exported) as we can’t export a PeerManager with a dummy route handler
impl<Descriptor: SocketDescriptor, RM: Deref, L: Deref> PeerManager<Descriptor, ErroringMessageHandler, RM, L, IgnoringMessageHandler> where
    RM::Target: RoutingMessageHandler,
    L::Target: Logger,
pub fn new_routing_only(
    routing_message_handler: RM,
    our_node_secret: SecretKey,
    ephemeral_random_data: &[u8; 32],
    logger: L
) -> Self
Constructs a new PeerManager with the given RoutingMessageHandler. No channel message handler is used and messages related to channels will be ignored (or generate error messages). Note that some other lightning implementations time-out connections after some time if no channel is built with the peer.
ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.
(C-not exported) as we can’t export a PeerManager with a dummy channel handler
impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, L: Deref, CMH: Deref> PeerManager<Descriptor, CM, RM, L, CMH> where
    CM::Target: ChannelMessageHandler,
    RM::Target: RoutingMessageHandler,
    L::Target: Logger,
    CMH::Target: CustomMessageHandler,
pub fn new(
    message_handler: MessageHandler<CM, RM>,
    our_node_secret: SecretKey,
    ephemeral_random_data: &[u8; 32],
    logger: L,
    custom_message_handler: CMH
) -> Self
Constructs a new PeerManager with the given message handlers and node_id secret key.
ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.
pub fn get_peer_node_ids(&self) -> Vec<PublicKey>
Get the list of node ids for peers which have completed the initial handshake.
For outbound connections, this will be the same as the their_node_id parameter passed in to new_outbound_connection, however entries will only appear once the initial handshake has completed and we are sure the remote peer has the private key for the given node_id.
pub fn new_outbound_connection(
    &self,
    their_node_id: PublicKey,
    descriptor: Descriptor,
    remote_network_address: Option<NetAddress>
) -> Result<Vec<u8>, PeerHandleError>
Indicates a new outbound connection has been established to a node with the given node_id and an optional remote network address.
The remote network address adds the option to report a remote IP address back to a connecting peer using the init message. The user should pass the remote network address of the host they are connected to.
Note that if an Err is returned here you MUST NOT call socket_disconnected for the new descriptor but must disconnect the connection immediately.
Returns a small number of bytes to send to the remote node (currently always 50).
Panics if descriptor is duplicative with some other descriptor which has not yet been socket_disconnected().
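A caller-side sketch of this contract, with a hypothetical MockPeerManager standing in for the real type (the real method also takes the peer's node_id, a descriptor, and an optional address, omitted here for brevity):

```rust
// Hypothetical stand-in: the real new_outbound_connection (in the
// lightning crate) returns the initial handshake bytes, currently
// always 50, which the caller must write out to the socket.
struct MockPeerManager;
impl MockPeerManager {
    fn new_outbound_connection(&self) -> Result<Vec<u8>, ()> {
        Ok(vec![0u8; 50]) // stand-in for the real handshake bytes
    }
}

fn main() {
    let pm = MockPeerManager;
    match pm.new_outbound_connection() {
        Ok(initial_bytes) => {
            // Write these bytes to the remote node before doing
            // anything else on the socket.
            println!("send {} bytes to the peer", initial_bytes.len());
        }
        Err(_) => {
            // On Err, do NOT call socket_disconnected; drop the
            // underlying connection immediately instead.
        }
    }
}
```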
pub fn new_inbound_connection(
    &self,
    descriptor: Descriptor,
    remote_network_address: Option<NetAddress>
) -> Result<(), PeerHandleError>
Indicates a new inbound connection has been established to a node with an optional remote network address.
The remote network address adds the option to report a remote IP address back to a connecting peer using the init message. The user should pass the remote network address of the host they are connected to.
May refuse the connection by returning an Err, but will never write bytes to the remote end (outbound connector always speaks first). Note that if an Err is returned here you MUST NOT call socket_disconnected for the new descriptor but must disconnect the connection immediately.
Panics if descriptor is duplicative with some other descriptor which has not yet been socket_disconnected().
pub fn write_buffer_space_avail(
    &self,
    descriptor: &mut Descriptor
) -> Result<(), PeerHandleError>
Indicates that there is room to write data to the given socket descriptor.
May return an Err to indicate that the connection should be closed.
May call send_data on the descriptor passed in (or an equal descriptor) before returning. Thus, be very careful with reentrancy issues! The invariants around calling write_buffer_space_avail in case a write did not fully complete must still hold - be ready to call write_buffer_space_avail again if a write call generated here isn’t sufficient!
pub fn read_event(
    &self,
    peer_descriptor: &mut Descriptor,
    data: &[u8]
) -> Result<bool, PeerHandleError>
Indicates that data was read from the given socket descriptor.
May return an Err to indicate that the connection should be closed.
Will not call back into send_data on any descriptors to avoid reentrancy complexity. Thus, however, you should call process_events after any read_event to generate send_data calls to handle responses.

If Ok(true) is returned, further read_events should not be triggered until a send_data call on this descriptor has resume_read set (preventing DoS issues in the send buffer).
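The backpressure contract above can be sketched as follows, with a hypothetical MockPeerManager in place of the real type:

```rust
// Hypothetical stand-in for the real read_event: returning Ok(true)
// signals that the caller must pause reads until a later send_data
// call has resume_read set.
struct MockPeerManager { buffer_full: bool }
impl MockPeerManager {
    fn read_event(&self, _data: &[u8]) -> Result<bool, ()> {
        Ok(self.buffer_full)
    }
}

fn main() {
    let pm = MockPeerManager { buffer_full: true };
    let mut reads_paused = false;
    match pm.read_event(b"payload") {
        Ok(true) => {
            // Stop polling the socket; resume only once send_data has
            // resume_read set, so the send buffer cannot grow without
            // bound (the DoS concern noted above).
            reads_paused = true;
        }
        Ok(false) => { /* keep reading; then call process_events */ }
        Err(_) => { /* close the connection */ }
    }
    println!("reads paused: {}", reads_paused);
}
```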
pub fn process_events(&self)
Checks for any events generated by our handlers and processes them. Includes sending most response messages as well as messages generated by calls to handler functions directly (eg functions like ChannelManager::process_pending_htlc_forwards or send_payment).

May call send_data on SocketDescriptors. Thus, be very careful with reentrancy issues!

You don’t have to call this function explicitly if you are using lightning-net-tokio or one of the other clients provided in our language bindings.
Note that if there are any other calls to this function waiting on lock(s) this may return without doing any work. All available events that need handling will be handled before the other calls return.
pub fn socket_disconnected(&self, descriptor: &Descriptor)
Indicates that the given socket descriptor’s connection is now closed.
pub fn disconnect_by_node_id(
    &self,
    node_id: PublicKey,
    no_connection_possible: bool
)
Disconnect a peer given its node id.
Set no_connection_possible to true to prevent any further connection with this peer, force-closing any channels we have with it.

If a peer is connected, this will call disconnect_socket on the descriptor for the peer. Thus, be very careful about reentrancy issues.
pub fn disconnect_all_peers(&self)
Disconnects all currently-connected peers. This is useful on platforms where there may be an indication that TCP sockets have stalled even if we weren’t around to time them out using regular ping/pongs.
pub fn timer_tick_occurred(&self)
Send pings to each peer and disconnect those which did not respond to the last round of pings.
This may be called on any timescale you want, however, roughly once every ten seconds is preferred. The call rate determines both how often we send a ping to our peers and how much time they have to respond before we disconnect them.
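A minimal driver-loop sketch for this cadence, again with a hypothetical MockPeerManager in place of the real type:

```rust
use std::thread;
use std::time::Duration;

// Hypothetical stand-in: the real timer_tick_occurred sends pings and
// disconnects peers that missed the previous round. Here we only
// count ticks to illustrate the driver loop.
struct MockPeerManager { ticks: u32 }
impl MockPeerManager {
    fn timer_tick_occurred(&mut self) { self.ticks += 1; }
}

fn main() {
    let mut pm = MockPeerManager { ticks: 0 };
    // Production code would use roughly Duration::from_secs(10); a
    // tiny interval keeps this sketch fast to run.
    let tick_interval = Duration::from_millis(1);
    for _ in 0..3 {
        pm.timer_tick_occurred();
        thread::sleep(tick_interval);
    }
    println!("ticks fired: {}", pm.ticks);
}
```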
May call send_data on all SocketDescriptors. Thus, be very careful with reentrancy issues!
Auto Trait Implementations
impl<Descriptor, CM, RM, L, CMH> RefUnwindSafe for PeerManager<Descriptor, CM, RM, L, CMH> where
CM: RefUnwindSafe,
CMH: RefUnwindSafe,
L: RefUnwindSafe,
RM: RefUnwindSafe,
impl<Descriptor, CM, RM, L, CMH> Send for PeerManager<Descriptor, CM, RM, L, CMH> where
CM: Send,
CMH: Send,
Descriptor: Send,
L: Send,
RM: Send,
impl<Descriptor, CM, RM, L, CMH> Sync for PeerManager<Descriptor, CM, RM, L, CMH> where
CM: Sync,
CMH: Sync,
Descriptor: Send + Sync,
L: Sync,
RM: Sync,
impl<Descriptor, CM, RM, L, CMH> Unpin for PeerManager<Descriptor, CM, RM, L, CMH> where
CM: Unpin,
CMH: Unpin,
Descriptor: Unpin,
L: Unpin,
RM: Unpin,
impl<Descriptor, CM, RM, L, CMH> UnwindSafe for PeerManager<Descriptor, CM, RM, L, CMH> where
CM: UnwindSafe,
CMH: UnwindSafe,
L: UnwindSafe,
RM: UnwindSafe,
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.