Struct tc_network::NetworkService[src]

pub struct NetworkService<B: BlockT + 'static, H: ExHashT> { /* fields omitted */ }

Tetcore network service. Handles network IO and manages connectivity.

Implementations

impl<B: BlockT + 'static, H: ExHashT> NetworkService<B, H>[src]

pub fn local_peer_id(&self) -> &PeerId[src]

Returns the local PeerId.

pub fn set_authorized_peers(&self, peers: HashSet<PeerId>)[src]

Set authorized peers.

A better solution for managing authorized peers is still needed; for now, the reserved-peers mechanism is used for prototyping.

pub fn set_authorized_only(&self, reserved_only: bool)[src]

Set authorized_only flag.

A better solution for deciding authorized_only is still needed; for now, the reserved_only flag is used for prototyping.

pub fn write_notification(
    &self,
    target: PeerId,
    protocol: Cow<'static, str>,
    message: Vec<u8>
)
[src]

Appends a notification to the buffer of pending outgoing notifications with the given peer. Has no effect if the notifications channel with this protocol name is not open.

If the buffer of pending outgoing notifications with that peer is full, the notification is silently dropped and the connection to the remote will start being shut down. This happens if you call this method at a higher rate than the rate at which the peer processes these notifications, or if the available network bandwidth is too low.

For this reason, this method is considered soft-deprecated. You are encouraged to use NetworkService::notification_sender instead.

Note: The reason why this is a no-op in the situation where we have no channel is that we don’t guarantee message delivery anyway. Networking issues can cause connections to drop at any time, and higher-level logic shouldn’t differentiate between the remote voluntarily closing a substream or a network error preventing the message from being delivered.

The protocol must have been registered with NetworkConfiguration::notifications_protocols.
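
Below is a minimal, hedged sketch of a fire-and-forget broadcast using this method. It assumes a NetworkService handle, an already-connected peer, and that types such as PeerId and Cow are in scope; the protocol name is illustrative and must match one registered via NetworkConfiguration::notifications_protocols.

// Hedged sketch: delivery is not guaranteed, and the call is a no-op if no
// notifications substream for this protocol is currently open with `peer`.
fn send_status<B: BlockT + 'static, H: ExHashT>(
    network: &NetworkService<B, H>,
    peer: PeerId,
) {
    network.write_notification(
        peer,
        Cow::Borrowed("/my-chain/status/1"), // illustrative protocol name
        b"status-v1".to_vec(),
    );
}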

pub fn notification_sender(
    &self,
    target: PeerId,
    protocol: Cow<'static, str>
) -> Result<NotificationSender, NotificationSenderError>
[src]

Obtains a NotificationSender for a connected peer, if it exists.

A NotificationSender is scoped to a particular connection to the peer that holds a receiver. With a NotificationSender at hand, sending a notification is done in two steps:

  1. NotificationSender::ready is used to wait for the sender to become ready for another notification, yielding a NotificationSenderReady token.
  2. NotificationSenderReady::send enqueues the notification for sending. This operation can only fail if the underlying notification substream or connection has suddenly closed.

An error is returned by NotificationSenderReady::send if there exists no open notifications substream with that combination of peer and protocol, or if the remote has asked to close the notifications substream. If that happens, it is guaranteed that an Event::NotificationStreamClosed has been generated on the stream returned by NetworkService::event_stream.

If the remote requests to close the notifications substream, all notifications successfully enqueued using NotificationSenderReady::send will finish being sent out before the substream actually gets closed, but attempting to enqueue more notifications will now return an error. It is however possible for the entire connection to be abruptly closed, in which case enqueued notifications will be lost.

The protocol must have been registered with NetworkConfiguration::notifications_protocols.
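
As a minimal illustration of the two-step flow above, the following hedged sketch obtains a sender, waits for buffer space, and enqueues a single notification; error details are discarded for brevity, and the relevant types are assumed to be in scope.

// Hedged sketch of NotificationSender::ready followed by
// NotificationSenderReady::send for a single peer.
async fn send_when_ready<B: BlockT + 'static, H: ExHashT>(
    network: &NetworkService<B, H>,
    peer: PeerId,
    protocol: Cow<'static, str>,
    payload: Vec<u8>,
) -> Result<(), ()> {
    // Fails if there is no open substream for this peer/protocol combination.
    let sender = network.notification_sender(peer, protocol).map_err(drop)?;
    // Wait until there is space in the buffer of pending notifications.
    let ready = sender.ready().await.map_err(drop)?;
    // Enqueue; this only fails if the substream or connection has closed.
    ready.send(payload).map_err(drop)
}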

Usage

This method returns a struct that allows waiting until there is space available in the buffer of messages towards the given peer. If the peer processes notifications at a slower rate than we send them, this buffer will quickly fill up.

As such, you should never do something like this:

// Do NOT do this
for peer in peers {
    if let Ok(n) = network.notification_sender(peer, ...) {
        if let Ok(s) = n.ready().await {
            let _ = s.send(...);
        }
    }
}

Doing so would slow down all peers to the rate of the slowest one. A malicious or malfunctioning peer could intentionally process notifications at a very slow rate.

Instead, you are encouraged to maintain your own buffer of notifications on top of the one maintained by tc-network, and use notification_sender to progressively send out elements from your buffer. If this additional buffer is full (which will happen at some point if the peer is too slow to process notifications), appropriate measures can be taken, such as removing non-critical notifications from the buffer or disconnecting the peer using NetworkService::disconnect_peer.

Notifications          Per-peer buffer
  broadcast    +---->  of notifications  +-->  notification_sender  +-->  Internet
                  ^    (not covered by
                  |     tc-network)
                  +
     Notifications should be dropped
            if buffer is full

See also the gossip module for a higher-level way to send notifications.

pub fn event_stream(&self, name: &'static str) -> impl Stream<Item = Event>[src]

Returns a stream containing the events that happen on the network.

If this method is called multiple times, the events are duplicated.

The stream never ends (unless the NetworkWorker gets shut down).

The name passed is used to identify the channel in the Prometheus metrics. Note that the parameter is a &'static str, and not a String, in order to avoid accidentally having an unbounded set of Prometheus metrics, which would be quite bad in terms of memory.
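
A hedged sketch of consuming this stream follows. The Event variant and field names are assumed to mirror the Event enum referenced in these docs, and futures::StreamExt is assumed to be in scope for next().

// Hedged sketch: subscribe once and react to incoming notifications.
async fn handle_network_events<B: BlockT + 'static, H: ExHashT>(
    network: &NetworkService<B, H>,
) {
    let mut events = network.event_stream("my-protocol-handler");
    while let Some(event) = events.next().await {
        match event {
            // Variant and field names assumed; adjust to the actual Event enum.
            Event::NotificationsReceived { remote, messages } => {
                // Process `messages` received from `remote`.
                let _ = (remote, messages);
            }
            _ => {}
        }
    }
}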

pub async fn request(
    &self,
    target: PeerId,
    protocol: impl Into<Cow<'static, str>>,
    request: Vec<u8>
) -> Result<Vec<u8>, RequestFailure>
[src]

Sends a single targeted request to a specific peer. On success, returns the response of the peer.

Request-response protocols are a way to complement notifications protocols, but notifications should remain the default ways of communicating information. For example, a peer can announce something through a notification, after which the recipient can obtain more information by performing a request. As such, this function is meant to be called only with peers we are already connected to. Calling this method with a target we are not connected to will not attempt to connect to said peer.

No limit or throttling of concurrent outbound requests per peer and protocol are enforced. Such restrictions, if desired, need to be enforced at the call site(s).

The protocol must have been registered through NetworkConfiguration::request_response_protocols.
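
A hedged sketch of issuing a request to an already-connected peer is shown below; the protocol name and request body are illustrative, and the protocol must have been registered through NetworkConfiguration::request_response_protocols.

// Hedged sketch: send one request and await the peer's response.
async fn fetch_from_peer<B: BlockT + 'static, H: ExHashT>(
    network: &NetworkService<B, H>,
    peer: PeerId,
) -> Result<Vec<u8>, RequestFailure> {
    network
        .request(peer, "/my-chain/get-data/1", b"request-body".to_vec())
        .await
}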

pub fn trigger_repropagate(&self)[src]

You may call this when new transactions are imported by the transaction pool.

All transactions will be fetched from the TransactionPool that was passed at initialization as part of the configuration and propagated to peers.

pub fn propagate_transaction(&self, hash: H)[src]

You must call this when a new transaction is imported by the transaction pool.

This transaction will be fetched from the TransactionPool that was passed at initialization as part of the configuration and propagated to peers.

pub fn announce_block(&self, hash: B::Hash, data: Option<Vec<u8>>)[src]

Make sure an important block is propagated to peers.

In chain-based consensus, we often need to make sure non-best forks are at least temporarily synced. This function forces such an announcement.

pub fn report_peer(&self, who: PeerId, cost_benefit: ReputationChange)[src]

Report a given peer as either beneficial (+) or costly (-) according to the given scalar.
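
A hedged sketch of reporting a peer after validating a message from it follows. It assumes a ReputationChange::new(i32, &'static str) constructor, as in upstream Substrate; the numeric values and reasons are illustrative only.

// Hedged sketch: reward valid messages, penalize invalid ones.
fn rate_peer<B: BlockT + 'static, H: ExHashT>(
    network: &NetworkService<B, H>,
    who: PeerId,
    valid: bool,
) {
    let change = if valid {
        ReputationChange::new(10, "valid message")      // small benefit (+)
    } else {
        ReputationChange::new(-100, "invalid message")  // larger cost (-)
    };
    network.report_peer(who, change);
}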

pub fn disconnect_peer(
    &self,
    who: PeerId,
    protocol: impl Into<Cow<'static, str>>
)
[src]

Disconnect from a node as soon as possible.

This triggers the same effects as if the connection had closed itself spontaneously.

See also NetworkService::remove_from_peers_set, which has the same effect but also prevents the local node from re-establishing an outgoing substream to this peer until it is added again.

pub fn request_justification(&self, hash: &B::Hash, number: NumberFor<B>)[src]

Request a justification for the given block from the network.

On success, the justification will be passed to the import queue that was passed at initialization as part of the configuration.

pub fn is_major_syncing(&self) -> bool[src]

Are we in the process of downloading the chain?

pub fn get_value(&self, key: &Key)[src]

Start getting a value from the DHT.

This will generate either a ValueFound or a ValueNotFound event and pass it as an item on the NetworkWorker stream.

pub fn put_value(&self, key: Key, value: Vec<u8>)[src]

Start putting a value in the DHT.

This will generate either a ValuePut or a ValuePutFailed event and pass it as an item on the NetworkWorker stream.
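
A hedged sketch of publishing a record and watching for the outcome follows. The Key constructor and the Event::Dht wrapper around ValuePut/ValuePutFailed are assumptions based on these docs and upstream Substrate; futures::StreamExt is assumed to be in scope.

// Hedged sketch: put a value, then wait for the resulting DHT event.
async fn publish_record<B: BlockT + 'static, H: ExHashT>(network: &NetworkService<B, H>) {
    // `Key::new` over a byte buffer is assumed here.
    let key = Key::new(&b"my-record".to_vec());
    network.put_value(key, b"my-value".to_vec());

    let mut events = network.event_stream("dht-example");
    while let Some(event) = events.next().await {
        if let Event::Dht(dht_event) = event {
            // Inspect `dht_event` for a ValuePut or ValuePutFailed matching the key.
            let _ = dht_event;
            break;
        }
    }
}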

pub fn accept_unreserved_peers(&self)[src]

Connect to unreserved peers and allow unreserved peers to connect for syncing purposes.

pub fn deny_unreserved_peers(&self)[src]

Disconnect from unreserved peers and deny new unreserved peers to connect for syncing purposes.

pub fn add_reserved_peer(&self, peer: String) -> Result<(), String>[src]

Adds a PeerId and its address as reserved. The string should encode the address and peer ID of the remote node.

Returns an Err if the given string is not a valid multiaddress or contains an invalid peer ID (which includes the local peer ID).
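
A hedged sketch of registering a reserved peer from its multiaddress follows; the address is illustrative and the peer ID is elided.

// Hedged sketch: the string must contain both the transport address and the
// /p2p/<peer id> component, otherwise an Err is returned.
fn reserve_peer<B: BlockT + 'static, H: ExHashT>(network: &NetworkService<B, H>) {
    let addr = "/ip4/198.51.100.7/tcp/30333/p2p/12D3KooW...".to_string(); // peer ID elided
    if let Err(err) = network.add_reserved_peer(addr) {
        eprintln!("failed to add reserved peer: {}", err);
    }
}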

pub fn remove_reserved_peer(&self, peer_id: PeerId)[src]

Removes a PeerId from the list of reserved peers.

pub fn add_peers_to_reserved_set(
    &self,
    protocol: Cow<'static, str>,
    peers: HashSet<Multiaddr>
) -> Result<(), String>
[src]

Add peers to the reserved set of the given protocol.

Each Multiaddr must end with a /p2p/ component containing the PeerId. It can also consist of only /p2p/<peerid>.

Returns an Err if one of the given addresses is invalid or contains an invalid peer ID (which includes the local peer ID).
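
A hedged sketch of adding two peers to the reserved set of a custom protocol follows. Multiaddr implements FromStr, so string literals can be parsed; the addresses (peer IDs elided), the protocol name, and the in-scope types (HashSet, Multiaddr, Cow) are assumptions.

// Hedged sketch: parse addresses, then add them for an illustrative protocol.
fn reserve_for_protocol<B: BlockT + 'static, H: ExHashT>(
    network: &NetworkService<B, H>,
) -> Result<(), String> {
    let peers: HashSet<Multiaddr> = [
        "/ip4/198.51.100.7/tcp/30333/p2p/12D3KooW...", // full address plus peer ID
        "/p2p/12D3KooW...",                            // a bare /p2p/<peerid> is also accepted
    ]
    .iter()
    .map(|a| a.parse::<Multiaddr>().map_err(|e| format!("{:?}", e)))
    .collect::<Result<_, _>>()?;

    network.add_peers_to_reserved_set(Cow::Borrowed("/my-chain/protocol/1"), peers)
}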

pub fn remove_peers_from_reserved_set(
    &self,
    protocol: Cow<'static, str>,
    peers: HashSet<Multiaddr>
) -> Result<(), String>
[src]

Remove peers from the reserved set of the given protocol.

Each Multiaddr must end with a /p2p/ component containing the PeerId.

Returns an Err if one of the given addresses is invalid or contains an invalid peer ID (which includes the local peer ID).

pub fn set_sync_fork_request(
    &self,
    peers: Vec<PeerId>,
    hash: B::Hash,
    number: NumberFor<B>
)
[src]

Configure an explicit fork sync request. Note that this function should not be used for recent blocks. Sync should be able to download all the recent forks normally. set_sync_fork_request should only be used if external code detects that there’s a stale fork missing. Passing an empty peers set effectively removes the sync request.

pub fn add_to_peers_set(
    &self,
    protocol: Cow<'static, str>,
    peers: HashSet<Multiaddr>
) -> Result<(), String>
[src]

Add peers to a peer set.

If the set has slots available, it will try to open substreams with these peers.

Each Multiaddr must end with a /p2p/ component containing the PeerId. It can also consist of only /p2p/<peerid>.

Returns an Err if one of the given addresses is invalid or contains an invalid peer ID (which includes the local peer ID).

pub fn remove_from_peers_set(
    &self,
    protocol: Cow<'static, str>,
    peers: HashSet<Multiaddr>
) -> Result<(), String>
[src]

Remove peers from a peer set.

If we currently have an open substream with any of these peers, it will soon be closed.

Each Multiaddr must end with a /p2p/ component containing the PeerId.

Returns an Err if one of the given addresses is invalid or contains an invalid peer ID (which includes the local peer ID).

pub fn num_connected(&self) -> usize[src]

Returns the number of peers we’re connected to.

pub fn new_best_block_imported(&self, hash: B::Hash, number: NumberFor<B>)[src]

Inform the network service about a newly imported best block.

Trait Implementations

impl<B, H> NetworkStateInfo for NetworkService<B, H> where
    B: Block,
    H: ExHashT
[src]

fn external_addresses(&self) -> Vec<Multiaddr>[src]

Returns the local external addresses.

fn local_peer_id(&self) -> PeerId[src]

Returns the local Peer ID.

impl<B: BlockT + 'static, H: ExHashT> SyncOracle for NetworkService<B, H>[src]

impl<'a, B: BlockT + 'static, H: ExHashT> SyncOracle for &'a NetworkService<B, H>[src]

Auto Trait Implementations

impl<B, H> !RefUnwindSafe for NetworkService<B, H>

impl<B, H> Send for NetworkService<B, H>

impl<B, H> Sync for NetworkService<B, H>

impl<B, H> Unpin for NetworkService<B, H> where
    H: Unpin

impl<B, H> !UnwindSafe for NetworkService<B, H>

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> CheckedConversion for T[src]

impl<T> Downcast for T where
    T: Any

impl<T> DowncastSync for T where
    T: Any + Send + Sync

impl<T> From<T> for T[src]

impl<T> Instrument for T[src]

impl<T> Instrument for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, Outer> IsWrappedBy<Outer> for T where
    T: From<Outer>,
    Outer: AsRef<T> + AsMut<T> + From<T>, 
[src]

pub fn from_ref(outer: &Outer) -> &T[src]

Get a reference to the inner from the outer.

pub fn from_mut(outer: &mut Outer) -> &mut T[src]

Get a mutable reference to the inner from the outer.

impl<T> Same<T> for T

type Output = T

Should always be Self

impl<T> SaturatedConversion for T[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<S, T> UncheckedInto<T> for S where
    T: UncheckedFrom<S>, 
[src]

impl<T, S> UniqueSaturatedInto<T> for S where
    T: Bounded,
    S: TryInto<T>, 
[src]

impl<V, T> VZip<V> for T where
    V: MultiLane<T>,