//! Multi-peer RDMA channels — indexed connections sharing a protection domain, with scatter/gather and one-sided operation support.
//!
//! A [`MultiChannel`] holds one [`Channel`] per peer and routes each operation to the
//! correct channel based on the peer index embedded in the work request. All channels
//! share a single [`ProtectionDomain`], so memory regions registered once can be used
//! with any peer without re-registration.
//!
//! # Connection lifecycle
//!
//! Construction mirrors [`Channel`] but establishes a separate queue
//! pair for each peer instead of a single one.
//!
//! 1. **Build** — call [`MultiChannel::builder`] (or
//!    [`ProtectionDomain::create_multi_channel`]) and set the number of peers with
//!    [`num_channels`](MultiChannelBuilder::num_channels). [`build`](MultiChannelBuilder::build)
//!    returns a [`PreparedMultiChannel`].
//! 2. **Handshake** — collect the local [`endpoints`](PreparedMultiChannel::endpoints),
//!    exchange them with every peer out-of-band, then call
//!    [`PreparedMultiChannel::handshake`] with the full list of remote endpoints to
//!    obtain the connected [`MultiChannel`].
//!
//! # Peer-indexed work requests
//!
//! Every operation takes a peer-aware wrapper that pairs a standard work request with
//! a target (or source) peer index:
//!
//! * [`PeerSendWorkRequest`] / [`PeerReceiveWorkRequest`] — two-sided messaging.
//! * [`PeerWriteWorkRequest`] / [`PeerReadWorkRequest`] — one-sided RDMA.
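//!
//! For example, a one-sided write targeting peer `1` might be wrapped like this. This
//! is a sketch, not a verified signature: the `remote` [`PeerRemoteMemoryRegion`]
//! argument and the exact parameters of `new` are assumptions for illustration,
//! modeled on the two-sided constructors shown in the example below.
//!
//! ```ignore
//! // Gather list describing the local bytes to push to the peer.
//! let sges = vec![mr.gather_element(&buf[..])];
//! // Hypothetical constructor: pair the request with peer index 1 and the
//! // remote memory region that identifies the target buffer on that peer.
//! let wr = PeerWriteWorkRequest::new(1, &sges, &remote);
//! mc.scatter_write(std::iter::once(wr))?;
//! ```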
//!
//! # Posting operations
//!
//! The same three control levels as [`channel`](crate::channel) are available, extended
//! to operate over multiple peers at once:
//!
//! * **Blocking** — [`scatter_send`](MultiChannel::scatter_send),
//!   [`scatter_write`](MultiChannel::scatter_write),
//!   [`gather_receive`](MultiChannel::gather_receive),
//!   [`gather_read`](MultiChannel::gather_read) post an iterator of per-peer work
//!   requests and block until all complete.
//!   [`multicast_send`](MultiChannel::multicast_send) fans the same send out to an
//!   arbitrary set of peers.
//! * **Scoped** — [`MultiChannel::scope`] and [`MultiChannel::manual_scope`] open a
//!   [`PollingScope`](crate::channel::PollingScope) whose `post_scatter_*` /
//!   `post_gather_*` / `post_multicast_send` methods return
//!   [`ScopedPendingWork`](crate::channel::ScopedPendingWork) handles for fine-grained
//!   polling. All outstanding work is automatically polled when the scope exits.
//! * **Unpolled** — `unsafe` `scatter_*_unpolled` / `gather_*_unpolled` variants
//!   return raw [`PendingWork`](crate::channel::PendingWork) handles for maximum
//!   control. Prefer the scoped API unless you need direct access to these primitives.
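//!
//! At the blocking level, the scoped example below collapses to single calls. This is
//! a sketch: it assumes `mc`, `send_sges`, and `recv_sges` are built as in that
//! example, and that each blocking method accepts the same iterator of per-peer work
//! requests as its `post_*` counterpart.
//!
//! ```ignore
//! // Send one chunk to each peer and block until every send completes.
//! mc.scatter_send(send_sges.iter().enumerate()
//!     .map(|(peer, sges)| PeerSendWorkRequest::new(peer, sges)))?;
//! // Receive one chunk from each peer and block until every receive completes.
//! mc.gather_receive(recv_sges.iter_mut().enumerate()
//!     .map(|(peer, sges)| PeerReceiveWorkRequest::new(peer, sges)))?;
//! ```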
//!
//! # Example: scatter/gather across peers
//!
//! ```no_run
//! use ibverbs_rs::ibverbs;
//! use ibverbs_rs::channel::{ScopeError, TransportError};
//! use ibverbs_rs::multi_channel::{MultiChannel, PeerSendWorkRequest, PeerReceiveWorkRequest};
//!
//! let ctx = ibverbs::open_device("mlx5_0")?;
//! let pd = ctx.allocate_pd()?;
//!
//! let prepared = MultiChannel::builder().pd(&pd).num_channels(2).build()?;
//! // The remote endpoints would normally arrive out-of-band (e.g. over TCP);
//! // for brevity this example hands the local endpoints straight back.
//! let endpoints = prepared.endpoints();
//! let mut mc = prepared.handshake(endpoints)?;
//!
//! let mut buf = [0u8; 4];
//! let mr = pd.register_local_mr_slice(&buf)?;
//!
//! let (tx, rx) = buf.split_at_mut(2);
//!
//! // Pre-build SGE lists (they must outlive the work requests)
//! let send_sges: Vec<Vec<_>> = tx.chunks(1)
//!     .map(|chunk| vec![mr.gather_element(chunk)])
//!     .collect();
//! let mut recv_sges: Vec<Vec<_>> = rx.chunks_mut(1)
//!     .map(|chunk| vec![mr.scatter_element(chunk)])
//!     .collect();
//!
//! mc.scope(|s| {
//!     let sends = send_sges.iter().enumerate()
//!         .map(|(peer, sges)| PeerSendWorkRequest::new(peer, sges));
//!     s.post_scatter_send(sends)?;
//!
//!     let recvs = recv_sges.iter_mut().enumerate()
//!         .map(|(peer, sges)| PeerReceiveWorkRequest::new(peer, sges));
//!     s.post_gather_receive(recvs)?;
//!
//!     Ok::<(), ScopeError<TransportError>>(())
//! })?;
//! # Ok::<(), Box<dyn std::error::Error>>(())
//! ```
//!
//! See also [`examples/multi_channel_scatter_gather.rs`](https://github.com/Tikitikitikidesuka/ibverbs-rs/blob/main/examples/multi_channel_scatter_gather.rs)
//! for a complete runnable example.
//!
//! [`ProtectionDomain`]: crate::ibverbs::protection_domain::ProtectionDomain
pub use MultiChannelBuilder;
pub use PreparedMultiChannel;
pub use {PeerReadWorkRequest, PeerReceiveWorkRequest, PeerSendWorkRequest, PeerWriteWorkRequest};
pub use PeerRemoteMemoryRegion;

use crate::channel::Channel;
use crate::ibverbs::protection_domain::ProtectionDomain;
/// A set of [`Channel`]s to different peers, sharing a single [`ProtectionDomain`].
///
/// Each peer is identified by its index. Operations are routed to the correct channel
/// based on the peer index in the work request.
///
/// Use [`ProtectionDomain::create_multi_channel`] or [`MultiChannel::builder`] to construct one.