1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662
//! A minimal [CoAP-over-TCP] server implementation built on [embedded_nal].
//!
//! This is the TCP equivalent of [embedded-nal-minimal-coapserver];
//! it serves to illustrate differences,
//! and as a benchmarking tool to pitch CoAP-over-TCP against CoAP-over-UDP.
//! It may, on the long run,
//! also be useful where CoAP-over-TCP is practical for constrained devices
//! (which is in NAT traversal);
//! for that, it will need to gain some basic client capabilities
//! to send requests.
//!
//! [embedded-nal-minimal-coapserver]: https://crates.io/crates/embedded-nal-minimal-coapserver
//! [CoAP-over-TCP]: https://datatracker.ietf.org/doc/html/rfc8323
//!
//! Usage and operation
//! -------------------
//!
//! See also the [equivalent section](https://docs.rs/embedded-nal-minimal-coapserver/*/embedded_nal_minimal_coapserver/#usage-and-operation):
//! Have a [ServerPool] and [ServerPool::poll] it whenever there might have been network activity.
//!
//! Some (small) state is needed per TCP connection,
//! which is stored along with the socket in a [ConnectionState].
//! All the [ServerPool] does is accept, poll the connections individually and drop the state
//! (includeing a socket that is, by then, closed)
//! when receiving an error.
//! When other means of managing the connections are desired,
//! including opening connections actively,
//! that can just be done by replacing the ServerPool and calling [ConnectionState::poll_connection] manually.
//!
//! Manual per-connection polling is currently also the way to go if you don't want to afford the
//! stack allocation of the receive and send buffer, replacing it with some scratch memory are:
//! [ConnectionState::poll_connection_with_buffer] can be used with a locked scratch area then.
//!
//! Caveats
//! -------
//!
//! See the [equivalent section](https://docs.rs/embedded-nal-minimal-coapserver/*/embedded_nal_minimal_coapserver/#caveats)
//! of embedded-nal-minimal-coapserver, with the following alterations:
//!
//! * As a TCP server, this is not prone to amplification mitigation,
//! does not need to perform message deduplication
//! and is not prone to the subtle response address issues.
//!
//! While this was the reason idempotency was required in the CoAP-over-UDP server, idempotency
//! is *still* required for reasons below.
//!
//! * Unlike in CoAP-over-UDP, the server has no leeway to just "miss" requests.
//!
//! If a request was read but sending the response fails
//! (because the send buffer is not ready),
//! the request still needs to be processed without creating too much of a suspension point;
//! what this server does is to respond 5.03 and wait for the client to retry.
//! Thus, it is still advised that handlers need to be idempotent.
//!
//! (With completely full send buffers, even sending the 5.03 can fail,
//! in which case the connection is terminated).
//!
//! Fortunately, such events (needing to send 5.03, let alone aborting) can be expected to be rare,
//! at least while the client sends requests in lockstep
//! (which the client has all rights not to, but many applications simply lockstep).
//!
//! This could be mitigated if the TCP socket indicated that some size of outbuffer is guaranteed
//! to be available; this implementation could then just not start reading requests until
//! whichever response it maximally sends is available.
//!
//! (A more elaborate server might hope that handlers' response data is small as it should be in
//! the [coap-handler]; ecosystem. Then, it could have a suspension point (state machine state)
//! for a request that has been processed, and could wait for the exact size requested to build
//! the response is available. This implementation will not do this.
//! A less elaborate server could store the token and at least reliably send the 5.03 even later.)
//!
//! * The underlying stack must be capable of providing a full CoAP request (up to some size) in a
//! single nonblocking read.
//! Otherwise, the CoAP library would need to keep a buffer of its own for each connection that
//! may be trickling in arbitrarily slowly.
//!
//! This is only provided by TCP stacks that additionally implement the
//! [embedded_nal_tcpextensions::TcpExactStack] trait discussed in
//! <https://github.com/rust-embedded-community/embedded-nal/issues/59>.
//!
//! Roadmap
//! -------
//!
//! The server is work in progress, but minimally functional.
//!
//! The goal of this server is to stay a simple and minimal component,
//! with somewhat less ambitions on production readiness than embedded-nal-minimal-coapserver.
#![no_std]
#![feature(never_type)]
mod message;
use embedded_nal::nb::{self, Result};
// Could be this, but constification...
// use core::cmp::min;
const fn min(a: usize, b: usize) -> usize {
if a < b {
a
} else {
b
}
}
/// Internals necessarily carried around per connection.
///
/// The type (ST) and CONFIGURED_BUFLEN do not conceptionally need to be tied to the state (even though
/// it makes no sense to apply one state to different stacks or even different sockets at different
/// pollings). It is still carried around to have a buffer length available for stack allocation
/// use, even while ST::RECVBUFLEN is not usable that way.
///
/// CONFIGURED_BUFLEN should, on the long run, take its default from ST::SENDBUFLEN and
/// ST::RECVBUFLEN, but that appears to be impossible in Rust at the moment, and implementations
/// may want to use smaller buffers anyway. A reasonable default would be 1152 (the default
/// Maximum-Message-Size, which peers might send without waiting for the CSM).
// More precisely it'd suffice to announce 1152 - 3 as that'd still produce 1152 as a MSS and would
// be large enough because a maximally sized reqeust would have its first 3 bytes consumed already
// into GotByte, but why make it more complicated.
#[derive(Copy, Clone)]
pub struct ConnectionState<
ST: embedded_nal::TcpClientStack + embedded_nal_tcpextensions::TcpExactStack + ?Sized,
const CONFIGURED_BUFLEN: usize,
> {
peer_mms: Option<core::num::NonZeroU16>,
phase: ConnectionPhase,
socket: Option<ST::TcpSocket>,
}
#[derive(Copy, Clone, PartialEq)]
enum ConnectionPhase {
New,
Waiting,
GotByte { len_tkl: u8 },
// If uxx's u4 could actually talk to the compiler, we could use it to get the enum's size
// down to 3 bytes.
//
// If len is changed to something larger, the `match len_nibble` code needs to cover the 4 byte
// case too. (See also OUR_MMS).
GotExtLen { len: u16, tkl: u8 },
}
/// Error returned by operations on a CoAP-over-TCP connection
#[non_exhaustive]
#[derive(Debug)]
pub enum Error<E> {
/// A network operation returned an error.
Network(E),
/// A message the peer sent exceeds the Max-Message-Size that was advertised.
LongMessage,
/// The peer sent a response even though we don't send requests out of principle.
UnanticipatedResponse,
/// The CSM contained unprocessable options.
BadCSM,
/// The peer sent a request before sending a CSM.
MissingCSM,
/// A message of an unrecognized class was received.
UnrecognizedMessage,
/// While processing a message, the send buffer got full, and this implementation can't handle
/// that condition. (Handling it would require significantly growing the per-connection state,
/// see comment around this error's creation).
SendBufferOverflow,
}
impl<E> From<E> for Error<E> {
fn from(e: E) -> Error<E> {
Error::Network(e)
}
}
// I'd like error promotion to go from nb::Error<T, NetworkError> to nb::Error<T, MyError> via the
// above NetworkError -> MyError into, but it doesn't do that, possibly a shortcoming of nb.
//
// Once a way is found to do this, all `.was_network_error()?` can become the `?` they should be.
trait NetworkErrorExt {
type O;
fn was_network_error(self) -> Self::O;
}
impl<T, E> NetworkErrorExt for Result<T, E> {
type O = Result<T, Error<E>>;
fn was_network_error(self) -> Self::O {
use nb::Error::*;
self.map_err(|e| match e {
WouldBlock => WouldBlock,
Other(e) => Other(Error::Network(e)),
})
}
}
// Part of the buffer that is not used for reception but to populate tkl_len, ext into during the
// response
const SMALLBUF_LEN: usize = 3;
impl<
ST: embedded_nal::TcpClientStack + embedded_nal_tcpextensions::TcpExactStack + ?Sized,
const CONFIGURED_BUFLEN: usize,
> ConnectionState<ST, CONFIGURED_BUFLEN>
{
// Sufficient to encode the u16 max length we accept for outgoing MSS
const OUR_MMS: usize = min(CONFIGURED_BUFLEN - SMALLBUF_LEN + 1, u16::MAX as _);
fn new(socket: ST::TcpSocket) -> Self {
assert!(ST::RECVBUFLEN >= CONFIGURED_BUFLEN - SMALLBUF_LEN);
assert!(ST::SENDBUFLEN >= CONFIGURED_BUFLEN);
ConnectionState {
peer_mms: None,
phase: ConnectionPhase::New,
socket: Some(socket),
}
}
/// Attempt to process any pending messages out of the given `socket` on a UDP `stack`.
///
/// Any CoAP requests are dispatched to the handler. A response is built immediately and sent.
///
/// Temporary failure to read from any action immediately makes the function return `WouldBlock`,
/// and it should be called again whenever there is indication that the network device is ready
/// again; same goes for the initial writing of a CSM message that is mandatory in CoAP over TCP.
///
/// Any failure to write at response time, as well as protocol errors, are fatal and propagate out
/// as errors. In that case, the socket gets closed, and the function must not be called on
/// this socket again.
///
/// There is no successful return; whenever all pending requests have been processed, `WouldBlock`
/// indicates that the function has done all it can do right now.
///
/// Note that the caveats in the module description apply.
pub fn poll_connection(
&mut self,
stack: &mut ST,
handler: &mut impl coap_handler::Handler,
) -> Result<!, Error<ST::Error>>
where
// "Client" is not precisely it ... it really means "connected" here
ST: embedded_nal::TcpClientStack + embedded_nal_tcpextensions::TcpExactStack,
{
let mut fullbuf = [0u8; CONFIGURED_BUFLEN];
self.poll_connection_with_buffer(stack, handler, &mut fullbuf)
}
/// Like [poll_connection], but rather than allocating a buffer on the stack (which needs
/// zeroing out), using a provided scratch memory.
///
/// In situations where stack space is scarce, this can also take a global scratch space, which
/// is shared with other tasks that don't run concurrently.
pub fn poll_connection_with_buffer(
&mut self,
stack: &mut ST,
handler: &mut impl coap_handler::Handler,
fullbuf: &mut [u8; CONFIGURED_BUFLEN],
) -> Result<!, Error<ST::Error>>
where
ST: embedded_nal::TcpClientStack + embedded_nal_tcpextensions::TcpExactStack,
{
let result = self.poll_connection_with_buffer_nonclosing(stack, handler, fullbuf);
if matches!(result, Err(nb::Error::Other(_))) {
let sock = self.socket.take().expect("Socket removed prematurely");
// Trying once, can't do any more as this is now being dropped. At least the embedded-nal
// API doesn't expect this to WouldBlock anyway, so this just swallows additional errors.
let _ = stack.close(sock);
}
result
}
/// Workhorse for poll_connection_with_buffer (and thus poll_connection). The wrapper takes
/// care of closing the connection once an actual error occurs.
fn poll_connection_with_buffer_nonclosing(
&mut self,
stack: &mut ST,
handler: &mut impl coap_handler::Handler,
fullbuf: &mut [u8; CONFIGURED_BUFLEN],
) -> Result<!, Error<ST::Error>>
where
// "Client" is not precisely it ... it really means "connected" here
ST: embedded_nal::TcpClientStack + embedded_nal_tcpextensions::TcpExactStack,
{
// Sizes involved are:
//
// ST::RECVBUFLEN: What we can read in a go. In this implementation, we read (code, token, ...) in one go.
// This is also what we allocate on the stack for the stack to store data into.
//
// CONFIGURED_BUFLEN: The buffer size we allocate (or have allocated) for sending (and also
// use to receive into).
//
// As we read this in almost-full (no with tlk_len, ext) and write it in full, this needs
// to be at most ST::*BUFLEN. It is what guides the message size.
//
// OUR_MMS (Our Max-Message-Size): (len_tkl, ext, code, token, ...). We announce
// CONFIGURED_BUFLEN + 1.
// (If we have a large ST::RECVBUFLEN we could announce 1-2 byte as the large messages would need
// an ext that we don't need to fit in our RECVBUFLEN, but that'd just be calling for bugs). This
// is clipped to u16::MAX to ensure that we can always store any message the peer sends in
// accordance with OUR_MMS will have its len fit in the GotExtLen len field -- but
// really, CONFIGURED_BUFLEN shouldn't be larger than that anyway.
//
// Their Max-Message-Size: Stored in self.peer_mss (which is None only while no CSM has
// been received; once one is here it goes to the default value unless explicitly set).
//
// Out maximal output message size (their_buf): the smaller of Their Max-Message-Size and
// the buffer we allcoate. (We take just the allocated buffer of the receiption).
use ConnectionPhase::*;
let socket = self
.socket
.as_mut()
.expect("Polled after poll returned an error");
loop {
self.phase = match self.phase {
New => {
// The CSM, indicating the message size we can take.
let mut csm = [0x30, 0xe1, 0x22, 0, 0];
// Could be shorter, but which stack only has a 256 Byte TCP buffer...
csm[3..5].copy_from_slice(
&u16::try_from(Self::OUR_MMS)
.expect("Explicitly clipped")
.to_be_bytes(),
);
stack.send_all(socket, &csm).was_network_error()?;
Waiting
}
Waiting => {
let mut received = [0];
stack
.receive_exact(socket, &mut received)
.was_network_error()?;
GotByte {
len_tkl: received[0],
}
}
GotByte { len_tkl } => {
let len_nibble = len_tkl >> 4;
// u32 is the smallest not saturating when adding an offset to the u16
let len: u32 = match len_nibble {
coap_message_utils::option_extension::VALUE_1B => {
let mut lenbuf = [0];
stack
.receive_exact(socket, &mut lenbuf)
.was_network_error()?;
u32::from(lenbuf[0])
+ coap_message_utils::option_extension::OFFSET_1B as u32
}
coap_message_utils::option_extension::VALUE_2B => {
let mut lenbuf = [0, 0];
stack
.receive_exact(socket, &mut lenbuf)
.was_network_error()?;
u32::from(u16::from_be_bytes(lenbuf))
+ coap_message_utils::option_extension::OFFSET_2B as u32
}
coap_message_utils::option_extension::VALUE_RESERVED => {
// Delay error handling to match any other value
u32::MAX
}
i => i.into(),
};
let len = if let Ok(x) = u16::try_from(len) {
x
} else {
return self.abort(stack, Error::LongMessage);
};
GotExtLen {
len,
tkl: len_tkl & 0x0f,
}
}
GotExtLen { len, tkl } => {
// It's all or nothing now; reading token separately would just necessitate more
// per-connection state for the token-here-but-rest-not case.
let token_end = 1 + usize::from(tkl);
let message_end = token_end + usize::from(len);
if message_end > CONFIGURED_BUFLEN - SMALLBUF_LEN {
return self.abort(stack, Error::LongMessage);
}
let (smallbuf, buf) = fullbuf.split_at_mut(SMALLBUF_LEN);
stack
.receive_exact(socket, &mut buf[..message_end])
.was_network_error()?;
let code = buf[0];
// token is buf[1..token_end], but we don't actually touch that.
let opt_payload = &buf[token_end..message_end];
use coap_numbers::code::{classify, Range::*, CSM};
match (buf[0], classify(code)) {
(CSM, _) => {
use coap_message::{MessageOption, ReadableMessage};
let msg =
coap_message_utils::inmemory::Message::new(buf[0], opt_payload);
for o in msg.options() {
match o.number() {
coap_numbers::signaling_option::MAX_MESSAGE_SIZE => {
let val: u16 = o.value_uint().unwrap_or(u16::MAX);
match val.try_into() {
Ok(s) => {
self.peer_mms = Some(s);
}
Err(_) => {
return self.abort(stack, Error::BadCSM);
// they sent 0 MSS
}
}
}
o if coap_numbers::option::get_criticality(o)
== coap_numbers::option::Criticality::Critical =>
{
return self.abort(stack, Error::BadCSM);
// could also indicate the option
}
_ => (),
}
}
if self.peer_mms.is_none() {
// We've received a CSM, but no MSS. Thus we may now assume the 1152
// default.
//
// This is not exactly part of a "minimal" server -- being a server
// we'd know that there'll always be a CSM before the first request,
// and could just have peer_mms default to 1152. However, this
// conveniently lets us perform the "MUST treat a missing or invalid
// CSM as a connection error" easily, and this whole clause here should
// boil to not much more code than setting the default at
// initialization time.
self.peer_mms = Some(1152.try_into().expect("Non-zero constant"));
}
}
(_, Response(_)) => {
return self.abort(stack, Error::UnanticipatedResponse);
}
(_, Request) => {
let msg =
coap_message_utils::inmemory::Message::new(buf[0], opt_payload);
let extracted = handler.extract_request_data(&msg);
let their_mss: usize = match self.peer_mms {
Some(n) => n.get(),
// If we were sending a request, we might pick a conservative (ideally
// less-than-1152, more like 64) because no value means there could be
// a first CSM with small MSS later -- but a request before a CSM is
// clearly noncompliant.
None => {
return self.abort(stack, Error::MissingCSM);
}
}
.into();
let their_buf = their_mss - SMALLBUF_LEN;
let buf = &mut buf[..min(their_buf, CONFIGURED_BUFLEN - SMALLBUF_LEN)];
// Build the response message in the same place -- so we don't even have to
// touch the token any more.
let (code, token_and_tail) = buf.split_at_mut(1);
let (_token, tail) = token_and_tail.split_at_mut(token_end - 1);
let mut message = message::Message::new(&mut code[0], tail);
handler.build_response(&mut message, extracted);
let len = message.finish();
let written = token_end + len;
let smallbuf_start = match len {
len if len
< coap_message_utils::option_extension::OFFSET_1B.into() =>
{
smallbuf[2] = (len as u8) << 4;
2
}
len if len
< coap_message_utils::option_extension::OFFSET_2B.into() =>
{
smallbuf[1] =
coap_message_utils::option_extension::VALUE_1B << 4;
let diff = len
- usize::from(
coap_message_utils::option_extension::OFFSET_1B,
);
smallbuf[2] = diff as u8;
1
}
len => {
smallbuf[0] =
coap_message_utils::option_extension::VALUE_2B << 4;
let diff: u16 = (len
- usize::from(
coap_message_utils::option_extension::OFFSET_2B,
))
.try_into()
.expect("Guaranteed by size limits");
smallbuf[1..3].copy_from_slice(&diff.to_be_bytes());
0
}
};
smallbuf[smallbuf_start] |= tkl;
// If this were a server that'd persist token and the handler-extracted
// data (which, in hindsight, would be a good design), on error, just
// leave the handler-extracted data there.
//
// (Storing the token and not the handler-extracted data, we could
// still try sending a 5.03 later).
//
// If we had a "buffer is clear" function (or, really, a guarantee that
// send_all would not block up to a given size), more could be done
// (and we wouldn't have to 5.03 ever): it could be checked before the
// request is even read.
//
// (That'd be overly pessimistic as the response may also be short and
// thus fit, but meh). Not sure if that function could be implemented
// portably based on std (or even how it's done in Linux), but then
// again on std it's always OK to allocate.
//
// Alternatively, if the message would be peeking, we could just delay
// and not consume the peeked bytes if sending the response in a go
// isn't possible.
match stack
.send_all(socket, &fullbuf[smallbuf_start..SMALLBUF_LEN + written])
{
Ok(()) => (),
Err(nb::Error::WouldBlock) => {
// We're in a pickle: We've already read the request, so we
// *have to* respond on that token. Reponding 5.03 is a
// best-effort attempt to keep the connection going.
//
// If that doesn't go out either, the connection is doomed.
fullbuf[SMALLBUF_LEN - 1] = tkl; // No options, just the code and in-place token
fullbuf[SMALLBUF_LEN] = coap_numbers::code::SERVICE_UNAVAILABLE;
stack
.send_all(
socket,
&fullbuf
[SMALLBUF_LEN - 1..SMALLBUF_LEN + 1 + tkl as usize],
)
.map_err(|e| match e {
nb::Error::WouldBlock => Error::SendBufferOverflow,
nb::Error::Other(e) => Error::Network(e),
})?;
}
Err(nb::Error::Other(e)) => return Err(nb::Error::Other(e.into())),
}
}
_ => {
return self.abort(stack, Error::UnrecognizedMessage);
}
};
Waiting
}
}
}
}
/// Send an Abort message and terminate the connection
///
/// Note that further errors are not propagated out of this -- send failures are ignored (and
/// no more sends attempted), as are failures to close the connection.
///
/// This does not close the connection, but the error propagating through the wrapper around
/// poll_connection_with_buffer_nonclosing will.
///
/// # Returns
///
/// an always-erring result to ease use as `return self.abort(error);`
fn abort(&mut self, stack: &mut ST, error: Error<ST::Error>) -> Result<!, Error<ST::Error>> {
let socket = self
.socket
.as_mut()
.expect("Polled after poll returned an error");
// It's a bit minimal, we could be nice and send text, but hey at least we tell that we
// want to abort.
//
// Errors are discarded, we can't nb out of this, and rather return the root cause rather
// than the follow-up.
let _ = stack.send_all(socket, b"\x00\xe5"); // Abort
Err(nb::Error::Other(error))
}
}
pub struct ServerPool<ST, const SIZE: usize, const CONFIGURED_BUFLEN: usize>
where
ST: embedded_nal::TcpFullStack + embedded_nal_tcpextensions::TcpExactStack,
{
server: ST::TcpSocket,
// A slab-like iterable structure would be nicer as it has less moving-around at deallocation,
// but then requires a niche to say it's good (because the slab implementations that don't need
// one can only ever be accessed through indices and not by iteration).
//
// Good enough for now.
clients: heapless::Vec<ConnectionState<ST, CONFIGURED_BUFLEN>, SIZE>,
}
impl<ST, const SIZE: usize, const CONFIGURED_BUFLEN: usize> ServerPool<ST, SIZE, CONFIGURED_BUFLEN>
where
ST: embedded_nal::TcpFullStack + embedded_nal_tcpextensions::TcpExactStack,
{
pub fn new(server: ST::TcpSocket) -> Self {
Self {
server,
clients: Default::default(),
}
}
pub fn poll(
&mut self,
stack: &mut ST,
handler: &mut impl coap_handler::Handler,
) -> core::result::Result<(), ST::Error> {
while !self.clients.is_full() {
match stack.accept(&mut self.server) {
// Well, then not
Err(nb::Error::WouldBlock) => {
break;
}
Err(nb::Error::Other(e)) => {
// Failure to accept is probably a hard error
return Err(e);
}
Ok((accepted, _address)) => {
self.clients
.push(ConnectionState::new(accepted))
.map_err(|_| ())
.expect("Checked above as !is_full");
}
}
}
let mut fullbuf = [0u8; CONFIGURED_BUFLEN];
// This is a really awkward way for what I'd much rather have as "Iterate over the values,
// let me access them as &mut, but also allow me to take them out of the cursor so you
// could later shift the array back". to_be_dropped is a workaround that will not poll
// connections after one closes, which is acceptable.
let mut to_be_dropped = None;
for (i, state) in self.clients.iter_mut().enumerate() {
match state.poll_connection_with_buffer(stack, handler, &mut fullbuf) {
// Should be moot due to exhaustiveness checks, as the Ok type is never.
Ok(never) => never,
Err(nb::Error::WouldBlock) => (),
Err(nb::Error::Other(_e)) => {
// As a server, it's not our place to do anything more about erring connections
to_be_dropped = Some(i);
break;
}
}
}
if let Some(i) = to_be_dropped {
let _ = self.clients.swap_remove(i);
}
Ok(())
}
}