pub struct Queue<R: TransferRequest> { /* private fields */ }
Manages a stream of transfers on an endpoint.

A Queue optimizes a common pattern when streaming data to or from a USB endpoint: to maximize throughput and minimize latency, the host controller needs to attempt a transfer in every possible frame. That requires always having a transfer request pending with the kernel, which is achieved by submitting multiple transfer requests and re-submitting them as they complete.

Use the methods on Interface to obtain a Queue.

When the Queue is dropped, all pending transfers are cancelled.
§Why use a Queue instead of submitting multiple transfers individually with the methods on Interface?

- Individual transfers give you individual Futures, which you then have to keep track of and poll using something like FuturesUnordered.
- A Queue provides better cancellation semantics than Future's cancel-on-drop.
  - After dropping a TransferFuture, you lose the ability to get the status of the cancelled transfer and see if it may have been partially or fully completed.
  - When cancelling multiple transfers, it's important to do so in reverse order so that subsequent pending transfers can't end up executing. When managing a collection of TransferFutures it's tricky to guarantee drop order, while Queue always cancels its contained transfers in reverse order.
  - The TransferFuture methods on Interface are not cancel-safe, meaning they cannot be used in select!{} or similar patterns, because dropping the Future has side effects and can lose data. The Future returned from Queue::next_complete is cancel-safe because it merely waits for completion, while the Queue owns the pending transfers.
- A Queue caches the internal transfer data structures of the last completed transfer, meaning that if you re-use the data buffer there is no memory allocation involved in continued streaming.
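The cancel-safety point above can be sketched concretely. The following is an illustrative example only (not from the nusb docs): it assumes a tokio runtime, an already-opened Interface named `interface`, and a oneshot `shutdown` receiver; those names are assumptions for the sketch.

```rust
// Sketch: using the cancel-safe `next_complete` future inside tokio's select!.
use nusb::transfer::RequestBuffer;

async fn read_until_shutdown(
    interface: nusb::Interface,
    mut shutdown: tokio::sync::oneshot::Receiver<()>,
) {
    let mut queue = interface.bulk_in_queue(0x81);
    for _ in 0..8 {
        queue.submit(RequestBuffer::new(256));
    }
    loop {
        tokio::select! {
            completion = queue.next_complete() => {
                if completion.status.is_err() {
                    break;
                }
                // handle completion.data here, then resubmit the buffer
                queue.submit(RequestBuffer::reuse(completion.data, 256));
            }
            // Dropping the `next_complete` future on this branch is safe:
            // the Queue still owns the pending transfers.
            _ = &mut shutdown => break,
        }
    }
    // Dropping `queue` here cancels all remaining transfers in reverse order.
}
```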
§Example (read from an endpoint)
use futures_lite::future::block_on;
use nusb::transfer::RequestBuffer;
let mut queue = interface.bulk_in_queue(0x81);
let n_transfers = 8;
let transfer_size = 256;
while queue.pending() < n_transfers {
    queue.submit(RequestBuffer::new(transfer_size));
}
loop {
    let completion = block_on(queue.next_complete());
    handle_data(&completion.data); // your function
    if completion.status.is_err() {
        break;
    }
    queue.submit(RequestBuffer::reuse(completion.data, transfer_size));
}
§Example (write to an endpoint)
use std::mem;
use futures_lite::future::block_on;
let mut queue = interface.bulk_out_queue(0x02);
let n_transfers = 8;
let mut next_buf = Vec::new();
loop {
    while queue.pending() < n_transfers {
        let mut buf = mem::replace(&mut next_buf, Vec::new());
        fill_data(&mut buf); // your function
        queue.submit(buf);
    }
    let completion = block_on(queue.next_complete());
    data_confirmed_sent(completion.data.actual_length()); // your function
    next_buf = completion.data.reuse();
    if completion.status.is_err() {
        break;
    }
}
Implementations§
impl<R> Queue<R>

pub fn submit(&mut self, data: R)
Submit a new transfer on the endpoint.

For an IN endpoint, pass a RequestBuffer.

For an OUT endpoint, pass a Vec<u8>.
pub fn next_complete<'a>(
    &'a mut self
) -> impl Future<Output = Completion<R::Response>> + Unpin + Send + Sync + 'a
Return a Future that waits for the next pending transfer to complete, and yields its buffer and status.

For an IN endpoint, the completion contains a Vec<u8>.

For an OUT endpoint, the completion contains a ResponseBuffer.

This future is cancel-safe: it can be cancelled and re-created without side effects, enabling its use in select!{} or similar.
Panics if there are no transfers pending.
pub fn poll_next(
    &mut self,
    cx: &mut Context<'_>
) -> Poll<Completion<R::Response>>
Get the next pending transfer if one has completed, or register the current task for wakeup when the next transfer completes.
For an IN endpoint, the completion contains a Vec<u8>.

For an OUT endpoint, the completion contains a ResponseBuffer.
Panics if there are no transfers pending.
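poll_next makes it possible to adapt a Queue to poll-based interfaces. The following is an illustrative sketch only (not part of nusb): it assumes the futures-core crate, and that a bulk IN queue is Queue<RequestBuffer> whose completions carry a Vec<u8>, as the IN-endpoint description above suggests.

```rust
// Sketch: wrapping a bulk IN Queue in a futures_core::Stream.
use std::pin::Pin;
use std::task::{Context, Poll};

use futures_core::Stream;
use nusb::transfer::{Completion, Queue, RequestBuffer};

struct InStream {
    queue: Queue<RequestBuffer>,
}

impl Stream for InStream {
    type Item = Completion<Vec<u8>>;

    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // Guard against the documented panic: poll_next must not be called
        // when no transfers are pending.
        if self.queue.pending() == 0 {
            return Poll::Ready(None);
        }
        self.queue.poll_next(cx).map(Some)
    }
}
```

A wrapper like this leaves resubmission policy to the caller; submit must still be called to keep transfers pending.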
pub fn pending(&self) -> usize
Get the number of transfers that have been submitted with submit that have not yet been returned from next_complete.
pub fn cancel_all(&mut self)
Request cancellation of all pending transfers.
The transfers will still be returned from subsequent calls to next_complete so you can tell which were completed, partially completed, or cancelled.
pub fn clear_halt(&mut self) -> Result<(), Error>
Clear the endpoint’s halt / stall condition.
Sends a CLEAR_FEATURE ENDPOINT_HALT control transfer to tell the device to reset the endpoint’s data toggle and clear the halt / stall condition, and resets the host-side data toggle.

Use this after receiving TransferError::Stall to clear the error and resume use of the endpoint.

This should not be called when transfers are pending on the endpoint.
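Because clear_halt must not race with pending transfers, a recovery loop has to drain the queue first. The following is an illustrative sketch only (not from the nusb docs): it assumes `queue`, `block_on`, and a 256-byte transfer size as in the read example above, and that completion.status is a Result<(), TransferError>.

```rust
// Sketch: stall recovery for a bulk IN endpoint.
use nusb::transfer::{RequestBuffer, TransferError};

loop {
    let completion = block_on(queue.next_complete());
    match completion.status {
        Ok(()) => queue.submit(RequestBuffer::reuse(completion.data, 256)),
        Err(TransferError::Stall) => {
            // clear_halt should not be called while transfers are pending,
            // so cancel and drain the queue first.
            queue.cancel_all();
            while queue.pending() > 0 {
                block_on(queue.next_complete());
            }
            if queue.clear_halt().is_err() {
                break;
            }
            // Endpoint is usable again; resume streaming.
            queue.submit(RequestBuffer::new(256));
        }
        Err(_) => break,
    }
}
```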