pub struct FullSync<'a, ItemType: Send + Sync + Debug + Default, const BUFFER_SIZE: usize = 1024, const MAX_STREAMS: usize = 16> { /* private fields */ }
This channel uses the FullSyncMove queue (the highest throughput among all in 'benches/'), which is the fastest for general-purpose use on most hardware, but requires that elements be copied when dequeueing: due to the full sync characteristics of the backing queue, enqueueing cannot happen independently of dequeueing.
Because of that, this channel requires that ItemTypes are Clone, since they will have to be moved around during dequeueing (as there is no way to keep the queue slot allocated during processing), making this channel a typical best fit for small & trivial types.
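Since the channel bounds require ItemType: Send + Sync + Debug + Default (plus Clone for this implementation), a suitable payload type can be sketched as below. Tick is a hypothetical example type, not part of the crate:

```rust
use std::fmt::Debug;

// A small, trivially clonable event type satisfying the bounds this
// channel requires: Send + Sync + Debug + Default + Clone.
// (Hypothetical example type -- not from the crate.)
#[derive(Debug, Default, Clone, PartialEq)]
struct Tick {
    sequence: u64,
    price_cents: u32,
}

fn main() {
    let original = Tick { sequence: 1, price_cents: 250 };
    let copied = original.clone(); // cheap: just two integers are copied
    assert_eq!(original, copied);
    println!("{:?}", copied);
}
```

Cloning here is as cheap as a memcpy of 16 bytes, which is exactly the "small & trivial types" sweet spot described above.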
Please measure your Multis using all available channels: FullSync, [OgreAtomicQueue] and, possibly, even [OgreMmapLog].
See also [multi::channels::ogre_full_sync_mpmc_queue].
Refresher: the backing queue requires BUFFER_SIZE to be a power of 2 – the same applies to MAX_STREAMS, which will also have its own queue.
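The power-of-2 requirement on both const generics can be enforced at compile time. The values below are illustrative, not defaults mandated by the crate:

```rust
// Compile-time guard for the power-of-2 requirement on the const
// generics. The values are illustrative only.
const BUFFER_SIZE: usize = 1024;
const MAX_STREAMS: usize = 16;

// Evaluated at compile time: a non-power-of-2 value fails the build.
const _: () = assert!(BUFFER_SIZE.is_power_of_two());
const _: () = assert!(MAX_STREAMS.is_power_of_two());

fn main() {
    println!("BUFFER_SIZE={BUFFER_SIZE}, MAX_STREAMS={MAX_STREAMS}");
}
```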
Trait Implementations
impl<'a, ItemType: Send + Sync + Debug + Default + 'a, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelCommon<ItemType, Arc<ItemType>> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn new<IntoString: Into<String>>(streams_manager_name: IntoString) -> Arc<Self>
Creates a new channel instance, to be referred to by the given streams_manager_name.
async fn flush(&self, timeout: Duration) -> u32
Waits until all pending items are consumed, for up to the given timeout. Returns the number of still unconsumed items – which is 0 if it was not interrupted by the timeout.
fn is_channel_open(&self) -> bool
async fn gracefully_end_stream(&self, stream_id: u32, timeout: Duration) -> bool
Signals that the stream identified by stream_id should cease its activities when there are no more elements left to process, waiting for the operation to complete for up to timeout. Returns true if the stream ended within the given timeout or false if it is still processing elements.

async fn gracefully_end_all_streams(&self, timeout: Duration) -> u32
Signals all streams to end their activities when there are no more elements left to process, waiting for up to timeout. Returns the number of un-ended streams – which is 0 if it was not interrupted by the timeout.
fn cancel_all_streams(&self)
In opposition to [end_all_streams()], this method does not wait for any confirmation, nor does it care whether there are remaining elements to be processed.
fn running_streams_count(&self) -> u32
fn pending_items_count(&self) -> u32
IMPLEMENTORS: #[inline(always)]
fn buffer_size(&self) -> u32
IMPLEMENTORS: #[inline(always)]
impl<'a, ItemType: 'a + Send + Sync + Debug + Default, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelConsumer<'a, Arc<ItemType>> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn consume(&self, stream_id: u32) -> Option<Arc<ItemType>>
IMPLEMENTORS: use #[inline(always)]
fn keep_stream_running(&self, stream_id: u32) -> bool
Returns false if the Stream has been signaled to end its operations, causing it to report "out-of-elements" as soon as possible. IMPLEMENTORS: use #[inline(always)]
fn register_stream_waker(&self, stream_id: u32, waker: &Waker)
Registers the waker through which the stream identified by stream_id may be awakened. IMPLEMENTORS: use #[inline(always)]
fn drop_resources(&self, stream_id: u32)
IMPLEMENTORS: use #[inline(always)]
impl<'a, ItemType: Send + Sync + Debug + Default + 'a, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelMulti<'a, ItemType, Arc<ItemType>> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn create_stream_for_old_events(
    self: &Arc<Self>,
) -> (MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32)
where
    Self: ChannelConsumer<'a, Arc<ItemType>>,
Returns a Stream (and its stream_id) able to receive elements that were sent through this channel before the call to this method. It is up to each implementor to define how far back in the past those events may go, but it is known that mmap log based channels are able to see all past events. If called more than once, every stream will see all the past events available. Currently panics if called more times than allowed by [Multi]'s MAX_STREAMS.
fn create_stream_for_new_events(
    self: &Arc<Self>,
) -> (MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32)
Returns a Stream (and its stream_id) able to receive elements sent through this channel after the call to this method. If called more than once, each Stream will see all new elements – the "listener pattern". Currently panics if called more times than allowed by [Multi]'s MAX_STREAMS.
fn create_streams_for_old_and_new_events(
    self: &Arc<Self>,
) -> ((MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32), (MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32))
where
    Self: ChannelConsumer<'a, Arc<ItemType>>,
fn create_stream_for_old_and_new_events(
    self: &Arc<Self>,
) -> (MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32)
where
    Self: ChannelConsumer<'a, Arc<ItemType>>,
Returns a single Stream (and its stream_id) able to receive elements that were sent through this channel either before or after the call to this method. It is up to each implementor to define how far back in the past those events may go, but it is known that mmap log based channels are able to see all past events. Notice that, with this method, there is no way of discriminating where the "old" events end and where the "new" events start. If called more than once, every stream will see all the past events available, as well as all future events after this method call. Currently panics if called more times than allowed by [Multi]'s MAX_STREAMS.
impl<'a, ItemType: 'a + Send + Sync + Debug + Default, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelProducer<'a, ItemType, Arc<ItemType>> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn send(&self, item: ItemType) -> RetryConsumerResult<(), ItemType, ()>
Sends item through this channel; see [Self::send_with()] for how to deal with the returned type.
IMPLEMENTORS: #[inline(always)]
fn send_with<F: FnOnce(&mut ItemType)>(
    &self,
    setter: F,
) -> RetryConsumerResult<(), F, ()>
Calls setter, passing a slot so the payload may be filled there, then sends the event through this channel asynchronously. The returned type is convertible to Result<(), F> by calling .into() on it, returning Err<setter> when the buffer is full, to allow the caller to try again; otherwise you may add any retrying logic using the keen-retry crate's API.

async fn send_with_async<F: FnOnce(&'a mut ItemType) -> Fut, Fut: Future<Output = &'a mut ItemType>>(
    &'a self,
    setter: F,
) -> RetryConsumerResult<(), F, ()>
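The "retry when the buffer is full" behavior that send()/send_with() expose through RetryConsumerResult can be sketched with std's bounded sync_channel, whose try_send() likewise hands the rejected item back to the caller. This is an analogy using std types only, not this crate's actual API:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Capacity 2, playing the role of a tiny BUFFER_SIZE.
    let (tx, rx) = sync_channel::<u32>(2);

    for item in 0..3u32 {
        let mut pending = item;
        loop {
            match tx.try_send(pending) {
                Ok(()) => break,
                // Full buffer: the item comes back so we may retry,
                // analogous to the Err<setter> case described above.
                Err(TrySendError::Full(returned)) => {
                    pending = returned;
                    // Make room by consuming one element, then retry.
                    let consumed = rx.recv().unwrap();
                    println!("consumed {consumed} to make room");
                }
                Err(TrySendError::Disconnected(_)) => unreachable!(),
            }
        }
    }
    drop(tx);
    let rest: Vec<u32> = rx.iter().collect();
    println!("{rest:?}"); // the two items still buffered
}
```

A production version would typically back off or bound the retries (which is what the keen-retry crate's combinators are for) instead of draining the queue inline.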
Like send_with(), but accepts an async setter. This method is useful for sending operations that depend on data acquired by async blocks, allowing select loops to be built.

fn send_derived(&self, arc_item: &Arc<ItemType>) -> bool
For channels whose streams consume a DerivedItemType instead of the ItemType, this method may be useful – for instance: if the Stream consumes OgreArc<Type> (the derived item type) and the channel is for Type, with this method one may send an OgreArc directly. IMPLEMENTORS: #[inline(always)]
fn reserve_slot(&self) -> Option<&'a mut ItemType>
See also [Self::send_reserved()] and [Self::cancel_slot_reserve()].
fn try_send_reserved(&self, _reserved_slot: &mut ItemType) -> bool
Attempts to send a previously reserved slot. A failure (when false is returned) might be part of the normal channel operation, so retrying is advised. Moreover, some channel implementations are optimized for (or even only accept) sending the slots in the same order they were reserved.
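The reserve -> fill-in-place -> commit protocol behind reserve_slot()/try_send_reserved() can be sketched with a toy single-producer buffer. This is illustrative only, NOT the crate's implementation, and it omits the concurrency and ordering concerns mentioned above:

```rust
// Toy buffer demonstrating the reserve/fill/commit sending protocol.
struct ToyChannel {
    slots: Vec<Option<String>>,
    committed: usize,
}

impl ToyChannel {
    fn with_capacity(n: usize) -> Self {
        Self { slots: (0..n).map(|_| Some(String::new())).collect(), committed: 0 }
    }
    // Like reserve_slot(): hand out a mutable slot, or None when full.
    fn reserve_slot(&mut self) -> Option<&mut String> {
        self.slots.get_mut(self.committed)?.as_mut()
    }
    // Like try_send_reserved(): commit the slot that was filled in place.
    // In the real API a false return may be transient and worth retrying.
    fn try_send_reserved(&mut self) -> bool {
        if self.committed < self.slots.len() { self.committed += 1; true } else { false }
    }
}

fn main() {
    let mut ch = ToyChannel::with_capacity(2);
    let slot = ch.reserve_slot().expect("buffer not full");
    slot.push_str("payload built directly in the slot"); // no intermediate copy
    assert!(ch.try_send_reserved());
    println!("committed = {}", ch.committed);
}
```

The point of the two-step protocol is that the payload is constructed directly in channel memory, avoiding the copy that send(item) implies.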